CN106203429A - Based on the shelter target detection method under binocular stereo vision complex background - Google Patents

Based on the shelter target detection method under binocular stereo vision complex background

Info

Publication number
CN106203429A
Authority
CN
China
Prior art keywords
sigma
pixel
camera
target detection
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610530766.XA
Other languages
Chinese (zh)
Inventor
杨涛
贺战男
任强
张艳宁
李广坡
刘小飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northwestern Polytechnical University
Original Assignee
Northwestern Polytechnical University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2016-07-06
Filing date: 2016-07-06
Publication date: 2016-12-07
Application filed by Northwestern Polytechnical University
Priority to CN201610530766.XA
Publication of CN106203429A
Legal status: Pending (current)


Abstract

Translated from Chinese

The invention discloses an occluded-target detection method based on binocular stereo vision in a complex background, which is used to solve the technical problem of the poor detection accuracy of existing occluded-target detection methods. The technical solution is to first calibrate the binocular camera to obtain row-aligned rectified images, then obtain a disparity map through stereo matching and perform background modeling on it, compute the three-dimensional coordinates of the scene to generate a top-view projection map, and finally cluster the top-view projection map with the MeanShift method to obtain the detection results. By exploiting spatial three-dimensional information, the invention effectively solves technical problems of monocular vision such as target occlusion, scene illumination changes, shadows, and interference from complex backgrounds, and improves detection accuracy.

Description

Translated from Chinese

Occluded-target detection method in a complex background based on binocular stereo vision

Technical Field

The invention relates to a method for detecting occluded targets, in particular to an occluded-target detection method based on binocular stereo vision in a complex background.

Background Art

Most traditional moving-target detection methods are based on monocular vision. Compared with stereo vision, monocular vision has its advantages, but it also has serious limitations. A monocular system carries little information and only needs to process one image at a time, so computation is relatively fast; however, the image loses the three-dimensional information of the actual scene during projection, which is an irreparable defect. Moving-target detection based on monocular vision, as in the document "Effective Gaussian mixture learning for video background subtraction. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2005, 27(5): 827-832", often suffers from target occlusion, changes in scene illumination, and shadow interference, and how to solve these problems has long been a research difficulty. Many scholars have studied these problems extensively: algorithms based on target feature matching and on multi-sub-template matching have been proposed for the occlusion problem, and multi-Gaussian background models and shadow elimination algorithms have been proposed for illumination changes and shadow interference. However, these methods are strongly affected by environmental factors and easily fail to detect targets in practical applications.

Summary of the Invention

In order to overcome the poor detection accuracy of existing occluded-target detection methods, the present invention provides an occluded-target detection method based on binocular stereo vision in a complex background. The method first calibrates the binocular camera to obtain row-aligned rectified images, then obtains a disparity map through stereo matching and performs background modeling on it, computes the three-dimensional coordinates of the scene to generate a top-view projection map, and finally clusters the top-view projection map with the MeanShift method to obtain the detection results. By exploiting spatial three-dimensional information, the invention effectively solves technical problems of monocular vision such as target occlusion, scene illumination changes, shadows, and interference from complex backgrounds, and improves detection accuracy.

The technical solution adopted by the present invention to solve its technical problem is an occluded-target detection method based on binocular stereo vision in a complex background, characterized by the following steps:

Step 1. Binocular camera calibration.

First, Zhang Zhengyou's checkerboard calibration method is used: multiple checkerboard images are captured to calibrate the intrinsic parameters M1 of the two cameras, and the extrinsic parameters M2 are solved from images of a calibration board placed on the ground. The homogeneous transformation between the image coordinates (u, v) and the world coordinates (Xw, Yw, Zw) is:

$$
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_0 \\ 0 & \tfrac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_C & T_C \\ \vec{0} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_C & T_C \\ \vec{0} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{1}
$$

where f is the focal length of the camera, (u_0, v_0) are the coordinates of the principal point of the image, and d_x and d_y denote the physical size of each pixel along the horizontal axis x and the vertical axis y, respectively (so that f_u = f/d_x and f_v = f/d_y).

Stereo calibration then computes the geometric relationship in space between the two cameras P1 and P2, i.e. the rotation matrix R and the translation vector T between the two cameras. The right camera is selected as the reference camera. The relationship is:

P1 = R * (P2 - T)    (2)

Finally, row-aligned rectified images are obtained with Hartley's uncalibrated stereo rectification algorithm. The binocular cameras are required to capture images synchronously.

Step 2. Stereo matching to obtain disparity.

Matching points between the left and right camera views are computed by binocular stereo matching to obtain a disparity map, and a Gaussian mixture modeling method is selected to perform background modeling on the disparity map, eliminating the interference of the complex background with target detection. From the disparity, the baseline and the intrinsic parameters, triangulation is used to compute the three-dimensional coordinates of the scene. A world coordinate system with the ground as the XOY plane is selected, the three-dimensional points are projected onto the ground, and the number of three-dimensional points projected onto a given pixel is taken as the color value of that pixel, producing the top-view projection map.

Step 3. Clustering of the top-view projection map.

The probability density at x is f_{h,k}(x):

$$
f_{h,k}(x) = \sum_{i=1}^{n} K\left( \left\| \frac{x - x_i}{h} \right\| \right)
\tag{3}
$$

where K(x) is the kernel function and h is the radius (bandwidth).

To maximize f_{h,k}(x), differentiate f_{h,k}(x) to obtain its gradient ∇f_{h,k}, where g(s) = -k'(s):

$$
\nabla f_{h,k} = \sum_{i=1}^{n} (x_i - x)\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)
= \left[ \sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) \right]
\left[ \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x \right]
\tag{4}
$$

Let:

$$
m_{h,g}(x) = \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x
\tag{5}
$$

∇f_{h,k}(x) = 0 holds if and only if m_{h,g}(x) = 0, which yields the new center coordinates:

$$
x = \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}
\tag{6}
$$

Because of the particular nature of the projection map, accurate clustering results cannot be obtained by considering only the distance between pixels. When computing the probability density, two conditions must be satisfied: (a) the closer the color value of a pixel is to the color value of the central pixel, the higher its probability density; (b) the closer a pixel is to the center position, the higher its probability density. Therefore, the kernel function K_h(x) is chosen as:

$$
K_h(x) = K\left( \left\| \frac{x^{s} - x_i^{s}}{h} \right\| \right) \cdot K\left( \left\| \frac{x^{r} - x_i^{r}}{h} \right\| \right)
\tag{7}
$$

After MeanShift clustering, each cluster represents one target. This result is projected back into the original right image to display the final detection result.

The beneficial effects of the present invention are as follows. The method first calibrates the binocular camera to obtain row-aligned rectified images, then obtains a disparity map through stereo matching and performs background modeling on it, computes the three-dimensional coordinates of the scene to generate a top-view projection map, and finally clusters the top-view projection map with the MeanShift method to obtain the detection results. By exploiting spatial three-dimensional information, the invention effectively solves technical problems of monocular vision such as target occlusion, scene illumination changes, shadows, and interference from complex backgrounds, and improves detection accuracy.

The present invention is described in detail below in combination with specific embodiments.

Detailed Description

The specific steps of the occluded-target detection method based on binocular stereo vision in a complex background according to the present invention are as follows:

Step 1. Binocular camera calibration.

First, Zhang Zhengyou's checkerboard calibration method is adopted: about 20 checkerboard images are captured to calibrate the intrinsic parameters M1 of the two cameras, and the extrinsic parameters M2 are solved from images of a calibration board placed on the ground. The homogeneous transformation between the image coordinates (u, v) and the world coordinates (Xw, Yw, Zw) is:

$$
z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \tfrac{1}{d_x} & 0 & u_0 \\ 0 & \tfrac{1}{d_y} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_C & T_C \\ \vec{0} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= \begin{bmatrix} f_u & 0 & u_0 & 0 \\ 0 & f_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R_C & T_C \\ \vec{0} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{1}
$$

where f is the focal length of the camera, (u_0, v_0) are the coordinates of the principal point of the image, and d_x and d_y denote the physical size of each pixel along the horizontal axis x and the vertical axis y, respectively; all of these parameters can be obtained through camera calibration.

Stereo calibration then computes the geometric relationship in space between the two cameras P1 and P2, i.e. the rotation matrix R and the translation vector T between the two cameras. The right camera is selected as the reference camera. The relationship is:

P1 = R * (P2 - T)    (2)

Finally, row-aligned rectified images are obtained with Hartley's uncalibrated stereo rectification algorithm. The binocular cameras are required to capture images synchronously.
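The following is a minimal OpenCV sketch of this calibration and rectification step, given only as an illustration; the chessboard pattern size, square size, image paths, and variable names are assumptions for the example rather than values fixed by the invention.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)      # inner corners of the checkerboard (assumed)
SQUARE = 0.025        # checkerboard square size in metres (assumed)

objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def collect_pairs(left_paths, right_paths):
    """Find chessboard corners in synchronized left/right calibration images."""
    obj_pts, pts_l, pts_r, size = [], [], [], None
    for lp, rp in zip(left_paths, right_paths):
        gl = cv2.imread(lp, cv2.IMREAD_GRAYSCALE)
        gr = cv2.imread(rp, cv2.IMREAD_GRAYSCALE)
        ok_l, cl = cv2.findChessboardCorners(gl, PATTERN)
        ok_r, cr = cv2.findChessboardCorners(gr, PATTERN)
        if ok_l and ok_r:
            obj_pts.append(objp)
            pts_l.append(cl)
            pts_r.append(cr)
            size = gl.shape[::-1]
    return obj_pts, pts_l, pts_r, size

obj_pts, pts_l, pts_r, size = collect_pairs(sorted(glob.glob("left/*.png")),
                                            sorted(glob.glob("right/*.png")))

# Intrinsic parameters (M1) of each camera, estimated with Zhang's method.
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)

# Stereo calibration: rotation R and translation T between the cameras, Eq. (2).
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Calibrated rectification also yields Q, the disparity-to-depth matrix used in Step 2.
R1, R2, P1_rect, P2_rect, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)

# Hartley's uncalibrated rectification: homographies that row-align the two views.
all_l = np.vstack(pts_l).reshape(-1, 2)
all_r = np.vstack(pts_r).reshape(-1, 2)
_, H1, H2 = cv2.stereoRectifyUncalibrated(all_l, all_r, F, size)
```

Either cv2.initUndistortRectifyMap with (R1, P1_rect) / (R2, P2_rect) followed by cv2.remap, or cv2.warpPerspective with H1 / H2, produces the row-aligned image pair.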

Step 2. Stereo matching to obtain disparity.

Matching points between the left and right camera views are computed by binocular stereo matching to obtain a disparity map, and a Gaussian mixture modeling method is selected to perform background modeling on the disparity map in order to eliminate the interference of the complex background with target detection. From the disparity, the baseline and the intrinsic parameters, triangulation is used to compute the three-dimensional coordinates of the scene. A world coordinate system with the ground as the XOY plane is selected, the three-dimensional points are projected onto the ground, and the number of three-dimensional points projected onto a given pixel is taken as the color value of that pixel, producing the top-view projection map.
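A minimal sketch of this step with OpenCV, assuming a rectified image pair and the Q matrix from the rectification above; the SGBM and MOG2 parameters, the cell size, and the simplification that the camera X/Z plane is roughly parallel to the ground (the invention instead uses the calibrated ground plane as the world XOY plane) are illustrative assumptions, not requirements of the invention.

```python
import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5,
                               P1=8 * 5 ** 2, P2=32 * 5 ** 2)
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                          detectShadows=False)

def topview_from_pair(rect_left, rect_right, Q, cell=0.05, extent=10.0):
    """Disparity -> GMM foreground mask -> 3D points -> ground-plane point-count map."""
    disp = stereo.compute(rect_left, rect_right).astype(np.float32) / 16.0  # SGBM scales by 16
    fg = mog2.apply(cv2.convertScaleAbs(disp))        # Gaussian mixture background model on disparity
    pts3d = cv2.reprojectImageTo3D(disp, Q)           # triangulation from disparity, baseline, intrinsics
    valid = (fg > 0) & (disp > 0) & np.isfinite(pts3d[..., 2])
    X, Z = pts3d[valid, 0], pts3d[valid, 2]
    bins = int(2 * extent / cell)
    # Count of 3D foreground points falling into each ground cell = "color value" of the cell.
    top, _, _ = np.histogram2d(X, Z, bins=bins,
                               range=[[-extent, extent], [0.0, 2 * extent]])
    return top
```

The depth recovered here follows the standard triangulation relation Z = f·B/d encoded in the Q matrix.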

Step 3. Clustering of the top-view projection map.

The probability density at x is f_{h,k}(x):

$$
f_{h,k}(x) = \sum_{i=1}^{n} K\left( \left\| \frac{x - x_i}{h} \right\| \right)
\tag{3}
$$

where K(x) is the kernel function and h is the radius (bandwidth).

To maximize f_{h,k}(x), differentiate f_{h,k}(x) to obtain its gradient ∇f_{h,k}, where g(s) = -k'(s):

$$
\nabla f_{h,k} = \sum_{i=1}^{n} (x_i - x)\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)
= \left[ \sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right) \right]
\left[ \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x \right]
\tag{4}
$$

Let:

$$
m_{h,g}(x) = \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)} - x
\tag{5}
$$

∇f_{h,k}(x) = 0 holds if and only if m_{h,g}(x) = 0, from which the new center coordinates can be obtained:

$$
x = \frac{\displaystyle\sum_{i=1}^{n} x_i\, g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}{\displaystyle\sum_{i=1}^{n} g\left( \left\| \frac{x - x_i}{h} \right\|^2 \right)}
\tag{6}
$$

Because of the particular nature of the projection map, accurate clustering results cannot be obtained by considering only the distance between pixels. When computing the probability density, two conditions must be satisfied: (a) the closer the color value of a pixel is to the color value of the central pixel, the higher its probability density; (b) the closer a pixel is to the center position, the higher its probability density. Therefore, the kernel function K_h(x) is chosen as:

$$
K_h(x) = K\left( \left\| \frac{x^{s} - x_i^{s}}{h} \right\| \right) \cdot K\left( \left\| \frac{x^{r} - x_i^{r}}{h} \right\| \right)
\tag{7}
$$

After MeanShift clustering, each cluster represents one target. This result is projected back into the original right image to display the final detection result.
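A minimal NumPy sketch of this clustering step over the top-view projection map, assuming Gaussian profiles for both factors of the kernel in Eq. (7); the bandwidths h_s and h_r, the occupancy threshold, and the mode-merging rule are illustrative assumptions rather than values prescribed by the invention.

```python
import numpy as np

def meanshift_topview(topview, h_s=8.0, h_r=20.0, min_count=1, max_iter=50, tol=1e-3):
    """Cluster occupied cells of the top-view projection map with mean shift,
    using a joint spatial/value kernel in the spirit of Eq. (7)."""
    ys, xs = np.nonzero(topview >= min_count)
    pts = np.column_stack([xs, ys]).astype(np.float64)   # spatial component x^s
    vals = topview[ys, xs].astype(np.float64)            # "color" (point-count) component x^r
    modes = []
    for p0, v0 in zip(pts, vals):
        x, r = p0.copy(), float(v0)
        for _ in range(max_iter):
            d_s = np.sum((pts - x) ** 2, axis=1) / h_s ** 2   # spatial term
            d_r = (vals - r) ** 2 / h_r ** 2                  # value term
            g = np.exp(-0.5 * (d_s + d_r))                    # product of Gaussian profiles
            x_next = (pts * g[:, None]).sum(axis=0) / g.sum() # update rule of Eq. (6)
            r_next = (vals * g).sum() / g.sum()
            shift = np.linalg.norm(x_next - x)
            x, r = x_next, r_next
            if shift < tol:
                break
        modes.append(x)
    # Merge converged modes closer than h_s; each merged mode is one cluster.
    centers, labels = [], np.empty(len(modes), dtype=int)
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < h_s:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return np.asarray(centers), labels, pts
```

For the map produced in Step 2, `centers, labels, pts = meanshift_topview(top)` gives one center per detected target; the cells of each cluster can then be back-projected into the right image to display the detection result.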

Claims (1)

CN201610530766.XA | 2016-07-06 | 2016-07-06 | Based on the shelter target detection method under binocular stereo vision complex background | Pending | CN106203429A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610530766.XA (CN106203429A (en)) | 2016-07-06 | 2016-07-06 | Based on the shelter target detection method under binocular stereo vision complex background

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610530766.XA (CN106203429A (en)) | 2016-07-06 | 2016-07-06 | Based on the shelter target detection method under binocular stereo vision complex background

Publications (1)

Publication Number | Publication Date
CN106203429A (en) | 2016-12-07

Family

ID=57473634

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610530766.XA (Pending, CN106203429A (en)) | Based on the shelter target detection method under binocular stereo vision complex background | 2016-07-06 | 2016-07-06

Country Status (1)

Country | Link
CN (1) | CN106203429A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107657643A (en) * | 2017-08-28 | 2018-02-02 | 浙江工业大学 | A kind of parallax calculation method based on space plane constraint
CN108038866A (en) * | 2017-12-22 | 2018-05-15 | 湖南源信光电科技股份有限公司 | A kind of moving target detecting method based on Vibe and disparity map Background difference
CN108346160A (en) * | 2017-12-22 | 2018-07-31 | 湖南源信光电科技股份有限公司 | The multiple mobile object tracking combined based on disparity map Background difference and Meanshift
TWI658431B (en) * | 2017-10-02 | 2019-05-01 | 緯創資通股份有限公司 | Image processing method, image processing device, and computer readable recording medium
CN110505437A (en) * | 2018-05-18 | 2019-11-26 | 杭州海康威视数字技术股份有限公司 | A kind of method, apparatus and system of object prompt
CN111598939A (en) * | 2020-05-22 | 2020-08-28 | 中原工学院 | A method of measuring human body circumference based on multi-eye vision system
CN113077510A (en) * | 2021-04-12 | 2021-07-06 | 广州市诺以德医疗科技发展有限公司 | System for inspecting stereoscopic vision function under shielding
CN113139995A (en) * | 2021-04-19 | 2021-07-20 | 杭州伯资企业管理合伙企业(有限合伙) | Low-cost method for detecting and evaluating light occlusion between objects
CN119762685A (en) * | 2025-03-06 | 2025-04-04 | 武汉海昌信息技术有限公司 | Binocular vision-based three-dimensional modeling method


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102129688A (en) * | 2011-02-24 | 2011-07-20 | 哈尔滨工业大学 | Moving target detection method aiming at complex background
CN103106659A (en) * | 2013-01-28 | 2013-05-15 | 中国科学院上海微系统与信息技术研究所 | Open area target detection and tracking method based on binocular vision sparse point matching
CN105160649A (en) * | 2015-06-30 | 2015-12-16 | 上海交通大学 | Multi-target tracking method and system based on kernel function unsupervised clustering
CN105528785A (en) * | 2015-12-03 | 2016-04-27 | 河北工业大学 | Binocular visual image stereo matching method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WU Jingjing et al.: "A multi-channel image segmentation algorithm based on mean shift", Packaging Engineering *
YANG Ming: "Research on 3D reconstruction based on binocular stereo vision", China Master's Theses Full-text Database, Information Science and Technology *
YUAN Quande et al.: "A natural landmark extraction and fast matching method based on 3D feature point information", Intelligent Computer and Applications *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107657643A (en) * | 2017-08-28 | 2018-02-02 | 浙江工业大学 | A kind of parallax calculation method based on space plane constraint
CN107657643B (en) * | 2017-08-28 | 2019-10-25 | 浙江工业大学 | A Disparity Calculation Method Based on Spatial Plane Constraints
TWI658431B (en) * | 2017-10-02 | 2019-05-01 | 緯創資通股份有限公司 | Image processing method, image processing device, and computer readable recording medium
CN108038866A (en) * | 2017-12-22 | 2018-05-15 | 湖南源信光电科技股份有限公司 | A kind of moving target detecting method based on Vibe and disparity map Background difference
CN108346160A (en) * | 2017-12-22 | 2018-07-31 | 湖南源信光电科技股份有限公司 | The multiple mobile object tracking combined based on disparity map Background difference and Meanshift
CN110505437A (en) * | 2018-05-18 | 2019-11-26 | 杭州海康威视数字技术股份有限公司 | A kind of method, apparatus and system of object prompt
CN111598939A (en) * | 2020-05-22 | 2020-08-28 | 中原工学院 | A method of measuring human body circumference based on multi-eye vision system
CN111598939B (en) * | 2020-05-22 | 2021-01-26 | 中原工学院 | Human body circumference measuring method based on multi-vision system
CN113077510A (en) * | 2021-04-12 | 2021-07-06 | 广州市诺以德医疗科技发展有限公司 | System for inspecting stereoscopic vision function under shielding
CN113077510B (en) * | 2021-04-12 | 2022-09-20 | 广州市诺以德医疗科技发展有限公司 | System for inspecting stereoscopic vision function under shielding
CN113139995A (en) * | 2021-04-19 | 2021-07-20 | 杭州伯资企业管理合伙企业(有限合伙) | Low-cost method for detecting and evaluating light occlusion between objects
CN113139995B (en) * | 2021-04-19 | 2022-06-21 | 杭州伯资企业管理合伙企业(有限合伙) | Low-cost method for detecting and evaluating light occlusion between objects
CN119762685A (en) * | 2025-03-06 | 2025-04-04 | 武汉海昌信息技术有限公司 | Binocular vision-based three-dimensional modeling method

Similar Documents

Publication | Title
CN106203429A (en) | Based on the shelter target detection method under binocular stereo vision complex background
KR102206108B1 (en) | A point cloud registration method based on RGB-D camera for shooting volumetric objects
CN103868460B (en) | Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN105550670B (en) | A kind of target object dynamically track and measurement and positioning method
CN104484648B (en) | Robot variable viewing angle obstacle detection method based on contour recognition
JP6902028B2 (en) | Methods and systems for large scale determination of RGBD camera orientation
CN103106688B (en) | Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
WO2021004312A1 (en) | Intelligent vehicle trajectory measurement method based on binocular stereo vision system
CN113393439A (en) | Forging defect detection method based on deep learning
CN108053469A (en) | Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN111563921A (en) | Underwater point cloud acquisition method based on binocular camera
CN111998862B (en) | BNN-based dense binocular SLAM method
CN106981081A (en) | A kind of degree of plainness for wall surface detection method based on extraction of depth information
CN106485690A (en) | Cloud data based on a feature and the autoregistration fusion method of optical image
Wei et al. | An accurate stereo matching method based on color segments and edges
CN106033614B (en) | A kind of mobile camera motion object detection method under strong parallax
CN107154014A (en) | A kind of real-time color and depth Panorama Mosaic method
CN105184857A (en) | Scale factor determination method in monocular vision reconstruction based on dot structured optical ranging
CN112801074A (en) | Depth map estimation method based on traffic camera
JP6097903B2 (en) | Three-dimensional shape acquisition apparatus, processing method, and program
CN106996748A (en) | Wheel diameter measuring method based on binocular vision
CN119180908A (en) | Gaussian splatter-based laser enhanced visual three-dimensional reconstruction method and system
Wu et al. | Mm-gaussian: 3d gaussian-based multi-modal fusion for localization and reconstruction in unbounded scenes
CN107590444A (en) | Detection method, device and the storage medium of static-obstacle thing
CN113313824A (en) | Three-dimensional semantic map construction method

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2016-12-07

