CN113793417A - Monocular SLAM method capable of creating large-scale map - Google Patents

Monocular SLAM method capable of creating large-scale map

Info

Publication number
CN113793417A
CN113793417A
Authority
CN
China
Prior art keywords
image
pose
initial
map
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111119850.XA
Other languages
Chinese (zh)
Inventor
段苏洋
苗芷萱
姜天奇
宁馨
吴雨薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University
Priority to CN202111119850.XA
Publication of CN113793417A
Legal status: Pending

Abstract

Translated from Chinese

The invention discloses a monocular SLAM method capable of creating a large-scale map. The method comprises: acquiring the upper-end image information of the space to be mapped through an image acquisition device; acquiring the non-upper-end image information of the space through a 3D laser radar; performing data processing on the environment images; and constructing an initial environment map and identifying the pose of the image acquisition device. The 3D laser radar performs wide-angle rotational scanning and fixed-point detection, which avoids large blind zones during detection, ensures comprehensive, high-resolution detection data, improves the positioning accuracy of the monocular SLAM, and creates richer map information. Two initial images are captured by two image acquisition devices, and an initial SLAM map is constructed from the mutually matched feature points in the initial images. After successful initialization, images captured by the image acquisition device are used for monocular SLAM mapping, which improves the success rate of mapping and reduces missing information in the map.

Description

Monocular SLAM method capable of creating large-scale map
Technical Field
The invention relates to the field of map creation, in particular to a monocular SLAM method capable of creating a large-scale map.
Background
In recent years, as computer technology, digital image processing, and image processing hardware have advanced, computer vision has attracted growing attention in the field of robotics. SLAM is short for simultaneous localization and mapping, a concept first proposed by Smith, Self and Cheeseman in 1988. It describes a scenario in which a robot starts from an unknown location in an unknown environment and then explores it: as the robot moves, it repeatedly observes the environment, estimates its own pose from the environmental features sensed by its sensors, and incrementally builds a map based on that pose. Real-time monocular SLAM has become an increasingly popular research topic. One of the main advantages of monocular SLAM is also one of its biggest challenges. On the one hand, it suffers from an inherent scale ambiguity: the absolute scale cannot be observed and drifts over time, becoming a major source of error; at the same time, this property allows seamless switching between environments of different scales, such as indoor desktop scenes and large-scale outdoor scenes. On the other hand, sensors that resolve scale, such as depth or stereo cameras, provide reliable measurements but are limited in range and flexibility.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a monocular SLAM method capable of creating a large-scale map.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention relates to a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through an image acquisition device to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image and identifying the pose of the image acquisition device;
d. transforming the pose of the image acquisition device by using the inherent pose transformation relation between the image acquisition device and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the image acquisition device, to obtain a final environment map.
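Steps c, d and e above amount to composing rigid-body transforms: the camera pose estimated from the primary image is chained with the fixed camera-to-lidar extrinsic to obtain the lidar pose, which then re-expresses the lidar data in the map frame. A minimal sketch using homogeneous 4x4 matrices; the extrinsic and pose values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def pose_from_rt(R, t):
    """Build a 4x4 homogeneous pose from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Assumed fixed extrinsic: pose of the 3D lidar in the camera frame,
# calibrated once offline (the "inherent pose transformation relation" of step d).
T_cam_lidar = pose_from_rt(np.eye(3), np.array([0.10, 0.0, -0.05]))

# Camera pose in the world frame, e.g. as identified in step c.
T_world_cam = pose_from_rt(np.eye(3), np.array([1.0, 2.0, 0.0]))

# Step d: chain the transforms to obtain the lidar pose in the world frame.
T_world_lidar = T_world_cam @ T_cam_lidar

# Step e: re-express lidar points in the world frame so they align with the map.
points_lidar = np.array([[5.0, 0.0, 0.0],
                         [0.0, 3.0, 1.0]])                    # (N, 3)
points_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
points_world = (T_world_lidar @ points_h.T).T[:, :3]
```

With the rotations set to identity the chained pose reduces to summed translations, which makes the example easy to check by hand; in practice both matrices carry full rotations.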
As a preferred technical scheme of the invention, the image acquisition device comprises a TOF depth camera, a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera.
As a preferred technical solution of the present invention, the primary environment image is a visible light image, and the secondary environment image is a depth image and a visible light image.
As a preferred technical solution of the present invention, the construction of the initial environment map and the pose of the image capturing device mainly comprises the following steps:
g. acquiring two initial images respectively shot by any two image acquisition devices, wherein the viewing ranges of the two image acquisition devices are at least partially overlapped;
h. and determining the three-dimensional space positions of the mutually matched characteristic points according to the internal parameters and the external parameters which are calibrated in advance by the two image acquisition devices and the parallaxes of the mutually matched characteristic points in the two initial images so as to obtain map points corresponding to the mutually matched characteristic points, and constructing an initial SLAM map so as to complete initialization.
As a preferred technical solution of the present invention, the acquiring of two initial images respectively captured by any two image capturing devices comprises: acquiring an image A and an image B respectively captured by the two image acquisition devices, wherein the focal lengths of the two image acquisition devices are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, with the processed image A and image B serving as the two initial images.
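Steps g and h amount to two-view triangulation: with pre-calibrated intrinsics and extrinsics, the 3D position of a matched feature pair follows from its disparity. A minimal linear (DLT) triangulation sketch; the synthetic rig below (focal length, baseline, test point) is an illustrative assumption, not the patent's calibration:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one matched feature pair.
    P1, P2: 3x4 projection matrices (intrinsics @ extrinsics, pre-calibrated).
    u1, u2: matched pixel coordinates (x, y) in the two initial images."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null vector of A = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                  # inhomogeneous 3D map point

# Assumed calibrated rig: identical intrinsics, second camera translated
# 0.2 m along x (a horizontal baseline).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# A ground-truth point 4 m in front of the rig, projected into both views
# to simulate a matched feature pair.
X_true = np.array([0.5, 0.1, 4.0])
u1 = P1 @ np.append(X_true, 1)
u2 = P2 @ np.append(X_true, 1)
u1, u2 = u1[:2] / u1[2], u2[:2] / u2[2]

X_est = triangulate(P1, P2, u1, u2)      # recovers the map point
```

For real images the matched pixels are noisy, so the SVD solution is a least-squares fit rather than an exact intersection.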
As a preferred embodiment of the present invention, the monocular SLAM map is constructed by projecting a point p on the reference frame onto a line in the current frame; this line is also called the epipolar line, and the process of locating p along it is called epipolar search. After the epipolar search, the depth of p can be determined by triangulation, and the depth estimate is updated with a filter.
As a preferred technical scheme of the invention, the epipolar search specifically comprises constructing two three-dimensional points p1 and p2 on the depth extension line of the seed p(x, y). The two points originate from the same pixel but are assigned different depths; in a depth filter they are typically set to p1 = (x, y, z - n*sigma) and p2 = (x, y, z + n*sigma), where z is the initial depth of the seed, sigma is the standard deviation of the depth, and n can be set to different values such as 1, 2, 3, ..., with 3 being typical.
p1 and p2 are projected into the current frame using its pose, giving projection points u1 and u2; the line connecting u1 and u2 is the epipolar line.
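A sketch of this projection step: back-project the seed pixel at the two hypothesized depths z - n*sigma and z + n*sigma, transform both 3D points into the current frame with its pose, and project them to get u1 and u2, whose connecting segment bounds the epipolar search. The intrinsics and relative pose below are illustrative assumptions:

```python
import numpy as np

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])   # assumed intrinsics
K_inv = np.linalg.inv(K)

def epipolar_segment(px, z, sigma, T_cur_ref, n=3):
    """Project the seed at pixel px with depth hypotheses z -/+ n*sigma into
    the current frame; the segment u1-u2 bounds the epipolar search."""
    ray = K_inv @ np.array([px[0], px[1], 1.0])   # back-projected ray, ref frame
    us = []
    for depth in (z - n * sigma, z + n * sigma):  # the points p1 and p2
        p_ref = ray * depth                       # 3D point in the reference frame
        p_cur = T_cur_ref[:3, :3] @ p_ref + T_cur_ref[:3, 3]
        u = K @ p_cur
        us.append(u[:2] / u[2])                   # projected pixel
    return us                                     # [u1, u2]

# Assumed relative pose: current frame 0.1 m to the right of the reference.
T_cur_ref = np.eye(4)
T_cur_ref[0, 3] = -0.1

u1, u2 = epipolar_segment(px=(320, 240), z=2.0, sigma=0.2, T_cur_ref=T_cur_ref)
```

Here both projections land on the same image row, as expected for a purely horizontal baseline; a larger depth uncertainty sigma lengthens the segment and hence the search range.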
As a preferred technical solution of the present invention, the filter is a depth filter, a statistical filter, or a voxel filter.
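If the depth filter is modeled as a per-pixel Gaussian over depth (a common simplification; the patent does not specify the filter's internals), each new triangulated observation is fused with the current estimate by a product-of-Gaussians update:

```python
def fuse_depth(mu, var, mu_obs, var_obs):
    """Product-of-Gaussians update of a seed's depth estimate.
    mu, var: current depth mean and variance of the seed;
    mu_obs, var_obs: depth triangulated from the new frame and its variance."""
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    var_new = (var * var_obs) / (var + var_obs)
    return mu_new, var_new

# Seed initialized at depth 2.0 m; a new observation at 2.2 m with equal
# confidence pulls the estimate to the midpoint and halves the variance.
mu, var = fuse_depth(2.0, 0.04, 2.2, 0.04)
```

The variance shrinks monotonically with each fused frame, so the seed converges to a confident depth after a few observations.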
Compared with the prior art, the invention has the following beneficial effects:
1: according to the invention, wide-angle rotation scanning and fixed-point detection are realized through the 3d laser radar, a blind area with a large distance in the detection process is avoided, meanwhile, detailed detection information of an object can be obtained, the information of a reticular barrier and the side environment of the robot is detected, and the comprehensiveness and high resolution of the detection data are ensured.
2: the invention optimizes the positioning precision of monocular SLAM and creates richer map information. The current pose information and the map are calculated through the acquired image based on the SLAM, a depth sensor which is horizontally placed is newly added to acquire the depth information, real-time operation can be realized, more accurate pose can be provided, richer map information is created, and the advantages of the SLAM in positioning navigation and automatic obstacle avoidance are brought into play.
3: according to the invention, two initial images are obtained by respectively shooting through two image acquisition devices, an initial SLAM map is constructed by using the mutually matched characteristic points in the initial images, and after the initialization is successful, the images are shot by using the image acquisition devices to carry out image construction of the monocular SLAM, so that the success rate of image construction is improved, and the information loss in the map is reduced.
Detailed Description
The following description of the preferred embodiments of the present invention is provided for the purpose of illustration and description, and is in no way intended to limit the invention.
Example 1
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through a TOF depth camera, a structured light depth camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image, and identifying the poses of a TOF depth camera, a structured light depth camera and an obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, the structured light depth camera and the obstacle avoidance camera by using the inherent pose transformation relation among the TOF depth camera, the structured light depth camera and the obstacle avoidance camera and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the TOF depth camera, the structured light depth camera and the obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth camera, the structured light depth camera or the obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two cameras, wherein the focal lengths of the two cameras are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. and determining the three-dimensional space positions of the mutually matched feature points according to the internal parameters and the external parameters which are calibrated in advance by the two TOF depth cameras, the structured light depth cameras or the obstacle avoidance cameras and the parallax of the mutually matched feature points in the two initial images so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 2
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through a TOF depth camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image, and identifying the poses of a TOF depth camera and an obstacle avoidance camera;
d. transforming the poses of the TOF depth camera and the obstacle avoidance camera by using the inherent pose transformation relation between the TOF depth camera and the obstacle avoidance camera and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the TOF depth camera and the obstacle avoidance camera, to obtain a final environment map.
Specifically, the construction of the initial environment image and the poses of the TOF depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth cameras or obstacle avoidance cameras, which comprises: acquiring an image A and an image B respectively captured by the two cameras, wherein the focal lengths of the two cameras are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. and determining the three-dimensional space positions of the mutually matched feature points according to the internal parameters and the external parameters which are calibrated in advance by the two TOF depth cameras or the obstacle avoidance cameras and the parallaxes of the mutually matched feature points in the two initial images so as to obtain map points corresponding to the mutually matched feature points, and constructing an initial SLAM map so as to complete initialization.
Example 3
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image, and identifying the poses of a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera;
d. transforming the poses of the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera by using the inherent pose transformation relation among the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the binocular stereo cameras, structured light depth cameras and/or obstacle avoidance cameras, which comprises: acquiring an image A and an image B respectively captured by the two cameras, wherein the focal lengths of the two cameras are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. according to internal parameters and external parameters which are calibrated in advance by the two binocular stereo cameras, the structured light depth camera and/or the obstacle avoidance camera and the parallax error of the mutually matched feature points in the two initial images, the three-dimensional space positions of the mutually matched feature points are determined so as to obtain map points corresponding to the mutually matched feature points, and an initial SLAM map is constructed to complete initialization.
Example 4
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through a TOF depth camera, a binocular stereo camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image, and identifying the poses of a TOF depth camera, a binocular stereo camera and an obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, the binocular stereo camera and the obstacle avoidance camera by using the inherent pose transformation relation among the TOF depth camera, the binocular stereo camera and the obstacle avoidance camera and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the TOF depth camera, the binocular stereo camera and the obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the binocular stereo camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth cameras, binocular stereo cameras or obstacle avoidance cameras, which comprises: acquiring an image A and an image B respectively captured by the two cameras, wherein the focal lengths of the two cameras are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. and determining the three-dimensional space positions of the mutually matched feature points according to the internal parameters and the external parameters which are calibrated in advance by the two TOF depth cameras, the binocular stereo cameras or the obstacle avoidance cameras and the parallaxes of the mutually matched feature points in the two initial images so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 5
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of a space needing to be mapped through a TOF depth camera, a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring image information of the non-upper end of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment image, and identifying the poses of a TOF depth camera, a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera by using the inherent pose transformation relation between the TOF depth camera, the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image by using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. carrying out monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, in combination with the primary environment image obtained by the TOF depth camera, the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth cameras, binocular stereo cameras, structured light depth cameras or obstacle avoidance cameras, which comprises: acquiring an image A and an image B respectively captured by the two cameras, wherein the focal lengths of the two cameras are different; and processing the image A and the image B to match the image effect obtained by shooting with the same focal length, taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. according to internal parameters and external parameters which are calibrated in advance by the two TOF depth cameras, the binocular stereo camera, the structured light depth camera or the obstacle avoidance camera and the parallax of the mutually matched feature points in the two initial images, the three-dimensional space positions of the mutually matched feature points are determined so as to obtain map points corresponding to the mutually matched feature points, and an initial SLAM map is constructed to complete initialization.
The invention realizes wide-angle rotational scanning and fixed-point detection through the 3D laser radar, which avoids large blind zones during detection while also capturing detailed information about objects, including mesh-like obstacles and the environment to the sides of the robot, ensuring comprehensive, high-resolution detection data. It improves the positioning accuracy of monocular SLAM and creates richer map information: the current pose and the map are computed from the acquired images based on SLAM, and a horizontally mounted depth sensor is added to acquire depth information, which enables real-time operation, provides a more accurate pose, creates richer map information, and brings out the advantages of SLAM in positioning, navigation, and automatic obstacle avoidance. Two initial images are captured by two image acquisition devices, and an initial SLAM map is constructed from the mutually matched feature points in the initial images; after successful initialization, images captured by the image acquisition device are used for monocular SLAM mapping, improving the success rate of mapping and reducing missing information in the map.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

CN202111119850.XA | 2021-09-24 | 2021-09-24 | Monocular SLAM method capable of creating large-scale map | Pending | CN113793417A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111119850.XA | CN113793417A (en) | 2021-09-24 | 2021-09-24 | Monocular SLAM method capable of creating large-scale map

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202111119850.XA | CN113793417A (en) | 2021-09-24 | 2021-09-24 | Monocular SLAM method capable of creating large-scale map

Publications (1)

Publication Number | Publication Date
CN113793417A | 2021-12-14

Family

ID=78879172

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111119850.XA | Pending | CN113793417A (en) | 2021-09-24 | 2021-09-24 | Monocular SLAM method capable of creating large-scale map

Country Status (1)

Country | Link
CN | CN113793417A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105809687A * | 2016-03-08 | 2016-07-27 | Tsinghua University | Monocular vision ranging method based on edge point information in image
CN109887087A * | 2019-02-22 | 2019-06-14 | Guangzhou Xiaopeng Motors Technology Co., Ltd. | Vehicle SLAM mapping method and system
CN110163963A * | 2019-04-12 | 2019-08-23 | Nanjing Huajie IMI Software Technology Co., Ltd. | SLAM-based mapping method and mapping device
CN112785702A * | 2020-12-31 | 2021-05-11 | South China University of Technology | SLAM method based on tight coupling of 2D laser radar and binocular camera



Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication

Application publication date: 2021-12-14

