CN108933902B - Panoramic image acquisition device, mapping method and mobile robot - Google Patents

Panoramic image acquisition device, mapping method and mobile robot

Info

Publication number
CN108933902B
Authority
CN
China
Prior art keywords
information
panoramic image
mobile robot
bracket
acquisition device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810852198.4A
Other languages
Chinese (zh)
Other versions
CN108933902A (en)
Inventor
Chen Dongmei (陈冬梅)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SF Technology Co Ltd
Original Assignee
SF Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SF Technology Co Ltd
Priority to CN201810852198.4A
Publication of CN108933902A
Publication of CN108933902B
Application granted
Legal status: Active
Anticipated expiration


Abstract


The present application discloses a panoramic image acquisition device, a mapping method and a mobile robot. The panoramic image acquisition device includes a bracket and a plurality of depth cameras; the bracket is a polygonal bracket or a circular bracket, and the depth cameras are arranged one per edge of the polygonal bracket or evenly spaced along the edge of the circular bracket. According to the technical solution provided in the embodiments of the present application, a panoramic image acquisition device with a depth camera on each edge of a polygonal bracket solves the problem of the limited field of view of traditional depth camera systems.

Description

Panoramic image acquisition device, mapping method and mobile robot
Technical Field
The present disclosure relates generally to the field of computer technology, and more particularly to a panoramic image acquisition device, a mapping method, and a mobile robot.
Background
Existing depth camera systems have a limited field angle and capture few feature points within the field of view, which limits their application. Lidar can provide a large field angle, but it is expensive, the environmental information it perceives is not rich, the positioning computation is very complex, and simultaneous localization and mapping is difficult to achieve in complex environments.
Disclosure of Invention
In view of the above drawbacks and shortcomings in the related art, it is desirable to provide a panoramic image acquisition device, a mapping method, and a mobile robot that are low-cost and capable of capturing a panorama.
In a first aspect, a panoramic image acquisition device is provided. The device includes a bracket and a plurality of depth cameras; the bracket is a polygonal bracket or a circular bracket, and the depth cameras are arranged one per side of the polygonal bracket or evenly spaced along the edge of the circular bracket.
In a second aspect, a mapping method based on a mobile robot is provided. The mobile robot includes the panoramic image acquisition device provided by embodiments of the present application, and the method includes:
simultaneously acquiring pose information and azimuth image information, and generating a panoramic image sequence and a grid map;
and, according to the pose information, fusing the panoramic image sequence with the corresponding grid map to obtain a grid map with panoramic visual marks.
In a third aspect, a mobile robot is provided, which includes the panoramic image collection apparatus provided by the embodiments of the present application.
According to the technical solution provided by embodiments of the present application, arranging a depth camera on each side of a polygonal bracket solves the problem of the limited view angle of traditional depth camera systems. Furthermore, according to some embodiments of the present application, a mapping method that fuses the panoramic image with the grid map solves the problem that a traditional grid map is not intuitive, which improves the user experience.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the accompanying drawings in which:
fig. 1 illustrates an exemplary structural diagram of a panoramic image acquisition apparatus according to an embodiment of the present application;
fig. 2 shows an exemplary schematic view of a field angle of the panoramic image collection apparatus of fig. 1;
FIG. 3 illustrates an exemplary flow chart of a mapping method according to an embodiment of the application;
Fig. 4 shows an exemplary block diagram of a robot according to an embodiment of the present application.
Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Referring to fig. 1, an exemplary structural diagram of a panoramic image acquisition device according to an embodiment of the present application is shown. As shown in the figure, the panoramic image acquisition device includes a bracket 1 and a plurality of depth cameras 2, where the bracket 1 is a polygonal bracket or a circular bracket (not shown in the figure), and the depth cameras 2 are arranged one per side of the polygonal bracket or evenly spaced along the edge of the circular bracket.
The panoramic image acquisition device shown in fig. 1 comprises an octagonal bracket 1 and a depth camera 2 on each side of the bracket. The depth cameras in the different directions collect pictures covering their respective field angles so as to form panoramic images. In practice, the bracket may be configured as a polygon with any number of sides, or as a circular bracket, as the situation requires.
In some embodiments, the depth cameras are at the same level. Arranging the depth cameras on the same horizontal plane facilitates calibration and image stitching.
In some embodiments, the bracket is a regular polygon bracket.
Referring to fig. 2, an exemplary schematic view of the field angle of the panoramic image acquisition device of fig. 1 is shown. As shown in fig. 2, a regular polygon bracket is employed, and the field angles of the depth cameras in the respective orientations are set to be the same. The device in the figure comprises depth cameras 01 to 08, where A, B, C, D, E, F, G and H are the intersections of the fields of view of the eight depth cameras; these intersections lie on a common circle 302 with a diameter 303 of 2R. In the acquisition range beyond this circle, a three-dimensional 360-degree panoramic depth image can be obtained. In application, full panoramic images can be acquired, or images of only some azimuths can be acquired as the situation requires. Within the acquirable range, the fields of view of every two adjacent depth cameras overlap, and the intersection points of the fields of view of the depth cameras are concyclic.
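To make this geometry concrete, the following is a minimal sketch, not taken from the patent, that estimates the radius of the blind-zone circle by angular sampling; the camera count, field angle, and mounting apothem are assumed values, and a point counts as covered when it falls inside the angular wedge of at least one outward-facing camera.

import numpy as np

def blind_zone_radius(n_cams=8, fov_deg=60.0, apothem=0.2,
                      r_max=2.0, n_r=400, n_ang=720):
    # Estimate the radius beyond which n_cams outward-facing cameras,
    # mounted at the edge midpoints of a regular polygon, jointly cover
    # all 360 degrees (a sampling approximation of the 2R circle).
    fov = np.radians(fov_deg)
    cam_angles = 2 * np.pi * np.arange(n_cams) / n_cams
    cam_pos = apothem * np.stack([np.cos(cam_angles),
                                  np.sin(cam_angles)], axis=1)
    for r in np.linspace(apothem, r_max, n_r):
        thetas = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)
        pts = r * np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
        covered = np.zeros(n_ang, dtype=bool)
        for pos, axis in zip(cam_pos, cam_angles):
            v = pts - pos                                 # camera-to-point vectors
            bearing = np.arctan2(v[:, 1], v[:, 0])
            diff = np.angle(np.exp(1j * (bearing - axis)))  # wrap to [-pi, pi]
            covered |= np.abs(diff) <= fov / 2
        if covered.all():
            return r  # first sampled radius with full angular coverage
    return None

print(blind_zone_radius())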
Referring to fig. 3, an exemplary flow chart of a mapping method according to an embodiment of the application is shown. As shown in the figure, the mapping method comprises the following steps:
Step S10: simultaneously acquiring pose information and azimuth image information to generate a panoramic image sequence and a grid map;
Step S20: according to the pose information, fusing the panoramic image sequence with the corresponding grid map to obtain a grid map with panoramic visual marks.
In step S10, image information of each azimuth is acquired by the panoramic image acquisition device while pose information is acquired by the pose acquisition device, generating a panoramic image sequence with pose information and a grid map with pose information. A panoramic image sequence is typically a time series of panoramic images acquired at different times.
In step S20, a grid map with a visual identifier is established by fusing the three-dimensional panoramic image sequence and the grid map, and accurate positioning can be performed with the aid of the pose acquisition device.
In some embodiments, the method includes, prior to acquiring the image information:
Step S1: setting internal parameters of each depth camera;
Step S2: converting the coordinate system of each depth camera into a world coordinate system.
In step S1, the internal parameters of each depth camera include the focal length, distortion parameters, and so on. These parameters are set according to the characteristics of the selected depth cameras and their distribution on the bracket.
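As an illustration of what the intrinsics are used for, here is a minimal sketch, with assumed pinhole parameter values rather than values from the patent, that back-projects a depth image into a camera-frame point cloud (lens distortion is ignored for brevity):

import numpy as np

# Assumed pinhole intrinsics for one depth camera (illustrative values only).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def backproject(depth, K):
    # Turn an (H, W) depth image in meters into an (H*W, 3) point cloud
    # in the camera frame, using the pinhole camera model.
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

cloud = backproject(np.full((480, 640), 2.0), K)  # a flat wall 2 m away
print(cloud.shape)  # (307200, 3)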
Step S2 unifies the coordinate systems of the depth cameras; unified coordinates facilitate stitching the images subsequently acquired by each depth camera.
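A minimal sketch of step S2 follows; the rig layout (cameras at the edge midpoints of a regular octagon, each yawed to face outward) and the mounting radius are assumptions for illustration, and each camera's points are mapped into one world frame with a 4x4 homogeneous transform:

import numpy as np

def make_extrinsic(yaw_rad, t_xyz):
    # Build a 4x4 camera-to-world transform from a yaw rotation about Z
    # and a translation (a simplified planar-rig model).
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = t_xyz
    return T

def to_world(points_cam, T_cam_to_world):
    # Transform an (N, 3) point cloud from camera to world coordinates.
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_cam_to_world @ pts_h.T).T[:, :3]

apothem = 0.2  # assumed distance from the bracket center to each camera
clouds_world = []
for i in range(8):
    yaw = 2 * np.pi * i / 8
    t = [apothem * np.cos(yaw), apothem * np.sin(yaw), 0.0]
    cloud = np.random.rand(100, 3)  # placeholder for one camera's depth points
    clouds_world.append(to_world(cloud, make_extrinsic(yaw, t)))

merged = np.vstack(clouds_world)  # all cameras now share one world frame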
In some embodiments, step S10 includes:
Step S11: correlating the pose information with the azimuth image information;
Step S12: stitching the images of all directions according to the three-dimensional point cloud information of the azimuth image information, to obtain a panoramic image sequence with positioning information;
Step S13: converting the three-dimensional point cloud information of the azimuth images into two-dimensional point cloud information, to obtain a grid map.
In step S11, the pose information and the image information acquired at the same moment are associated to generate a panoramic image carrying pose information.
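A minimal sketch of this association, assuming timestamped streams and a nearest-timestamp pairing rule (the sampling rates and tolerance are illustrative, not from the patent):

import numpy as np

def associate(pose_ts, image_ts, max_dt=0.02):
    # Pair each image timestamp with the nearest pose timestamp,
    # keeping only pairs closer than max_dt seconds.
    pose_ts = np.asarray(pose_ts)
    pairs = []
    for i, t in enumerate(image_ts):
        j = int(np.argmin(np.abs(pose_ts - t)))
        if abs(pose_ts[j] - t) <= max_dt:
            pairs.append((i, j))  # (image index, pose index)
    return pairs

poses = np.arange(0.0, 1.0, 0.01)    # assumed 100 Hz pose stream
images = np.arange(0.0, 1.0, 0.103)  # assumed ~10 Hz panoramas
print(associate(poses, images))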
In step S12, a match-first, splice-second approach may be adopted. Specifically, it is first determined whether the point clouds match. If they match, it is then determined whether an overlapping area exists: if an overlapping area exists, only one copy of it is kept; if not, the matched point clouds are butted together directly. If the point clouds do not match, a blind zone exists, and the process returns to step S1.
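Below is a minimal sketch of this match-then-splice decision; the patent does not specify the matching criterion, so a nearest-neighbor inlier ratio stands in for it, and the distance and ratio thresholds are assumed values:

import numpy as np
from scipy.spatial import cKDTree

def match_and_splice(cloud_a, cloud_b, dist_th=0.05, match_ratio=0.3):
    # Declare a match when enough points of cloud_b lie near cloud_a,
    # keep one copy of the overlapping points, then concatenate.
    tree = cKDTree(cloud_a)
    dists, _ = tree.query(cloud_b, k=1)
    overlap = dists < dist_th
    if overlap.mean() < match_ratio:
        return None  # no match: treat as a blind zone, re-run from step S1
    return np.vstack([cloud_a, cloud_b[~overlap]])

a = np.random.rand(500, 3)
b = np.vstack([a[:200] + 0.01,                 # overlapping region
               np.random.rand(100, 3) + 2.0])  # new region
merged = match_and_splice(a, b)
print(None if merged is None else merged.shape)  # expect (600, 3)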
In step S13, the grid map is an information map obtained by rasterizing known environmental information. The environmental information is modeled on a rectangular region, and the unit length of the grid is set according to the actual conditions of the environment (for example, 0.1 m by 0.1 m, 0.05 m by 0.05 m, or another size), so that the given environmental information is converted into a rectangular grid map.
Each grid cell is represented by its center node, and the whole grid map is mapped onto a coordinate system. The coordinates of each node are written (x, y), where x is the node's column number and y is its row number, and the coordinate unit length of the coordinate system is the unit length of the grid. A node containing obstacle information is marked 1 and a node containing no obstacle information is marked 0; a cell with an obstacle is called an 'occupied' cell and a cell without an obstacle an 'idle' cell, so the entire grid map is composed of 'occupied' cells and 'idle' cells.
To determine whether a cell contains obstacle information, the three-dimensional panoramic point cloud information is projected onto the two-dimensional plane in which the robot center lies. If the probability of an obstacle occurring in a cell is p, then p is compared with a predetermined probability threshold p_th: if p >= p_th, the cell is occupied; if p < p_th, the cell is idle. The resulting grid map includes pose information.
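The following minimal sketch illustrates this projection-and-threshold rule; the cell size, the threshold, and the per-cell probability estimate (hit counts normalized by the densest cell) are assumptions, since the patent does not fix how p is computed:

import numpy as np

def occupancy_grid(points, cell=0.1, p_th=0.3, extent=5.0):
    # Project an (N, 3) cloud onto the XY plane around the robot center,
    # estimate a per-cell obstacle probability p, and threshold it:
    # p >= p_th marks the cell occupied (1), p < p_th marks it idle (0).
    n = int(2 * extent / cell)
    hits = np.zeros((n, n))
    ij = np.floor((points[:, :2] + extent) / cell).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < n).all(axis=1)
    np.add.at(hits, (ij[ok, 1], ij[ok, 0]), 1)  # row index = y, column = x
    p = hits / max(hits.max(), 1.0)             # crude probability proxy
    return (p >= p_th).astype(np.uint8)

pts = np.random.randn(2000, 3)  # placeholder panoramic point cloud
grid = occupancy_grid(pts)
print(grid.shape, int(grid.sum()), "occupied cells")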
In some embodiments, step S20 includes:
Step S21: fusing the panoramic image sequence having the same position information with the grid map.
The grid map shows the information in the environment in the form of cells: some cells are occupied and some are idle, and the different states of the cells reflect the state of the robot's surroundings, such as which positions contain obstacles. However, a grid map can only convey the distribution of obstacles in the environment; it cannot represent what type of obstacle, or what specific object, a cell contains. The application therefore compensates with image information.
Specifically, the grid map is obtained from the three-dimensional point cloud information, and the specific cell count and the cell IDs corresponding to each recorded position are stored. As the robot operates, the panoramic picture corresponding to each position is recorded, and all the pictures form an image sequence. Finally, the image sequence is fused with the grid map at the same position information to form a grid map with panoramic visual marks, yielding a model that describes the environment three-dimensionally. The resulting map gives users a better experience, and the displayed environment is more intuitive.
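A minimal sketch of this fusion, with an illustrative record layout (the cell-ID scheme and field names are assumptions): panoramas and cells recorded at the same pose are linked so that each cell carries a visual mark.

from dataclasses import dataclass, field

@dataclass
class AnnotatedGridMap:
    # Grid map whose cells carry references to panoramic images.
    marks: dict = field(default_factory=dict)  # cell id -> list of panorama ids

    def fuse(self, cell_ids_at_pose, pano_id):
        # Attach one panorama to every cell recorded at the same pose.
        for cid in cell_ids_at_pose:
            self.marks.setdefault(cid, []).append(pano_id)

# Hypothetical per-pose log: cells observed at a pose, and the panorama taken there.
cells_by_pose = {0: [(3, 4), (3, 5)], 1: [(7, 2)]}
pano_by_pose = {0: "pano_000.jpg", 1: "pano_001.jpg"}

gmap = AnnotatedGridMap()
for pose_id, cells in cells_by_pose.items():
    gmap.fuse(cells, pano_by_pose[pose_id])
print(gmap.marks)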
In some embodiments, pose information is acquired by an IMU sensor. An IMU (inertial measurement unit) is a device that uses accelerometers and gyroscopes to measure the three-axis attitude angles (or angular rates) and accelerations of an object. An IMU with a gyroscope and an accelerometer on each of three orthogonal axes measures the angular velocity and acceleration of an object in three-dimensional space over 6 degrees of freedom and is called a 6-axis IMU; adding a magnetometer on top of the accelerometer and gyroscope yields a 9-axis IMU.
Fig. 4 shows an exemplary block diagram of a mobile robot according to an embodiment of the present application.
As shown in fig. 4, the mobile robot 400 includes one or more central processing units (CPUs) 401, which can perform various appropriate actions and processes according to programs stored in a read-only memory (ROM) 402 or programs loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores the various programs and data required for the operation of the mobile robot 400. The CPU 401, ROM 402, and RAM 403 are connected to each other by a bus 404, to which an input/output (I/O) interface 405 is also connected.
The following are connected to the I/O interface 405: a capturing section 406, which may include an image capturing device, a gesture sensing device, and the like; an output section 407 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read from it can be installed into the storage section 408.
In particular, the process described above with reference to fig. 3 may be implemented as a computer software program according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing a mapping method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 409 and/or installed from the removable medium 411.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In another aspect, the present application also provides a computer readable storage medium, which may be a computer readable storage medium included in the apparatus described in the above embodiment, or may be a computer readable storage medium that exists separately and is not assembled into a device. The computer-readable storage medium stores one or more programs for use by one or more processors in performing the mapping methods described herein.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to the specific combinations of the technical features described above, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions formed by interchanging the above features with technical features of similar function disclosed in (but not limited to) the present application.

Claims (4)

CN201810852198.4A (priority 2018-07-27, filed 2018-07-27): Panoramic image acquisition device, mapping method and mobile robot. Active. Granted as CN108933902B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810852198.4A | 2018-07-27 | 2018-07-27 | CN108933902B (en): Panoramic image acquisition device, mapping method and mobile robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810852198.4A | 2018-07-27 | 2018-07-27 | CN108933902B (en): Panoramic image acquisition device, mapping method and mobile robot

Publications (2)

Publication Number | Publication Date
CN108933902A (en) | 2018-12-04
CN108933902B (en) | 2025-02-07

Family

ID=64444278

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810852198.4A (Active) | CN108933902B (en): Panoramic image acquisition device, mapping method and mobile robot | 2018-07-27 | 2018-07-27

Country Status (1)

Country | Link
CN (1) | CN108933902B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109682381B (en)* | 2019-02-22 | 2020-09-25 | Shandong University | Omnidirectional vision based large-view-field scene perception method, system, medium and equipment
CN109900705B (en)* | 2019-03-18 | 2022-06-10 | Hefei BOE Optoelectronics Technology Co Ltd | Substrate detection device and detection method
WO2023077432A1 (en)* | 2021-11-05 | 2023-05-11 | SZ DJI Technology Co Ltd | Movable platform control method and apparatus, and movable platform and storage medium
WO2023077421A1 (en)* | 2021-11-05 | 2023-05-11 | SZ DJI Technology Co Ltd | Movable platform control method and apparatus, and movable platform and storage medium
CN115065816B (en)* | 2022-05-09 | 2023-04-07 | Peking University | Real geospatial scene real-time construction method and real-time construction device
CN116007623A (en)* | 2022-12-02 | 2023-04-25 | Chunmi Technology (Shanghai) Co Ltd | Robot navigation method, device and computer-readable storage medium
CN115855030B (en)* | 2023-02-28 | 2023-06-27 | Maiyan Intelligent Technology (Beijing) Co Ltd | Barrier retaining method, storage medium and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102997871A (en)* | 2012-11-23 | 2013-03-27 | Nanjing University | Method for inverting effective leaf area index by utilizing geometric projection and laser radar
CN106705964A (en)* | 2017-01-06 | 2017-05-24 | Wuhan University | Panoramic camera fused IMU, laser scanner positioning and navigating system and method
CN208638479U (en)* | 2018-07-27 | 2019-03-22 | SF Technology Co Ltd | Panoramic picture acquisition device and mobile robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9129432B2 (en)* | 2010-01-28 | 2015-09-08 | The Hong Kong University of Science and Technology | Image-based procedural remodeling of buildings
US9674507B2 (en)* | 2013-04-30 | 2017-06-06 | Qualcomm Incorporated | Monocular visual SLAM with general and panorama camera movements


Also Published As

Publication number | Publication date
CN108933902A (en) | 2018-12-04

Similar Documents

Publication | Title
CN108933902B (en) | Panoramic image acquisition device, mapping method and mobile robot
KR102414587B1 (en) | Augmented reality data presentation method, apparatus, device and storage medium
US11165959B2 (en) | Connecting and using building data acquired from mobile devices
JP6687204B2 (en) | Projection image generation method and apparatus, and mapping method between image pixels and depth values
WO2022077296A1 (en) | Three-dimensional reconstruction method, gimbal load, removable platform and computer-readable storage medium
CN112729327B (en) | Navigation method, navigation device, computer equipment and storage medium
GB2591857A (en) | Photographing-based 3D modeling system and method, and automatic 3D modeling apparatus and method
JP5214355B2 (en) | Vehicle traveling locus observation system, vehicle traveling locus observation method, and program thereof
CN108810473B (en) | Method and system for realizing GPS mapping camera picture coordinate on mobile platform
CN109298629A (en) | Fault tolerance to provide robust tracking for autonomous and non-autonomous position awareness
CN112771576A (en) | Position information acquisition method, device and storage medium
WO2017152803A1 (en) | Image processing method and device
Oskiper et al. | Augmented reality binoculars
CN109883418A (en) | Indoor positioning method and device
CN110276774B (en) | Object drawing method, device, terminal and computer-readable storage medium
CN109559349A (en) | Method and apparatus for calibration
CN110275179A (en) | Map construction method based on lidar and vision fusion
CA3069813C (en) | Capturing, connecting and using building interior data from mobile devices
EP4394706A1 (en) | Spatial positioning method and apparatus
CN113034347A (en) | Oblique photographic image processing method, device, processing equipment and storage medium
CN109669533A (en) | Motion capture method, apparatus and system based on vision and inertia
CN112348887B (en) | Terminal posture determination method and related device
EP3882846B1 (en) | Method and device for collecting images of a scene for generating virtual reality data
CN110503684A (en) | Camera position and orientation estimation method and device
CN208638479U (en) | Panoramic picture acquisition device and mobile robot

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
