Detailed Description
The application is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the application and do not limit it. It should be noted that, for convenience of description, only the portions related to the application are shown in the drawings.
It should be noted that, without conflict, the embodiments of the present application and features of the embodiments may be combined with each other. The application will be described in detail below with reference to the drawings in connection with embodiments.
Referring to Fig. 1, an exemplary structural diagram of a panoramic image acquisition device according to an embodiment of the present application is shown. As shown in the figure, the panoramic image acquisition device includes a support 1 and a plurality of depth cameras 2, where the support 1 is a polygonal support or a circular support (not shown in the figure), and the depth cameras 2 are disposed one on each side of the polygonal support, or uniformly along the circumference of the circular support.
The panoramic image acquisition device shown in Fig. 1 comprises an octagonal support 1 and a depth camera 2 disposed on each side of the octagonal support. The depth cameras in the respective orientations collect pictures corresponding to their field angles so as to form panoramic images. In practice, the support may be configured as a polygon with any number of sides, or as a circular support, as the situation requires.
In some embodiments, the depth cameras are at the same level. Arranging the depth cameras on the same horizontal plane facilitates calibration and image stitching.
In some embodiments, the support is a regular polygonal support.
Referring to Fig. 2, an exemplary schematic view of the field angles of the panoramic image acquisition device of Fig. 1 is shown. As shown in Fig. 2, a regular polygonal support is employed, and the field angles of the depth cameras in the respective orientations are set to be the same. The acquisition device in the figure comprises depth cameras 01 to 08, where A, B, C, D, E, F, G, H are the intersection points of the fields of view of the eight depth cameras; these points form an intersection circle 302 with a diameter 303 of 2R. At sensor acquisition ranges beyond 2R, a three-dimensional 360-degree panoramic depth image can be obtained. In application, panoramic images can be acquired, or images of only some orientations can be acquired as the situation requires. It can be seen that, within the acquirable range, the fields of view of every two adjacent depth cameras overlap, and the intersection points of the fields of view of the depth cameras are concyclic.
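Where the support is a regular polygon and all field angles are equal, the radius of the intersection circle follows from elementary geometry. The sketch below is merely illustrative and rests on a simplified model not spelled out in the application: N cameras placed on a circle of radius r about the center, each facing radially outward with horizontal field angle θ; coverage_radius is a hypothetical helper name.

```python
import math

def coverage_radius(n_cameras: int, fov_deg: float, mount_radius: float) -> float:
    """Distance R from the center to the points where the field-of-view
    edges of adjacent cameras intersect. By symmetry all N intersection
    points lie on one circle (they are concyclic), and beyond that
    circle the combined fields cover a full 360 degrees."""
    half_fov = math.radians(fov_deg) / 2.0
    half_sep = math.pi / n_cameras        # half the angular spacing 2*pi/N
    if half_fov <= half_sep:
        raise ValueError("field angle too narrow: adjacent fields never overlap")
    # Intersect the left edge of one camera's field with the right edge
    # of its neighbor's; the sine rule gives the radius directly.
    return mount_radius * math.sin(half_fov) / math.sin(half_fov - half_sep)

# Example: eight cameras on an octagonal support, 70-degree field angle,
# cameras mounted 0.1 m from the center.
print(coverage_radius(8, 70.0, 0.1))   # ~0.265, i.e. a circle of diameter ~0.53 m
```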
Referring to fig. 3, an exemplary flow chart of a mapping method according to an embodiment of the application is shown. As shown in the figure, the mapping method comprises the following steps:
Step S10, simultaneously acquiring pose information and orientation image information to generate a panoramic image sequence and a grid map;
Step S20, fusing, according to the pose information, the panoramic image sequence and the corresponding grid map to obtain a grid map with panoramic visual marks.
In step S10, image information for each orientation is acquired by the panoramic image acquisition device while pose information is acquired by a pose acquisition device, and a panoramic image sequence with pose information and a grid map with pose information are generated. A panoramic image sequence is typically a time series of panoramic images acquired at different times.
In step S20, a grid map with visual marks is established by fusing the three-dimensional panoramic image sequence and the grid map, and accurate positioning can then be performed with the aid of the pose acquisition device.
In some embodiments, the method includes, prior to acquiring the image information:
Step S1, setting internal parameters of each depth camera;
Step S2, converting the coordinate system of each depth camera into a world coordinate system.
In step S1, the internal parameters of the depth camera include the focal length, distortion parameters, and the like. These parameters are set according to the characteristics of the selected depth cameras and their distribution on the support.
Step S2 unifies the coordinates of the depth cameras, which facilitates the subsequent stitching of the images acquired by the individual depth cameras.
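As an illustration of steps S1 and S2 (the application does not prescribe a data layout; the intrinsic values, the mounting model, and the helper names camera_to_world and to_world are assumptions), intrinsics can be stored as the usual pinhole matrix and extrinsics as a 4x4 homogeneous transform into the shared world coordinate system:

```python
import numpy as np

# Step S1 (illustrative): pinhole intrinsics for one depth camera.
# fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

# Step S2 (illustrative): extrinsics of a camera on the support, as a
# 4x4 transform mapping camera coordinates to the world coordinate
# system, for a camera rotated by `yaw` about the vertical axis and
# mounted at `offset` from the center of the support.
def camera_to_world(yaw: float, offset: np.ndarray) -> np.ndarray:
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = offset
    return T

# Unify a point cloud (N x 3, camera frame) into world coordinates.
def to_world(points_cam: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ homo.T).T[:, :3]
```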
In some embodiments, step S10 includes:
Step S11, associating the pose information with the orientation image information;
Step S12, stitching the images of all orientations according to the three-dimensional point cloud information of the orientation image information to obtain a panoramic image sequence with positioning information;
Step S13, converting the three-dimensional point cloud information of the orientation images into two-dimensional point cloud information to obtain a grid map.
In step S11, the pose information and the image information acquired at the same moment are associated to generate a panoramic image carrying the pose information.
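A minimal sketch of this association, assuming both devices timestamp their outputs; nearest-timestamp matching and the 20 ms tolerance are illustrative choices, not requirements of the application:

```python
import bisect

def associate(pose_stamps: list, image_stamp: float, tolerance: float = 0.02):
    """Return the index of the pose whose timestamp is closest to the
    image timestamp, or None if no pose lies within the tolerance.
    pose_stamps must be sorted in ascending order (seconds)."""
    if not pose_stamps:
        return None
    i = bisect.bisect_left(pose_stamps, image_stamp)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(pose_stamps)]
    best = min(candidates, key=lambda j: abs(pose_stamps[j] - image_stamp))
    if abs(pose_stamps[best] - image_stamp) > tolerance:
        return None
    return best
```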
In step S12, a match-first, then-stitch approach may be adopted. Specifically, it is first determined whether the point clouds match. If they match, it is then determined whether an overlapping area exists: if an overlapping area exists, one copy of the overlapping area is retained; if no overlapping area exists, the matched point clouds are joined directly. Otherwise, a blind area exists, and step S1 is executed again.
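The decision flow above can be sketched as follows. This is a deliberately brute-force stand-in: nearest_dists, stitch, and all thresholds are illustrative, and a real system would use a proper registration method such as ICP rather than raw nearest-neighbor counting.

```python
import numpy as np

def nearest_dists(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """For each point of cloud a (N x 3), the distance to its nearest
    point in cloud b (M x 3). Brute force, for illustration only."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2).min(axis=1)

def stitch(a: np.ndarray, b: np.ndarray, dist_thresh: float = 0.05,
           match_thresh: float = 0.3, overlap_thresh: float = 0.7):
    """Match first, then stitch. Returns the combined cloud, or None to
    signal a blind area, in which case step S1 is executed again."""
    d = nearest_dists(a, b)
    score = float(np.mean(d < dist_thresh))   # fraction of matched points
    if score < match_thresh:
        return None                           # clouds do not match: blind area
    if score > overlap_thresh:
        # Overlapping area exists: keep only one copy of the overlap.
        return np.vstack([a[d >= dist_thresh], b])
    return np.vstack([a, b])                  # matched, no overlap: join directly
```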
In step S13, the grid map is an information map obtained by rasterizing known environmental information. The environmental information is modeled rectangularly, and the unit length of a grid cell is set according to the actual conditions of the environment (for example, 0.1 m × 0.1 m, 0.05 m × 0.05 m, or another size), so that the given environmental information is converted into a rectangular grid map.
Each grid cell is represented by its center node, and the whole grid map is mapped onto a coordinate system. The coordinates of each node are denoted (x, y), where x is the column number of the node and y is its row number, and the unit length of the coordinate system is the unit length of the grid. A node containing obstacle information is marked 1, and a node containing no obstacle information is marked 0; a cell with an obstacle is called an 'occupied' cell, and a cell without an obstacle is called a 'free' cell. The entire grid map is thus composed of 'occupied' cells and 'free' cells. To determine whether a cell contains obstacle information, the following method may be employed: the three-dimensional panoramic point cloud information is projected onto the two-dimensional plane in which the robot center lies; the probability of an obstacle occurring in a cell is denoted p and compared with a predetermined probability threshold p_th; if p >= p_th, the cell is occupied, and if p < p_th, the cell is free. The resulting grid map includes the pose information.
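A compact sketch of the projection-and-threshold rule just described; the cell size, map extent, height band, and the hit-count probability proxy are illustrative assumptions rather than prescriptions of the application:

```python
import numpy as np

def occupancy_grid(points_world: np.ndarray, cell: float = 0.05,
                   extent: float = 10.0, p_th: float = 0.5) -> np.ndarray:
    """Project a 3-D panoramic point cloud (N x 3, world frame) onto the
    2-D plane of the robot center, estimate a per-cell obstacle
    probability p, and threshold it against p_th.
    Returns a grid of 1 ('occupied') and 0 ('free') cells."""
    n = int(2 * extent / cell)
    hits = np.zeros((n, n))
    # Keep only points near the robot's horizontal plane (z ~ 0 here).
    pts = points_world[np.abs(points_world[:, 2]) < 0.5]
    cols = ((pts[:, 0] + extent) / cell).astype(int)   # x -> column index
    rows = ((pts[:, 1] + extent) / cell).astype(int)   # y -> row index
    ok = (cols >= 0) & (cols < n) & (rows >= 0) & (rows < n)
    np.add.at(hits, (rows[ok], cols[ok]), 1)
    p = hits / max(hits.max(), 1.0)       # crude per-cell probability proxy
    return (p >= p_th).astype(np.uint8)   # 1 = occupied, 0 = free
```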
In some embodiments, step S20 includes:
Step S21, fusing the panoramic image sequence having the same position information with the grid map.
The grid map presents the information in the environment in the form of grid cells, some occupied and some free; the different states of the cells reflect the state of the robot's surroundings, for example, at which positions obstacles are located. However, only the distribution of obstacles in the environment can be known from the grid map; what type of obstacle, and what specific object it is, cannot be represented by the grid map. The application therefore contemplates compensating for this with image information.
Specifically, the grid map is obtained from the three-dimensional point cloud information; the number of grid cells and the grid IDs corresponding to each recorded position are stored; the panoramic pictures corresponding to all positions during the robot's operation are recorded; all the picture information forms an image sequence; and finally the image sequence is fused with the grid map having the same position information to form a grid map with panoramic visual marks. A model of the environment with a three-dimensional description is thus obtained. The resulting map gives users a better experience, and the displayed environment is more intuitive.
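A minimal sketch of step S21, assuming each panoramic frame already carries its (x, y) position from step S11; grid_id and fuse are hypothetical helper names, and the cell size and extent mirror the illustrative grid above:

```python
# Map a world position to the ID (row, column) of its grid cell.
def grid_id(x: float, y: float, cell: float = 0.05, extent: float = 10.0):
    return (int((y + extent) / cell), int((x + extent) / cell))

def fuse(frames, cell: float = 0.05, extent: float = 10.0):
    """frames: iterable of (pose_xy, panoramic_image) pairs.
    Returns a dict mapping grid ID -> list of panoramic images recorded
    there, i.e. a grid map carrying panoramic visual marks."""
    visual_marks = {}
    for (x, y), image in frames:
        visual_marks.setdefault(grid_id(x, y, cell, extent), []).append(image)
    return visual_marks
```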
In some embodiments, the pose information is acquired by an IMU sensor. An IMU (inertial measurement unit) is a device that uses accelerometers and gyroscopes to measure the three-axis attitude angle (or angular rate) and acceleration of an object. With a gyroscope and an accelerometer mounted on three orthogonal axes, an IMU measures the angular velocity and acceleration of an object in three-dimensional space with six degrees of freedom and is called a 6-axis IMU; a magnetometer may be added on top of the accelerometer and gyroscope, in which case the IMU is called a 9-axis IMU.
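For illustration only, a single planar dead-reckoning step shows how angular rate and acceleration yield a pose; a real system would correct sensor biases and typically fuse the IMU with other sensors:

```python
import numpy as np

def integrate_imu(yaw, pos, vel, gyro_z, accel_xy, dt):
    """One planar dead-reckoning step (sketch, not a production filter).
    gyro_z: yaw rate (rad/s); accel_xy: body-frame acceleration (m/s^2)."""
    yaw = yaw + gyro_z * dt                     # integrate angular rate
    c, s = np.cos(yaw), np.sin(yaw)
    acc_world = np.array([c * accel_xy[0] - s * accel_xy[1],
                          s * accel_xy[0] + c * accel_xy[1]])
    vel = vel + acc_world * dt                  # integrate acceleration
    pos = pos + vel * dt                        # integrate velocity
    return yaw, pos, vel
```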
Fig. 4 shows an exemplary block diagram of a mobile robot according to an embodiment of the present application.
As shown in Fig. 4, the mobile robot 400 includes one or more central processing units (CPUs) 401, which can perform various appropriate actions and processes according to programs stored in a read-only memory (ROM) 402 or programs loaded from a storage section 408 into a random access memory (RAM) 403. The RAM 403 also stores various programs and data required for the operation of the mobile robot 400. The CPU 401, the ROM 402, and the RAM 403 are connected to one another by a bus 404. An input/output (I/O) interface 405 is also connected to the bus 404.
Connected to the I/O interface 405 are: a capturing section 406, which may include an image capturing device, a pose sensing device, and the like; an output section 407 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card, a modem, and the like. The communication section 409 performs communication processing via a network such as the Internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
In particular, the process described above with reference to Fig. 3 may be implemented as a computer software program according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the mapping method. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 409 and/or installed from the removable medium 411.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
In another aspect, the present application also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus described in the above embodiments, or may be a computer-readable storage medium that exists separately and is not assembled into any device. The computer-readable storage medium stores one or more programs used by one or more processors to perform the mapping method described herein.
The above description is merely illustrative of the preferred embodiments of the present application and of the principles of the technology employed. Persons skilled in the art will appreciate that the scope of the application is not limited to technical solutions formed by the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions in which the above features are replaced with technical features having similar functions disclosed in (but not limited to) the present application.