CN116152776A - Method, device, equipment and storage medium for identifying drivable area - Google Patents

Method, device, equipment and storage medium for identifying drivable area

Info

Publication number
CN116152776A
Authority
CN
China
Prior art keywords
boundary
obstacle
map
information
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211733036.1A
Other languages
Chinese (zh)
Inventor
李谦 (Li Qian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Human Horizons Shanghai Autopilot Technology Co Ltd
Original Assignee
Human Horizons Shanghai Autopilot Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2022-12-30
Publication date: 2023-05-23
Application filed by Human Horizons Shanghai Autopilot Technology Co Ltd
Priority to CN202211733036.1A
Publication of CN116152776A
Legal status: Pending (Current)

Abstract

The application relates to a method, a device, equipment and a storage medium for identifying a drivable area. The main technical scheme comprises the following steps: acquiring a surrounding environment map of a mobile robot through a camera and/or an image sensor, determining boundary information of obstacles in the surrounding environment map by using a boundary recognition model, generating a grid map of the surrounding environment of the mobile robot according to the boundary information, and identifying a travelable area of the mobile robot based on the grid map. According to the method and the device, the drivable area can be accurately identified in low computing power scenarios and collisions are avoided, thereby improving the driving safety of the mobile robot.

Description

Method, device, equipment and storage medium for identifying drivable area
Technical Field
The present disclosure relates to the field of robotics, and in particular, to a method, apparatus, device, and storage medium for identifying a drivable area.
Background
Currently, the identification of drivable areas has been widely used in highly automated industries such as autonomous driving and robotics as a method for effectively avoiding collisions. Existing methods for identifying the drivable area usually adopt a lidar plus monocular camera scheme, a binocular (multi-camera) scheme, or a monocular camera depth estimation plus drivable area identification scheme, all of which require substantial computing power and cannot accurately identify the drivable area in low computing power scenarios.
Disclosure of Invention
Based on the above, the present application provides a method, a device, equipment and a storage medium for identifying a travelable region, which can accurately identify the travelable region in low computing power scenarios.
In a first aspect, there is provided a method of identifying a drivable region, the method comprising:
acquiring a surrounding environment map of the mobile robot through a camera and/or an image sensor;
determining boundary information of obstacles existing in the surrounding environment map by utilizing a boundary recognition model;
generating a grid map of the surrounding environment of the mobile robot according to the boundary information;
based on the grid map, a travelable region of the mobile robot is identified.
According to one implementation manner in the embodiments of the present application, determining boundary information of an obstacle existing in a surrounding environment map by using a boundary recognition model includes:
obtaining a conversion matrix from an image of the mobile robot to real world physical coordinates through calibration;
generating a bird's eye view of the surrounding environment of the mobile robot according to the transformation matrix and the surrounding environment map;
according to the bird's eye view, boundary information of the obstacle is determined based on the boundary recognition model.
According to one implementation manner in the embodiment of the application, the boundary information includes boundary attribute information and boundary probability information; determining boundary information of the obstacle based on the boundary recognition model according to the bird's eye view, including:
generating a boundary segmentation image based on the boundary recognition model according to the aerial view;
and determining boundary attribute information and boundary probability information of the barrier according to the boundary segmentation image.
According to an implementation manner in the embodiments of the present application, the boundary segmentation image includes at least one segmentation line and a recognition result corresponding to the at least one segmentation line, and determining boundary attribute information and boundary probability information of the obstacle according to the boundary segmentation image includes:
constructing a contour of the obstacle according to the at least one parting line;
and determining boundary attribute information and boundary probability information of the obstacle according to the identification result corresponding to the outline of the obstacle.
According to one implementation manner in the embodiments of the present application, generating a grid map of a surrounding environment of a mobile robot according to boundary information includes:
generating an obstacle boundary map according to the boundary attribute information;
generating a boundary probability map of the obstacle according to the boundary map and the boundary probability information;
and generating a grid map of the surrounding environment of the mobile robot according to the boundary probability map and the preset boundary probability value.
According to one implementation manner in the embodiments of the present application, generating a boundary probability map of an obstacle according to the boundary map and the boundary probability information includes:
determining position information of the obstacle according to the distribution condition of the obstacle in the obstacle boundary map;
and generating a boundary probability map of the obstacle according to the position information and the boundary probability information.
According to an implementation manner in the embodiments of the present application, the boundary probability map includes boundary probability values of at least one obstacle, and generating a grid map of an environment surrounding the mobile robot according to the boundary probability map and a preset boundary probability value includes:
obtaining a boundary probability value of the obstacle according to the boundary probability map;
displaying the obstacle on the grid map when the boundary probability value of the obstacle is larger than the preset boundary probability value;
and when the boundary probability value of the obstacle is smaller than or equal to the preset boundary probability value, displaying the position of the obstacle as an unknown area on the grid map.
According to one implementation manner in an embodiment of the present application, a grid map includes a plurality of grids, and identifying a drivable area of a mobile robot based on the grid map includes:
determining the positions of the mobile robot and the obstacle according to the grids of the mobile robot and the obstacle in the grid map;
the area of the mobile robot in the direction of the obstacle rays is identified as a travelable area of the mobile robot according to the positions of the mobile robot and the obstacle.
According to one implementation manner in the embodiments of the present application, the method further includes:
updating boundary information of a current barrier in the current surrounding environment map according to the current surrounding environment map acquired in real time to acquire current boundary information;
and updating the grid map of the current surrounding environment of the mobile robot according to the current boundary information.
According to one implementation manner in the embodiments of the present application, the current boundary information includes current position information and current boundary probability information of a current obstacle, and updating a grid map of a current surrounding environment of the mobile robot according to the current boundary information includes:
determining an initial boundary probability value of a corresponding position of the current obstacle in the grid map according to the current position information, wherein the initial boundary probability value is the boundary probability value of the obstacle in the grid map before the current boundary information is updated;
superposing the current boundary probability value and the initial boundary probability value to obtain the latest boundary probability value;
and updating the grid map of the current surrounding environment of the mobile robot according to the latest boundary probability value.
In a second aspect, there is provided a computer device comprising:
at least one processor; and a memory communicatively coupled to the at least one processor; wherein,
the memory stores computer instructions executable by the at least one processor to enable the at least one processor to perform the method referred to in the first aspect above.
In a third aspect, there is provided a computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method referred to in the first aspect above.
According to the technical content provided by the embodiments of the present application, boundary recognition is performed on the surrounding environment map of the mobile robot by the boundary recognition model to obtain the boundary information of obstacles, a grid map of the surrounding environment of the mobile robot is generated according to the boundary information, and the drivable area of the mobile robot is identified based on the grid map. Detecting the drivable area by identifying obstacle boundaries requires less computing power than processing the whole image, so the drivable area can be accurately identified even in low computing power scenarios, collisions are avoided, and the driving safety of the mobile robot is improved.
Drawings
FIG. 1 is an application environment diagram of a method of identifying a travelable region in one embodiment;
FIG. 2 is a flow chart of a method for identifying a travelable region in one embodiment;
FIG. 3 is a flow chart illustrating a boundary information determining step in one embodiment;
FIG. 4A is a schematic diagram of a bird's eye view of the environment surrounding a mobile robot in one embodiment;
FIG. 4B is a schematic diagram of a boundary-segmentation graph, according to one embodiment;
FIG. 5 is a flow diagram of a grid map generation step in one embodiment;
FIG. 6 is a schematic diagram of an obstacle boundary map in one embodiment;
FIG. 7 is a schematic diagram of a boundary probability map in one embodiment;
FIG. 8 is a schematic diagram of a travelable region of a mobile robot to a single pixel of an obstacle in one embodiment;
FIG. 9 is a flow chart of a region of travel identification step in one embodiment;
FIG. 10 is a schematic diagram of a travelable region of a mobile robot to an obstacle in one embodiment;
FIG. 11 is a flowchart illustrating a grid map update procedure in one embodiment;
FIG. 12 is a schematic block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In the method for identifying a travelable area provided in the present application, a mobile robot is a machine device that performs work automatically, and may be classified, according to its moving manner, into wheeled mobile robots, walking mobile robots, crawler mobile robots, crawling robots, peristaltic robots, and the like. Wheeled mobile robots include two-wheeled, three-wheeled, four-wheeled mobile robots, and the like. Taking a wheeled mobile robot such as a vehicle as an example, the method of identifying a drivable area may be applied to the system architecture shown in fig. 1. The vehicle 100 includes a vehicle-mounted terminal 110; the vehicle-mounted terminal 110 acquires a surrounding environment image of the mobile robot through a camera and/or an image sensor, where the surrounding environment image includes at least one obstacle, determines boundary information of obstacles in the surrounding environment image by using a boundary recognition model, generates a grid map of the surrounding environment of the mobile robot according to the boundary information, and identifies a drivable area of the mobile robot based on the grid map.
The in-vehicle terminal 110 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices connected to a vehicle. It should be noted that the system architecture shown in fig. 1 is merely an example; the vehicle 100 may be replaced with a probe, a sweeper, etc., and the vehicle-mounted terminal 110 may be replaced with a probe terminal, a sweeper terminal, etc.
Fig. 2 is a flowchart of a method for identifying a travelable region according to an embodiment of the present application, and the method may be performed by the vehicle-mounted terminal 110 in the system shown in fig. 1. As shown in fig. 2, the method may include the following steps:
s210, acquiring a surrounding environment map of the mobile robot through a camera and/or an image sensor.
Specifically, the surrounding environment map of the mobile robot is acquired through a shooting device configured by the mobile robot, the surrounding environment map can be an environment map within a preset distance of the mobile robot, and the preset distance can be determined according to actual service requirements, actual product requirements or actual application scenes. The surrounding environment map comprises at least one obstacle, and the obstacle is an object which causes interference to the running of the mobile robot, and can be a person, a vehicle, a small animal, a fence and the like.
The photographing device may include a camera and an image sensor, among others. The surrounding environment map of the mobile robot can be obtained in real time through the shooting device, the surrounding environment map which is shot and stored by the shooting device can also be obtained from the storage module of the mobile robot, and the corresponding obtaining mode is selected to obtain the surrounding environment map according to the requirements of different application scenes, so that an image foundation is provided for identifying an accurate drivable area.
The image data acquired by the camera or the image sensor is two-dimensional data; compared with three-dimensional data acquired by other capture devices, it can be processed with low computing power, saving computing resources and reducing cost. In low computing power scenarios, an accurate drivable area can still be obtained by applying this method for identifying the drivable area to the acquired surrounding environment map.
S220, determining boundary information of obstacles existing in the surrounding environment map by utilizing the boundary recognition model.
The method comprises the steps of inputting a surrounding environment diagram into a pre-trained boundary recognition model, recognizing an obstacle in the surrounding environment diagram by the boundary recognition model, and outputting boundary information of the obstacle.
Specifically, the surrounding environment map is input into the boundary recognition model, and the boundary of each obstacle is recognized by the boundary recognition model to obtain the corresponding boundary information. The boundary identified by the boundary recognition model is the boundary between the obstacle and the drivable area; each boundary comprises at least one pixel point, and the boundary information comprises the boundary attribute of each pixel point and boundary probability information indicating whether the pixel point belongs to an obstacle. The lidar conventionally used to identify obstacle boundaries in an image can only provide distance information and cannot provide semantic information. By contrast, the boundary of an obstacle obtained with the boundary recognition model carries semantic information: for example, whether a boundary belongs to a vehicle or a wall can be determined from the boundary attribute, which helps to divide the drivable area accurately.
By identifying obstacle boundaries in the surrounding environment map, less computation is required than for identifying the entire image, which reduces the computing power requirements of the mobile robot. In low computing power scenarios, the computing load of the mobile robot can be reduced as much as possible while the accuracy of the identified travelable area is preserved to the greatest extent.
S230, generating a grid map of the surrounding environment of the mobile robot according to the boundary information.
The grid map is a map that divides the surrounding map into a series of grids, wherein each grid corresponds to a pixel in the surrounding map, and if the pixel corresponding to the grid is an obstacle, the grid will give a probability value to indicate the probability that the grid is occupied, so as to further determine whether the grid really has the obstacle.
Boundary attribute values and boundary probability values of the obstacles in the surrounding environment map are determined based on the boundary information, and the obstacles in the surrounding environment of the mobile robot are displayed on the grid map according to these values and the division ratio between the surrounding environment map and the grid map. When the grid map is generated, the mobile robot can be placed at the center of the grid map or at any other position of the grid map for display; it is only necessary to ensure that all obstacles can be completely displayed on the grid map.
The dividing ratio of the grid map can be set arbitrarily according to the requirement, for example, the obstacle 1 is located at a position 50 meters above the mobile robot, and the length of each grid in the grid map can be set to 10 meters, so that in the grid map, the obstacle 1 is located at 5 grids above the mobile robot.
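For illustration only, the division ratio described above can be realized as a small helper that maps a robot-centered offset in meters to grid indices. This is a minimal sketch, not part of the patent; the function name, cell size, and coordinate convention are assumptions.

import math

def world_to_grid(dx_m: float, dy_m: float, cell_size_m: float = 10.0) -> tuple:
    # Convert a (dx, dy) offset from the robot, in meters, to grid indices.
    return math.floor(dx_m / cell_size_m), math.floor(dy_m / cell_size_m)

# Example from the text: an obstacle 50 m away with 10 m cells lands 5 cells from the robot.
print(world_to_grid(0.0, 50.0))  # (0, 5)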
The position relationship between the mobile robot and the obstacle can be intuitively and vividly known through the grid map, and the method is beneficial to accurately identifying the drivable area.
S240, identifying a travelable area of the mobile robot based on the grid map.
The travelable area is an unobstructed area in which the mobile robot can travel safely. The distribution of obstacles in the surrounding environment of the mobile robot can be known from the grid map; the area along the connecting line between the mobile robot and an obstacle is generally free of obstacles, and such obstacle-free areas are determined and divided into travelable areas. Knowing the global distribution of obstacles around the mobile robot from the grid map, the travelable area of the mobile robot can be identified comprehensively and accurately.
It can be seen that, in the embodiment of the present application, the surrounding environment map of the mobile robot is obtained through a camera and/or an image sensor, the boundary information of obstacles in the surrounding environment map is determined using the boundary recognition model, the grid map of the surrounding environment of the mobile robot is generated according to the boundary information, and the drivable area of the mobile robot is identified based on the grid map. Only the obstacle boundaries are identified and the distribution of the mobile robot and the obstacles is determined on the grid map, so the drivable area can be accurately identified even in low computing power scenarios, collisions are avoided, and the driving safety of the mobile robot is improved.
The steps in the above-described process flow are described in detail below. The above-described S220, i.e. "determination of boundary information of an obstacle present in the surrounding environment map using the boundary recognition model" will be described in detail first with reference to the embodiment.
As one possible way, the conversion matrix of the image of the mobile robot to the real world physical coordinates is obtained by calibration;
generating a bird's eye view of the surrounding environment of the mobile robot according to the transformation matrix and the surrounding environment map;
according to the bird's eye view, boundary information of the obstacle is determined based on the boundary recognition model.
The calibration disc is placed in an area around the mobile robot and mainly provides reference points for calibration, and the calibration disc comprises a plurality of calibration points, wherein each calibration point is a reference point. The calibration disk can adopt a checkerboard calibration disk or a solid circle calibration disk, the checkerboard calibration disk comprises a plurality of black and white checks, the corner points of the checkerboard are used as calibration reference points, the solid circle calibration disk comprises a plurality of circle centers, and each circle center is used as a calibration reference point.
The camera of the mobile robot recognizes the pixel point of each marking point on the calibration disk, a coordinate system is established by taking the center of the mobile robot or a certain position on the mobile robot as an origin, and the real physical coordinates of each marking point are measured. And obtaining a transformation matrix from the pixel point to the real physical coordinates, namely, a projection transformation relation from the ground plane to the image plane according to the mapping from the pixel point to the real physical coordinates.
Further, each pixel point in the surrounding environment map can be mapped into a real physical coordinate system by adopting a transformation matrix, and the surrounding environment maps under different shooting angles are spliced to obtain a bird's eye view map of the surrounding environment of the mobile robot.
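A minimal sketch of this calibration-and-warping step is given below, assuming OpenCV is used; the calibration points, scale, and output size are placeholder values, and the patent itself does not prescribe a particular library.

import cv2
import numpy as np

# Pixel coordinates of calibration points detected in the camera image (assumed values).
pixel_pts = np.array([[320, 480], [960, 480], [320, 240], [960, 240]], dtype=np.float32)
# Measured ground-plane coordinates of the same points, in meters, robot-centered (assumed).
world_pts = np.array([[-1.0, 2.0], [1.0, 2.0], [-1.0, 6.0], [1.0, 6.0]], dtype=np.float32)

# Scale the world coordinates to bird's-eye-view pixels (e.g. 50 px per meter) before fitting.
px_per_m = 50.0
bev_pts = world_pts * px_per_m + np.array([400.0, 0.0], dtype=np.float32)

H, _ = cv2.findHomography(pixel_pts, bev_pts)      # conversion matrix obtained from calibration
frame = np.zeros((720, 1280, 3), dtype=np.uint8)   # stands in for one surrounding-environment image
bev = cv2.warpPerspective(frame, H, (800, 600))    # bird's-eye view for this camera
# Bird's-eye views from cameras at different shooting angles would then be stitched together.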
Finally, the bird's-eye view can be input into the boundary recognition model; the boundary recognition model extracts features from the input bird's-eye view, classifies the obstacles in the bird's-eye view according to the extracted features, and produces output information, from which the boundary information of the obstacles is determined. The boundary information includes boundary attribute information and boundary probability information. The boundary attribute information represents the type of the obstacle and may include person, car, wall, and the like. The boundary probability information indicates the probability value, for example 0.9 or 0.85, that a pixel point in the bird's-eye view belongs to an obstacle.
As one possible way, as shown in fig. 3, determining boundary information of an obstacle based on a boundary recognition model according to a bird's eye view includes:
s310, generating a boundary segmentation image based on the boundary recognition model according to the aerial view.
The boundary recognition model is a deep learning model, the bird's eye view is input into the boundary recognition model, the boundary recognition model generates a boundary segmentation image after forward reasoning, and the size of the boundary segmentation image is the same as that of the bird's eye view. The boundary segmentation image is an image obtained according to the boundary between the obstacle and the drivable area in the aerial view, and comprises at least one segmentation line and a recognition result corresponding to the at least one segmentation line, wherein the recognition result comprises a boundary attribute value and a boundary probability value corresponding to each pixel point of the obstacle.
Fig. 4A is a bird's eye view of the surrounding environment of the mobile robot, fig. 4B is a boundary-divided image, and fig. 4A is inputted into a boundary recognition model to obtain fig. 4B. As can be seen from fig. 4B, the plurality of dividing lines in the boundary-divided image separate the mobile robot from the obstacle, so that the travelable region can be further determined.
S320, determining boundary attribute information and boundary probability information of the barrier according to the boundary segmentation image.
The mobile robot reads the boundary segmentation image to obtain a boundary attribute value and a boundary probability value corresponding to each pixel point of the obstacle, and further determines boundary attribute information and boundary probability information of the obstacle. The boundary attribute information comprises boundary attribute values and position information corresponding to each pixel point of the obstacle, and the boundary probability information comprises boundary probability values corresponding to each pixel point of the obstacle.
As one implementation, S320 includes: constructing a contour of the obstacle according to the at least one parting line;
and determining boundary attribute information and boundary probability information of the obstacle according to the identification result corresponding to the outline of the obstacle.
Wherein each obstacle is basically a polyhedron or a single line, and the outline of the obstacle is divided according to at least one dividing line in the boundary dividing image. As shown in fig. 4B, the profile is a profile of all the obstacles, not a profile of a single obstacle, and the obstacle properties cannot be known only by the profile. According to the recognition result corresponding to the outline of the obstacle, the obstacle attribute value corresponding to each pixel point of the outline of the obstacle is known, and the type of the obstacle can be distinguished more accurately.
Further, the distance between each pixel point in the obstacle outline and the mobile robot can be obtained from the boundary segmentation image, so that the position information of each pixel point is obtained, then the boundary attribute value and the position information of each pixel point are used as boundary attribute information, the boundary probability value of each pixel point is used as boundary probability information, and finally the boundary attribute information and the boundary probability information of all the pixel points are integrated, so that the boundary attribute information and the boundary probability information of the obstacle are obtained.
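As an illustration of how a per-pixel model output could be split into the two kinds of boundary information, the sketch below assumes the model emits a class-score map per pixel and reads the attribute as the arg-max class and the probability as the softmax maximum; this interface is an assumption, not the patent's specification.

import numpy as np

def split_boundary_output(logits):
    # logits: array of shape (num_classes, H, W); class 0 is assumed to mean "no boundary".
    exp = np.exp(logits - logits.max(axis=0, keepdims=True))
    probs = exp / exp.sum(axis=0, keepdims=True)   # softmax over classes
    attr = probs.argmax(axis=0)                    # boundary attribute value per pixel
    prob = probs.max(axis=0)                       # boundary probability value per pixel
    return attr, prob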
The above S230, that is, "generating a grid map of the surrounding environment of the mobile robot from the boundary information" will be described in detail with reference to the embodiments.
As an implementation manner, as shown in fig. 5, S230 specifically includes:
s231, generating an obstacle boundary map according to the boundary attribute information.
Firstly, position information of each pixel point is obtained according to the boundary attribute information, and the grid where each pixel is located is determined according to the position information of the pixel point. Then, the boundary attribute value of each pixel point is written into the corresponding grid to obtain an obstacle boundary map. The obstacle attribute values may be 0, 1, 2, 3, etc., where each number may represent a different obstacle; for example, 1 may represent a person and 2 may represent a car.
As shown in the obstacle boundary map of fig. 6, the boundary attribute values in the 4 grids at the upper left of the mobile robot are 2, the boundary attribute values in the 2 grids at the upper right are 1, the boundary attribute values in the 5 grids at the lower left are 2, and the boundary attribute values in the 5 grids at the lower right are 3. Wherein 1 denotes a person, 2 denotes a car, and 3 denotes a wall. That is, the upper left and lower left obstacles of the mobile robot are vehicles, the upper right obstacle of the mobile robot is a person, and the lower right obstacle of the mobile robot is a wall.
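A minimal sketch of building such an obstacle boundary map is shown below; the grid dimensions, cell size, and robot-centered coordinate convention are assumptions made only for illustration.

import numpy as np

GRID_H, GRID_W, CELL_M = 40, 40, 0.5   # assumed grid size and resolution

def build_boundary_map(boundary_pixels):
    # boundary_pixels: iterable of (x_m, y_m, attribute) in robot-centered meters.
    bmap = np.zeros((GRID_H, GRID_W), dtype=np.uint8)   # 0 = no boundary
    for x_m, y_m, attr in boundary_pixels:
        col = GRID_W // 2 + int(x_m / CELL_M)           # robot placed at the grid center
        row = GRID_H // 2 - int(y_m / CELL_M)
        if 0 <= row < GRID_H and 0 <= col < GRID_W:
            bmap[row, col] = attr                       # e.g. 1 = person, 2 = car, 3 = wall
    return bmap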
S232, generating a boundary probability map of the obstacle according to the boundary map of the obstacle and the boundary probability information.
Since the boundary recognition model cannot completely and accurately recognize the obstacle on the aerial view, each pixel point recognized as the obstacle corresponds to one boundary probability value, so that recognition errors of the obstacle are reduced. The range of the boundary probability value is (0, 1).
After the obstacle boundary map is obtained, the boundary probability information is added to the obstacle boundary map to obtain the boundary probability map of the obstacle.
As one possible way, determining the position information of the obstacle according to the distribution condition of the obstacle in the obstacle boundary map;
and generating a boundary probability map of the obstacle according to the position information and the boundary probability information.
Wherein the position information includes coordinates of the obstacle in the obstacle boundary map. According to the distribution of the obstacle in the obstacle boundary map, the coordinates of the obstacle, that is, the coordinates of the obstacle corresponding to each pixel point, can be determined. And determining the grid of each pixel point according to the coordinates of the pixel point, and filling the boundary probability value corresponding to the pixel point into the grid of each pixel point to obtain the boundary probability map of the obstacle.
As shown in fig. 7, building on the obstacle boundary map of fig. 6, the grids with boundary attribute value 2 at the upper left of the mobile robot are filled from top to bottom with 0.9, 0.89, and 0.91; the grids with boundary attribute value 1 at the upper right are filled from top to bottom with 0.9 and 0.8; the grids with boundary attribute value 2 at the lower left are filled from top to bottom with 0.95, 0.93, 0.91, 0.87, and 0.7; and the grids with boundary attribute value 3 at the lower right are filled from top to bottom with 0.8 and 0.9.
The probability of whether each pixel point is an obstacle can be intuitively seen through the boundary probability map, which is favorable for further determining the position of the obstacle and improving the accuracy of identifying the drivable area.
S233, generating a grid map of the surrounding environment of the mobile robot according to the boundary probability map and the preset boundary probability value.
Wherein the boundary probability map comprises boundary probability values for at least one obstacle. And finally determining the obstacle, the drivable area and the unknown area by comparing the boundary probability value with a preset boundary probability value. Based on the division of different areas, generating a grid map of the surrounding environment of the mobile robot, and marking each area with different colors.
As one implementation manner, S233 specifically includes: obtaining a boundary probability value of the obstacle according to the boundary probability map;
and displaying the obstacle on the grid map when the boundary probability value of the obstacle is larger than the preset boundary probability value.
The preset boundary probability value can be set according to actual requirements: in scenarios where obstacles need to be identified with high precision, the preset boundary probability value can be set larger; in scenarios where obstacles only need to be identified roughly, it can be set smaller.
Specifically, when the boundary probability value of the obstacle is greater than the preset boundary probability value, the probability that the obstacle is an obstacle in the real world is particularly high, the obstacle can be basically determined to be an obstacle in the real world, and the grid where the boundary probability value greater than the preset boundary probability value is located in the grid map is displayed as the obstacle. As shown in fig. 8, a certain pixel of the obstacle may be marked with black on the grid map.
Otherwise, when the boundary probability value of the obstacle is smaller than or equal to the preset boundary probability value, displaying the position of the obstacle as an unknown area on the grid map, wherein the unknown area can be an area which is not shot by the mobile robot or an area which is blocked by the obstacle.
Specifically, when the boundary probability value of the obstacle is smaller than or equal to the preset boundary probability value, the probability that the obstacle is an obstacle in the real world is relatively small, the obstacle cannot be completely determined to be an obstacle in the real world, and a grid where the boundary probability value smaller than or equal to the preset boundary probability value in the grid map is located is displayed as an unknown area. As shown in fig. 8, the unknown region may be marked with white on the grid map.
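The comparison with the preset boundary probability value can be sketched as follows; the display codes, the threshold value, and the handling of cells with no detection are illustrative assumptions rather than the patent's specification.

import numpy as np

OBSTACLE, UNKNOWN, FREE = 2, 1, 0           # assumed display codes

def classify_cells(prob_map, preset=0.85):
    # prob_map: per-cell boundary probability, 0 where no boundary pixel landed.
    grid = np.full(prob_map.shape, FREE, dtype=np.uint8)
    grid[prob_map > 0] = UNKNOWN            # detected, but not above the preset threshold
    grid[prob_map > preset] = OBSTACLE      # confident enough to display as an obstacle
    return grid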
The above-described S240, that is, "identifying a travelable region of a mobile robot based on a grid map" will be described in detail with reference to an embodiment.
As one implementation, as shown in fig. 9, S240 may include:
s241, determining the positions of the mobile robot and the obstacle according to the grids of the mobile robot and the obstacle in the grid map.
S242, the area of the mobile robot in the direction of the obstacle ray is identified as the travelable area of the mobile robot according to the positions of the mobile robot and the obstacle.
Since each obstacle comprises at least one pixel point and each grid in the grid map corresponds to one pixel point, when the travelable area of the mobile robot is identified, the area along the ray from the mobile robot toward each grid is identified. As shown in fig. 8, the area along the ray from the mobile robot toward the first of the 4 grids at its upper left is identified as a travelable area and marked in gray. After the areas in all grid ray directions corresponding to the obstacles have been identified, the travelable area in the bird's-eye view is obtained. As shown in fig. 10, the black area is an obstacle, the white area is an unknown area, and the gray area is the travelable area.
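One plausible way to mark the cells along the ray from the robot's cell toward an obstacle cell is a simple integer line walk, sketched below; the Bresenham-style traversal and the cell codes are assumptions, since the patent does not name a specific line algorithm.

def mark_ray_free(grid, robot_rc, obstacle_rc, free_code=3):
    # Walk from the robot cell toward the obstacle cell, marking cells as travelable,
    # and stop just before entering the obstacle cell itself.
    r, c = robot_rc
    r1, c1 = obstacle_rc
    dr, dc = abs(r1 - r), abs(c1 - c)
    sr = 1 if r1 > r else -1
    sc = 1 if c1 > c else -1
    err = dr - dc
    while (r, c) != (r1, c1):
        grid[r][c] = free_code              # gray / travelable in the figure's convention
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
    return grid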
Further, when a new surrounding environment map is acquired at the next moment, and then the bird's eye view map is updated, new obstacle distribution exists, and accordingly, the drivable area also changes.
As an implementation manner, as shown in fig. 11, the method further includes the following steps:
s1110, updating the boundary information of the current obstacle in the current surrounding environment map according to the current surrounding environment map acquired in real time to obtain the current boundary information.
A current bird's-eye view is generated from the current surrounding environment image acquired in real time, using the transformation matrix, and the current bird's-eye view is input into the boundary recognition model to obtain the corresponding current boundary segmentation image. Based on the current boundary segmentation image, the boundary information of the current obstacle is read. The current boundary information includes the current boundary attribute value, current position information, and current boundary probability information of the current obstacle.
S1120, updating the grid map of the current surrounding environment of the mobile robot according to the current boundary information.
The boundary attribute value and the boundary probability value of the barrier in the grid map can be updated sequentially based on the current boundary information, and the grid map of the current surrounding environment of the mobile robot is updated based on the updated boundary probability value.
As one implementation, S1120 includes: determining an initial boundary probability value of a corresponding position of the current obstacle in the grid map according to the current position information, wherein the initial boundary probability value is the boundary probability value of the obstacle in the grid map before the current boundary information is updated;
superposing the current boundary probability value and the initial boundary probability value to obtain the latest boundary probability value;
and updating the grid map of the current surrounding environment of the mobile robot according to the latest boundary probability value.
The current boundary probability value is a boundary probability value corresponding to an obstacle at the current moment, and the initial boundary probability value is a boundary probability value of the obstacle in the grid map before the current boundary information is updated, i.e. the boundary probability value corresponding to the obstacle at the last moment. If the grid where the current obstacle is located has no obstacle at the previous time, the initial boundary probability value is 0.
Further, the current boundary probability value and the initial boundary probability value are overlapped to obtain a latest boundary probability value, the logarithmic value of the latest boundary probability value is obtained and is used as a final boundary probability value of the current obstacle, and the grid map of the current surrounding environment of the mobile robot is updated based on the final boundary probability value. And identifying the area of the mobile robot in the current obstacle ray direction, and obtaining a new travelable area.
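One common realization of this superposition is the standard occupancy-grid log-odds update, sketched below. This is an assumption about the exact formula: the patent only states that the current and initial boundary probability values are superposed and that a logarithmic value is taken.

import math

def log_odds(p):
    return math.log(p / (1.0 - p))

def fuse_boundary_probability(prev_p, curr_p):
    # prev_p: cell probability before the update (0 if the cell previously had no obstacle);
    # curr_p: probability from the current observation. Returns the fused probability.
    prev_p = min(max(prev_p, 1e-3), 1.0 - 1e-3)   # clamp so the log-odds stay finite
    curr_p = min(max(curr_p, 1e-3), 1.0 - 1e-3)
    fused = log_odds(prev_p) + log_odds(curr_p)   # superposition in log-odds space
    return 1.0 / (1.0 + math.exp(-fused))

# Example: a cell observed at 0.7 in two successive frames becomes more confidently occupied.
print(round(fuse_boundary_probability(0.7, 0.7), 3))   # about 0.845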
By superposing the boundary probability values of the current moment and the previous moment to update the grid map, previously observed obstacles are retained and newly observed obstacles are added, so that no obstacle within the moving range of the mobile robot is missed, the identification result of the travelable area is more accurate, and collisions are effectively avoided.
As one implementation, the method further includes:
acquiring a sample aerial view generated by the mobile robot according to the mobile environment map;
labeling the obstacles in the sample aerial view to obtain corresponding obstacle boundary labels;
and training an initial neural network model according to the sample aerial view and the obstacle boundary label to obtain a boundary recognition model.
The mobile environment map is an environment map through which the mobile robot runs in a plurality of scenes, the mobile environment map comprises obstacles in a plurality of scenes, and model training is performed based on the mobile environment map. The method described in S220 is used to generate a bird's-eye view from the moving environment map, and the bird's-eye view is used as a sample bird's-eye view. Boundary attribute values are marked for the obstacles in the sample bird's-eye view, and boundary dividing lines of the obstacles are marked, to obtain obstacle boundary labels.
The sample bird's-eye view and the obstacle boundary labels are taken as training sample data, and a deep learning model is adopted as the initial neural network model. When training the boundary recognition model using the training sample data, a sample bird's-eye view may be used as the input of the deep learning model, the obstacle boundary label in the sample bird's-eye view may be used as the target output of the deep learning model, and the deep learning model obtained by training is used as the boundary recognition model. Since the forward inference of the deep learning model cannot identify obstacles with 100% certainty, the output result includes a boundary probability value in addition to the boundary attribute value, and the boundary probability value represents the uncertainty of the deep learning model.
Based on the boundary recognition model obtained by training, when the bird's-eye view generated by the moving environment map is input, the image features of the bird's-eye view can be extracted by the boundary recognition model, the boundary of the obstacle can be recognized based on the image features, the boundary attribute value and the boundary probability value can be confirmed, the boundary dividing line can be extracted, and the boundary dividing map corresponding to the bird's-eye view can be output.
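A minimal training sketch consistent with this description is given below, assuming PyTorch and a toy network; the architecture, loss, and hyperparameters are placeholders, since the patent only specifies a deep learning model trained on sample bird's-eye views and obstacle boundary labels.

import torch
import torch.nn as nn

class TinyBoundaryNet(nn.Module):
    # Toy encoder predicting per-pixel boundary class scores; a stand-in for the real model.
    def __init__(self, num_classes=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, num_classes, 1),
        )

    def forward(self, x):
        return self.body(x)   # shape (B, num_classes, H, W)

model = TinyBoundaryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# bev_batch stands in for sample bird's-eye views; label_batch for per-pixel boundary labels.
bev_batch = torch.rand(2, 3, 128, 128)
label_batch = torch.randint(0, 4, (2, 128, 128))

for _ in range(10):                        # minimal training loop
    optimizer.zero_grad()
    loss = criterion(model(bev_batch), label_batch)
    loss.backward()
    optimizer.step()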
It should be understood that, although the steps in the flowcharts of fig. 2, 3, 5, 9, and 11 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated in the present application, the execution order of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps of fig. 2, 3, 5, 9, and 11 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; their execution order is not necessarily sequential either, and they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
According to embodiments of the present application, there is also provided a computer device, a computer-readable storage medium.
As shown in fig. 12, is a block diagram of a computer device according to an embodiment of the present application. Computer equipment is intended to represent various forms of digital computers or mobile devices. Wherein the digital computer may comprise a desktop computer, a portable computer, a workstation, a personal digital assistant, a server, a mainframe computer, and other suitable computers. The mobile device may include a tablet, a smart phone, a wearable device, etc.
As shown in fig. 12, the apparatus 1200 includes a computing unit 1201, a ROM 1202, a RAM 1203, a bus 1204, and an input/output (I/O) interface 1205; the computing unit 1201, the ROM 1202, and the RAM 1203 are connected to each other through the bus 1204. The input/output (I/O) interface 1205 is also connected to the bus 1204.
The computing unit 1201 may perform various processes in the method embodiments of the present application according to computer instructions stored in a Read Only Memory (ROM) 1202 or computer instructions loaded from a storage unit 1208 into a Random Access Memory (RAM) 1203. The computing unit 1201 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. The computing unit 1201 may include, but is not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. In some embodiments, the methods provided by the embodiments of the present application may be implemented as a computer software program tangibly embodied on a computer-readable storage medium, such as the storage unit 1208.
The RAM 1203 may also store various programs and data required for the operation of the device 1200. Part or all of the computer program may be loaded and/or installed onto the device 1200 via the ROM 1202 and/or the communication unit 1209.
An input unit 1206, an output unit 1207, a storage unit 1208, and a communication unit 1209 in the device 1200 may be connected to the I/O interface 1205. The input unit 1206 may be, for example, a keyboard, mouse, touch screen, microphone, etc.; the output unit 1207 may be, for example, a display, a speaker, an indicator light, or the like. The device 1200 can exchange information, data, and the like with other devices through the communication unit 1209.
It should be noted that the device may also include other components necessary to achieve proper operation. It may also include only the components necessary to implement the present application, and not necessarily all the components shown in the figures.
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
Computer instructions for implementing the methods of the present application may be written in any combination of one or more programming languages. These computer instructions may be provided to the computing unit 1201 such that the computer instructions, when executed by the computing unit 1201, such as a processor, cause the steps involved in the method embodiments of the present application to be performed.
The computer readable storage medium provided herein may be a tangible medium that may contain, or store, computer instructions for performing the steps involved in the method embodiments of the present application. The computer readable storage medium may include, but is not limited to, storage media in the form of electronic, magnetic, optical, electromagnetic, and the like.
The above embodiments do not limit the scope of the application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (12)

CN202211733036.1A — filed 2022-12-30, priority 2022-12-30 — Method, device, equipment and storage medium for identifying drivable area — Pending — published as CN116152776A (en)

Priority Applications (1)

Application Number: CN202211733036.1A (CN116152776A, en) — Priority Date: 2022-12-30 — Filing Date: 2022-12-30 — Title: Method, device, equipment and storage medium for identifying drivable area

Applications Claiming Priority (1)

Application Number: CN202211733036.1A (CN116152776A, en) — Priority Date: 2022-12-30 — Filing Date: 2022-12-30 — Title: Method, device, equipment and storage medium for identifying drivable area

Publications (1)

Publication Number: CN116152776A — Publication Date: 2023-05-23

Family

ID=86372889

Family Applications (1)

Application Number: CN202211733036.1A (Pending; published as CN116152776A, en) — Priority Date: 2022-12-30 — Filing Date: 2022-12-30 — Title: Method, device, equipment and storage medium for identifying drivable area

Country Status (1)

Country: CN — CN116152776A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
WO2025098153A1 (en)* — priority date 2023-11-06 — published 2025-05-15 — assignee 深圳库犸科技有限公司 — Operation area determination method, computer device, and computer-readable storage medium


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
