Disclosure of Invention
The embodiments of the application mainly aim to provide a method and a device for detecting obstacles in a blind area of a commercial vehicle, which can improve the accuracy of detecting obstacles in the blind area of the commercial vehicle.
The embodiment of the application provides a method for detecting blind area obstacles of a commercial vehicle, which comprises the following steps:
acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle;
performing fusion processing on the first target image and the second target image to obtain a fusion result;
and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
Optionally, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
Optionally, the shooting angle of the fisheye camera is a 180-degree wide angle.
Optionally, performing fusion processing on the first target image and the second target image to obtain a fusion result includes:
respectively carrying out image preprocessing on the first target image and the second target image to obtain a first target object contained in the first target image and a second target object contained in the second target image;
judging whether the relative distance between the first target object and the second target object is within a preset distance threshold range or not, and judging whether the relative speed between the first target object and the second target object is within a preset speed difference range or not;
if the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range, determining that the first target object and the second target object are the same target object;
and if the relative distance between the first target object and the second target object is judged not to be within a preset distance threshold range and/or the relative speed between the first target object and the second target object is judged not to be within a preset speed difference range, determining that the first target object and the second target object are not the same target object.
Optionally, determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result includes:
mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and determining the position of a target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
The embodiment of the application further provides a device for detecting blind area obstacles of a commercial vehicle, including:
the first target image acquisition unit is used for acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
the second target image acquisition unit is used for acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle;
the target image fusion unit is used for carrying out fusion processing on the first target image and the second target image to obtain a fusion result;
and the position determining unit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
Optionally, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
Optionally, the shooting angle of the fisheye camera is a 180-degree wide angle.
Optionally, the target image fusion unit includes:
a preprocessing subunit, configured to perform image preprocessing on the first target image and the second target image, respectively, to obtain a first target object included in the first target image and a second target object included in the second target image;
the judging subunit is configured to judge whether a relative distance between the first target object and the second target object is within a preset distance threshold range, and judge whether a relative speed between the first target object and the second target object is within a preset speed difference range;
the first determining subunit is configured to determine that the first target object and the second target object are the same target object if it is determined that the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range;
and the second determining subunit is configured to determine that the first target object and the second target object are not the same target object if it is determined that the relative distance between the first target object and the second target object is not within a preset distance threshold range and/or the relative speed between the first target object and the second target object is not within a preset speed difference range.
Optionally, the position determining unit includes:
the mapping subunit is used for mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and the third determining subunit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
The embodiment of the application further provides equipment for detecting blind area obstacles of a commercial vehicle, including: a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory is used for storing one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform any one implementation of the above-mentioned method for detecting obstacles in blind areas of a commercial vehicle.
The embodiment of the application further provides a computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is caused to execute any implementation of the above method for detecting blind area obstacles of a commercial vehicle.
When detecting an obstacle in the blind area in front of the target commercial vehicle, the embodiment of the application first acquires a first target image to be detected through the vehicle-mounted fisheye camera of the target commercial vehicle, wherein the first target image is an image including the blind area of the target commercial vehicle; at the same time, a second target image to be detected is acquired through the vehicle-mounted plane camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle is determined according to the fusion result. In this way, by installing a fisheye camera on the target commercial vehicle in advance and using it to capture an image of the blind area, that image can be fused with the forward image captured by the vehicle-mounted plane camera, so that the positions of all obstacles contained in the front blind area of the target commercial vehicle can be accurately identified from the fusion result, thereby ensuring the safe driving of the target commercial vehicle.
Detailed Description
In some methods for detecting obstacles in front of a commercial vehicle, detection is generally performed on images captured by a plane camera mounted on the inner side of the windshield. However, the horizontal and vertical fields of view of such a plane camera are limited, so a large blind area often arises at the near end in front of the commercial vehicle. For example, as shown in fig. 1, viewed from the side, if only a forward-looking plane camera is mounted inside the windshield, a blind area of about 3 to 5 meters exists at the near end, because the forward-looking plane camera can only capture images within a range of about 40 degrees in the vertical direction. As shown in the top view of fig. 2, the forward-looking plane camera alone likewise cannot accurately capture images within the blind area, so it cannot be accurately detected whether a pedestrian or a vehicle is present in the blind area ahead, which poses a great driving safety risk. Therefore, how to accurately detect obstacles in the blind area of a commercial vehicle so as to ensure its safe driving has become an urgent problem to be solved.
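The extent of this near-end blind area follows from simple geometry: the ground first becomes visible where the lower edge of the camera's vertical field of view intersects it. A minimal sketch of that calculation (the camera height of 2.5 m and the 10-degree downward pitch are assumed illustrative values, not figures from this application):

```python
import math

def blind_zone_length(camera_height_m: float, vertical_fov_deg: float,
                      pitch_deg: float = 0.0) -> float:
    """Approximate the length of the near-end ground blind zone: the ground
    is first visible where the lower edge of the vertical field of view
    intersects it."""
    # Angle of the lower FOV edge below the horizontal.
    lower_edge_deg = vertical_fov_deg / 2.0 + pitch_deg
    if lower_edge_deg <= 0 or lower_edge_deg >= 90:
        raise ValueError("lower FOV edge must point below the horizon")
    return camera_height_m / math.tan(math.radians(lower_edge_deg))

# A cab-mounted camera ~2.5 m high with a ~40-degree vertical FOV,
# pitched 10 degrees downward (assumed values).
print(round(blind_zone_length(2.5, 40.0, 10.0), 2))
```

With these assumed values the blind zone comes out to roughly 4.3 m, consistent with the 3-to-5-meter range described above.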
In order to overcome the above defects, an embodiment of the present application provides a method for detecting a blind area obstacle of a commercial vehicle. When detecting an obstacle in the blind area in front of a target commercial vehicle, a first target image to be detected is acquired through a vehicle-mounted fisheye camera of the target commercial vehicle, wherein the first target image is an image including the blind area of the target commercial vehicle; at the same time, a second target image to be detected is acquired through a vehicle-mounted plane camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle can be determined according to the fusion result. In this way, by installing a fisheye camera on the target commercial vehicle in advance and using it to capture an image of the blind area, that image can be fused with the forward image captured by the vehicle-mounted plane camera, so that the positions of all obstacles contained in the front blind area of the target commercial vehicle can be accurately identified from the fusion result, thereby ensuring the safe driving of the target commercial vehicle.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First embodiment
Referring to fig. 3, a schematic flow chart of a method for detecting obstacles in blind areas of a commercial vehicle according to this embodiment is shown, and the method includes the following steps:
S301: acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle.
In this embodiment, any commercial vehicle which realizes blind spot obstacle detection by using this embodiment is defined as a target commercial vehicle, and an image acquired by a vehicle-mounted fisheye camera of the target commercial vehicle is defined as a first target image. The first target image comprises an image of a target commercial vehicle blind area.
It should be noted that a commercial vehicle is characterized by a tall body, a large volume, and a long wheelbase; in particular, the cockpit sits high, so the blind area cannot be effectively detected using only the plane camera on the inner side of the windshield. Therefore, in order to accurately detect the positions of all obstacles contained in the blind area in front of the target commercial vehicle, in this embodiment a fisheye camera is installed in advance below the plane camera at the front of the target commercial vehicle. In an optional implementation, the fisheye camera may be installed at any height between the vehicle-mounted plane camera and the ground. As shown in fig. 4, viewed from the side, once a fisheye camera is installed below the plane camera at the front of the target commercial vehicle, an image of the near-end blind area can essentially be captured. As can also be seen from the top view of fig. 5, because the shooting angle of the fisheye camera is a 180-degree wide angle, images of essentially all near-end blind area regions can be captured.
S302: and acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle.
In this embodiment, in order to ensure safe driving of the target commercial vehicle, not only does the first target image to be detected need to be acquired in step S301, but a second target image to be detected also needs to be acquired through the vehicle-mounted plane camera in front of the target commercial vehicle, so as to execute the subsequent step S303.
S303: and carrying out fusion processing on the first target image and the second target image to obtain a fusion result.
In this embodiment, after the first target image to be detected is acquired in step S301 and the second target image to be detected is acquired in step S302, the first target image and the second target image may further be fused using an image fusion algorithm, so as to identify all obstacles (such as pedestrians or vehicles) included in the first target image and the second target image.
Specifically, in an alternative implementation, this step S303 may include the following steps A1-A4:
step A1: and respectively carrying out image preprocessing on the first target image and the second target image to obtain a first target object contained in the first target image and a second target object contained in the second target image.
In this implementation, after the first target image and the second target image are obtained, image preprocessing operations such as filtering and spatio-temporal synchronization may be performed on them using an image preprocessing algorithm, so as to identify all target objects included in the first target image and the second target image; an object identified in the first target image is defined as a first target object, and an object identified in the second target image is defined as a second target object.
The first target object and the second target object may be pedestrians or vehicles.
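The spatio-temporal synchronization mentioned in step A1 can be sketched as nearest-timestamp pairing of the two camera streams; the frame timestamps and the 50 ms tolerance below are assumed illustrative values, not parameters from this application:

```python
def synchronize_frames(fisheye_stamps, plane_stamps, max_skew_s=0.05):
    """Pair each fisheye frame with the nearest-in-time plane-camera frame.

    Returns (fisheye_index, plane_index) pairs whose timestamps differ by
    at most max_skew_s seconds; unmatched frames are dropped. Both stamp
    lists are assumed sorted in increasing order.
    """
    pairs = []
    j = 0
    for i, t in enumerate(fisheye_stamps):
        # Advance j while the next plane frame is at least as close to t.
        while (j + 1 < len(plane_stamps)
               and abs(plane_stamps[j + 1] - t) <= abs(plane_stamps[j] - t)):
            j += 1
        if plane_stamps and abs(plane_stamps[j] - t) <= max_skew_s:
            pairs.append((i, j))
    return pairs

# Fisheye at ~30 fps, plane camera at ~25 fps with a slight offset.
fish = [0.000, 0.033, 0.066, 0.100]
plane = [0.010, 0.050, 0.090]
print(synchronize_frames(fish, plane))  # → [(0, 0), (1, 1), (2, 1), (3, 2)]
```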
Step A2: and judging whether the relative distance between the first target object and the second target object is within a preset distance threshold range or not, and judging whether the relative speed between the first target object and the second target object is within a preset speed difference range or not.
In this implementation, after the first target object included in the first target image and the second target object included in the second target image are determined through step A1, the first target object and the second target object further need to be matched and screened to determine whether they are the same target object. Specifically, this can be determined by calculating whether the relative distance between the first target object and the second target object is within a preset distance threshold range, and whether the relative speed between them is within a preset speed difference range.
Step A3: and if the relative distance between the first target object and the second target object is within the preset distance threshold range and the relative speed between the first target object and the second target object is within the preset speed difference range, determining that the first target object and the second target object are the same target object.
Step A4: and if the relative distance between the first target object and the second target object is judged not to be within the preset distance threshold range and/or the relative speed between the first target object and the second target object is judged not to be within the preset speed difference range, determining that the first target object and the second target object are not the same target object.
Further, all identical target objects contained in both the first target image and the second target image can thereby be obtained; these target objects are not objects in the blind area, while the remaining first target objects contained in the first target image are all obstacles in the blind area, and the subsequent step S304 can then be executed.
It should be noted that the preset distance threshold range and the preset speed difference range are critical values used for determining whether the first target object and the second target object are the same target object; their specific values may be set according to the actual situation, which is not limited in this embodiment of the application.
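Steps A2-A4, together with the screening of blind-area-only detections described above, can be sketched as follows; the 0.5 m distance threshold and 0.5 m/s speed threshold are assumed illustrative values, since the application leaves the exact ranges to the actual situation:

```python
def is_same_object(rel_distance_m, rel_speed_mps,
                   max_distance_m=0.5, max_speed_mps=0.5):
    """Steps A2-A4: two detections from the fisheye and plane cameras are
    the same object only when both the relative distance and the relative
    speed fall within their preset ranges (thresholds are assumed values)."""
    return rel_distance_m <= max_distance_m and rel_speed_mps <= max_speed_mps

def blind_area_obstacles(fisheye_objects, plane_objects):
    """Fisheye detections that match no plane-camera detection lie only in
    the near-end blind area. Each object is a dict with 'pos' (x, y) in
    meters and 'speed' in m/s."""
    obstacles = []
    for f in fisheye_objects:
        matched = any(
            is_same_object(
                ((f["pos"][0] - p["pos"][0]) ** 2
                 + (f["pos"][1] - p["pos"][1]) ** 2) ** 0.5,
                abs(f["speed"] - p["speed"]),
            )
            for p in plane_objects
        )
        if not matched:
            obstacles.append(f)
    return obstacles

fisheye = [{"pos": (1.0, 2.0), "speed": 1.2},   # also seen by the plane camera
           {"pos": (0.5, 1.0), "speed": 0.0}]   # only in the blind area
plane = [{"pos": (1.1, 2.2), "speed": 1.3}]
print(blind_area_obstacles(fisheye, plane))
```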
S304: and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
In this embodiment, after all obstacles (target objects such as pedestrians or vehicles) included in the first target image and the second target image are identified in step S303, the coordinate values of the pixel points corresponding to the specific positions of the obstacles in the blind area in front of the target commercial vehicle, such as pedestrians and/or vehicles cutting in from the side or rear, may further be converted, so that the position of each obstacle in three-dimensional space can be accurately determined from the conversion result.
Specifically, in an alternative implementation, this step S304 may include the following steps B1-B2:
step B1: and mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system.
In this implementation, after all obstacles (target objects such as pedestrians or vehicles) included in the first target image and the second target image are identified in step S303, an existing or future 2D-to-3D coordinate conversion method may be used to map the pixel point corresponding to each target object into a pre-established world coordinate system, so as to obtain the coordinates of the pixel point corresponding to each target object in the world coordinate system. The specific mapping process is consistent with existing methods and is not described here again.
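The application does not fix a particular 2D-to-3D method; one common choice for obstacles standing on flat ground is a ground-plane homography. The sketch below assumes a hypothetical calibration matrix, not one derived from this application:

```python
import numpy as np

def pixel_to_world_ground(u, v, H_world_from_image):
    """Map an image pixel (u, v) to ground-plane world coordinates (X, Y)
    via a precomputed 3x3 homography, assuming the obstacle's pixel lies
    on the flat ground plane Z = 0 (cf. step B1)."""
    p = H_world_from_image @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Toy calibration: this matrix is a stand-in for one obtained from a real
# camera calibration procedure.
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.02, -2.4],
              [0.0, 0.0, 1.0]])
X, Y = pixel_to_world_ground(640, 360, H)
print(round(X, 2), round(Y, 2))  # → 3.2 4.8
```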
Step B2: and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
After the coordinates in the world coordinate system of the pixel points corresponding to each target object (i.e., obstacle) in the fusion result are obtained in step B1, the position in three-dimensional space of each representative point of a target object (e.g., the head, body, and tail of a target vehicle) can be accurately determined from its world coordinates, and these representative positions can then be integrated to accurately determine the position of each target object (i.e., obstacle) in three-dimensional space.
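The integration of representative positions in step B2 can be sketched as collapsing an obstacle's representative world-coordinate points into a single ground-plane bounding box; the point values below are assumed for illustration:

```python
def obstacle_footprint(world_points):
    """Step B2 sketch: integrate the world coordinates of an obstacle's
    representative points (e.g. head, body, tail of a vehicle) into one
    axis-aligned ground-plane box (x_min, y_min, x_max, y_max)."""
    xs = [p[0] for p in world_points]
    ys = [p[1] for p in world_points]
    return min(xs), min(ys), max(xs), max(ys)

# Head, body, and tail points of a vehicle in the blind area (assumed values).
print(obstacle_footprint([(3.2, 1.0), (3.4, 2.1), (3.3, 3.5)]))
```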
In summary, according to the method for detecting obstacles in blind areas of commercial vehicles provided by this embodiment, when an obstacle in the blind area in front of a target commercial vehicle is to be detected, a first target image to be detected is acquired through a vehicle-mounted fisheye camera of the target commercial vehicle, wherein the first target image is an image including the blind area of the target commercial vehicle; at the same time, a second target image to be detected is acquired through a vehicle-mounted plane camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle is determined according to the fusion result. In this way, by installing a fisheye camera on the target commercial vehicle in advance and using it to capture an image of the blind area, that image can be fused with the forward image captured by the vehicle-mounted plane camera, so that the positions of all obstacles contained in the front blind area of the target commercial vehicle can be accurately identified from the fusion result, thereby ensuring the safe driving of the target commercial vehicle.
Second embodiment
In this embodiment, a device for detecting obstacles in blind areas of a commercial vehicle will be described, and please refer to the above method embodiment for related contents.
Referring to fig. 6, a schematic composition diagram of a device for detecting blind spot obstacles of a commercial vehicle according to this embodiment is shown, and the device includes:
the first target image acquisition unit 601 is configured to acquire a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
the second target image acquisition unit 602 is configured to acquire a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle;
the target image fusion unit 603 is configured to perform fusion processing on the first target image and the second target image to obtain a fusion result;
and the position determining unit 604 is configured to determine a position of a target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
In one implementation manner of this embodiment, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
In one implementation manner of this embodiment, the shooting angle of the fisheye camera is a wide angle of 180 degrees.
In an implementation manner of this embodiment, the target image fusion unit 603 includes:
a preprocessing subunit, configured to perform image preprocessing on the first target image and the second target image, respectively, to obtain a first target object included in the first target image and a second target object included in the second target image;
the judging subunit is configured to judge whether a relative distance between the first target object and the second target object is within a preset distance threshold range, and judge whether a relative speed between the first target object and the second target object is within a preset speed difference range;
the first determining subunit is configured to determine that the first target object and the second target object are the same target object if it is determined that the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range;
and the second determining subunit is configured to determine that the first target object and the second target object are not the same target object if it is determined that the relative distance between the first target object and the second target object is not within a preset distance threshold range and/or the relative speed between the first target object and the second target object is not within a preset speed difference range.
In an implementation manner of this embodiment, the position determining unit 604 includes:
the mapping subunit is used for mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and the third determining subunit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
In summary, this embodiment provides a device for detecting obstacles in blind areas of a commercial vehicle. When an obstacle in the blind area in front of a target commercial vehicle is to be detected, a first target image to be detected is first acquired through a vehicle-mounted fisheye camera of the target commercial vehicle, wherein the first target image is an image including the blind area in front of the target commercial vehicle; at the same time, a second target image to be detected is acquired through a vehicle-mounted plane camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle can be determined according to the fusion result. In this way, by installing a fisheye camera on the target commercial vehicle in advance and using it to capture an image of the blind area, that image can be fused with the forward image captured by the vehicle-mounted plane camera, so that the positions of all obstacles contained in the front blind area of the target commercial vehicle can be accurately identified from the fusion result, thereby ensuring the safe driving of the target commercial vehicle.
Further, an embodiment of the present application provides equipment for detecting blind area obstacles of a commercial vehicle, including: a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory is used for storing one or more programs, the one or more programs comprise instructions, and the instructions when executed by the processor cause the processor to execute any one implementation method of the above-mentioned method for detecting obstacles in blind areas of a commercial vehicle.
Further, an embodiment of the present application further provides a computer-readable storage medium, where instructions are stored in the computer-readable storage medium, and when the instructions are run on a terminal device, the terminal device is enabled to execute any implementation method of the above method for detecting obstacles in blind areas of a commercial vehicle.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.