CN111231947A - Method and device for detecting obstacles in dead zone of commercial vehicle - Google Patents

Method and device for detecting obstacles in dead zone of commercial vehicle

Info

Publication number
CN111231947A
CN111231947A (application CN202010181598.4A)
Authority
CN
China
Prior art keywords
target
target object
image
commercial vehicle
target image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010181598.4A
Other languages
Chinese (zh)
Inventor
宋希强
赵永民
张春民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202010181598.4A
Publication of CN111231947A
Legal status: Pending (current)


Abstract

The embodiments of the present application disclose a method and a device for detecting obstacles in the blind area of a commercial vehicle, applied in the technical field of automatic driving and used to accurately detect the positions of all obstacles contained in the blind area in front of the commercial vehicle, so as to ensure its safe driving. The method of the present application comprises: first acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, where the first target image is an image covering the blind area of the target commercial vehicle; meanwhile, acquiring a second target image to be detected through a vehicle-mounted planar camera in front of the target commercial vehicle; then performing fusion processing on the first target image and the second target image to obtain a fusion result; and finally determining the position of a target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.

Description

Method and device for detecting obstacles in dead zone of commercial vehicle
Technical Field
The application relates to the technical field of automatic driving, in particular to a method and a device for detecting obstacles in dead zones of commercial vehicles.
Background
As intelligent systems have been applied to the field of vehicle driving, an increasing number of vehicles are equipped with intelligent systems capable of implementing automatic driving or driving-assistance functions.
An important premise for the safe operation of an autonomous vehicle is that the vehicle can automatically detect the accurate positions of surrounding obstacles. However, in current Advanced Driver Assistance Systems (ADAS) for commercial vehicles, purely visual detection of targets ahead of the vehicle typically relies on a planar camera mounted on the inner side of the windshield. Such a camera has limited horizontal and vertical viewing angles, while a commercial vehicle is tall and large with a long wheelbase, and its cockpit in particular sits high. The planar camera is therefore also mounted high, which produces a large blind area at the near end in front of the vehicle. Obstacles such as pedestrians entering this front blind area, or vehicles cutting in from the side, cannot be effectively detected by the planar camera alone, which poses a serious driving-safety risk.
Therefore, how to accurately detect the obstacle in the blind area of the commercial vehicle to ensure the safe driving of the vehicle becomes a problem to be solved urgently.
Disclosure of Invention
The embodiment of the application mainly aims to provide a method and a device for detecting obstacles in a blind area of a commercial vehicle, which can improve the accuracy of a detection result of the obstacles in the blind area of the commercial vehicle.
The embodiment of the application provides a method for detecting blind area obstacles of a commercial vehicle, which comprises the following steps:
acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle;
performing fusion processing on the first target image and the second target image to obtain a fusion result;
and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
Optionally, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
Optionally, the fisheye camera has a 180-degree wide-angle field of view.
Optionally, performing fusion processing on the first target image and the second target image to obtain a fusion result includes:
respectively carrying out image preprocessing on the first target image and the second target image to obtain a first target object contained in the first target image and a second target object contained in the second target image;
judging whether the relative distance between the first target object and the second target object is within a preset distance threshold range or not, and judging whether the relative speed between the first target object and the second target object is within a preset speed difference range or not;
if the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range, determining that the first target object and the second target object are the same target object;
and if the relative distance between the first target object and the second target object is judged not to be within a preset distance threshold range and/or the relative speed between the first target object and the second target object is judged not to be within a preset speed difference range, determining that the first target object and the second target object are not the same target object.
Optionally, determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result includes:
mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and determining the position of a target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
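The four claimed steps above can be sketched end-to-end as follows. This is a toy illustration only: camera capture is replaced by precomputed detection lists, and all function names, data shapes, and the matching tolerance are assumptions for illustration, not the patent's implementation.

```python
# Hypothetical end-to-end skeleton of the four claimed steps.

def acquire_first_target_image():          # step 1: fisheye camera (covers blind area)
    return [("pedestrian", 2.0), ("car", 20.0)]   # (label, forward distance in m)

def acquire_second_target_image():         # step 2: forward planar camera
    return [("car", 20.1)]

def fuse(first, second, tol=0.5):          # step 3: cross-camera association
    matched = {a for a in first for b in second
               if a[0] == b[0] and abs(a[1] - b[1]) <= tol}
    return {"matched": matched, "fisheye_only": set(first) - matched}

def locate_blind_zone_obstacles(fusion):   # step 4: blind-area obstacle positions
    return sorted(fusion["fisheye_only"])

result = locate_blind_zone_obstacles(fuse(acquire_first_target_image(),
                                          acquire_second_target_image()))
print(result)  # [('pedestrian', 2.0)] -> the obstacle only the fisheye camera sees
```

The car appears in both images and is matched away; only the fisheye-exclusive pedestrian remains as a blind-area obstacle.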
The embodiment of the present application further provides a commercial vehicle blind-area obstacle detection device, comprising:
the first target image acquisition unit is used for acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
the second target image acquisition unit is used for acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle;
the target image fusion unit is used for carrying out fusion processing on the first target image and the second target image to obtain a fusion result;
and the position determining unit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
Optionally, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
Optionally, the fisheye camera has a 180-degree wide-angle field of view.
Optionally, the target image fusion unit includes:
a preprocessing subunit, configured to perform image preprocessing on the first target image and the second target image, respectively, to obtain a first target object included in the first target image and a second target object included in the second target image;
the judging subunit is configured to judge whether a relative distance between the first target object and the second target object is within a preset distance threshold range, and judge whether a relative speed between the first target object and the second target object is within a preset speed difference range;
the first determining subunit is configured to determine that the first target object and the second target object are the same target object if it is determined that the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range;
and the second determining subunit is configured to determine that the first target object and the second target object are not the same target object if it is determined that the relative distance between the first target object and the second target object is not within a preset distance threshold range and/or the relative speed between the first target object and the second target object is not within a preset speed difference range.
Optionally, the position determining unit includes:
the mapping subunit is used for mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and the third determining subunit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
The embodiment of the present application further provides commercial vehicle blind-area obstacle detection equipment, comprising: a processor, a memory, a system bus;
the processor and the memory are connected through the system bus;
the memory is used for storing one or more programs, the one or more programs comprising instructions, which when executed by the processor, cause the processor to perform any one implementation of the above-mentioned method for detecting obstacles in blind areas of a commercial vehicle.
The embodiment of the application further provides a computer-readable storage medium, wherein instructions are stored in the computer-readable storage medium, and when the instructions run on the terminal device, the terminal device is enabled to execute any implementation manner of the method for detecting the blind area obstacle of the commercial vehicle.
When detecting an obstacle in the blind area in front of a target commercial vehicle, the embodiment of the present application first acquires a first target image to be detected through the vehicle-mounted fisheye camera of the target commercial vehicle, where the first target image is an image covering the blind area of the target commercial vehicle. At the same time, a second target image to be detected is acquired through the vehicle-mounted planar camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle is determined according to the fusion result. In this way, the embodiment of the present application first mounts a fisheye camera on the target commercial vehicle and uses it to capture the blind-area image, so that this image can be fused with the front image captured by the vehicle-mounted planar camera. The positions of all obstacles contained in the front blind area of the target commercial vehicle can then be accurately identified from the fusion result, ensuring the safe driving of the target commercial vehicle.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is one of exemplary diagrams of a conventional method of detecting an obstacle in front of a vehicle;
fig. 2 is one of exemplary diagrams of a conventional method of detecting an obstacle in front of a vehicle;
FIG. 3 is a schematic flow chart of a method for detecting obstacles in blind areas of a commercial vehicle according to an embodiment of the present application;
fig. 4 is one of schematic diagrams of detecting a blind spot obstacle of a commercial vehicle by using a fisheye camera according to an embodiment of the present disclosure;
fig. 5 is a second schematic diagram of detecting obstacles in blind areas of a commercial vehicle by using a fisheye camera according to an embodiment of the present disclosure;
fig. 6 is a schematic composition diagram of a device for detecting blind spot obstacles of a commercial vehicle according to an embodiment of the present application.
Detailed Description
In some methods for detecting obstacles in front of a commercial vehicle, detection is generally performed on images captured by a planar camera on the inner side of the windshield. However, the planar camera has limited horizontal and vertical viewing angles, so a large blind area is often produced at the near end in front of the commercial vehicle. For example, as shown in fig. 1, from the side view, if there is only a forward-looking planar camera inside the windshield, a blind area of roughly 3 to 5 meters exists at the near end, because the forward-looking planar camera can only capture images within a range of about 40 degrees in the vertical direction. As shown in the top view of fig. 2, the forward-looking planar camera alone likewise cannot accurately capture images inside the blind area, so it cannot accurately detect whether there is a pedestrian or a vehicle in the blind area ahead, which poses a great driving-safety risk. Therefore, how to accurately detect obstacles in the blind area of a commercial vehicle, so as to ensure its safe driving, has become a problem to be solved urgently.
To overcome the above defects, an embodiment of the present application provides a method for detecting obstacles in the blind area of a commercial vehicle. When detecting an obstacle in the blind area in front of a target commercial vehicle, a first target image to be detected is acquired through the vehicle-mounted fisheye camera of the target commercial vehicle, where the first target image is an image covering the blind area of the target commercial vehicle. At the same time, a second target image to be detected is acquired through the vehicle-mounted planar camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle can be determined according to the fusion result. In this way, the embodiment first mounts a fisheye camera on the target commercial vehicle and uses it to capture the blind-area image, so that this image can be fused with the front image captured by the vehicle-mounted planar camera. The positions of all obstacles contained in the front blind area can then be accurately identified from the fusion result, ensuring the safe driving of the target commercial vehicle.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
First embodiment
Referring to fig. 3, a schematic flow chart of a method for detecting obstacles in blind areas of a commercial vehicle according to this embodiment is shown, and the method includes the following steps:
s301: the method comprises the steps of obtaining a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle.
In this embodiment, any commercial vehicle which realizes blind spot obstacle detection by using this embodiment is defined as a target commercial vehicle, and an image acquired by a vehicle-mounted fisheye camera of the target commercial vehicle is defined as a first target image. The first target image comprises an image of a target commercial vehicle blind area.
It should be noted that, because a commercial vehicle is tall and large with a long wheelbase, and its cockpit in particular sits high, the blind area cannot be effectively covered using only the planar camera on the inner side of the windshield. Therefore, in order to accurately detect the positions of all obstacles contained in the blind area in front of the target commercial vehicle, in this embodiment a fisheye camera is installed in advance below the planar camera at the front of the target commercial vehicle. In an optional implementation, the fisheye camera can be installed at any height between the vehicle-mounted planar camera and the ground. As shown in fig. 4, from the side view, after a fisheye camera is installed below the planar camera at the front of the target commercial vehicle, the near-end blind-area image can essentially be captured. As shown in the top view of fig. 5, because the fisheye camera has a 180-degree wide-angle field of view, images of essentially the entire near-end blind area can be captured.
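A toy calculation can illustrate why a 180-degree fisheye lens covers the near field while a planar (pinhole) lens cannot. It uses the common equidistant fisheye model r = f·θ; this lens model is an assumption for illustration, since the patent does not specify one.

```python
import math

# Equidistant fisheye model: image radius grows linearly with incidence angle.
def equidistant_radius(f, theta):
    return f * theta                       # finite for every theta up to 90 deg

# Pinhole (planar) model: image radius r = f * tan(theta), diverging at 90 deg.
def pinhole_radius(f, theta):
    return f * math.tan(theta)

f = 1.0
edge = equidistant_radius(f, math.pi / 2)  # a point straight to the side
print(round(edge, 4))                      # ~1.5708: still inside the image circle
# pinhole_radius(f, math.pi / 2) would be astronomically large (tan -> infinity),
# which is why the planar camera cannot see the near-end blind area.
```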
S302: and acquiring a second target image to be detected through a vehicle-mounted plane camera in front of the target commercial vehicle.
In this embodiment, in order to ensure the safe driving of the target commercial vehicle, not only does the first target image to be detected need to be acquired in step S301, but a second target image to be detected also needs to be acquired by the vehicle-mounted planar camera in front of the target commercial vehicle, so as to execute the subsequent step S303.
S303: and carrying out fusion processing on the first target image and the second target image to obtain a fusion result.
In this embodiment, after the first target image to be detected is acquired in step S301 and the second target image to be detected is acquired in step S302, the first target image and the second target image may be fused using an image fusion algorithm, so as to identify all obstacles (such as pedestrians or vehicles) contained in the first target image and the second target image.
Specifically, in an optional implementation, step S303 may include the following steps A1 to A4:
step A1: and respectively carrying out image preprocessing on the first target image and the second target image to obtain a first target object contained in the first target image and a second target object contained in the second target image.
In this implementation manner, after the first target image and the second target image are obtained, image preprocessing operations such as filtering, time-space synchronization, and the like may be performed on the first target image and the second target image by using an image preprocessing algorithm, so as to identify all target objects included in the first target image and the second target image, define an object identified in the first target image as a first target object, and define an object identified in the second target image as a second target object.
The first target object and the second target object may be pedestrians or vehicles.
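The "time-space synchronization" part of the preprocessing in step A1 can be sketched as pairing each fisheye frame with the planar frame closest to it in time, so that the two cameras' detections describe the same instant. The frame representation, timestamps, and skew tolerance below are illustrative assumptions; a real system would also undistort and filter the images.

```python
# Sketch of nearest-timestamp frame pairing for two unsynchronized cameras.
def synchronize(fisheye_frames, planar_frames, max_skew=0.02):
    """Each frame list holds (timestamp_seconds, frame_id) tuples."""
    pairs = []
    for t_f, fid in fisheye_frames:
        # Pick the planar frame closest in time to this fisheye frame.
        t_p, pid = min(planar_frames, key=lambda fp: abs(fp[0] - t_f))
        if abs(t_p - t_f) <= max_skew:     # drop pairs that drift too far apart
            pairs.append((fid, pid))
    return pairs

fisheye = [(0.000, "F0"), (0.033, "F1"), (0.066, "F2")]
planar  = [(0.010, "P0"), (0.043, "P1")]
print(synchronize(fisheye, planar))  # [('F0', 'P0'), ('F1', 'P1')]
```

F2 is dropped because no planar frame falls within the 20 ms tolerance; only well-aligned pairs proceed to fusion.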
Step A2: and judging whether the relative distance between the first target object and the second target object is within a preset distance threshold range or not, and judging whether the relative speed between the first target object and the second target object is within a preset speed difference range or not.
In this implementation, after the first target object contained in the first target image and the second target object contained in the second target image are determined in step A1, the first target object and the second target object further need to be matched and screened to determine whether they are the same target object. Specifically, this can be determined by calculating whether the relative distance between the first target object and the second target object is within a preset distance threshold range, and whether the relative speed between them is within a preset speed difference range.
Step A3: and if the relative distance between the first target object and the second target object is within the preset distance threshold range and the relative speed between the first target object and the second target object is within the preset speed difference range, determining that the first target object and the second target object are the same target object.
Step A4: and if the relative distance between the first target object and the second target object is judged not to be within the preset distance threshold range and/or the relative speed between the first target object and the second target object is judged not to be within the preset speed difference range, determining that the first target object and the second target object are not the same target object.
Further, all of the target objects contained in both the first target image and the second target image can be obtained; since these objects are visible to both cameras, they are not objects inside the blind area. The remaining first target objects in the first target image, i.e., those without a match in the second target image, are all obstacles inside the blind area, and the subsequent step S304 can then be executed.
It should be noted that the preset distance threshold range and the preset speed difference range are critical values used for determining whether the first target object and the second target object are the same target object. Their specific values may be set according to the actual situation, which is not limited in this embodiment of the application.
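Steps A2 to A4, together with the blind-area filtering just described, can be sketched as a double gate on distance and speed. The threshold values and the detection fields (`x`, `v`) are illustrative assumptions; the patent leaves both to the implementer.

```python
DIST_THRESH = 1.0    # metres (stand-in for the preset distance threshold range)
SPEED_THRESH = 0.5   # m/s    (stand-in for the preset speed difference range)

def same_object(a, b):
    """Steps A2-A4: same object only if BOTH gates pass (A3); otherwise not (A4)."""
    dist_ok = abs(a["x"] - b["x"]) <= DIST_THRESH
    speed_ok = abs(a["v"] - b["v"]) <= SPEED_THRESH
    return dist_ok and speed_ok

def blind_zone_obstacles(first_targets, second_targets):
    """First-image objects with no match in the second image are in the blind area."""
    return [a for a in first_targets
            if not any(same_object(a, b) for b in second_targets)]

first  = [{"x": 2.0, "v": 0.0}, {"x": 20.0, "v": 5.0}]   # fisheye detections
second = [{"x": 20.2, "v": 5.1}]                          # planar detections
print(blind_zone_obstacles(first, second))  # [{'x': 2.0, 'v': 0.0}]
```

The distant car matches across cameras (0.2 m and 0.1 m/s apart) and is removed; the stationary pedestrian at 2 m, seen by the fisheye camera only, is reported as a blind-area obstacle.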
S304: and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
In this embodiment, after all obstacles (target objects such as pedestrians or vehicles) contained in the first target image and the second target image are recognized in step S303, the coordinate values of the pixel points corresponding to the specific positions of obstacles in the blind area in front of the target commercial vehicle, such as pedestrians and/or vehicles cutting in from the side or rear, may further be converted, so as to accurately determine the position of each obstacle in three-dimensional space according to the conversion result.
Specifically, in an optional implementation, step S304 may include the following steps B1 and B2:
step B1: and mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system.
In this implementation, after all obstacles (target objects such as pedestrians or vehicles) contained in the first target image and the second target image are identified in step S303, an existing or future 2D-to-3D coordinate conversion method may be used to map the pixel point corresponding to each target object into a pre-established world coordinate system, so as to obtain the coordinates of each target object's pixel point in the world coordinate system. The specific mapping process is consistent with existing methods and is not described here again.
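One common 2D-to-3D conversion of this kind back-projects a ground-contact pixel through a pinhole model under a flat-ground assumption. The intrinsics, the level-camera assumption, and the helper name below are illustrative; the patent only says an existing or future conversion method is used.

```python
# Flat-ground back-projection: a pixel known to lie on the road surface is mapped
# to world coordinates using the camera's intrinsics and its height above ground.
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Return (X, Z): lateral offset and forward distance in metres."""
    if v <= cy:
        raise ValueError("pixel at or above the horizon; not on the ground plane")
    Z = fy * cam_height / (v - cy)   # forward distance, from similar triangles
    X = (u - cx) * Z / fx            # lateral offset at that distance
    return X, Z

# Camera 2 m above the ground, 800x600 image, 600 px focal length (assumed):
X, Z = pixel_to_ground(u=500, v=500, fx=600, fy=600, cx=400, cy=300, cam_height=2.0)
print(round(X, 2), round(Z, 2))  # 1.0 6.0 -> 1 m to the right, 6 m ahead
```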
Step B2: and determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
After the coordinates in the world coordinate system of the pixel points corresponding to each target object (i.e., each obstacle) in the fusion result are obtained in step B1, the position in three-dimensional space of each representative part of a target object (e.g., the head, body, and tail of a target vehicle) can be determined from its coordinates in the world coordinate system, and these part positions can then be integrated to accurately determine the position of each target object (i.e., each obstacle) in three-dimensional space.
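The "integration" of a target's representative parts into one obstacle position can be sketched as averaging their world coordinates. Averaging is one simple choice made here for illustration; the patent does not prescribe a specific integration rule.

```python
# Combine the world coordinates of a target's key parts into a single position.
def integrate_position(points):
    """points: list of (X, Z) world coordinates, e.g. head, body, tail."""
    xs, zs = zip(*points)
    return (sum(xs) / len(xs), sum(zs) / len(zs))

vehicle_points = [(1.0, 4.0),   # head of the target vehicle
                  (1.0, 6.0),   # body
                  (1.0, 8.0)]   # tail
print(integrate_position(vehicle_points))  # (1.0, 6.0): the obstacle's centre
```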
In summary, according to the method for detecting obstacles in the blind area of a commercial vehicle provided by this embodiment, when detecting an obstacle in the blind area in front of a target commercial vehicle, a first target image to be detected is acquired through the vehicle-mounted fisheye camera of the target commercial vehicle, where the first target image is an image covering the blind area of the target commercial vehicle. At the same time, a second target image to be detected is acquired through the vehicle-mounted planar camera in front of the target commercial vehicle. The first target image and the second target image are then fused to obtain a fusion result, and the position of the target obstacle in the blind area in front of the target commercial vehicle can be determined according to the fusion result. In this way, this embodiment first mounts a fisheye camera on the target commercial vehicle and uses it to capture the blind-area image, so that this image can be fused with the front image captured by the vehicle-mounted planar camera. The positions of all obstacles contained in the front blind area can then be accurately identified from the fusion result, ensuring the safe driving of the target commercial vehicle.
Second embodiment
In this embodiment, a device for detecting obstacles in blind areas of a commercial vehicle will be described, and please refer to the above method embodiment for related contents.
Referring to fig. 6, a schematic composition diagram of a device for detecting blind spot obstacles of a commercial vehicle according to this embodiment is shown, and the device includes:
the first targetimage acquisition unit 601 is used for acquiring a first target image to be detected through a vehicle-mounted fisheye camera of a target commercial vehicle, wherein the first target image is an image including a blind area of the target commercial vehicle;
a second targetimage obtaining unit 602, configured to obtain a second target image to be detected through a vehicle-mounted planar camera in front of the target commercial vehicle;
a targetimage fusion unit 603, configured to perform fusion processing on the first target image and the second target image to obtain a fusion result;
and aposition determining unit 604, configured to determine a position of a target obstacle in the blind area in front of the target commercial vehicle according to the fusion result.
In one implementation manner of this embodiment, the mounting position of the fisheye camera is located at any height position between the vehicle-mounted plane camera and the ground.
In one implementation manner of this embodiment, the shooting angle of the fisheye camera is a wide angle of 180 degrees.
In an implementation manner of this embodiment, the target image fusion unit 603 includes:
a preprocessing subunit, configured to perform image preprocessing on the first target image and the second target image, respectively, to obtain a first target object included in the first target image and a second target object included in the second target image;
the judging subunit is configured to judge whether a relative distance between the first target object and the second target object is within a preset distance threshold range, and judge whether a relative speed between the first target object and the second target object is within a preset speed difference range;
the first determining subunit is configured to determine that the first target object and the second target object are the same target object if it is determined that the relative distance between the first target object and the second target object is within a preset distance threshold range and the relative speed between the first target object and the second target object is within a preset speed difference range;
and the second determining subunit is configured to determine that the first target object and the second target object are not the same target object if it is determined that the relative distance between the first target object and the second target object is not within a preset distance threshold range and/or the relative speed between the first target object and the second target object is not within a preset speed difference range.
In an implementation manner of this embodiment, the position determining unit 604 includes:
the mapping subunit is used for mapping each target object in the fusion result to a world coordinate system to obtain the coordinate of each target object in the world coordinate system;
and the third determining subunit is used for determining the position of the target obstacle in the blind area in front of the target commercial vehicle according to the coordinates in the world coordinate system.
In summary, this embodiment provides a commercial vehicle blind-area obstacle detection device. When detecting an obstacle in the blind area in front of a target commercial vehicle, the device first obtains a first target image to be detected through the vehicle-mounted fisheye camera of the target commercial vehicle, where the first target image is an image covering the blind area of the target commercial vehicle. At the same time, it obtains a second target image to be detected through the vehicle-mounted planar camera in front of the target commercial vehicle, then fuses the first target image and the second target image to obtain a fusion result, and determines the position of the target obstacle in the blind area in front of the target commercial vehicle according to the fusion result. In this way, the fisheye camera mounted in advance on the target commercial vehicle captures the blind-area image, which can be fused with the front image captured by the vehicle-mounted planar camera, and the positions of all obstacles contained in the front blind area can be accurately identified from the fusion result, ensuring the safe driving of the target commercial vehicle.
Further, an embodiment of the present application also provides a commercial vehicle blind-area obstacle detection equipment, comprising: a processor, a memory, and a system bus;
the processor and the memory are connected through the system bus;
the memory is used to store one or more programs, the one or more programs comprising instructions which, when executed by the processor, cause the processor to execute any implementation of the above method for detecting obstacles in the blind area of a commercial vehicle.
Further, an embodiment of the present application also provides a computer-readable storage medium having instructions stored therein which, when run on a terminal device, cause the terminal device to execute any implementation of the above method for detecting obstacles in the blind area of a commercial vehicle.
As can be seen from the above description of the embodiments, those skilled in the art can clearly understand that all or part of the steps in the above embodiment methods can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network communication device such as a media gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
It should be noted that, in the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
It is further noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

CN202010181598.4A | 2020-03-16 | 2020-03-16 | Method and device for detecting obstacles in dead zone of commercial vehicle | Pending | CN111231947A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010181598.4A | CN111231947A (en) | 2020-03-16 | 2020-03-16 | Method and device for detecting obstacles in dead zone of commercial vehicle

Publications (1)

Publication Number | Publication Date
CN111231947A (en) | 2020-06-05

Family

ID=70867491

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010181598.4A | Pending | CN111231947A (en) | 2020-03-16 | 2020-03-16

Country Status (1)

Country | Link
CN (1) | CN111231947A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN119636713A (en) * | 2025-01-07 | 2025-03-18 | 一汽解放汽车有限公司 | A commercial vehicle lateral blind spot warning method, device, equipment and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20180113320A (en) * | 2017-04-06 | 2018-10-16 | (주) 코스텍 | Blind spot monitoring system
CN109435852A (en) * | 2018-11-08 | 2019-03-08 | 湖北工业大学 | A kind of panorama type DAS (Driver Assistant System) and method for large truck
CN109733284A (en) * | 2019-02-19 | 2019-05-10 | 广州小鹏汽车科技有限公司 | A kind of safety applied to vehicle, which is parked, assists method for early warning and system
CN110059574A (en) * | 2019-03-23 | 2019-07-26 | 浙江交通职业技术学院 | A kind of vehicle blind zone detection method
CN110356325A (en) * | 2019-09-04 | 2019-10-22 | 魔视智能科技(上海)有限公司 | A kind of urban transportation passenger stock blind area early warning system
CN110466533A (en) * | 2019-07-25 | 2019-11-19 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of control method for vehicle, apparatus and system
CN110466512A (en) * | 2019-07-25 | 2019-11-19 | 东软睿驰汽车技术(沈阳)有限公司 | A kind of vehicle lane change method, apparatus and system
CN110865360A (en) * | 2019-11-12 | 2020-03-06 | 东软睿驰汽车技术(沈阳)有限公司 | Data fusion method and device



Similar Documents

Publication | Publication Date | Title
JP7025912B2 (en) | In-vehicle environment recognition device
CN108243623B (en) | Automobile anti-collision early warning method and system based on binocular stereo vision
JP6795027B2 (en) | Information processing equipment, object recognition equipment, device control systems, moving objects, image processing methods and programs
CN107341454B (en) | Method and device for detecting obstacles in scene and electronic equipment
CN109359409A (en) | A kind of vehicle passability detection system of view-based access control model and laser radar sensor
CN105372654B (en) | A Method for Reliable Quantification of Obstacle Classification
EP3545464B1 (en) | Information processing device, imaging device, equipment control system, mobile object, information processing method, and computer-readable recording medium
US20030210807A1 (en) | Monitoring device, monitoring method and program for monitoring
CN110069990B (en) | Height limiting rod detection method and device and automatic driving system
CN105128836A (en) | Autonomous emergency braking system and method for recognizing pedestrian therein
KR101103526B1 (en) | Collision Avoidance Using Stereo Camera
CN110682907B (en) | Automobile rear-end collision prevention control system and method
CN110341621B (en) | An obstacle detection method and device
CN112651359A (en) | Obstacle detection method, obstacle detection device, electronic apparatus, and storage medium
CN111351474B (en) | Vehicle moving target detection method, device and system
CN112799098A (en) | Radar blind area monitoring method and device, electronic equipment and storage medium
US10108866B2 (en) | Method and system for robust curb and bump detection from front or rear monocular cameras
CN109448439A (en) | Vehicle safe driving method and device
CN110765929A (en) | Vehicle obstacle detection method and device
CN111723723A (en) | Image detection method and device
JP2015179482A (en) | In-vehicle image processing device and vehicle system using the same
CN111294564A (en) | Information display method and wearable device
CN113591554B (en) | Line pressing detection method and violation detection method
Lion et al. | Smart speed bump detection and estimation with Kinect
Petrovai et al. | A stereovision-based approach for detecting and tracking lane and forward obstacles on mobile devices

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication
Application publication date: 2020-06-05
