CN112837384B - Vehicle marking method and device and electronic equipment - Google Patents

Vehicle marking method and device and electronic equipment

Info

Publication number
CN112837384B
CN112837384B (application CN202110227154.4A)
Authority
CN
China
Prior art keywords
vehicle
point cloud
cloud data
image
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110227154.4A
Other languages
Chinese (zh)
Other versions
CN112837384A (en)
Inventor
张广晟
田欢
胡骏
于红绯
刘威
袁淮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202110227154.4A
Publication of CN112837384A
Application granted
Publication of CN112837384B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a vehicle marking method, a vehicle marking device and electronic equipment. The method includes the following steps: acquiring vehicle point cloud data obtained by a laser radar detecting vehicles in a target environment, and a vehicle image obtained by a camera shooting the vehicles in the target environment; determining a point cloud image corresponding to the pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining the point cloud data of each vehicle in the vehicle image under the camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system; and determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle under the world coordinate system. The method can automatically mark the marking truth value of each vehicle, which improves marking efficiency, provides a high degree of intelligence, reduces manual workload and lowers marking cost.

Description

Vehicle marking method and device and electronic equipment
Technical Field
The present invention relates to the field of image recognition technologies, and in particular, to a vehicle marking method, a vehicle marking device, and an electronic device.
Background
Deep learning is widely used in the field of autonomous driving. In the deep learning process, a neural network needs to be trained, and a training sample data set is required for training.
Currently, when a training sample data set is acquired, vehicle images are generally collected first and then marked manually according to the specific training task (for example, with the position coordinates of each vehicle, the 2D frame of each vehicle, the feature points of each vehicle, etc.), so as to obtain marking truth values.
This vehicle marking method suffers from low efficiency, a poor degree of intelligence and high cost.
Disclosure of Invention
In view of the above, the present invention aims to provide a vehicle marking method, device and electronic equipment, so as to alleviate the technical problems of low efficiency, poor intelligence and high cost of the existing vehicle marking method.
In a first aspect, an embodiment of the present invention provides a vehicle marking method, including:
Acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera;
Determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image;
Determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate conversion on the point cloud data of each vehicle under the camera coordinate system to obtain point cloud data of each vehicle under a world coordinate system;
And determining a true value of the mark of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
Further, determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image, includes:
Performing coordinate transformation on the vehicle point cloud data to obtain a point cloud image corresponding to the pixels of the vehicle image;
and carrying out instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Further, performing coordinate transformation on the vehicle point cloud data includes:
And converting the vehicle point cloud data into a camera coordinate system to obtain a point cloud image corresponding to the pixels of the vehicle image.
Further, performing instance segmentation on the vehicle image includes:
and carrying out instance segmentation on the vehicle image by adopting a vehicle instance segmentation model to obtain a vehicle instance segmentation result.
Further, determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result includes:
And comparing the point cloud image with the vehicle instance segmentation result to obtain point cloud data of each vehicle in the vehicle image under a camera coordinate system.
Further, the marking truth value includes at least one of: a coordinate truth value of a 3D model of the vehicle in the world coordinate system, a coordinate truth value of a 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle. In this case, determining a marking truth value of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system includes:
Clustering the point cloud data of each vehicle under the world coordinate system;
and obtaining a marking true value of each vehicle in the vehicle image according to the clustering result.
Further, the physical attribute truth value includes at least one of: length of the vehicle, width of the vehicle, and height of the vehicle.
In a second aspect, an embodiment of the present invention further provides a vehicle marking device, including:
The acquisition unit is used for acquiring vehicle point cloud data obtained by detecting the vehicle in the target environment by the laser radar and vehicle images obtained by shooting the vehicle in the target environment by the camera;
A first determination unit configured to determine a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determine a vehicle instance segmentation result based on the vehicle image;
the second determining unit is used for determining the point cloud data of each vehicle in the vehicle image under the camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and carrying out coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system;
and a third determining unit, configured to determine a true value of a marker of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method according to any one of the first aspects when executing the computer program.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the method of any one of the first aspects.
In an embodiment of the present invention, there is provided a vehicle marking method including: acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera; determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system; and determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking method of the invention can automatically mark the marking truth value of each vehicle, which improves marking efficiency, provides a high degree of intelligence, reduces manual workload and lowers marking cost, thereby alleviating the technical problems of low efficiency, poor intelligence and high cost of existing vehicle marking methods.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a vehicle marking method according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for determining a point cloud image corresponding to pixels of a vehicle image based on vehicle point cloud data and determining a vehicle instance segmentation result based on the vehicle image according to an embodiment of the present invention;
FIG. 3 is a flowchart of a method for determining a true value of a marker for each vehicle in a vehicle image based on point cloud data of each vehicle in a world coordinate system according to an embodiment of the present invention;
FIG. 4 is a schematic view of a vehicle marking device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be clearly and completely described in connection with the embodiments, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, when a training sample data set is acquired in the autonomous driving field, vehicle images are usually marked manually to obtain marking truth values. This marking method has low efficiency, a poor degree of intelligence and high cost.
Based on this, the embodiments of the present invention provide a vehicle marking method that can automatically mark the marking truth value of each vehicle, thereby improving marking efficiency, providing a high degree of intelligence, reducing manual workload and lowering marking cost.
Embodiments of the present invention are further described below with reference to the accompanying drawings.
Embodiment one:
According to an embodiment of the present invention, there is provided an embodiment of a vehicle marking method. It should be noted that the steps shown in the flowcharts of the drawings may be performed in a computer system, for example as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.
Fig. 1 is a flowchart of a vehicle marking method according to an embodiment of the present invention, as shown in fig. 1, including the steps of:
Step S102, acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera;
In the embodiment of the invention, the vehicle point cloud data and the vehicle image can be acquired by installing a forward-looking camera on the front windshield of a data acquisition vehicle and installing a laser radar on the front bumper or the roof of the data acquisition vehicle. The vehicle point cloud data is actually point cloud data of the vehicles under the laser radar coordinate system.
Generally, the detection range of the camera is smaller than that of the laser radar; that is, the information contained in the vehicle image shot by the camera is also contained in the vehicle point cloud data detected by the laser radar. Alternatively, only the vehicle data in the intersection of the fields of view of the laser radar and the camera is used: the vehicle point cloud data obtained by the laser radar is converted into the camera coordinate system, from which the vehicle data in the field-of-view intersection can be obtained.
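As an illustration of this field-of-view intersection step, the following Python sketch (not part of the patent) transforms lidar points into the camera coordinate system and keeps only the points that project inside the image. The extrinsic matrix T_cam_lidar, the intrinsic matrix K and the image size are assumed to come from a prior lidar-camera calibration.

```python
import numpy as np

def lidar_to_camera(points_lidar: np.ndarray,
                    T_cam_lidar: np.ndarray,
                    K: np.ndarray,
                    image_size: tuple) -> np.ndarray:
    """Return the lidar points (expressed in camera coordinates) that fall
    inside the camera image, i.e. the lidar/camera field-of-view intersection.

    points_lidar: (N, 3) points in the laser radar coordinate system.
    T_cam_lidar:  (4, 4) extrinsic matrix from lidar-camera calibration (assumed).
    K:            (3, 3) camera intrinsic matrix (assumed).
    image_size:   (height, width) of the vehicle image.
    """
    h, w = image_size
    # Homogeneous coordinates: (N, 3) -> (N, 4)
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    # Rigid transform into the camera coordinate system
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Discard points behind the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]
    # Pinhole projection; keep only points that land inside the image
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return pts_cam[inside]
```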
Step S104, determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image;
Step S106, determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and carrying out coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system;
Step S108, determining a true value of the label of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
The following describes the processes from step S104 to step S108 in detail, and will not be repeated here.
In an embodiment of the present invention, there is provided a vehicle marking method including: acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera; determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system; and determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking method of the invention can automatically mark the marking truth value of each vehicle, which improves marking efficiency, provides a high degree of intelligence, reduces manual workload and lowers marking cost, thereby alleviating the technical problems of low efficiency, poor intelligence and high cost of existing vehicle marking methods.
The foregoing briefly describes the vehicle marking method of the present invention, and the detailed description thereof will be directed to the specific details thereof.
In an alternative embodiment of the present invention, referring to fig. 2, step S104, a process of determining a point cloud image corresponding to pixels of a vehicle image based on vehicle point cloud data and determining a vehicle instance segmentation result based on the vehicle image includes the steps of:
Step S201, carrying out coordinate transformation on vehicle point cloud data to obtain a point cloud image corresponding to pixels of a vehicle image;
Specifically, the vehicle point cloud data is converted into a camera coordinate system, and a point cloud image corresponding to pixels of a vehicle image is obtained.
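To make the notion of a "point cloud image" concrete, the following sketch shows one possible realisation (an assumption, not the patent's exact procedure): the camera-frame points are projected onto the image plane, and every pixel hit by a lidar point stores that point's camera-frame coordinates, so the point cloud image is pixel-aligned with the vehicle image.

```python
import numpy as np

def build_point_cloud_image(pts_cam: np.ndarray,
                            K: np.ndarray,
                            image_size: tuple) -> np.ndarray:
    """Build an (H, W, 3) array in which each pixel hit by a projected lidar
    point stores that point's camera-frame (x, y, z); untouched pixels are NaN."""
    h, w = image_size
    cloud_img = np.full((h, w, 3), np.nan, dtype=np.float32)
    # Project the camera-frame points onto the image plane (pinhole model)
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    for (u, v), p in zip(uv, pts_cam):
        if 0 <= u < w and 0 <= v < h:
            # When several points fall on the same pixel, keep the nearest one
            if np.isnan(cloud_img[v, u, 2]) or p[2] < cloud_img[v, u, 2]:
                cloud_img[v, u] = p
    return cloud_img
```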
Step S202, performing instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Specifically, a vehicle instance segmentation model is used to perform instance segmentation on the vehicle image, and a vehicle instance segmentation result is obtained.
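The patent does not specify a particular segmentation model. As one hedged example, an off-the-shelf Mask R-CNN from torchvision could serve as a stand-in; the COCO class index 3 ("car") and the score threshold below are illustrative assumptions rather than values from the patent.

```python
import torch
import torchvision

# Pretrained Mask R-CNN (the weights argument requires torchvision >= 0.13;
# older versions use pretrained=True instead)
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_vehicles(image: torch.Tensor, score_thresh: float = 0.7):
    """image: (3, H, W) float tensor with values in [0, 1].
    Returns one (H, W) boolean mask per detected vehicle instance."""
    with torch.no_grad():
        out = model([image])[0]
    masks = []
    for label, score, mask in zip(out["labels"], out["scores"], out["masks"]):
        if label.item() == 3 and score.item() >= score_thresh:  # 3 = "car" in COCO
            masks.append((mask[0] > 0.5).cpu().numpy())
    return masks
```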
In an optional embodiment of the present invention, step S106, determining point cloud data of each vehicle in the vehicle image under the camera coordinate system based on the point cloud image and the vehicle instance segmentation result, specifically includes: comparing the point cloud image with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image under the camera coordinate system.
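A minimal sketch of this comparison step and of the subsequent camera-to-world transformation is given below; it is an assumption about one possible realisation, and the camera pose T_world_cam is assumed to be available, for example from the data acquisition vehicle's localisation system. Each instance mask indexes directly into the point cloud image, yielding that vehicle's points in camera coordinates, which are then transformed into the world coordinate system.

```python
import numpy as np

def per_vehicle_world_points(cloud_img: np.ndarray,
                             vehicle_masks: list,
                             T_world_cam: np.ndarray) -> list:
    """For each instance mask, gather the camera-frame points stored in the
    point cloud image and transform them into the world coordinate system."""
    vehicles = []
    for mask in vehicle_masks:
        pts_cam = cloud_img[mask]                     # (M, 3), may contain NaN rows
        pts_cam = pts_cam[~np.isnan(pts_cam[:, 2])]   # drop pixels with no lidar point
        if len(pts_cam) == 0:
            continue
        pts_h = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])
        pts_world = (T_world_cam @ pts_h.T).T[:, :3]  # camera -> world coordinates
        vehicles.append(pts_world)
    return vehicles
```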
In an alternative embodiment of the invention, the marking truth value includes at least one of: a coordinate truth value of a 3D model of the vehicle in the world coordinate system, a coordinate truth value of a 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle. Referring to fig. 3, the process of determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system in step S108 includes the steps of:
Step S301, clustering point cloud data of each vehicle under a world coordinate system;
Specifically, the physical attribute truth value includes at least one of the following: length of the vehicle, width of the vehicle, and height of the vehicle.
And step S302, obtaining a true value of the mark of each vehicle in the vehicle image according to the clustering result.
Specifically, the coordinate truth value of the 3D model of the vehicle under the world coordinate system can be obtained directly from the clustering result; in addition, after the clustering result is obtained, boundary values are taken, so that the coordinate truth value of the 3D frame surrounding the vehicle under the world coordinate system and the physical attribute truth value of the vehicle can also be obtained.
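As a hedged illustration of how such truth values could be derived from one vehicle's world-frame points, the sketch below clusters the points with DBSCAN to discard stray returns, keeps the largest cluster as the 3D-model truth value, and takes its boundary values to form an axis-aligned 3D frame plus the length/width/height physical attributes. The eps and min_samples values are illustrative assumptions, and the axis-aligned box is a simplification of whatever boundary-value scheme the patent intends.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def label_truth_values(pts_world: np.ndarray):
    """Cluster one vehicle's world-frame points and derive marking truth values."""
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(pts_world)
    if (labels >= 0).sum() == 0:       # everything was classified as noise
        return None
    # Keep the largest cluster as the vehicle's point set (3D model truth value)
    main = labels == np.bincount(labels[labels >= 0]).argmax()
    veh_pts = pts_world[main]
    # Boundary values give an axis-aligned 3D frame; an oriented box would
    # additionally require fitting the vehicle's heading (simplification)
    lo, hi = veh_pts.min(axis=0), veh_pts.max(axis=0)
    # Extents along the world axes; which axis is length vs. width depends on
    # the world frame convention (assumption)
    length, width, height = (hi - lo)
    return {
        "model_points": veh_pts,        # 3D model coordinate truth value
        "box_3d": np.stack([lo, hi]),   # 3D frame coordinate truth value
        "length": float(length),        # physical attribute truth values
        "width": float(width),
        "height": float(height),
    }
```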
It should be noted that the vehicle marking method of the present invention may process the vehicle point cloud data detected by the laser radar and the vehicle image captured by the camera in real time, or may post-process them, in order to obtain the marking truth value of each vehicle in the vehicle image.
The vehicle marking method of the invention can automatically mark marking truth values in large batches, which improves marking efficiency, provides a high degree of intelligence, reduces manual workload and lowers marking cost.
Embodiment two:
The embodiment of the invention also provides a vehicle marking device, which is mainly used for executing the vehicle marking method provided by the foregoing embodiment; the vehicle marking device is described in detail below.
Fig. 4 is a schematic view of a vehicle marking device according to an embodiment of the present invention, and as shown in fig. 4, the vehicle marking device mainly includes: an acquisition unit 10, a first determination unit 20, a second determination unit 30, and a third determination unit 40, wherein:
The acquisition unit is used for acquiring vehicle point cloud data obtained by detecting the vehicle in the target environment by the laser radar and vehicle images obtained by shooting the vehicle in the target environment by the camera;
A first determination unit configured to determine a point cloud image corresponding to pixels of a vehicle image based on vehicle point cloud data, and determine a vehicle instance segmentation result based on the vehicle image;
The second determining unit is used for determining the point cloud data of each vehicle in the vehicle image under the camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and carrying out coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system;
and a third determining unit for determining a true value of the mark of each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system.
In an embodiment of the present invention, there is provided a vehicle marking apparatus for: acquiring vehicle point cloud data obtained by detecting a vehicle in a target environment by a laser radar and a vehicle image obtained by shooting the vehicle in the target environment by a camera; determining a point cloud image corresponding to pixels of the vehicle image based on the vehicle point cloud data, and determining a vehicle instance segmentation result based on the vehicle image; determining point cloud data of each vehicle in the vehicle image under a camera coordinate system based on the point cloud image and the vehicle instance segmentation result, and performing coordinate transformation on the point cloud data of each vehicle under the camera coordinate system to obtain the point cloud data of each vehicle under the world coordinate system; and determining a marking truth value for each vehicle in the vehicle image based on the point cloud data of each vehicle in the world coordinate system. The vehicle marking device of the invention can automatically mark the marking truth value of each vehicle, which improves marking efficiency, provides a high degree of intelligence, reduces manual workload and lowers marking cost, thereby alleviating the technical problems of low efficiency, poor intelligence and high cost of existing vehicle marking methods.
Optionally, the first determining unit is further configured to: performing coordinate transformation on the vehicle point cloud data to obtain a point cloud image corresponding to pixels of a vehicle image; and carrying out instance segmentation on the vehicle image to obtain a vehicle instance segmentation result.
Optionally, the first determining unit is further configured to: and converting the vehicle point cloud data into a camera coordinate system to obtain a point cloud image corresponding to the pixels of the vehicle image.
Optionally, the first determining unit is further configured to: and carrying out instance segmentation on the vehicle image by adopting a vehicle instance segmentation model to obtain a vehicle instance segmentation result.
Optionally, the second determining unit is further configured to: compare the point cloud image with the vehicle instance segmentation result to obtain the point cloud data of each vehicle in the vehicle image under the camera coordinate system.
Optionally, the marking truth value includes at least one of: a coordinate truth value of a 3D model of the vehicle in the world coordinate system, a coordinate truth value of a 3D frame surrounding the vehicle in the world coordinate system, and a physical attribute truth value of the vehicle. The third determining unit is further configured to: cluster the point cloud data of each vehicle under the world coordinate system; and obtain the marking truth value of each vehicle in the vehicle image according to the clustering result.
Optionally, the physical attribute truth value includes at least one of: length of the vehicle, width of the vehicle, and height of the vehicle.
The device provided by the embodiment of the present invention has the same implementation principle and technical effects as those of the foregoing method embodiment, and for the sake of brevity, reference may be made to the corresponding content in the foregoing method embodiment where the device embodiment is not mentioned.
As shown in fig. 5, an electronic device 600 provided in an embodiment of the present application includes: a processor 601, a memory 602 and a bus, said memory 602 storing machine readable instructions executable by said processor 601, said processor 601 communicating with said memory 602 via the bus when the electronic device is running, said processor 601 executing said machine readable instructions to perform the steps of the vehicle marking method as described above.
Specifically, the above-described memory 602 and processor 601 can be general-purpose memories and processors, and are not particularly limited herein, and the above-described vehicle marking method can be executed when the processor 601 runs a computer program stored in the memory 602.
The processor 601 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware in the processor 601 or by instructions in the form of software. The processor 601 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be embodied as being executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, or another storage medium well known in the art. The storage medium is located in the memory 602, and the processor 601 reads the information in the memory 602 and performs the steps of the above method in combination with its hardware.
Corresponding to the above vehicle marking method, an embodiment of the present application also provides a computer-readable storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to perform the steps of the above vehicle marking method.
The vehicle marking device provided by the embodiment of the application can be specific hardware on equipment or software or firmware installed on the equipment. The device provided by the embodiment of the present application has the same implementation principle and technical effects as those of the foregoing method embodiment, and for the sake of brevity, reference may be made to the corresponding content in the foregoing method embodiment where the device embodiment is not mentioned. It will be clear to those skilled in the art that, for convenience and brevity, the specific operation of the system, apparatus and unit described above may refer to the corresponding process in the above method embodiment, which is not described in detail herein.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
As another example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments provided in the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing an electronic device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the vehicle marking method according to the embodiments of the present application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that: like reference numerals and letters in the following figures denote like items, and thus once an item is defined in one figure, no further definition or explanation of it is required in the following figures, and furthermore, the terms "first," "second," "third," etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present application, intended to explain the technical solutions of the present application rather than to limit its scope of protection. Although the present application has been described in detail with reference to the foregoing examples, it should be understood by those skilled in the art that any person familiar with this technical field may still modify the technical solutions described in the foregoing embodiments, or substitute equivalents for some of their technical features, within the technical scope disclosed by the present application; such modifications, changes or substitutions do not depart from the spirit of the corresponding technical solutions and are intended to be encompassed within the scope of protection of the present application. Therefore, the protection scope of the present application is subject to the protection scope of the claims.

Claims (9)

CN202110227154.4A | 2021-03-01 | 2021-03-01 | Vehicle marking method and device and electronic equipment | Active | CN112837384B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110227154.4A (CN112837384B) | 2021-03-01 | 2021-03-01 | Vehicle marking method and device and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110227154.4A (CN112837384B) | 2021-03-01 | 2021-03-01 | Vehicle marking method and device and electronic equipment

Publications (2)

Publication Number | Publication Date
CN112837384A (en) | 2021-05-25
CN112837384B (en) | 2024-07-19

Family

ID=75934303

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202110227154.4A (CN112837384B) | Vehicle marking method and device and electronic equipment | 2021-03-01 | 2021-03-01 | Active

Country Status (1)

Country | Link
CN (1) | CN112837384B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113281770A (en)* | 2021-05-28 | 2021-08-20 | 东软睿驰汽车技术(沈阳)有限公司 | Coordinate system relation obtaining method and device
CN113780214B (en)* | 2021-09-16 | 2024-04-19 | 上海西井科技股份有限公司 | Method, system, equipment and storage medium for image recognition based on crowd
CN114220083B (en)* | 2021-12-10 | 2025-04-18 | 东软睿驰汽车技术(沈阳)有限公司 | Method for determining vehicle posture, data collection method, device and vehicle
CN116736259B (en)* | 2023-07-20 | 2025-02-28 | 北京东土科技股份有限公司 | A method and device for calibrating laser point cloud coordinates for automatic tower crane driving

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110988912A (en)* | 2019-12-06 | 2020-04-10 | 中国科学院自动化研究所 | Road target and distance detection method, system and device for automatic driving vehicle

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111222417A (en)* | 2019-12-24 | 2020-06-02 | 武汉中海庭数据技术有限公司 | Method and device for improving lane line extraction precision based on vehicle-mounted image
CN111340797B (en)* | 2020-03-10 | 2023-04-28 | 山东大学 | Laser radar and binocular camera data fusion detection method and system
CN112396650B (en)* | 2020-03-30 | 2023-04-07 | 青岛慧拓智能机器有限公司 | Target ranging system and method based on fusion of image and laser radar
CN112037159B (en)* | 2020-07-29 | 2023-06-23 | 中天智控科技控股股份有限公司 | Cross-camera road space fusion and vehicle target detection tracking method and system
CN111950426A (en)* | 2020-08-06 | 2020-11-17 | 东软睿驰汽车技术(沈阳)有限公司 | Target detection method, device and vehicle
CN112230204A (en)* | 2020-10-27 | 2021-01-15 | 深兰人工智能(深圳)有限公司 | Combined calibration method and device for laser radar and camera
CN112348902B (en)* | 2020-12-03 | 2024-04-09 | 苏州挚途科技有限公司 | Method, device and system for calibrating installation deviation angle of road-end camera

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110988912A (en)* | 2019-12-06 | 2020-04-10 | 中国科学院自动化研究所 | Road target and distance detection method, system and device for automatic driving vehicle

Also Published As

Publication number | Publication date
CN112837384A (en) | 2021-05-25

Similar Documents

Publication | Title
CN112837384B (en) | Vehicle marking method and device and electronic equipment
CN112560684B (en) | Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN110472580B (en) | Method, device and storage medium for detecting parking stall based on panoramic image
CN113591543B (en) | Traffic sign recognition method, device, electronic equipment and computer storage medium
CN112767412B (en) | Vehicle part classification method and device and electronic equipment
CN110176017B (en) | Edge detection model, method and storage medium based on target detection
CN109658454A (en) | Pose information determination method, related device and storage medium
CN111382625A (en) | Road sign identification method and device and electronic equipment
CN111126209B (en) | Lane line detection method and related equipment
CN111862208A (en) | A vehicle positioning method, device and server based on screen optical communication
CN109523570B (en) | Motion parameter calculation method and device
CN111950523A (en) | Ship detection optimization method and device based on aerial photography, electronic equipment and medium
CN113112542A (en) | Visual positioning method and device, electronic equipment and storage medium
CN108734161B (en) | Method, device and equipment for identifying prefix number area and storage medium
CN112819953A (en) | Three-dimensional reconstruction method, network model training method and device and electronic equipment
CN111950501B (en) | Obstacle detection method and device and electronic equipment
CN113569812A (en) | Method, device and electronic device for identifying unknown obstacles
CN114919584B (en) | Fixed-point target ranging method and device for motor vehicle and computer readable storage medium
CN111950490B (en) | Parking rod identification method and training method and device of identification model thereof
CN113642521B (en) | Traffic light identification quality evaluation method and device and electronic equipment
CN110309741B (en) | Obstacle detection method and device
CN110880003B (en) | Image matching method and device, storage medium and automobile
CN116543365B (en) | Lane line identification method and device, electronic equipment and storage medium
CN115713750B (en) | Lane line detection method and device, electronic equipment and storage medium
CN109816709B (en) | Monocular camera-based depth estimation method, device and equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
