CN110058263B - A method for object localization during vehicle driving - Google Patents

A method for object localization during vehicle driving

Info

Publication number
CN110058263B
CN110058263B (application CN201910307774.1A; publication CN110058263A)
Authority
CN
China
Prior art keywords
image
vehicle
features
distance information
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910307774.1A
Other languages
Chinese (zh)
Other versions
CN110058263A (en)
Inventor
魏巍
罗炜
陈铭泉
李家辉
马小峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou University
Original Assignee
Guangzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou University
Priority to CN201910307774.1A
Publication of CN110058263A
Application granted
Publication of CN110058263B
Expired - Fee Related
Anticipated expiration


Abstract

Translated from Chinese


Figure 201910307774

An embodiment of the present invention discloses a method for locating objects while a vehicle is driving, comprising a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module. The camera module obtains a two-dimensional image of the road environment while the vehicle is driving; the image data processing module identifies objects in the image; according to each object's image features, it extracts the object's trunk portion and calibrates it, obtaining the two-dimensional image coordinates of the trunk portion; the data coordinate conversion module converts those image coordinates into the position parameters required for laser radar scanning; and according to these position parameters, the laser radar scanning control module controls the laser radar to scan the object and obtain distance information. With the present invention, objects can be located quickly and accurately while the vehicle is driving.


Description

Object positioning method in vehicle driving process
Technical Field
The invention relates to the field of positioning, and in particular to a method for locating objects while a vehicle is driving.
Background
A vehicle needs to acquire the positions of surrounding objects while driving. Image data from a camera contains only two-dimensional information and carries no distance to the photographed object, which limits the camera's use in the driving field. Two-dimensional image processing, however, is a mature technology with short processing times and high accuracy, and can be used to identify objects. The laser radar is a system that detects characteristic quantities of a target, such as position and velocity, by emitting a laser beam: analysing the beams emitted toward and received back from the target yields the relevant information about it. The laser radar is therefore often applied where an accurate position or speed must be acquired, such as automotive driving systems and building surveying.
Existing techniques for locating objects while a vehicle is driving mainly acquire a three-dimensional point cloud of the entire detected area with a laser radar and then post-process the point-cloud data. The data volume is large and the processing time long, which is very unfavourable for a driving system with extremely strict real-time requirements and seriously harms the driving system's reliability.
Disclosure of Invention
To solve these problems, the invention provides a method for locating objects while a vehicle is driving. It combines mature, fast two-dimensional image processing with the laser radar's ability to acquire object positions, compensating for the fact that a two-dimensional image cannot convey distance while avoiding the laser radar's large point-cloud volumes and long processing times, and thereby greatly increasing the speed at which objects are located during driving.
On this basis, the invention provides a method for locating an object while a vehicle is traveling, comprising:
the system comprises a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module;
the camera module acquires a two-dimensional image of the road environment in the running process of the vehicle;
the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image;
the image data processing module extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image;
the data coordinate conversion module converts the two-dimensional image coordinates into the position parameters required for laser radar scanning in the laser radar scanning control module;
and the laser radar scanning control module controls the laser radar to scan the object according to the position parameter to obtain distance information.
The image features of the object include the size, shape, brightness and color of the image. Extracting the trunk portion of the object comprises:
and extracting geometric features of the object, wherein the geometric features comprise triangles, quadrangles, pentagons or combined polygons thereof, and vertexes of the geometric features form positioning points.
Said extracting of the trunk portion of the object further comprises: using the brightness information of the image to extract, as positioning points, points at which the object's reflection of the radar laser source exceeds a preset level.
The laser radar scanning control module's control of the laser radar to scan the object includes:
and the laser radar control module calculates a scanning path according to the position parameters of the positioning points in each frame of the two-dimensional image data and a path planning algorithm.
Wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
The distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, in both cases:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the object's distance.
The area scanned by the laser radar is the same as the area captured by the camera.
Wherein the coordinate parameter conversion includes coordinate parameter conversion performed in a stationary or running state of the vehicle.
The camera module, the image data processing module, the data coordinate conversion module and the laser radar scanning control module operate simultaneously.
The invention exploits mature, fast two-dimensional image processing together with the laser radar's ability to acquire object positions. Combining the camera and the laser radar overcomes both the two-dimensional image's inability to convey distance and the laser radar's large point-cloud volumes and long processing times. With the method, the distances to nearby objects can be obtained quickly during automatic driving, greatly improving the response speed of the automatic driving system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic diagram of an object locating method during a vehicle driving process according to an embodiment of the present invention;
FIG. 2 is a flowchart of an object locating method during a driving process of a vehicle according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the positioning of an object during travel of a vehicle according to an embodiment of the present invention;
Fig. 4 is a schematic diagram illustrating that, in an embodiment of the present invention, the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same manner.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic diagram of an object locating method in a vehicle driving process provided by an embodiment of the invention, where the object locating method in the vehicle driving process includes:
the system comprises a camera module 101, an image data processing module 102, a data coordinate conversion module 103 and a laser radar scanning control module 104;
the camera module 101 acquires a two-dimensional image of the road environment while the vehicle is running;
the image data processing module 102 identifies an object in the driving process of the vehicle according to the two-dimensional image;
the image data processing module 102 extracts a trunk part of the object according to the image characteristics of the object, and calibrates it to obtain the two-dimensional image coordinates of the trunk part of the object on the two-dimensional image;
the data coordinate conversion module 103 converts the two-dimensional image coordinates into the position parameters required by laser radar scanning in the laser radar scanning control module 104;
and the laser radar scanning control module 104 controls the laser radar to scan the object according to the position parameters, so as to obtain distance information.
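The hand-off between these four modules can be sketched as a small pipeline. This is an illustrative stand-in, not the patent's implementation: every name (`LocatedObject`, `camera_capture`, and so on) is invented here, and the stubs return fixed data purely to show how each module's output feeds the next.

```python
from dataclasses import dataclass, field

@dataclass
class LocatedObject:
    # 2D positioning points (u, v) of the object's trunk portion on the image.
    pixel_points: list
    # Per-point distances in metres, filled in by the lidar scan.
    distances_m: list = field(default_factory=list)

def camera_capture():
    # Module 101: return a two-dimensional image of the road (stubbed).
    return "frame"

def identify_and_extract(frame):
    # Module 102: identify objects and extract trunk positioning points (stubbed).
    return [LocatedObject(pixel_points=[(320, 200), (300, 260), (340, 260)])]

def to_scan_parameters(pixel_points):
    # Module 103: convert image coordinates to lidar position parameters.
    return [("scan-angles-for", u, v) for (u, v) in pixel_points]

def lidar_scan(scan_params):
    # Module 104: scan each position parameter, returning one distance per point (stubbed).
    return [5.0 for _ in scan_params]

def locate_objects():
    frame = camera_capture()
    objects = identify_and_extract(frame)
    for obj in objects:
        obj.distances_m = lidar_scan(to_scan_parameters(obj.pixel_points))
    return objects
```

The point of the sketch is the data flow: the lidar is only ever pointed at the handful of positioning points the image pipeline produces, rather than sweeping the whole scene.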
The camera module 101 includes a camera and captures a road environment image from the vehicle's viewpoint; the road environment image is two-dimensional and therefore cannot reflect the distances to objects.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is also a global feature describing the surface properties of the scene corresponding to the image or image region. However, since texture characterizes only an object's surface and does not fully reflect its essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not defined on individual pixels: they require statistical calculation over regions containing many pixels.
There are two types of representation methods for shape features, one is outline features and the other is region features. The outline features of the image are mainly directed to the outer boundary of the object, while the area features of the image are related to the entire shape area.
The spatial relationship refers to the spatial positions or relative directions of the objects segmented from an image; such relationships can be classified as connection/adjacency, overlap/occlusion, inclusion/containment, and so on. In general, spatial position information falls into two categories: relative and absolute. The former emphasizes the relative arrangement of objects, such as above, below, left and right; the latter emphasizes their distances and orientations.
Wherein the image characteristics of the object mainly include: size, shape, brightness, color of the image.
Extracting the trunk portion of the object comprises:
and extracting geometric features of the object, wherein the geometric features comprise triangles, quadrangles, pentagons or combined polygons thereof, and vertexes of the geometric features form positioning points.
Extracting the trunk portion of the object further includes: using the brightness information of the image to extract, as positioning points, points at which the object's reflection of the radar laser source exceeds a preset level.
The control, by the laser radar scanning control module 104, of the laser radar to scan the object includes:
and the laser radar control module calculates a scanning path according to the position parameters of the positioning points in each frame of the two-dimensional image data and a path planning algorithm.
The obtaining distance information includes: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
The distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, in both cases:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the object's distance.
The range of the area scanned by the laser radar is the same as the range captured by the camera.
The coordinate parameter conversion includes conversion performed while the vehicle is stationary or running. The camera module 101, the image data processing module 102, the data coordinate conversion module 103 and the laser radar scanning control module 104 may operate simultaneously.
Fig. 2 is a flowchart of an object locating method during driving of a vehicle according to an embodiment of the present invention, where the method includes:
s201, the camera module acquires a two-dimensional image of a road environment in the running process of a vehicle;
the camera module comprises a camera and is used for shooting a road environment image at a vehicle visual angle, wherein the road environment image is a two-dimensional image which has the defect that distance information between objects cannot be reflected.
The image features of the object include: color features, texture features, shape features, and spatial relationships.
A color feature is a global feature that describes the surface properties of a scene to which an image or image region corresponds.
A texture feature is also a global feature describing the surface properties of the scene corresponding to the image or image region. However, since texture characterizes only an object's surface and does not fully reflect its essential attributes, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not defined on individual pixels: they require statistical calculation over regions containing many pixels.
There are two types of representation methods for shape features, one is outline features and the other is region features. The outline features of the image are mainly directed to the outer boundary of the object, while the area features of the image are related to the entire shape area.
The spatial relationship refers to the spatial positions or relative directions of the objects segmented from an image; such relationships can be classified as connection/adjacency, overlap/occlusion, inclusion/containment, and so on. In general, spatial position information falls into two categories: relative and absolute. The former emphasizes the relative arrangement of objects, such as above, below, left and right; the latter emphasizes their distances and orientations.
Wherein the image characteristics of the object mainly include: size, shape, brightness, color of the image.
S202, the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image;
Objects present in the two-dimensional image, such as trees, pedestrians and other vehicles, can be identified by the image data processing module.
S203, the image data processing module extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image;
and the image data processing module extracts the trunk part of the identified object according to the image characteristics of the object.
Extracting the trunk portion of the object includes:
and extracting geometric features of the object, wherein the geometric features comprise triangles, quadrangles, pentagons or combined polygons thereof, and vertexes of the geometric features form positioning points.
Extracting the trunk portion of the object further includes: using the brightness information of the image to extract, as positioning points, points at which the object's reflection of the radar laser source exceeds a preset level.
For example, if the object is a tree, it may be regarded as a triangle and a quadrangle stacked vertically; the three vertices of the triangle and the four vertices of the quadrangle serve as positioning points, and the quadrangle may also be treated as a straight line represented by several points, as shown in Fig. 3.
Calibrating the positioning points means marking them and acquiring their two-dimensional image coordinates in the two-dimensional image.
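For the tree example, the positioning points can be generated directly from the two primitives. The helper below is our sketch (the function name and the assumption of an axis-aligned rectangular trunk are not from the patent); it returns the three crown vertices followed by the four trunk vertices, in pixel coordinates with v growing downward.

```python
def tree_positioning_points(crown_apex, crown_left, crown_right,
                            trunk_width, trunk_height):
    # Three vertices of the triangular crown.
    points = [crown_apex, crown_left, crown_right]
    # Four vertices of the rectangular trunk, centred under the crown base.
    cx = (crown_left[0] + crown_right[0]) / 2
    top_v = max(crown_left[1], crown_right[1])
    half = trunk_width / 2
    points += [(cx - half, top_v), (cx + half, top_v),
               (cx - half, top_v + trunk_height),
               (cx + half, top_v + trunk_height)]
    return points
```

Seven points per tree, rather than a dense point cloud, are then passed on for coordinate conversion and scanning.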
S204, the data coordinate conversion module performs coordinate conversion on the two-dimensional coordinates of the image and converts the two-dimensional coordinates into position parameters required by laser radar scanning in the laser radar scanning control module;
and the data coordinate conversion module is used for carrying out coordinate conversion on the two-dimensional coordinates of the image, namely converting the two-dimensional coordinates of a plane into three-dimensional coordinates of a space, and the radar in the laser radar scanning control module is used for scanning the object according to the three-dimensional coordinates of the space.
S205, the laser radar scanning control module controls the laser radar to scan the object according to the position parameter, and distance information is obtained.
And the laser radar control module calculates a scanning path according to the position parameters of the positioning points in each frame of the two-dimensional image data and a path planning algorithm.
Wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
The distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same way, in both cases:
acquiring the time difference between emitting the laser and receiving the reflected laser, and taking one half of the product of the speed of light and that time difference as the object's distance.
Fig. 4 illustrates why the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same manner. Referring to Fig. 4:
fig. 1 and 2 are schematic diagrams of a vehicle scanning lidar at a standstill, the vehicle emitting laser light, the vehicle receiving the reflected light via reflection from the object, the distance between the object and the vehicle being mathematically derived as:
and acquiring the time difference between the emitted laser and the received reflected laser, and taking one half of the product of the light speed and the time difference as the distance information of the object and the vehicle.
Fig. 3 and 4 are schematic diagrams of laser radar scanning performed by a vehicle in motion, wherein the vehicle emits laser light at one position, the vehicle receives reflected light via reflection of the object at another position, and the distance between the object and the vehicle is mathematically easy to obtain:
the method comprises the steps of obtaining the time difference between laser emission and reflected laser receiving, obtaining the speed difference between the light speed and the vehicle speed, and taking one half of the product of the difference and the time difference as the distance information between an object and a vehicle, wherein the vehicle speed is far smaller than the light speed compared with the vehicle speed, so the vehicle speed can be ignored, namely the distance between the object and the vehicle is one half of the product between the light speed and the time difference.
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and substitutions can be made without departing from the technical principle of the present invention, and these modifications and substitutions should also be regarded as the protection scope of the present invention.

Claims (6)

1. A method for locating an object while a vehicle is traveling, comprising:
the system comprises a camera module, an image data processing module, a data coordinate conversion module and a laser radar scanning control module;
the camera module acquires a two-dimensional image of the road environment in the running process of the vehicle; the two-dimensional image of the road environment is obtained by shooting from a vehicle view angle;
the image data processing module identifies an object in the driving process of the vehicle according to the two-dimensional image; the image features of the object comprise color features, texture features, shape features and spatial relationships; the color features describe surface properties of a scene corresponding to the image or the image area; the texture features are obtained through statistical calculation in an image region containing a plurality of pixel points; the shape features comprise contour features and region features; the spatial relationship comprises relative spatial position information and absolute spatial position information;
the image data processing module extracts a trunk part of the object according to the image characteristics of the object, and calibrates the trunk part to obtain an image two-dimensional coordinate of the trunk part of the object on a two-dimensional image; the image features of the object include: size, shape, brightness, color of the image; the extracting the stem portion of the object includes: extracting geometric features of the object, wherein the geometric features comprise triangles, quadrangles, pentagons or polygons combined with the triangles, and vertexes of the geometric features form positioning points; the extracting the trunk portion of the object further includes: extracting points of the object with the reflection effect of the radar laser source exceeding the preset effect through the brightness information of the image to serve as positioning points;
the data coordinate conversion module performs coordinate conversion on the two-dimensional image coordinate and converts the two-dimensional image coordinate into a position parameter required by laser radar scanning in the laser radar scanning control module;
the laser radar scanning control module controls a laser radar to scan the object according to the position parameter to obtain distance information; and the laser radar control module calculates a scanning path according to the position parameters of the positioning points in each frame of the two-dimensional image data and a path planning algorithm.
2. The method of claim 1, wherein the obtaining distance information comprises: distance information obtained when the vehicle is stationary and distance information obtained when the vehicle is moving.
3. The method of claim 2, wherein the distance information obtained when the vehicle is stationary and the distance information obtained when the vehicle is moving are calculated in the same manner, comprising:
and acquiring the time difference between the emitted laser and the received reflected laser, and taking one half of the product of the light speed and the time difference as the distance information of the object.
4. The method according to claim 1, wherein the laser radar scans the same area as the camera.
5. The method for positioning an object during running of a vehicle according to claim 1, wherein the coordinate parameter conversion includes coordinate parameter conversion performed in a state where the vehicle is stationary or running.
6. The method for locating an object during traveling of a vehicle according to claim 1, wherein the camera module, the image data processing module, the data coordinate conversion module and the lidar scanning control module are performed simultaneously.
CN201910307774.1A | 2019-04-16 | 2019-04-16 | A method for object localization during vehicle driving | Expired - Fee Related | CN110058263B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910307774.1A | 2019-04-16 | 2019-04-16 | A method for object localization during vehicle driving (CN110058263B)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910307774.1A | 2019-04-16 | 2019-04-16 | A method for object localization during vehicle driving (CN110058263B)

Publications (2)

Publication Number | Publication Date
CN110058263A (en) | 2019-07-26
CN110058263B (en) | 2021-08-13

Family

ID=67319166

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910307774.1A | A method for object localization during vehicle driving (Expired - Fee Related, CN110058263B) | 2019-04-16 | 2019-04-16

Country Status (1)

Country | Link
CN | CN110058263B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP7147729B2 (en)* | 2019-10-28 | 2022-10-05 | 株式会社デンソー | Movement amount estimation device, movement amount estimation method, movement amount estimation program, and movement amount estimation system
CN113340313B (en)* | 2020-02-18 | 2024-04-16 | 北京四维图新科技股份有限公司 | Method and device for determining navigation map parameters
CN114841848A (en)* | 2022-04-19 | 2022-08-02 | 珠海欧比特宇航科技股份有限公司 | High bandwidth signal processing system, apparatus, method and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN1628237A (en)* | 2002-09-30 | 2005-06-15 | 石川岛播磨重工业株式会社 | Object measuring method and object measuring device
CN103196418A (en)* | 2013-03-06 | 2013-07-10 | 山东理工大学 | Measuring method of vehicle distance at curves
CN105629261A (en)* | 2016-01-29 | 2016-06-01 | 大连楼兰科技股份有限公司 | Non-scanning automotive collision avoidance lidar system based on structured light and its working method
CN106597469A (en)* | 2016-12-20 | 2017-04-26 | 王鹏 | Actively imaging laser camera and imaging method thereof
CN106871799A (en)* | 2017-04-10 | 2017-06-20 | 淮阴工学院 | A kind of full-automatic crops plant height measuring method and device
CN107622499A (en)* | 2017-08-24 | 2018-01-23 | 中国东方电气集团有限公司 | A kind of identification and space-location method based on target two-dimensional silhouette model

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US7028899B2 (en)* | 1999-06-07 | 2006-04-18 | Metrologic Instruments, Inc. | Method of speckle-noise pattern reduction and apparatus therefore based on reducing the temporal-coherence of the planar laser illumination beam before it illuminates the target object by applying temporal phase modulation techniques during the transmission of the plib towards the target
CN101388077A (en)* | 2007-09-11 | 2009-03-18 | 松下电器产业株式会社 | Target shape detection method and device
CN104715264A (en)* | 2015-04-10 | 2015-06-17 | 武汉理工大学 | Method and system for recognizing video images of motion states of vehicles in expressway tunnel
CN108132025B (en)* | 2017-12-24 | 2020-04-14 | 上海捷崇科技有限公司 | Vehicle three-dimensional contour scanning construction method
CN108876719B (en)* | 2018-03-29 | 2022-07-26 | 广州大学 | External parameter estimation method for vehicle panoramic image stitching based on virtual camera model


Also Published As

Publication number | Publication date
CN110058263A (en) | 2019-07-26

Similar Documents

Publication | Publication Date | Title
US11719788B2 (en)Signal processing apparatus, signal processing method, and program
US10024965B2 (en)Generating 3-dimensional maps of a scene using passive and active measurements
WO2021223368A1 (en)Target detection method based on vision, laser radar, and millimeter-wave radar
CN113984081B (en)Positioning method, positioning device, self-mobile equipment and storage medium
TW202019745A (en)Systems and methods for positioning vehicles under poor lighting conditions
CN114155265B (en) 3D LiDAR Road Point Cloud Segmentation Method Based on YOLACT
CN114080625A (en)Absolute pose determination method, electronic equipment and movable platform
CN110853037A (en) A lightweight color point cloud segmentation method based on spherical projection
CN110058263B (en) A method for object localization during vehicle driving
CN108983248A (en)It is a kind of that vehicle localization method is joined based on the net of 3D laser radar and V2X
WO2021207954A1 (en)Target identification method and device
CN115151954B (en) Method and device for detecting drivable area
US10444398B2 (en)Method of processing 3D sensor data to provide terrain segmentation
CN112013858A (en)Positioning method, positioning device, self-moving equipment and storage medium
CN103424112A (en)Vision navigating method for movement carrier based on laser plane assistance
KR102484298B1 (en)An inspection robot of pipe and operating method of the same
CN109946703A (en) A sensor attitude adjustment method and device
CN114494075A (en)Obstacle identification method based on three-dimensional point cloud, electronic device and storage medium
JP2019191991A (en)Object information estimating apparatus estimating direction of object using point group, program therefor, and method thereof
US12299926B2 (en)Tracking with reference to a world coordinate system
CN114596358A (en)Object detection method and device and electronic equipment
TWI792108B (en)Inland river lidar navigation system for vessels and operation method thereof
US10643348B2 (en)Information processing apparatus, moving object, information processing method, and computer program product
CN113052916A (en)Laser radar and camera combined calibration method using specially-made calibration object
CN117409393A (en)Method and system for detecting laser point cloud and visual fusion obstacle of coke oven locomotive

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2021-08-13

