
Motion detection method, device and equipment based on event camera and lidar

Info

Publication number
CN115588042B
Authority
CN
China
Prior art keywords
point
event camera
laser radar
dimensional
dimensional projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211133686.2A
Other languages
Chinese (zh)
Other versions
CN115588042A (en)
Inventor
张涛
王宇恒
董岩
朱俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN202211133686.2A
Publication of CN115588042A
Application granted
Publication of CN115588042B
Active (current legal status)
Anticipated expiration

Abstract


The present invention provides a motion detection method, device and equipment based on an event camera and a laser radar. The method fuses the advantages of the event camera and the laser radar: it can both obtain the spatial position of an object and determine whether the object is moving, so that each point is judged as moving or stationary while the point cloud spatial information is output. Compared with other motion detection algorithms that use only a laser radar or an RGB camera, the present invention needs only one scanning cycle of the laser radar, for example one frame of data, to complete motion state detection. In addition, the present invention can use point cloud data containing motion information to simplify many tasks. The motion detection method of the present invention can filter out all moving point clouds and thereby simplify the mapping process. In some target detection tasks, all point clouds can first be screened by whether they are moving, which shortens the recognition time and simplifies the working process.

Description

Motion detection method, device and equipment based on event camera and laser radar
Technical Field
The invention relates to the technical fields of sensor fusion, computer vision and motion detection algorithms, in particular to a motion detection method, device and equipment based on an event camera and a laser radar.
Background
With the development of science and technology, the laser radar has become more and more common in daily life: it can improve the focusing precision of a smart phone when photographing or filming, help automobiles realize different levels of automatic driving, enable a floor-sweeping robot to position itself and sweep a designated area, and improve mapping precision so that a clearer map can be constructed. The principle of the laser radar is that a laser beam is emitted toward a target, the signal reflected from the target is compared with the emitted signal, and after correlation processing a point cloud containing the spatial position information of the object is obtained. However, the laser radar cannot obtain dynamic information of the object in one scan, i.e., it cannot determine whether the object is moving. In many application scenarios, for example simultaneous localization and mapping (SLAM), if it is known whether each point of the point cloud is moving, the precision and speed of mapping and positioning can be greatly improved.
An event camera is a novel sensor invented by researchers inspired by the principle of the human retina; it records only objects that are moving or whose surface illumination is changing. If an object remains stationary and its surface illumination does not change, it will not be recorded by the event camera. In addition, the output of the event camera is not a frame of picture but a point cloud similar to that of a laser radar. In contrast to the laser radar, although the event camera can obtain dynamic information of an object, it does not include the spatial depth information of the object; only the position of a point on the imaging plane can be obtained.
Currently, some technologies already use an event camera for dynamic monitoring. For example, patent ZL202110811885.3 processes signals collected by the event camera through a neural network to determine whether an object is moving. Patent ZL202011088240.3 obtains the speed of each event camera data point through event stream integration and filtering so as to judge whether motion exists. In terms of lidar and event camera fusion, patent ZL202111502007.X increases the density of the lidar point cloud by means of the event camera.
However, the prior art based on dynamic identification by event cameras has an inherent disadvantage: because it is not fused with the laser radar, the spatial information of the scanned points cannot be obtained even though it can be judged whether the object is moving. The dynamic identification technology based on the laser radar alone has the defect that at least two frames of laser radar data are needed to obtain a result. Both the computation of neural networks and the integration of event streams require significant computational resources and training time, resulting in difficulty in real-time identification. The existing fusion algorithm of the laser radar and the event camera can improve the density of the laser radar point cloud, but cannot judge the dynamic information of the object.
Disclosure of Invention
In view of the above problems, the present invention provides a motion detection method, apparatus and device based on an event camera and a lidar, which determine whether each point moves while outputting point cloud space information.
In order to achieve the above purpose, a first aspect of the invention provides a motion detection method based on an event camera and a laser radar, which comprises: step S1, configuring and calibrating the event camera and the laser radar; step S2, obtaining a moving target to be detected, scanning the moving target, and collecting a plurality of two-dimensional imaging points of the event camera and a plurality of three-dimensional point clouds of the laser radar in one scanning period; step S3, projecting the plurality of three-dimensional point clouds onto the plane where the event camera is located to obtain a plurality of two-dimensional projection points of the laser radar; step S4, extracting distance data from the plurality of two-dimensional projection points, and clustering the distance data according to the farthest effective distance zmax of the laser radar and a preset distance separation threshold m to obtain zmax/m distance range categories; step S5, for each two-dimensional projection point in each distance range category, traversing and calculating the Euler distances between that two-dimensional projection point and all two-dimensional imaging points of the event camera and all two-dimensional projection points of the laser radar in the same distance range category, and accumulating the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points whose Euler distances are smaller than a preset distance; step S6, comparing the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points with a preset first point number threshold and a preset second point number threshold respectively, obtaining a motion state detection result for the two-dimensional projection point, and generating a corresponding motion state identifier; and step S7, repeating steps S5 to S6 until motion state identifiers have been generated for all two-dimensional projection points of the laser radar, and combining and outputting the three-dimensional point cloud of the laser radar and the corresponding motion state identifiers.
A second aspect of the invention provides a motion detection device based on an event camera and a laser radar, which comprises: a configuration calibration module for configuring and calibrating the event camera and the laser radar; a data acquisition module for obtaining a moving target to be detected, scanning the moving target, and acquiring a plurality of two-dimensional imaging points of the event camera and a plurality of three-dimensional point clouds of the laser radar in one scanning period; a data projection module for projecting the plurality of three-dimensional point clouds onto the plane where the event camera is located to obtain a plurality of two-dimensional projection points of the laser radar; a distance clustering module for extracting distance data from the plurality of two-dimensional projection points and clustering the distance data according to the farthest effective distance zmax of the laser radar and a preset distance separation threshold m to obtain zmax/m distance range categories; a point accumulation module for traversing and calculating, for each two-dimensional projection point in each distance range category, the Euler distances between that two-dimensional projection point and all two-dimensional imaging points of the event camera and all two-dimensional projection points of the laser radar in the same distance range category, and accumulating the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points whose Euler distances are smaller than a preset distance; a point comparison module for comparing the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points with a preset first point number threshold and a preset second point number threshold respectively, obtaining a motion state detection result for the two-dimensional projection point and generating a corresponding motion state identifier; and a result output module for repeating the operations from the point accumulation module to the point comparison module until motion state identifiers have been generated for all two-dimensional projection points of the laser radar, and combining and outputting the three-dimensional point cloud of the laser radar and the corresponding motion state identifiers.
A third aspect of the invention provides an electronic device comprising one or more processors and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method described above.
Compared with the prior art, the motion detection method, the device and the equipment based on the event camera and the laser radar have the following beneficial effects:
(1) The method combines the advantages of the event camera and the laser radar, not only can acquire the space position of the object, but also can judge whether the object moves, thereby realizing the judgment of whether each point moves while outputting the space information of the point cloud;
(2) Compared with other motion detection algorithms which only use a laser radar or an RGB camera, the invention can complete the motion state detection only by data of one scanning period (for example, one frame) of the laser radar;
(3) The invention can also simplify the implementation of many tasks using point cloud data containing motion information. For example, in the map mapping process, moving vehicles and pedestrians can interfere with mapping, and the motion detection method can filter all moving point clouds, so that the map building process is simplified. In addition, in some target detection tasks, all point clouds can be screened for motion at first. For example, if a tree or a guideboard is to be identified, the tree or the guideboard is identified only in the static point cloud, so that the identification time can be shortened, and the working process can be simplified.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
Fig. 1 (a), 1 (b) and 1 (c) schematically show front, top and side views, respectively, of an installation of an event camera and a lidar according to an embodiment of the present invention;
FIG. 2 schematically illustrates a flow chart of a method of event camera and lidar based motion detection according to an embodiment of the present invention;
FIG. 3 schematically illustrates a horizontal perspective geometry diagram of an event camera and lidar according to an embodiment of the invention;
FIG. 4 schematically illustrates a distribution diagram of a plurality of two-dimensional imaging points of an event camera and a plurality of two-dimensional projection points of a lidar according to an embodiment of the invention;
FIG. 5 schematically illustrates an event camera and lidar sampling timing diagram according to an embodiment of the invention;
FIG. 6 schematically illustrates a block diagram of an event camera and lidar based motion detection device according to an embodiment of the present invention;
fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement the motion detection method according to an embodiment of the disclosure.
[ Reference numerals description ]
1 - laser radar; 2 - event camera; 21 - lens center of the event camera; 3 - protective shell and other supporting circuits.
Detailed Description
The present invention will be further described in detail below with reference to specific embodiments and with reference to the accompanying drawings, in order to make the objects, technical solutions and advantages of the present invention more apparent. It will be apparent that the described embodiments are some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Fig. 1 (a), 1 (b) and 1 (c) schematically show a front view, a top view and a side view, respectively, of an installation manner of an event camera and a lidar according to an embodiment of the present invention.
Before proceeding with the following steps, the event camera 2 and the laser radar 1 are installed as follows: the event camera 2 and its supporting circuit are fixed inside the protective shell 3, the laser radar 1 is fixed on the protective shell 3, and the central axis of the laser radar 1 coincides with the lens center 21 of the event camera.
Specifically, viewed from the front and from above, the central axis of the laser radar 1 and the lens center 21 of the event camera should coincide, as shown by the broken lines in fig. 1 (a) and 1 (b). As shown in fig. 1 (c), viewed from the side, the laser radar 1 cannot be mounted too far back, i.e., the laser radar 1 should be as close to the z-axis origin as possible so that the protective shell 3 does not block the beam scanned downward by the laser radar 1. The distance from the center of the laser radar 1 to the event camera 2 is denoted as L.
Next, fig. 2 schematically shows a flowchart of a motion detection method based on the event camera and the lidar according to an embodiment of the present invention, after the event camera 2 and the lidar 1 are mounted.
As shown in fig. 2, the event camera and lidar-based motion detection method according to this embodiment may include operations S1 to S7.
And S1, configuring and calibrating the event camera and the laser radar.
In the embodiment of the invention, the configuration and calibration specifically comprise: controlling the horizontal view angle of the laser radar 1 to be smaller than the horizontal view angle of the event camera 2; controlling the vertical view angle of the laser radar 1 to be 60%-80% of the vertical view angle of the event camera 2, with the scan lines uniformly distributed; and ensuring that the parallax between the event camera 2 and the laser radar 1 when detecting objects at different distances is smaller than a preset parallax threshold.
Referring to fig. 3, the event camera 2 and the laser radar 1 are shown from a top view. The horizontal view angle of the laser radar 1 is shown by a dotted line and denoted θLh, and the horizontal view angle of the event camera 2 is shown by a solid line and denoted θEh. In the vertical direction, the vertical view angle of the laser radar 1 is generally not adjustable, so a device whose vertical view angle is 60%-80% of the vertical view angle of the event camera 2, with uniformly distributed scan lines, should be selected when the equipment is purchased. The vertical view angle of the laser radar 1 is denoted θLv, and the vertical view angle of the event camera 2 is denoted θEv. In the calibration process, by setting the parallax threshold and applying the related formula, the parallax between the event camera 2 and the laser radar 1 when detecting objects at different distances is kept as small as possible. The parallax threshold may be set according to the needs of the practical application and is not specifically limited by the present invention.
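As an illustration of these configuration constraints, the following is a minimal sketch in Python; the small-angle parallax estimate atan(L/z) and all function and parameter names are assumptions introduced for illustration and are not specified by the patent.

```python
import math

def check_configuration(theta_Lh, theta_Eh, theta_Lv, theta_Ev,
                        baseline_L, z_min, parallax_threshold_deg):
    """Rough feasibility check for the event camera / laser radar setup.

    theta_*: horizontal and vertical view angles in degrees (L = lidar, E = event camera).
    baseline_L: distance from the lidar center to the event camera (same unit as z_min).
    z_min: closest object distance that must still satisfy the parallax constraint.
    parallax_threshold_deg: maximum tolerated parallax angle in degrees (assumed form).
    """
    horizontal_ok = theta_Lh < theta_Eh
    vertical_ok = 0.6 * theta_Ev <= theta_Lv <= 0.8 * theta_Ev
    # Parallax is largest for the nearest object; estimated here as atan(L / z).
    parallax_deg = math.degrees(math.atan2(baseline_L, z_min))
    parallax_ok = parallax_deg < parallax_threshold_deg
    return horizontal_ok and vertical_ok and parallax_ok

# Example: a 120° x 25° lidar paired with a 170° x 35° event camera,
# 5 cm baseline, objects no closer than 1 m, 3° parallax budget.
print(check_configuration(120, 170, 25, 35, baseline_L=0.05, z_min=1.0,
                          parallax_threshold_deg=3.0))
```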
Step S2, a moving target to be detected is obtained, the moving target is scanned, and a plurality of two-dimensional imaging points of an event camera and a plurality of three-dimensional point clouds of a laser radar are acquired in a scanning period.
And S3, respectively projecting the plurality of three-dimensional point clouds to a plane where the event camera is located, and obtaining a plurality of two-dimensional projection points of the laser radar.
In the embodiment of the invention, in one scanning period, a plurality of two-dimensional projection points of the laser radar are obtained according to the following formula:
x′_li = L·x_li / z_li
y′_li = L·y_li / z_li
where (x′_li, y′_li) are the coordinates of the two-dimensional projection points of the laser radar, (x_li, y_li, z_li) are the coordinates of the three-dimensional point cloud of the laser radar, and L is the distance from the center of the laser radar to the event camera.
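A minimal sketch of this projection in Python, assuming the three-dimensional point cloud is given as an N×3 numpy array in the camera-centered frame described above and that z is the forward distance (the function and array names are illustrative, not from the patent):

```python
import numpy as np

def project_to_event_plane(points_xyz, L):
    """Project lidar points (x, y, z) onto the event-camera plane at distance L.

    points_xyz: (N, 3) array of lidar coordinates, with z the forward distance.
    L: distance from the center of the laser radar to the event camera.
    Returns an (N, 2) array of projected coordinates (x', y') and the original z
    values, which are kept for the later distance clustering step.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    x_proj = L * x / z
    y_proj = L * y / z
    return np.stack([x_proj, y_proj], axis=1), z
```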
Referring to fig. 4, the black solid boxes represent the plurality of two-dimensional imaging points acquired by the event camera, and the hollow circles represent the plurality of two-dimensional projection points of the laser radar. Since both the horizontal and vertical view angles of the laser radar are smaller than those of the event camera after configuration and calibration, the projection range of the laser radar does not exceed the imaging plane of the event camera. After projection, the distance data of the laser radar (i.e., the z-axis data) is not shown in the figure, but it is still saved for use in subsequent steps.
When a moving object is scanned, the event camera samples far more frequently than the laser radar. Referring to fig. 5, the black squares represent the timestamps of the event camera samples, and the open circles represent the timestamps of the laser radar samples. After one frame of laser radar data is acquired, the coordinates of each event camera point in its imaging plane begin to be recorded, denoted p_ei(x_ei, y_ei). Before the next frame of laser radar data arrives, calculations can be performed on the projected laser radar points p'_li(x'_li, y'_li) and the event camera points p_ei(x_ei, y_ei). Here, the plurality of projection points of the laser radar and the plurality of two-dimensional imaging points of the event camera satisfy the following relation:
p_ei(x_ei, y_ei) ∈ A, p'_li(x'_li, y'_li) ∈ B, and B ⊆ A
where p_ei(x_ei, y_ei) are the two-dimensional coordinates of the imaging points of the event camera, p'_li(x'_li, y'_li) are the two-dimensional projection points of the laser radar, set A represents the plane range imaged by the event camera, and set B represents the plane range covered by the laser radar after projection. It will be appreciated that the inclusion of set B in set A is ensured by the view angles of the laser radar being smaller than those of the event camera.
And S4, extracting distance data in a plurality of two-dimensional projection points, and clustering the distance data according to the farthest effective distance zmax of the laser radar and a preset distance separation threshold m to obtain zmax/m distance range categories.
For each two-dimensional projection point of the laser radar, clustering is performed once according to its distance data (i.e., z-axis data). The farthest effective distance zmax is determined by the laser radar itself. After clustering, all two-dimensional projection points of the laser radar are divided into zmax/m distance range categories, and each category used in the subsequent steps contains at least one two-dimensional projection point.
For example, if the furthest effective distance zmax is 200 and the distance separation threshold m is set to 10, then all two-dimensional projection points can be divided into 20 distance range categories, the first range category containing all two-dimensional projection points at distances 0-10, the second range category containing all two-dimensional projection points at distances 10-20, and so on.
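In code, this clustering reduces to binning the stored z values; the following is a sketch under the same array conventions as the projection example (the handling of points at exactly zmax is an assumption for illustration):

```python
import numpy as np

def cluster_by_distance(z, z_max, m):
    """Assign each projected lidar point to one of z_max / m distance range categories.

    z: (N,) array of lidar distances kept from the projection step.
    z_max: farthest effective distance of the laser radar.
    m: distance separation threshold, i.e. the width of each range category.
    Returns an (N,) integer array of category indices in [0, z_max // m - 1].
    """
    n_bins = int(z_max // m)
    # Points at exactly z_max are placed in the last category.
    return np.clip((z // m).astype(int), 0, n_bins - 1)

# With z_max = 200 and m = 10 this yields 20 categories:
# distances 0-10 map to category 0, 10-20 to category 1, and so on.
```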
Step S5, for each two-dimensional projection point in each distance range category, traverse and calculate the Euler distances between that two-dimensional projection point and all two-dimensional imaging points of the event camera and all two-dimensional projection points of the laser radar in the same distance range category, and accumulate the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points whose Euler distances are smaller than a preset distance.
Specifically, for each two-dimensional projection point in each distance range category, that two-dimensional projection point is taken in turn as the reference point, and the Euler distance between the reference point and each of the following points is calculated: every two-dimensional imaging point of the event camera on the projection plane, and every other two-dimensional projection point of the laser radar in the same distance range category.
Then, each Euler distance is compared with the preset distance. When the Euler distance is smaller than the preset distance, the qualifying point is attributed to the event camera or the laser radar according to its source, and the corresponding count is incremented: the target point number of two-dimensional imaging points of the event camera, or the target point number of two-dimensional projection points of the laser radar.
The Euler distance is calculated according to the following formula:
d(i, j) = √((x_i − x_j)² + (y_i − y_j)²)
where (x_i, y_i) are the coordinates of the currently selected i-th two-dimensional projection point, (x_j, y_j) are the coordinates of any two-dimensional imaging point of the event camera or of any two-dimensional projection point in the same distance range category as the i-th two-dimensional projection point, and d(i, j) is the Euler distance.
Specifically, for a two-dimensional projection point of the laser radar in the 50-60 range category, that point is taken as the reference point, and the Euler distances between it and all two-dimensional imaging points of the event camera currently on the projection plane, as well as all other two-dimensional projection points of the laser radar in the 50-60 range category, are calculated. If a calculated Euler distance is smaller than the preset distance (e.g., 1), the corresponding point is counted; the resulting numbers of qualifying event camera points and laser radar points are recorded as N1 and N2, respectively.
Based on the Euler distance formula above, the index i eventually traverses every two-dimensional projection point in the range category. That is, each time an i is selected, j runs over all candidate points, including all imaging points of the event camera and all projection points of the laser radar in the same distance range. The process is then repeated for the next point i until every two-dimensional projection point of the laser radar has served as point i.
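The traversal of step S5 can be sketched as the brute-force double loop below, which mirrors the description above and is written for clarity rather than speed (a k-d tree or a vectorized distance matrix would normally be used); excluding the reference point from its own count, and all names, are assumptions for illustration:

```python
import numpy as np

def accumulate_neighbor_counts(proj_pts, categories, event_pts, d_thresh):
    """For each lidar projection point, count nearby event-camera imaging points (N1)
    and nearby lidar projection points in the same distance range category (N2).

    proj_pts: (N, 2) projected lidar points; categories: (N,) their distance bins;
    event_pts: (M, 2) event-camera imaging points; d_thresh: the preset Euler distance.
    Returns two (N,) integer arrays N1 and N2.
    """
    n = proj_pts.shape[0]
    N1 = np.zeros(n, dtype=int)
    N2 = np.zeros(n, dtype=int)
    for i in range(n):
        # Distances from the reference point to all event-camera imaging points.
        d_event = np.linalg.norm(event_pts - proj_pts[i], axis=1)
        N1[i] = int(np.sum(d_event < d_thresh))
        # Distances to the other lidar projection points in the same category.
        same_cat = categories == categories[i]
        same_cat[i] = False  # assumed: the reference point does not count itself
        d_lidar = np.linalg.norm(proj_pts[same_cat] - proj_pts[i], axis=1)
        N2[i] = int(np.sum(d_lidar < d_thresh))
    return N1, N2
```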
And S6, comparing the target point number of the two-dimensional imaging point and the target point number of the two-dimensional projection point with a preset first point number threshold value and a preset second point number threshold value respectively to obtain a motion state detection result of the two-dimensional projection point, and generating a corresponding motion state identifier.
In the embodiment of the invention, the first point number threshold and the second point number threshold may be set differently according to the usage scenario or environment.
In the embodiment of the invention, the motion state detection result is either moving or stationary. On this basis, when the target point number of two-dimensional imaging points is larger than the first point number threshold and the target point number of two-dimensional projection points is larger than the second point number threshold, the motion state detection result of the two-dimensional projection point is moving; otherwise, it is stationary.
Then, a corresponding motion state identifier is generated according to the motion state detection result of the two-dimensional projection point; for example, "0" may represent stationary and "1" may represent moving.
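A sketch of this decision, with the two thresholds left as parameters to be tuned per scenario (the names are illustrative):

```python
import numpy as np

def motion_flags(N1, N2, thresh1, thresh2):
    """Motion state identifier per lidar projection point: 1 (moving) when both
    neighbor counts exceed their thresholds, otherwise 0 (stationary)."""
    return ((N1 > thresh1) & (N2 > thresh2)).astype(int)
```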
And S7, repeating the steps S5-S6 until the motion state identifiers of all the two-dimensional projection points of the laser radar are generated, and combining and outputting the three-dimensional point cloud of the laser radar and the corresponding motion state identifiers.
In the embodiment of the invention, the detection result of the moving object is output according to the following data format:
data(x,y,z,p)
where (x, y, z) are the coordinates of a point of the three-dimensional point cloud of the laser radar and p is the motion state identifier, which takes two values: "0" for stationary and "1" for moving.
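Putting the pieces together, the data(x, y, z, p) output can be assembled as below, assuming the sketch functions from the previous examples are defined in the same module (again an illustrative composition under the stated assumptions, not the patent's implementation):

```python
import numpy as np

def detect_motion(points_xyz, event_pts, L, z_max, m, d_thresh, thresh1, thresh2):
    """End-to-end sketch: returns an (N, 4) array whose rows are (x, y, z, p),
    where p = 1 marks a moving point and p = 0 a stationary point."""
    proj_pts, z = project_to_event_plane(points_xyz, L)
    categories = cluster_by_distance(z, z_max, m)
    N1, N2 = accumulate_neighbor_counts(proj_pts, categories, event_pts, d_thresh)
    p = motion_flags(N1, N2, thresh1, thresh2)
    return np.column_stack([points_xyz, p])
```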
Through the above embodiment, the advantages of the event camera and the laser radar are combined: the spatial position of the object can be obtained and it can be judged whether the object is moving, so that each point is judged as moving or stationary while the point cloud spatial information is output.
Compared with other motion detection algorithms which only use a laser radar or an RGB camera, the invention can complete the motion state detection by only needing data of one scanning period (for example, one frame) of the laser radar.
In addition, the invention can also use the point cloud data containing motion information to simplify the realization of a plurality of tasks. For example, in the map mapping process, moving vehicles and pedestrians can interfere with mapping, and the motion detection method can filter all moving point clouds, so that the map building process is simplified. In addition, in some target detection tasks, all point clouds can be screened for motion at first. For example, if a tree or a guideboard is to be identified, the tree or the guideboard is identified only in the static point cloud, so that the identification time can be shortened, and the working process can be simplified.
Based on the motion detection method based on the event camera and the laser radar, the invention further provides a motion detection device based on the event camera and the laser radar, and the device is described in detail below with reference to fig. 6.
Fig. 6 schematically shows a block diagram of an event camera and lidar based motion detection device according to an embodiment of the invention.
As shown in fig. 6, the event camera and lidar based motion detection apparatus 600 according to this embodiment includes a configuration calibration module 610, a data acquisition module 620, a data projection module 630, a distance clustering module 640, a point accumulation module 650, a point comparison module 660, and a result output module 670.
A configuration calibration module 610, configured to configure and calibrate the event camera and the lidar;
the data acquisition module 620 is configured to acquire a moving target to be detected, scan the moving target, and acquire a plurality of two-dimensional imaging points of the event camera and a plurality of three-dimensional point clouds of the laser radar in a scanning period;
the data projection module 630 is configured to project the plurality of three-dimensional point clouds onto a plane where the event camera is located, so as to obtain a plurality of two-dimensional projection points of the laser radar;
The distance clustering module 640 is configured to extract distance data in the plurality of two-dimensional projection points, and cluster the distance data according to a farthest effective distance zmax of the laser radar and a preset distance separation threshold m to obtain zmax/m distance range categories;
the point accumulation module 650 is configured to, for each two-dimensional projection point in each distance range category, traverse and calculate the Euler distances between that two-dimensional projection point and all two-dimensional imaging points of the event camera and all two-dimensional projection points of the laser radar in the same distance range category, and accumulate the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points when the Euler distances are smaller than a preset distance;
the point comparison module 660 is configured to compare the target point number of two-dimensional imaging points and the target point number of two-dimensional projection points with a preset first point number threshold and a preset second point number threshold respectively, obtain a motion state detection result of the two-dimensional projection point, and generate a corresponding motion state identifier;
And the result output module 670 is configured to repeat the steps from the point accumulation module to the point comparison module until all the motion state identifiers of the two-dimensional projection points of the laser radar are generated, and combine and output the three-dimensional point cloud of the laser radar and the corresponding motion state identifiers.
It should be noted that, the embodiment mode of the apparatus portion is similar to the embodiment mode of the method portion, and the achieved technical effects are also similar, and specific details refer to the embodiment mode portion of the method and are not repeated herein.
Any of the configuration calibration module 610, the data acquisition module 620, the data projection module 630, the distance clustering module 640, the point accumulation module 650, the point comparison module 660, and the result output module 670 may be combined into one module to be implemented, or any one of the modules may be split into a plurality of modules, according to an embodiment of the present invention. Or at least some of the functionality of one or more of these modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the invention, at least one of the configuration calibration module 610, the data acquisition module 620, the data projection module 630, the distance clustering module 640, the point accumulation module 650, the point comparison module 660, and the result output module 670 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system-on-chip, a system-on-substrate, a system-in-package, or an Application Specific Integrated Circuit (ASIC), or as hardware or firmware in any other reasonable manner of integrating or packaging the circuit, or as any one of, or a suitable combination of, the three implementations of software, hardware, and firmware. Or at least one of the configuration calibration module 610, the data acquisition module 620, the data projection module 630, the distance clustering module 640, the point accumulation module 650, the point comparison module 660, and the result output module 670 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
Fig. 7 schematically illustrates a block diagram of an electronic device adapted to implement the motion detection method according to an embodiment of the disclosure.
As shown in fig. 7, an electronic device 700 according to an embodiment of the present disclosure includes a processor 701 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage section 708 into a Random Access Memory (RAM) 703. The processor 701 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may comprise a single processing unit or a plurality of processing units for performing different actions of the method flows according to embodiments of the disclosure.
In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are stored. The processor 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. The processor 701 performs various operations of the method flow according to the embodiments of the present disclosure by executing programs in the ROM 702 and/or the RAM 703. Note that the program may be stored in one or more memories other than the ROM 702 and the RAM 703. The processor 701 may also perform various operations of the method flow according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 700 may further include an input/output (I/O) interface 705, the input/output (I/O) interface 705 also being connected to the bus 704. The electronic device 700 may also include one or more of an input portion 706 including a keyboard, mouse, etc., an output portion 707 including a Cathode Ray Tube (CRT), liquid Crystal Display (LCD), etc., and speaker, etc., a storage portion 708 including a hard disk, etc., and a communication portion 709 including a network interface card such as a LAN card, modem, etc., connected to the I/O interface 705. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present invention, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise. Furthermore, the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements.
While the foregoing is directed to embodiments of the present invention, it should be understood that the description is merely illustrative and is not intended to limit the scope of the invention; any modifications, equivalent substitutions and improvements made within the spirit and principles of the invention shall fall within its protection scope.

Claims (10)

CN202211133686.2A (priority 2022-09-15, filed 2022-09-15): Motion detection method, device and equipment based on event camera and lidar. Active. CN115588042B (en).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211133686.2A (CN115588042B (en)) | 2022-09-15 | 2022-09-15 | Motion detection method, device and equipment based on event camera and lidar

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211133686.2A (CN115588042B (en)) | 2022-09-15 | 2022-09-15 | Motion detection method, device and equipment based on event camera and lidar

Publications (2)

Publication Number | Publication Date
CN115588042A (en) | 2023-01-10
CN115588042B (en) | 2025-09-23

Family

ID=84778174

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202211133686.2A (Active, CN115588042B (en)) | Motion detection method, device and equipment based on event camera and lidar | 2022-09-15 | 2022-09-15

Country Status (1)

Country | Link
CN (1) | CN115588042B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN119006463B (en)* | 2024-10-23 | 2025-01-21 | 江苏康缘药业股份有限公司 | Key point detection method and device based on light spot

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109146929A (en)* | 2018-07-05 | 2019-01-04 | 中山大学 | A kind of object identification and method for registering based under event triggering camera and three-dimensional laser radar emerging system
CN112346073A (en)* | 2020-09-25 | 2021-02-09 | 中山大学 | Dynamic vision sensor and laser radar data fusion method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11450103B2 (en)* | 2020-10-05 | 2022-09-20 | Crazing Lab, Inc. | Vision based light detection and ranging system using dynamic vision sensor
WO2022135594A1 (en)* | 2020-12-25 | 2022-06-30 | 北京灵汐科技有限公司 | Method and apparatus for detecting target object, fusion processing unit, and medium
CN114359744B (en)* | 2021-12-07 | 2025-05-30 | 中山大学 | A depth estimation method based on LiDAR and event camera fusion


Also Published As

Publication number | Publication date
CN115588042A (en) | 2023-01-10


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
