Disclosure of Invention
The present disclosure provides a method of spatiotemporal alignment of a point cloud with image pixels, the method comprising:
collecting a time stamp of a point cloud obtained by scanning the surrounding environment by a laser radar;
calling a bottom software interface for controlling the camera to shoot an image, so as to obtain, during the process from the camera shutter being triggered to start acquiring the image until the image is finally generated, the camera shutter trigger time and the actual exposure time;
according to the time stamp of the point cloud, the trigger time of the camera shutter and the exposure time, carrying out interpolation calculation on the point cloud and the image pixels to estimate the point cloud position corresponding to the exposure completion time of the camera;
And fusing the point cloud after interpolation calculation with the image pixels to obtain the point cloud after space-time alignment and the image pixels.
Optionally, the method further comprises:
Dividing the time period of the laser radar for completing one complete scanning into a plurality of sub-time periods, wherein each sub-time period corresponds to the time interval between two adjacent beams of point clouds with different laser radar scanning time sequences;
the interpolating calculation is performed on the point cloud and the image pixels according to the timestamp of the point cloud, the trigger time of the camera shutter and the exposure time to estimate a point cloud position corresponding to the exposure completion time of the camera, including:
Determining the exposure completion time of the camera according to the trigger time of the camera shutter and the exposure time;
and in a sub-time period covering the camera exposure completion time, carrying out interpolation calculation on the point clouds and the image pixels according to the time stamps of the two adjacent point clouds corresponding to the sub-time period and the camera exposure completion time so as to estimate the point cloud positions corresponding to the camera exposure completion time.
Optionally, the laser radar is a mechanical rotation laser radar, the scanning direction is a longitudinal array scanning, the time period for completing one complete scanning of the laser radar is divided into a plurality of sub-time periods, each sub-time period corresponds to a time interval between two adjacent point clouds with different laser radar scanning time sequences, and the method comprises the following steps:
performing one complete scan of the surrounding environment with the mechanical rotation laser radar to obtain the time period for completing the scan and a plurality of longitudinal arrays obtained by the scan, wherein each longitudinal array comprises a plurality of point clouds acquired by the mechanical rotation laser radar;
dividing the time period for completing scanning into a plurality of sub-time periods according to the plurality of longitudinal arrays, wherein each sub-time period corresponds to the time interval between the point clouds acquired by the two longitudinal arrays in two adjacent longitudinal arrays with different scanning time sequences of the mechanical rotary laser radar;
The point clouds contained in different longitudinal arrays have different scanning time sequences, and all the point clouds contained in one longitudinal array have the same scanning time sequence.
Optionally, the laser radar is a hybrid solid-state laser radar, the scanning direction is a transverse array scanning, the time period for completing one complete scanning of the laser radar is divided into a plurality of sub-time periods, each sub-time period corresponds to a time interval between two adjacent point clouds with different laser radar scanning time sequences, and the method comprises the following steps:
performing one complete scan of the surrounding environment with the hybrid solid-state laser radar to obtain the time period for completing the scan and a plurality of transverse arrays obtained by the scan, wherein each transverse array comprises a plurality of point clouds acquired by the hybrid solid-state laser radar;
Dividing the time period for completing scanning into a plurality of sub-time periods according to the plurality of obtained transverse arrays, wherein each sub-time period corresponds to the time interval between the point clouds acquired by the two transverse arrays respectively in two adjacent transverse arrays with different hybrid solid-state laser radar scanning time sequences;
The point clouds contained in different transverse arrays have different scanning time sequences, and all the point clouds contained in one transverse array have the same scanning time sequence.
Optionally, the method further comprises:
Screening out, from the point clouds obtained by the laser radar scanning the surrounding environment, point clouds which are located within the coverage range of the image acquired by the camera and whose time stamps are not earlier than the moment the camera shutter is triggered to acquire the image, according to the setting positions, setting angles and view angles of the laser radar and the camera and the moment the camera shutter is triggered to acquire the image;
the interpolating calculation is performed on the point cloud and the image pixels according to the timestamp of the point cloud, the trigger time of the camera shutter and the exposure time to estimate a point cloud position corresponding to the exposure completion time of the camera, including:
And carrying out interpolation calculation on the screened point cloud and the image pixels according to the time stamp of the point cloud, the trigger moment of the camera shutter and the exposure time so as to estimate the point cloud position corresponding to the exposure completion moment of the camera.
Optionally, the method further comprises:
detecting a target area of interest in an original image acquired by a camera, and determining a central line in the target area;
collecting a time stamp when the laser radar scans to a physical position corresponding to the central line;
Determining the time stamps of the point clouds scanned by the laser radar into the target area according to the scanning frequency of the laser radar, the number of scanning beams, and the time stamp when the laser radar scans to the physical position corresponding to the central line;
And carrying out interpolation calculation on the point cloud in the target area and the image pixels according to the time stamp of the point cloud scanned by the laser radar to the target area, the trigger moment of the camera shutter and the exposure time so as to estimate the position of the point cloud in the target area corresponding to the moment of finishing camera exposure.
Optionally, the method further comprises:
Analyzing an image acquired by a camera through a large model to obtain an object of interest in the image and motion attributes of the object, wherein the motion attributes comprise size, motion speed, motion acceleration and motion direction;
Calculating a speed interpolation compensation parameter according to the motion attribute of the target so as to adjust the point cloud position obtained by scanning the surrounding environment by the laser radar;
the interpolating calculation is performed on the point cloud and the image pixels according to the timestamp of the point cloud, the trigger time of the camera shutter and the exposure time to estimate a point cloud position corresponding to the exposure completion time of the camera, including:
and carrying out interpolation calculation on the adjusted point cloud and the image pixels according to the time stamp of the point cloud, the trigger moment of the camera shutter and the exposure time length so as to estimate the point cloud position corresponding to the exposure completion moment of the camera.
The present disclosure also provides a device for space-time alignment of a point cloud with image pixels, the device comprising:
the acquisition unit is used for acquiring the time stamp of the point cloud obtained by scanning the surrounding environment by the laser radar;
The capturing unit is used for calling a bottom software interface for controlling the camera to capture images, so as to capture the trigger time of the camera shutter and the actual exposure time in the process of triggering the camera shutter to start to capture the images to finally generate the images, wherein after the exposure time is ended, the camera captures original image data;
the interpolation unit is used for carrying out interpolation calculation on the point cloud and the image pixels according to the time stamp of the point cloud, the trigger time of the camera shutter and the exposure time so as to estimate the point cloud position corresponding to the exposure completion time of the camera;
and the fusion unit is used for fusing the point cloud after interpolation calculation with the image pixels to obtain the point cloud after space-time alignment and the image pixels.
The disclosure also provides an electronic device, which comprises a communication interface, a processor, a memory and a bus, wherein the communication interface, the processor and the memory are connected with each other through the bus;
The memory stores machine readable instructions and the processor performs the method by invoking the machine readable instructions.
The present disclosure also provides a machine-readable storage medium storing machine-readable instructions that, when invoked and executed by a processor, implement the above-described methods.
According to the embodiment of the disclosure, a time stamp of a point cloud obtained by scanning a surrounding environment by a laser radar is firstly acquired, a bottom software interface for controlling a camera to shoot an image is called to capture a camera shutter trigger to start acquiring the image until the image is finally generated, a camera shutter trigger time and an actual exposure time are acquired by the camera, after the exposure time is ended, original image data are acquired by the camera, further interpolation calculation is carried out on the point cloud and image pixels according to the time stamp of the point cloud, the camera shutter trigger time and the actual exposure time to estimate a point cloud position corresponding to the exposure completion time of the camera, and finally the point cloud and the image pixels after interpolation calculation are fused to obtain the point cloud and the image pixels after time-space alignment.
Through the mode, the technical scheme can accurately acquire the exact moment of triggering the camera shutter and the actual duration of the exposure process by directly accessing the bottom software interface, so that the situation that the third party chip provider dominates in the past is broken. These suppliers typically provide only shutter start time and exposure period information based on estimates for business confidentiality considerations. By adopting the technical scheme, more accurate time data can be obtained, which is particularly important to the application fields needing high-precision time synchronization, such as advanced image processing fields of automatic driving, three-dimensional reconstruction or augmented reality. Particularly when the point cloud is fused with two-dimensional image pixels, ensuring a high degree of consistency of the time stamps between the two is one of the key factors in achieving high quality results. The real and accurate shutter trigger time and exposure time data are utilized to perform space-time alignment of point cloud and image pixels, the quality of a final output result can be remarkably improved, and compared with the traditional method depending on an estimated value, the method is higher in accuracy, various errors caused by time deviation can be remarkably reduced or even eliminated, so that a generated three-dimensional model or augmented reality picture looks more natural and smooth, and the detail performance is richer and finer.
Detailed Description
In order that those skilled in the art will better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without inventive effort, based on the embodiments in this disclosure, shall fall within the scope of the present disclosure.
It should be noted that in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this disclosure. In some other embodiments, the method may include more or fewer steps than described in the present disclosure. Furthermore, a single step described in this disclosure may be described as being split into multiple steps in other embodiments, while multiple steps described in this disclosure may be described as being combined into a single step in other embodiments.
In applications such as autopilot, three-dimensional reconstruction, and augmented reality, the spatio-temporal alignment of point clouds with image pixels is critical. It ensures that the point cloud data acquired by the radar and the image data captured by the camera can be accurately matched in space and time, so that three-dimensional points in the point cloud and two-dimensional pixels in the image are synchronously captured at the same time point, and the spatial positions of the three-dimensional points in the point cloud and the two-dimensional pixels are consistent. This technique is critical to improving the accuracy and robustness of scene understanding because it can make full use of the precise three-dimensional geometry information provided by the point cloud data and the rich texture and color information provided by the image data.
For example, referring to fig. 1, fig. 1 is a schematic diagram of a lidar and camera observation area shown in an exemplary embodiment. As shown in fig. 1, the lidar is installed in the center of the roof so that a relatively wide field of view is obtained, which facilitates environmental scanning and thus a comprehensive understanding of the surrounding environment. The vehicle-mounted cameras can be arranged at different positions of the vehicle; in fig. 1 there are six in total, arranged directly in front of, at the front left of, at the front right of, at the rear left of, at the rear right of and directly behind the automobile, collecting visual field information from different visual angles for functions such as surrounding-road monitoring and traffic sign recognition. The laser radar scanning area covers 360 degrees around the vehicle, and the covered area can be called observation area 1; the area scanned by each vehicle-mounted camera is the area between the two solid lines extending from that camera in fig. 1, and the six such areas can be called observation area 2. Performing space-time alignment of the pixels of the images shot by the vehicle-mounted cameras with the point cloud in the laser radar scanning area means aligning the image pixels and the point cloud in the overlapping area of observation area 1 and observation area 2.
Traditionally, space-time alignment relies on hardware clock synchronization, which triggers the acquisition of the point cloud and the image at the same time to ensure that the acquisition times of the two sensors are consistent. But the two cannot always be perfectly aligned, because the acquisition frequencies of the point cloud and the image may differ, for example the point cloud being acquired at 10 Hz and the image at 30 Hz. To solve this problem, researchers have developed a post-processing soft synchronization method, which uses a software algorithm to find the closest image frame based on the time stamp of the point cloud and uses frame interpolation or similar means to achieve frame alignment, thereby achieving space-time alignment. Although this approach may be somewhat less accurate than hardware synchronization, it provides a flexible and more cost-effective solution, especially when hardware synchronization equipment is not feasible or is cost-prohibitive.
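As an illustration of this conventional soft-synchronization step only, the following minimal sketch pairs each point-cloud time stamp with the image frame whose time stamp is nearest; the 10 Hz / 30 Hz rates and all names are illustrative assumptions, not part of the present disclosure.

```python
import numpy as np

def nearest_frame_soft_sync(cloud_ts, image_ts):
    """For each point-cloud timestamp, return the index of the image frame
    whose timestamp is closest (conventional post-processing soft sync)."""
    cloud_ts = np.asarray(cloud_ts)          # e.g. 10 Hz point-cloud timestamps
    image_ts = np.asarray(image_ts)          # e.g. 30 Hz image timestamps
    # |cloud_i - image_j| for every pair, then pick the closest image per cloud
    diff = np.abs(cloud_ts[:, None] - image_ts[None, :])
    return diff.argmin(axis=1)

# Hypothetical timestamps: point clouds at 10 Hz, images at 30 Hz
clouds = np.arange(0.0, 1.0, 0.1)
images = np.arange(0.0, 1.0, 1.0 / 30.0)
print(nearest_frame_soft_sync(clouds, images))
```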
In addition, the soft synchronization method uses estimated data provided by a third party chip provider, such as camera shutter trigger time and exposure time. When the data provided by the third party chip provider is estimated rather than measured accurately, time bias may occur in practical applications. The technician tries to correct these deviations by algorithms, for example using motion information in the image sequence or machine learning methods to predict and correct the time errors. However, due to inaccuracy of the original time data, the methods still have limitation on accuracy, and are difficult to completely meet the requirement of high-accuracy application, so that the generated three-dimensional model or augmented reality picture is unnatural and unsmooth due to inconsistency of time stamps in the process of fusing point cloud and two-dimensional image pixels, and detail expression is not rich and fine. In addition, different types of laser radars have different laser scanning characteristics, and different time stamps of scanning point clouds can cause limitation on final space-time alignment accuracy, so that the requirements of high-precision application are difficult to completely meet.
In view of this, the present disclosure aims to propose a technical solution that enables point cloud and image pixel space-time alignment based on accurate shutter start time and exposure period.
The technical scheme includes that a time stamp of a point cloud obtained by scanning an ambient environment by a laser radar is firstly collected, then a bottom software interface for controlling a camera to shoot an image is called, in the process from capturing the triggering of a camera shutter to starting to collect the image to finally generate the image, the triggering time of the camera shutter and the actual exposure time are obtained, after the exposure time is over, the camera collects original image data, further interpolation calculation is carried out on the point cloud and image pixels according to the time stamp of the point cloud, the triggering time of the camera shutter and the exposure time to estimate the position of the point cloud corresponding to the exposure completion time of the camera, and finally, the point cloud and the image pixels after the interpolation calculation are fused to obtain the point cloud and the image pixels after time-space alignment.
For example, an autonomous vehicle is equipped with a roof-mounted lidar that can scan 360° and a forward camera, and the objective is to spatio-temporally align the point cloud data acquired by the lidar with the image captured by the forward camera through the following steps. Step 1: collect the time stamps of the lidar point cloud. The lidar starts a complete 360° scan and records a time stamp for each beam of point cloud during the scan. The lidar scans from time t_0 until time t_N, so that every point cloud acquired during the whole scan has its own accurate time stamp; for example, the time stamp of the first point cloud is t_1 and the time stamp of the last beam of point cloud is t_N. Step 2: control the camera to shoot and record the time information. The system sends an instruction to the forward camera to trigger the shutter, and obtains, from the bottom software interface for controlling the camera to shoot an image, that the camera shutter trigger time is T_trigger and that the exposure duration of this shot is 0.05 seconds (i.e. 50 milliseconds). This means that from T_trigger to T_trigger + 0.05 seconds (i.e. T_end), the sensor of the camera is collecting light. After the exposure is completed, the camera processes the raw data and generates the final image; this processing is usually very fast and negligible, so the time stamp of the final image can be taken as T_end. Step 3: calculate the interpolated point cloud positions. Using linear interpolation or another interpolation method, estimate, from the time stamps of the lidar point cloud, the camera shutter trigger time and the camera exposure duration, the position of each beam of point cloud at time T_end (i.e. the camera exposure completion time). For example, if a certain beam of point cloud is at position P1 at time t_a and at position P9 at time t_b, its position P4 at time T_end can be calculated by interpolation. Step 4: fuse the point cloud with the image pixels. Convert the interpolated point cloud data into the same coordinate system as the camera image. This typically involves using a pre-calibrated extrinsic matrix (parameters describing the relative positional relationship between the lidar and the camera) to register the transformed point cloud data with the camera image, ensuring that they are consistent in both space and time.
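The four steps above can be condensed into the following sketch. It is only an illustration under the example's assumptions (per-point time stamps, linear motion of a point between two samples, the symbols T_trigger and T_end as defined above, a 4x4 extrinsic matrix); all function names, variable names and numeric values are hypothetical and not part of the disclosure.

```python
import numpy as np

def exposure_completion_time(t_trigger, exposure_s):
    """Step 2: T_end = shutter trigger time + actual exposure duration."""
    return t_trigger + exposure_s

def interpolate_point(p_a, t_a, p_b, t_b, t_end):
    """Step 3: linear interpolation of one point's 3-D position to T_end."""
    ratio = (t_end - t_a) / (t_b - t_a)
    return np.asarray(p_a) + ratio * (np.asarray(p_b) - np.asarray(p_a))

def to_camera_frame(points_lidar, extrinsic):
    """Step 4: transform interpolated points into the camera coordinate
    system with a pre-calibrated 4x4 extrinsic matrix [R|t; 0 1]."""
    pts = np.hstack([np.asarray(points_lidar), np.ones((len(points_lidar), 1))])
    return (extrinsic @ pts.T).T[:, :3]

# Numbers from the example: shutter triggered at T_trigger, exposure 0.05 s
t_trigger = 100.000                                  # hypothetical absolute time, seconds
t_end = exposure_completion_time(t_trigger, 0.05)    # = 100.05
p4 = interpolate_point([1.0, 2.0, 0.5], 100.02,      # P1 observed at t_a
                       [1.2, 2.1, 0.5], 100.08,      # P9 observed at t_b
                       t_end)
print(t_end, p4)
```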
According to the technical scheme, the time stamp of the point cloud obtained by the laser radar scanning the surrounding environment is firstly acquired; a bottom software interface for controlling the camera to shoot an image is called to obtain, during the process from the camera shutter being triggered to start acquiring the image until the image is finally generated, the camera shutter trigger time and the actual exposure time, wherein after the exposure ends, the camera acquires the original image data; interpolation calculation is then carried out on the point cloud and the image pixels according to the time stamp of the point cloud, the camera shutter trigger time and the actual exposure time so as to estimate the point cloud position corresponding to the exposure completion time of the camera; and finally, the point cloud after interpolation calculation and the image pixels are fused to obtain the space-time aligned point cloud and image pixels.
Through the mode, the technical scheme can accurately acquire the exact moment of triggering the camera shutter and the actual duration of the exposure process by directly accessing the bottom software interface, so that the situation that the third party chip provider dominates in the past is broken. These suppliers typically provide only shutter start time and exposure period information based on estimates for business confidentiality considerations. By adopting the technical scheme, more accurate time data can be obtained, which is particularly important to the application fields needing high-precision time synchronization, such as advanced image processing fields of automatic driving, three-dimensional reconstruction or augmented reality. Particularly when the point cloud is fused with two-dimensional image pixels, ensuring a high degree of consistency of the time stamps between the two is one of the key factors in achieving high quality results. The real and accurate shutter trigger time and exposure time data are utilized to perform space-time alignment of point cloud and image pixels, the quality of a final output result can be remarkably improved, and compared with the traditional method depending on an estimated value, the method is higher in accuracy, various errors caused by time deviation can be remarkably reduced or even eliminated, so that a generated three-dimensional model or augmented reality picture looks more natural and smooth, and the detail performance is richer and finer.
The present disclosure is described below with reference to specific embodiments and specific application scenarios.
Referring to fig. 2, fig. 2 is a flow chart illustrating a method of spatiotemporal alignment of a point cloud with image pixels according to an exemplary embodiment. The method may perform the steps of:
Step 202, collecting a time stamp of a point cloud obtained by scanning the surrounding environment by the laser radar.
For example, the lidar starts a full 360° scan and records a time stamp for each beam of point cloud during the scan. The lidar scans from time t_0 until time t_N, so that every point cloud in the whole scanning process has its own accurate time stamp; for example, the time stamp of the first point cloud is t_1 and the time stamp of the last beam of point cloud is t_N.
Among them, a lidar is a sensor that determines the distance of a target by emitting a laser beam and measuring the time of reflection. It can generate three-dimensional point cloud data with high precision. A point cloud is a collection of many discrete points, each containing spatial coordinates and possibly color or intensity information. In autopilot, a point cloud is typically used to represent the environment that the lidar scans. Lidar devices often incorporate a high precision time synchronization mechanism that can record an accurate time stamp for each point cloud during each scan, which can be accomplished by either a hardware time stamp or a software time stamp. The present disclosure is not limited in this regard as to the area of laser radar scanning, the particular angle of scanning, and the particular manner of scanning.
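As a minimal sketch of how per-point time stamps can be recorded in software, assuming a constant scan rate and an index-ordered point stream (the function and parameter names are illustrative assumptions):

```python
def point_timestamps(scan_start, scan_period_s, num_points):
    """Assign a timestamp to each point of one scan, assuming the points
    are emitted at a constant rate over the scan period."""
    dt = scan_period_s / num_points
    return [scan_start + (i + 1) * dt for i in range(num_points)]

# 10 Hz scan (0.1 s per revolution) with 1000 points per horizontal line
ts = point_timestamps(scan_start=0.0, scan_period_s=0.1, num_points=1000)
print(ts[0], ts[-1])   # approximately 0.0001 ... 0.1
```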
Step 204, calling a bottom software interface for controlling the camera to shoot an image, so as to obtain, during the process from the camera shutter being triggered to start acquiring the image until the image is finally generated, the camera shutter trigger time and the actual exposure time, wherein the camera acquires the original image data after the exposure ends.
For example, the system sends an instruction to the forward camera to trigger the shutter, and obtains from the bottom software interface for controlling the camera to shoot an image that the camera shutter trigger time is T_trigger and that the exposure duration of this shot is 0.05 seconds (i.e., 50 milliseconds). This means that from T_trigger to T_trigger + 0.05 seconds (i.e. T_end), the sensor of the camera is collecting light. After the exposure is completed, the camera processes the raw data and generates the final image; this processing is usually very fast and negligible, so the time stamp of the final image can be taken as T_end.
Wherein the shutter is the trigger mechanism in the camera that controls the light entering the camera's photosensitive element (e.g. a CCD or CMOS sensor); triggering the shutter means that this process is initiated, allowing light to enter the camera and starting the exposure of the image. The bottom software interface is a set of APIs (Application Programming Interface) provided by the operating system or the camera firmware that allow a developer or system software to interact directly with the hardware. In this case, the bottom software interface allows the system to obtain detailed information about the shooting process, such as the exact moment the camera shutter is triggered and the exposure duration actually spent. The exposure duration refers to the length of time the camera shutter stays open, allowing light to impinge on the photosensitive element. The interval from T_trigger to T_end refers to the time from shutter opening (T_trigger) to shutter closing (T_end), i.e. the period during which the camera sensor collects light. This period determines how much light is collected by the sensor, and thereby affects the brightness and quality of the image; here, the exposure duration is 0.05 seconds (50 milliseconds). After the exposure is completed, the camera's processor processes the collected raw image data, which includes steps such as demosaicing, white balance adjustment, color correction and noise reduction, to produce a high-quality image. Since the camera processes the raw data very quickly and this time is negligible, the time of final image generation can be approximated as the shutter closing time T_end.
And 206, carrying out interpolation calculation on the point cloud and the image pixels according to the time stamp of the point cloud, the trigger time of the camera shutter and the exposure time so as to estimate the point cloud position corresponding to the exposure completion time of the camera.
For example, using linear interpolation or another interpolation method, the position of each beam of lidar point cloud at time T_end (i.e. the camera exposure completion time) is estimated from the time stamps of the lidar point cloud, the camera shutter trigger time and the camera exposure duration. For example, if a certain beam of point cloud is at position P1 at time t_a and at position P10 at time t_b, its position P4 at time T_end can be calculated by interpolation.
Where interpolation is a mathematical method for estimating the value of an intermediate point between two known data points. Linear interpolation is a simple interpolation method whose idea is to assume that the variation between two data points is linear: if the position P1 at time t_a and the position P10 at time t_b are known, linear interpolation can estimate, via a linear equation, the position P4 at time T_end. Besides linear interpolation, there are many other interpolation methods, such as polynomial interpolation and spline interpolation, which can be more complex but also give a more accurate estimate of the intermediate value, especially when the variation is nonlinear. Pre-processing of the point cloud data, such as denoising and filtering, may also be required before interpolation is performed to improve the accuracy of the interpolation. Implementing the interpolation algorithm in software may involve writing code or using an existing mathematical library to perform the interpolation calculation; the present disclosure does not limit how the interpolation calculation is specifically performed.
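For illustration, a per-axis linear interpolation of a tracked point's position to the exposure completion time can be sketched as below; the sketch assumes each tracked point has several time-stamped position samples, and a higher-order method such as spline interpolation could replace the linear interpolation where the motion is non-linear. The names and sample values are assumptions for illustration.

```python
import numpy as np

def interp_position(sample_ts, sample_xyz, t_end):
    """Linearly interpolate a 3-D position to time t_end.

    sample_ts : (N,) increasing timestamps of one tracked point
    sample_xyz: (N, 3) corresponding positions
    """
    sample_ts = np.asarray(sample_ts)
    sample_xyz = np.asarray(sample_xyz)
    # np.interp handles one scalar series at a time; interpolate x, y, z separately
    return np.array([np.interp(t_end, sample_ts, sample_xyz[:, k])
                     for k in range(3)])

# P1 observed at t = 0.02 s, P10 observed at t = 0.08 s; estimate at T_end = 0.05 s
print(interp_position([0.02, 0.08], [[1.0, 2.0, 0.5], [1.6, 2.3, 0.5]], 0.05))
```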
And step 208, fusing the point cloud after interpolation calculation with the image pixels to obtain the point cloud after space-time alignment and the image pixels.
For example, converting the interpolated point cloud data into the same coordinate system as the camera image may involve registering the transformed point cloud data with the camera image using a pre-calibrated extrinsic matrix (parameters describing the relative positional relationship between the lidar and the camera) to ensure that they are consistent in both space and time.
Where spatio-temporal alignment refers to ensuring that data from different sensors (e.g., point cloud data of the lidar and image pixel data of the camera) are spatially and temporally consistent; this is typically accomplished by accurate time synchronization and spatial registration. The point cloud data is typically generated in the lidar coordinate system and needs to be converted into the camera coordinate system in order to be fused with the image data. This process involves the use of an extrinsic matrix, i.e. the parameters describing the relative positional relationship between the lidar and the camera. The extrinsic matrix is a matrix describing the spatial position and orientation between the sensors and includes a rotation matrix and a translation vector. Through this matrix, the point cloud data can be converted from the lidar coordinate system to the camera coordinate system. Image registration is the spatial alignment of two or more images so that they represent different perspectives of the same scene; in point cloud and image fusion, registration involves matching the point cloud data with the image data at the pixel level. Data fusion is the merging of data from different sensors into one unified representation. Where the point cloud and the image data need to be fused, the fusion may involve extracting color information for the point cloud data from the image data, or mapping texture information of the image data onto the point cloud data; the present disclosure does not limit the specific fusion manner.
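A minimal sketch of the fusion step is given below, assuming a pre-calibrated 4x4 extrinsic matrix T_cam_lidar and a 3x3 intrinsic matrix K are available; the matrices, the nearest-pixel colour lookup and all names are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def fuse_points_with_image(points_xyz, image, T_cam_lidar, K):
    """Project interpolated lidar points into the image and attach the
    colour of the pixel each point falls on (simple nearest-pixel fusion)."""
    pts_h = np.hstack([np.asarray(points_xyz), np.ones((len(points_xyz), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]          # lidar frame -> camera frame
    in_front = pts_cam[:, 2] > 0                        # keep points ahead of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)         # pinhole projection to pixels
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colours = image[uv[valid, 1], uv[valid, 0]]         # pixel colour per point
    return pts_cam[in_front][valid], uv[valid], colours
```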
In one embodiment, the method further comprises dividing a time period of completing one complete scan of the laser radar into a plurality of sub-time periods, wherein each sub-time period corresponds to a time interval between two adjacent beams of point clouds with different laser radar scanning time sequences, performing interpolation calculation on the point clouds and the image pixels according to time stamps of the point clouds, the camera shutter trigger time and the exposure time length to estimate a point cloud position corresponding to a camera exposure completion time, determining a camera exposure completion time according to the camera shutter trigger time and the exposure time length, and performing interpolation calculation on the point clouds and the image pixels according to time stamps of the two adjacent beams of point clouds and the camera exposure completion time in the sub-time period covering the camera exposure completion time to estimate a point cloud position corresponding to the camera exposure completion time.
For example, the lidar completes one 360° scan every 0.1 seconds and continuously records point cloud data during this process. If the lidar acquires 1000 point cloud data points on the same horizontal line in one scan, the time interval between adjacent point cloud data points is 0.1 seconds divided by 1000, i.e. 0.0001 seconds (0.1 milliseconds). The lidar starts scanning at moment t_0; the time stamp of the first beam of point cloud is then t_0 + 0.0001 seconds, the time stamp of the second beam of point cloud is t_0 + 0.0002 seconds, and so on, until the time stamp of the 999th beam of point cloud is t_0 + 0.0999 seconds and the time stamp of the last beam of point cloud is t_0 + 0.1 seconds, which marks the end of one round of scanning. While the lidar is scanning, the system triggers the shutter of the forward camera at a certain moment T_trigger, with an exposure duration of 0.02 seconds (20 milliseconds). This means that from T_trigger to T_trigger + 0.02 seconds (i.e. T_end), the sensor of the camera is collecting light. In order to time-align the point cloud data of the lidar with the image taken by the camera, the point cloud data closest to the camera exposure completion time T_end needs to be found; specifically, the two point cloud data points whose time stamps are closest to T_end need to be found. Assume that the time stamps of these two point clouds are t_a and t_b, and that their positions are P_a and P_b respectively. A linear interpolation method can then be used to estimate the point cloud position at time T_end from the time stamps and positions of these two point clouds, with the following formula: P(T_end) = P_a + ((T_end - t_a) / (t_b - t_a)) x (P_b - P_a).
Each pixel in an image shot by the camera has the same time stamp, but the time stamps of different point clouds are affected by the lidar scanning mode: besides the scanning mode in which the point cloud is obtained from one complete scan, a scanning mode that emits one beam of laser at a time in sequence introduces timing differences among the acquired point clouds. In this embodiment, t_0 is the moment the lidar starts scanning, at which no point cloud data has been acquired yet; t_0 + 0.0001 seconds is the time stamp of the first point cloud data point after the lidar starts scanning, indicating that 0.0001 seconds (0.1 milliseconds) have elapsed since t_0. The lidar scanning frequency refers to the number of complete scans the lidar finishes per second; for example, 10 Hz means 10 complete scans per second. A point cloud is a collection of discrete points, each point containing spatial coordinates (x, y, z) and possibly color or intensity information. In autopilot, a point cloud is typically used to represent the environment that the lidar scans. In the above process, finding the two point cloud data points closest to the camera exposure completion time T_end is one of the key steps. In a specific implementation, all point cloud data and their corresponding time stamps are acquired from the lidar, and the point cloud data list is traversed to find the time stamps closest to T_end; in particular, a binary search algorithm can be used to locate the closest time stamps efficiently. The present disclosure is not limited in this regard to the specific manner of searching.
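A sketch of this search-and-interpolate step is shown below, assuming the per-scan time stamps are sorted so that a binary search (bisect) returns the bracketing pair in O(log N); the symbols follow the formula above and the function name is illustrative.

```python
import bisect
import numpy as np

def bracket_and_interpolate(timestamps, positions, t_end):
    """Find the two samples whose timestamps bracket t_end and apply
    P(t_end) = P_a + (t_end - t_a) / (t_b - t_a) * (P_b - P_a)."""
    i = bisect.bisect_left(timestamps, t_end)
    i = min(max(i, 1), len(timestamps) - 1)        # clamp to a valid bracketing pair
    t_a, t_b = timestamps[i - 1], timestamps[i]
    p_a, p_b = np.asarray(positions[i - 1]), np.asarray(positions[i])
    return p_a + (t_end - t_a) / (t_b - t_a) * (p_b - p_a)
```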
In the embodiment, the laser radar is a mechanical rotation laser radar, the scanning direction of the laser radar is a longitudinal array scanning, the time period of completing one complete scanning of the laser radar is divided into a plurality of sub-time periods, each sub-time period corresponds to a time interval between two adjacent point clouds with different laser radar scanning time sequences, the mechanical rotation laser radar completes one complete scanning of the surrounding environment to obtain a time period of completing the scanning and a plurality of longitudinal arrays obtained by scanning, each longitudinal array comprises a plurality of point clouds acquired by the mechanical rotation laser radar, the time period of completing the scanning is divided into a plurality of sub-time periods according to the plurality of longitudinal arrays, each sub-time period corresponds to a time interval between the point clouds acquired by each of the two adjacent longitudinal arrays with different mechanical rotation laser radar scanning time sequences, the point clouds contained by the different longitudinal arrays have different scanning time sequences, and all the point clouds contained by one longitudinal array have the same scanning time sequence.
For example, an autopilot car is equipped with a mechanically rotating lidar that is capable of 360 ° full-environment scanning and generating high-precision 3D point cloud data. The scanning frequency of the lidar is 10Hz, meaning that it completes a complete scanning cycle every 0.1 seconds. In this scan, the lidar generates 64 longitudinal arrays, each of which contains several (e.g., 100) point cloud data points, and then the lidar acquires 6400 point clouds (64 arrays x 100 point clouds/array) in a complete scan period. Since the time period for completing one complete scan is 0.1 seconds, then during this 0.1 seconds, 64 longitudinal arrays of lidar are acquired in sequence. Assuming that the rotational speed of the lidar is uniform, the acquisition time interval for each longitudinal array is 0.1 seconds/64= 0.0015625 seconds (about 1.5625 milliseconds). According to 64 longitudinal arrays, a time period of 0.1 seconds may be divided into 64 sub-time periods, each sub-time period corresponding to an acquisition time interval of two adjacent longitudinal arrays. For example, the first sub-period is from 0 seconds to 0.0015625 seconds, the second sub-period is from 0.0015625 seconds to 0.003125 seconds, and so on, until the 64 th sub-period ends at 0.1 seconds. The point clouds on different longitudinal arrays have different scan timings. For example, the point cloud on the first longitudinal array is acquired between 0 seconds and 0.0015625 seconds, while the point cloud on the second longitudinal array is acquired between 0.0015625 seconds and 0.003125 seconds. All point clouds on a longitudinal array have the same scanning timing, i.e. they are all acquired during the same sub-period.
Among them, a mechanical rotation lidar is a common type of lidar that performs an environmental scan of 360 ° by means of physical rotation. Such radars are typically mounted on a rotating platform where the laser transmitters and receivers scan the surrounding environment as the platform rotates. The principle of operation of mechanically rotating lidars is to generate three-dimensional point cloud data by calculating distances by emitting laser pulses and measuring the time that the pulses are reflected back from a target. Mechanically rotating lidars typically have multi-beam designs, such as 16-wire, 32-wire, 64-wire, etc. Each beam represents a vertically oriented laser beam and the beams are distributed in a vertical direction to form a plurality of longitudinal scan planes. Each harness scans in the horizontal direction with the rotation of the rotating platform, forming several columns of data in the vertical direction. For example, if the lidar has 32 beams, then each beam would scan one line in the vertical direction for a complete revolution, for a total of 32 lines of data. Due to the multi-beam design, the mechanically rotating lidar may provide higher resolution in the vertical direction, which helps to capture details in the environment in more detail.
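A sketch of the sub-time-period bookkeeping for a mechanical rotating lidar, using the example's numbers (10 Hz scan, 64 longitudinal columns); the function names are illustrative assumptions.

```python
def column_sub_periods(scan_period_s=0.1, num_columns=64):
    """Return (start, end) of the sub-period covered by each longitudinal
    column; every point in a column shares that column's scan timing."""
    dt = scan_period_s / num_columns                  # 0.1 / 64 = 0.0015625 s
    return [(i * dt, (i + 1) * dt) for i in range(num_columns)]

def column_of_time(t, scan_start, scan_period_s=0.1, num_columns=64):
    """Which column (sub-period) covers absolute time t within one scan."""
    dt = scan_period_s / num_columns
    return int((t - scan_start) // dt)

print(column_sub_periods()[0])   # first sub-period, roughly (0.0, 0.0015625)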
In the embodiment, the laser radar is a hybrid solid-state laser radar, the scanning direction of the hybrid solid-state laser radar is a transverse array scanning, the time period of completing one complete scanning of the laser radar is divided into a plurality of sub-time periods, each sub-time period corresponds to a time interval between two adjacent beams of point clouds with different laser radar scanning time sequences, the hybrid solid-state laser radar is subjected to one complete scanning of surrounding environment to obtain a time period of completing the scanning and a plurality of transverse arrays obtained by scanning, each transverse array comprises a plurality of point clouds acquired by the hybrid solid-state laser radar, the time period of completing the scanning is divided into a plurality of sub-time periods according to the plurality of transverse arrays, each sub-time period corresponds to a time interval between the point clouds acquired by each of two adjacent transverse arrays with different hybrid solid-state laser radar scanning time sequences, the point clouds contained by different transverse arrays have different scanning time sequences, and all the point clouds contained by one transverse array have the same scanning time sequence.
For example, an autopilot car is equipped with a hybrid solid-state lidar designed for forward 120° field coverage. Such lidar uses electronic scanning techniques, such as micro-electromechanical systems or optical phased arrays, to achieve rapid transverse array scanning. The lidar scans forward over 120° and completes a complete 120° environmental scan every 0.05 seconds. In this scan, the lidar generates 16 transverse arrays, each array containing several (e.g., 60) beams of point cloud data points, so the lidar collects 960 beams of point cloud in total (16 arrays × 60 beams of point cloud per array) in one complete scan period. The time period for the hybrid solid-state lidar to complete a complete scan is 0.05 seconds. Within this 0.05 seconds, the 16 transverse arrays of the lidar are acquired sequentially. Assuming that the scan speed of the lidar is uniform, the acquisition time interval for each transverse array is 0.05 seconds / 16 = 0.003125 seconds (about 3.125 milliseconds). According to the 16 transverse arrays, the time period of 0.05 seconds may be divided into 16 sub-time periods, each sub-time period corresponding to the acquisition time interval of two adjacent transverse arrays. For example, the first sub-time period is from 0 seconds to 0.003125 seconds, the second sub-time period is from 0.003125 seconds to 0.006250 seconds, and so on, until the 16th sub-time period ends at 0.05 seconds. The point clouds on different transverse arrays have different scan timings. For example, the point clouds on the first transverse array are acquired between 0 seconds and 0.003125 seconds, while the point clouds on the second transverse array are acquired between 0.003125 seconds and 0.006250 seconds, and all the point clouds on one transverse array have the same scan timing, i.e., they are all acquired within the same sub-time period.
Among them, hybrid solid-state lidar is a technology intermediate between conventional mechanical rotary lidar and pure solid-state lidar. It combines the advantages of mechanical scanning and solid state scanning, and aims to improve reliability and reduce cost while maintaining higher performance. Hybrid solid-state lidars typically use mirrors or other small mechanical components to effect scanning. These mirrors or small mechanical parts can be moved rapidly in one dimension to scan out a row of data. In the vertical direction, the laser beam may cover different height ranges by a plurality of beams or by adjusting the angle of the mirror surface.
In this embodiment, such subdivided sub-time periods may be used to accurately synchronize the lidar point cloud data with the camera image pixels. For example, if the camera triggers the shutter 0.015 seconds into the scan with an exposure duration of 10 milliseconds, the exposure completion time is 0.025 seconds, which falls at the end of the 8th sub-time period (0.021875 seconds to 0.025 seconds); the lidar point cloud data of the sub-time period covering this moment can therefore be looked up and used as the point cloud data corresponding to the camera exposure.
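For the hybrid solid-state example (0.05 s scan, 16 transverse arrays, about 3.125 ms per sub-time period), locating the sub-time period that covers a given exposure completion time can be sketched as below; the function name and the sample completion time are illustrative assumptions.

```python
def sub_period_index(t_end, scan_start, scan_period_s=0.05, arrays=16):
    """1-based index of the transverse-array sub-period covering t_end,
    together with that sub-period's time span within the scan."""
    dt = scan_period_s / arrays                       # 0.05 / 16 = 0.003125 s
    offset = t_end - scan_start
    idx = min(int(offset / dt) + 1, arrays)           # clamp at the final boundary
    return idx, (idx - 1) * dt, idx * dt

# A completion time of 0.022 s into the scan, for example, falls inside the
# 8th sub-period (roughly 0.021875 s to 0.025 s)
print(sub_period_index(0.022, scan_start=0.0))
```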
In the embodiment, the method further comprises screening out point clouds obtained by scanning surrounding environments of the laser radar according to the setting positions, the setting angles, the view angles and the camera shutter triggering time of the laser radar and the camera, wherein the point clouds are located in the coverage area of the camera for collecting images and have time stamps not earlier than the time when the camera shutter triggering time of collecting images, and performing interpolation calculation on the point clouds and the image pixels according to the time stamps, the camera shutter triggering time and the exposure time to estimate the point cloud positions corresponding to the camera exposure completion time.
For example, a lidar is mounted at the front of a vehicle with a horizontal angle of view of 120°, a vertical angle of view of 30°, and a scanning frequency of 10 Hz. The camera is installed below the lidar with a horizontal angle of view of 100° and a vertical angle of view of 25°. Assume that the system triggers the shutter of the forward camera at 10:30:00.150 AM and that the exposure duration is 50 milliseconds (0.05 seconds). At 10:30:00.150 AM the lidar is scanning, and it is assumed to have just begun a new scanning cycle. Point clouds lying within the camera's 100° × 25° field of view are first screened out from the point cloud data generated by the lidar. Point cloud data whose time stamps are not earlier than 10:30:00.150 AM are then further screened out; this point cloud data corresponds to the image acquired during the camera exposure. For the screened point cloud data, the time stamp of each point cloud is calculated according to the lidar scanning frequency (for example, the time stamp t_5 of point cloud P5); then, according to the camera shutter trigger time (10:30:00.150 AM) and the exposure duration (0.05 seconds), the camera exposure completion time is determined (here 10:30:00.200 AM). Using an interpolation method (e.g. linear interpolation), the point cloud position P5' corresponding to the exposure completion time is estimated from the time stamp t_5 of point cloud P5 and the camera exposure completion time.
Where the set position refers to the mounting position of the lidar and camera on the vehicle, the proper position is critical to ensure overlap of the sensor coverage areas and synchronization of the data. The setting angle refers to the mounting angle of the lidar and the camera, which determines their field of view direction. For lidar and cameras this typically includes horizontal and vertical field angles. The field angle refers to the range of solid angles that can be detected by lidar and cameras. The horizontal field of view and the vertical field of view together define the detection range of the sensor.
In this embodiment, in the above manner, it is ensured that only those point cloud data which are located within the coverage of the image captured by the camera and whose time stamp is not earlier than the camera exposure start time are used for interpolation calculation with the image pixels. This helps to improve the efficiency and accuracy of data fusion and provides more accurate context awareness information for the autopilot system.
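A sketch of this screening step, assuming the point cloud has already been expressed in the camera coordinate system (z forward, x right, y down) and that the camera's horizontal and vertical fields of view and the shutter trigger time are known; all names and the coordinate convention are illustrative assumptions.

```python
import numpy as np

def screen_points(points_cam, timestamps, h_fov_deg, v_fov_deg, t_trigger):
    """Keep points that lie inside the camera's field of view and whose
    timestamps are not earlier than the shutter trigger time."""
    pts = np.asarray(points_cam)
    ts = np.asarray(timestamps)
    az = np.degrees(np.arctan2(pts[:, 0], pts[:, 2]))   # horizontal angle
    el = np.degrees(np.arctan2(pts[:, 1], pts[:, 2]))   # vertical angle
    in_fov = (np.abs(az) <= h_fov_deg / 2) & (np.abs(el) <= v_fov_deg / 2)
    not_early = ts >= t_trigger
    keep = in_fov & not_early & (pts[:, 2] > 0)          # also require points ahead
    return pts[keep], ts[keep]
```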
In one embodiment, the method further comprises detecting a target area of interest in an original image acquired by a camera, determining a central line in the target area, acquiring a time stamp when the laser radar scans to a physical position corresponding to the central line, determining a time stamp when the laser radar scans to a point cloud in the target area according to the scanning frequency of the laser radar, a scanning wire harness and the time stamp when the laser radar scans to the physical position corresponding to the central line, and carrying out interpolation calculation on the point cloud in the target area and image pixels according to the time stamp when the laser radar scans to the point cloud in the target area, the camera shutter trigger time and the exposure time to estimate the position of the point cloud in the target area corresponding to the camera exposure completion time.
For example, the camera acquires an original image containing a pedestrian ahead at the time of 10:30:00.150 AM, detects the pedestrian in the image by using a target detection algorithm, takes the pedestrian as an area of interest and generates a rectangular frame to frame the area where the pedestrian is located as a target area. The vertical center line of the rectangular frame is calculated, the line is used as a reference for alignment calculation, and the time stamp when the laser radar scans to the physical position corresponding to the vertical center line is acquired. The lidar scans to a physical location corresponding to the vertical centerline of the area where the pedestrian is located at 10:30:00.155 AM and records the timestamp Tc at that time.
Then, according to the lidar scanning frequency of 10 Hz, it is determined that one round of lidar scanning takes 0.1 seconds. Assuming the lidar performs longitudinal-array scanning (forming a number of columns in the vertical direction) with 16 beams per round, the 16 beams correspond to 16 columns distributed across the horizontal plane; the 16 columns are acquired sequentially in time within the 0.1 seconds, and all point clouds in the same column share the same time stamp. Taking Tc as a reference, more accurate time stamps of the other point clouds besides the point clouds on the vertical center line can be calculated in sequence; the time stamp range of the point clouds scanned by the lidar into the target area can then be determined according to the lidar scanning range, the size of the target area and so on, for example from Tc - 0.002 seconds to Tc + 0.002 seconds. Finally, the time stamps of the point clouds acquired between Tc - 0.002 seconds and Tc + 0.002 seconds are aligned with the camera exposure completion time, so as to estimate the positions of the point clouds in the target area corresponding to the camera exposure completion time.
The center line is a horizontal center line or a vertical center line depending on the scanning mode of the laser radar, if the laser radar scans from left to right (such as a mechanical rotation laser radar), the center line will be a left-right center line of the target object, and if the laser radar scans up and down (such as a hybrid solid laser radar), the center line will be an up-down center line of the target object.
It should be noted that in a complex dynamic environment the rapid movement of an object may make it difficult for conventional methods to accurately estimate the object's position. For an object moving in space (such as the pedestrian in this embodiment), when the time axis alone is used as the baseline, the longer the time that has to be compensated when calculating the time stamp of each point cloud in the target area where the object is located, the larger the error between the calculated point cloud time stamp and the real point cloud time stamp. This embodiment instead compensates forwards and backwards in time from the time of the center line, which relatively reduces the accumulated error in calculating the point cloud time stamps. In this way, the point cloud in the target area can be aligned with the camera image pixels more accurately, thereby improving the perception precision and robustness of the system; this is suitable for application scenarios that require high-precision alignment, such as target tracking and obstacle detection.
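A sketch of the centre-line based timestamping, assuming longitudinal-array scanning in which each column shares one time stamp and adjacent columns are a fixed interval apart; Tc is the time stamp recorded when the lidar passes the physical position of the target's vertical centre line, and the numeric values below are illustrative only.

```python
def column_timestamps_from_centerline(t_c, center_col, num_cols, col_dt):
    """Assign a timestamp to every column by compensating forwards and
    backwards in time from the centre-line column's timestamp t_c."""
    return [t_c + (col - center_col) * col_dt for col in range(num_cols)]

# 10 Hz scan with 16 columns per scan -> col_dt = 0.1 / 16 = 0.00625 s; with the
# centre line in column 8, columns 7..9 get t_c - 0.00625, t_c, t_c + 0.00625
ts = column_timestamps_from_centerline(t_c=0.155, center_col=8, num_cols=16,
                                       col_dt=0.1 / 16)
print(ts[7], ts[8], ts[9])
```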
In one embodiment, the method further comprises analyzing an image acquired by a camera through a large model to obtain an object of interest in the image and motion attributes of the object, wherein the motion attributes comprise size, motion speed, motion acceleration and motion direction, calculating speed interpolation compensation parameters according to the motion attributes of the object to adjust a point cloud position obtained by scanning a surrounding environment by a laser radar, and carrying out interpolation calculation on the point cloud and image pixels according to a time stamp of the point cloud, the camera shutter trigger time and the exposure time to estimate a point cloud position corresponding to the camera exposure completion time, wherein the method comprises the steps of carrying out interpolation calculation on the adjusted point cloud and image pixels according to the time stamp of the point cloud, the camera shutter trigger time and the exposure time to estimate the point cloud position corresponding to the camera exposure completion time.
For example, an autopilot is equipped with a high speed camera for capturing images of road conditions and a high performance lidar for acquiring three-dimensional point cloud data of the surrounding environment. The camera acquires an original image containing the vehicle ahead at the time of 10:30:00.150 AM. A deep learning model is used to detect the object of interest as a vehicle in front of the image and track its motion. By analyzing the motion attribute of the vehicle in the continuous frames, the model calculates that the vehicle runs north at a constant speed of 60 km/h, and the length of the vehicle body is 4 meters. And then calculating a speed interpolation compensation parameter according to the motion attribute of the vehicle. This parameter will be used to adjust the laser radar scanned point cloud position to reflect the movement of the vehicle during the camera exposure. Suppose that the vehicle moves about 0.83 meters within 50 milliseconds of the camera exposure. The point cloud position obtained by laser radar scanning was adjusted to move 0.83 meters north to reflect the actual movement of the vehicle during exposure. And finally, carrying out interpolation calculation on the adjusted point cloud and the image pixels to estimate the point cloud position corresponding to the camera exposure completion time (10:30:00.200 AM).
The specific category of the object of interest, such as pedestrians, vehicles, trees and the like, can be identified, so that speed interpolation compensation can be better performed according to the category. For example, if the object of interest is identified as a tree, then both its velocity and acceleration are 0, thereby reducing errors in the calculation of the velocity of the object of interest by the model. In implementation, a pre-trained deep learning model, such as a convolutional neural network, may be used to analyze the image and detect objects in the image, which is not limited by the present disclosure with respect to the specific type of deep learning model.
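A sketch of the speed-interpolation compensation, assuming the model supplies a per-object velocity vector in metres per second and that each point belonging to the object is shifted by velocity multiplied by the time between the point's time stamp and the exposure completion time; a zero velocity (e.g. a tree) leaves the points untouched. All names and numbers are illustrative assumptions.

```python
import numpy as np

def compensate_object_points(points_xyz, point_ts, velocity_mps, t_end):
    """Shift each point of a detected object by v * (t_end - t_point)."""
    pts = np.asarray(points_xyz, dtype=float)
    dt = (t_end - np.asarray(point_ts))[:, None]       # per-point time gap to T_end
    return pts + np.asarray(velocity_mps) * dt

# Vehicle driving north (+y) at 60 km/h = 16.67 m/s; a point captured 0.05 s
# before exposure completion moves about 0.83 m, as in the example above
moved = compensate_object_points([[10.0, 20.0, 0.0]], [0.150],
                                 velocity_mps=[0.0, 60 / 3.6, 0.0],
                                 t_end=0.200)
print(moved)   # roughly [[10.0, 20.83, 0.0]]
```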
To assist those skilled in the art in better understanding the method as a whole, the method is described below in connection with fig. 3.
Referring to fig. 3, fig. 3 is a flow chart illustrating another method of point cloud to image pixel spatiotemporal alignment according to an exemplary embodiment. As shown in fig. 3, the time stamps of the point cloud obtained by the lidar scanning the surrounding environment are firstly acquired; the camera shutter trigger time and the actual exposure duration are obtained by calling a bottom software interface for controlling the camera to shoot an image, and motion attribute information of the target photographed by the camera, such as its speed, acceleration and motion direction, is obtained. The camera exposure completion time is then determined according to the camera shutter trigger time and the actual exposure duration, so as to align the time axes of the point cloud and the image pixels. Next, the point cloud motion compensation mode is determined based on the laser scanning mode (such as longitudinal-array scanning or transverse-array scanning), and, based on the setting positions, setting angles and field angles of the lidar and the camera and the moment the camera shutter is triggered to start acquiring the image, the point clouds obtained by the lidar scanning the surrounding environment which are located within the coverage range of the image acquired by the camera and whose time stamps are not earlier than the shutter trigger moment are screened out for the subsequent interpolation calculation. The point cloud compensation start time is determined and the point cloud positions are adjusted based on the target motion speed and the actual exposure duration; interpolation calculation is then performed on the adjusted point cloud to estimate the point cloud positions corresponding to the camera exposure completion time, and finally the interpolated point cloud is fused and overlaid with the image pixels.
Corresponding to the embodiment of the method for aligning the point cloud with the image pixels in the space-time manner, the disclosure further provides an embodiment of a device for aligning the point cloud with the image pixels in the space-time manner.
Referring to fig. 4, fig. 4 is a hardware configuration diagram of an electronic device according to an exemplary embodiment. At the hardware level, the device includes a processor 402, an internal bus 404, a network interface 406, a memory 408, and a non-volatile storage 410, and may of course also include hardware required by other functions. One or more embodiments of the present disclosure may be implemented in a software-based manner, for example by the processor 402 reading a corresponding computer program from the non-volatile storage 410 into the memory 408 and then running it. Of course, in addition to software implementations, one or more embodiments of the present disclosure do not exclude other implementations, such as a logic device or a combination of software and hardware; that is, the execution subject of the following process flow is not limited to logic units, and may also be hardware or a logic device.
Referring to fig. 5, fig. 5 is a block diagram illustrating a point cloud and image pixel space-time alignment apparatus according to an exemplary embodiment. The space-time alignment device 500 of the point cloud and the image pixels can be applied to the electronic device shown in fig. 4 to implement the technical scheme of the disclosure. The device comprises:
The acquisition unit 502 is configured to acquire a timestamp of a point cloud obtained by scanning a surrounding environment by the laser radar;
The capturing unit 504 is configured to invoke the bottom software interface for controlling the camera to shoot an image, so as to acquire the camera shutter trigger time and the actual exposure time during the process from the triggering of the camera shutter to the final generation of the image, where the camera obtains the original image data after the exposure time ends;
An interpolation unit 506, configured to perform interpolation calculation on the point cloud and the image pixels according to the timestamp of the point cloud, the trigger time of the camera shutter, and the exposure time, so as to estimate a point cloud position corresponding to the exposure completion time of the camera;
And the fusion unit 508 is used for fusing the point cloud after interpolation calculation with the image pixels to obtain the point cloud after space-time alignment and the image pixels.
In some embodiments, the apparatus further comprises:
The separation unit is used for dividing the time period of the laser radar for completing one complete scanning into a plurality of sub-time periods, and each sub-time period corresponds to the time interval between two adjacent beams of point clouds with different laser radar scanning time sequences;
the interpolation unit includes:
The first determining subunit is used for determining the camera exposure completion time according to the camera shutter trigger time and the exposure time;
And the first estimation subunit is used for carrying out interpolation calculation on the point clouds and the image pixels in a sub-time period covering the camera exposure completion time according to the time stamps of the two adjacent point clouds corresponding to the sub-time period and the camera exposure completion time so as to estimate the point cloud positions corresponding to the camera exposure completion time.
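If the interpolation within the covering sub-time period is read as a simple linear interpolation between the two adjacent point clouds, it could look like the following hypothetical sketch (identifiers and numbers are illustrative only):

```python
import numpy as np

def interpolate_in_sub_period(p0, t0, p1, t1, t_done):
    """Linearly interpolate between two corresponding adjacent lidar returns
    whose sub-time period [t0, t1] covers the exposure completion time t_done.

    p0, p1 : (N, 3) corresponding points from the two adjacent point clouds
    t0, t1 : their time stamps, with t0 <= t_done <= t1
    Returns the estimated point positions at t_done.
    """
    assert t0 <= t_done <= t1, "t_done must lie inside the sub-time period"
    w = (t_done - t0) / (t1 - t0)
    return (1.0 - w) * p0 + w * p1

# Example: exposure completes 40% of the way through the sub-time period.
p_est = interpolate_in_sub_period(np.array([[1.0, 0.0, 0.0]]), 0.000,
                                  np.array([[1.2, 0.0, 0.0]]), 0.010, 0.004)
print(p_est)  # [[1.08, 0.0, 0.0]]
```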
In some embodiments, the lidar is a mechanically rotating lidar whose scan direction is a longitudinal array scan, the separation unit comprises:
the first scanning subunit is used for completing complete scanning of the surrounding environment for one time by the mechanical rotary laser radar to obtain a time period for completing the scanning and a plurality of longitudinal arrays obtained by scanning, wherein each longitudinal array comprises a plurality of point clouds acquired by the mechanical rotary laser radar;
the first separation subunit is configured to divide the time period for completing scanning into a plurality of sub-time periods according to the obtained plurality of longitudinal arrays, where each sub-time period corresponds to a time interval between point clouds acquired by two longitudinal arrays in two adjacent longitudinal arrays with different mechanical rotation laser radar scanning time sequences;
The point clouds contained in different longitudinal arrays have different scanning time sequences, and all the point clouds contained in one longitudinal array have the same scanning time sequence.
In some embodiments, the lidar is a hybrid solid-state lidar whose scanning direction is a transverse array scan, the separation unit comprises:
The second scanning subunit is used for the hybrid solid-state laser radar to complete one complete scan of the surrounding environment, so as to obtain the time period for completing the scan and a plurality of transverse arrays obtained by scanning, where each transverse array includes a plurality of point clouds acquired by the hybrid solid-state laser radar;
The second separation subunit is configured to divide the time period for completing scanning into a plurality of sub-time periods according to the obtained plurality of transverse arrays, where each sub-time period corresponds to a time interval between point clouds acquired by each of two adjacent transverse arrays with different scanning timings of the hybrid solid-state laser radar;
The point clouds contained in different transverse arrays have different scanning time sequences, and all the point clouds contained in one transverse array have the same scanning time sequence.
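For both the longitudinal arrays of the mechanical rotary laser radar and the transverse arrays of the hybrid solid-state laser radar, dividing one scan into sub-time periods amounts to pairing the time stamps of adjacent arrays. The sketch below is a minimal, assumed implementation (all names and numbers are the editor's own) under the stated premise that every point in one array shares that array's time stamp.

```python
import numpy as np

def split_scan_into_sub_periods(array_timestamps):
    """Divide one full scan into sub-time periods from per-array time stamps.

    array_timestamps: time stamps of the successive arrays in scan order, one
    per longitudinal array (mechanical rotary laser radar) or per transverse
    array (hybrid solid-state laser radar).
    Returns a list of (t_start, t_end) sub-time periods between adjacent arrays.
    """
    ts = np.asarray(array_timestamps, dtype=float)
    return list(zip(ts[:-1], ts[1:]))

def covering_sub_period(sub_periods, t_done):
    """Index of the sub-time period that covers the exposure completion time."""
    for i, (t0, t1) in enumerate(sub_periods):
        if t0 <= t_done <= t1:
            return i
    raise ValueError("exposure completion time falls outside this scan")

# Example: a 100 ms scan that produces one array every 10 ms.
periods = split_scan_into_sub_periods(np.linspace(0.0, 0.10, 11))
print(covering_sub_period(periods, 0.054))  # -> 5, i.e. the 50-60 ms interval
```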
In some embodiments, the apparatus further comprises:
The screening unit is used for screening out, from the point clouds obtained by the laser radar scanning the surrounding environment, those point clouds that are located within the coverage range of the image collected by the camera and whose time stamps are not earlier than the moment the camera shutter is triggered to start collecting the image, according to the set position, the set angle and the field angle of the camera and the moment the camera shutter is triggered to start collecting the image;
the interpolation unit includes:
And the first interpolation subunit is used for carrying out interpolation calculation on the screened point cloud and the image pixels according to the time stamp of the point cloud, the trigger time of the camera shutter and the exposure time so as to estimate the point cloud position corresponding to the exposure completion time of the camera.
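One way to realise this screening, assuming that the set position and set angle are expressed as standard camera extrinsics and the field angle as pinhole intrinsics (an assumption of this sketch, not a statement of the disclosed implementation), is:

```python
import numpy as np

def screen_points_for_camera(points_lidar, point_ts, T_cam_from_lidar,
                             K, image_size, trigger_t):
    """Keep only lidar points that project inside the camera image and whose
    time stamps are not earlier than the shutter trigger time.

    points_lidar    : (N, 3) points in the lidar frame
    point_ts        : (N,) per-point time stamps
    T_cam_from_lidar: (4, 4) extrinsics (set position and set angle)
    K               : (3, 3) camera intrinsics (encodes the field angle)
    image_size      : (width, height) in pixels
    trigger_t       : moment the shutter is triggered to start collecting
    """
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])
    cam = (T_cam_from_lidar @ homog.T).T[:, :3]       # into the camera frame
    in_front = cam[:, 2] > 0.0                        # positive depth only

    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.clip(uv[:, 2:3], 1e-9, None)  # perspective divide
    w, h = image_size
    in_image = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)

    keep = in_front & in_image & (point_ts >= trigger_t)
    return points_lidar[keep], point_ts[keep]
```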
In some embodiments, the apparatus further comprises:
A first determining unit for detecting a target area of interest in an original image acquired by a camera and determining a center line in the target area;
The second acquisition unit is used for acquiring a time stamp when the laser radar scans to the physical position corresponding to the central line;
A second determining unit, configured to determine the time stamps of the point clouds in the target area scanned by the laser radar according to the scanning frequency of the laser radar, the number of scanning beams, and the time stamp at which the laser radar scans the physical position corresponding to the center line;
And the estimation unit is used for carrying out interpolation calculation on the point cloud in the target area and the image pixels according to the time stamp of the point cloud scanned to the target area by the laser radar, the trigger moment of the camera shutter and the exposure time so as to estimate the position of the point cloud in the target area corresponding to the exposure completion moment of the camera.
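Under the assumption that the scanning frequency and the number of scanning beams define a constant dwell time per array, the time stamps of the arrays covering the target area can be derived from the center-line time stamp as in the following hypothetical sketch (all identifiers and figures are illustrative):

```python
def column_timestamps_around_center(t_center, column_offsets, scan_freq_hz,
                                    columns_per_revolution):
    """Estimate per-column time stamps inside a detected target area.

    t_center              : time stamp when the laser radar scans the physical
                            position of the target area's center line
    column_offsets        : signed column indices relative to the center line
                            (negative = scanned earlier, positive = later)
    scan_freq_hz          : laser radar scanning frequency (scans per second)
    columns_per_revolution: number of arrays per full scan, derived from the
                            number of scanning beams and the scan pattern
    """
    column_period = 1.0 / (scan_freq_hz * columns_per_revolution)
    return [t_center + k * column_period for k in column_offsets]

# Example: 10 Hz laser radar, 1800 columns per scan -> about 55.6 us per column.
ts = column_timestamps_around_center(0.100, range(-2, 3), 10.0, 1800)
print(ts)  # time stamps of the five columns spanning the target area
```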
In some embodiments, the apparatus further comprises:
The analysis unit is used for analyzing the image acquired by the camera through a large model to obtain the target of interest in the image and the motion attributes of the target, where the motion attributes include the size, the motion speed and the motion direction of the target;
the adjusting unit is used for calculating a speed interpolation compensation parameter according to the motion attribute of the target so as to adjust the point cloud position obtained by scanning the surrounding environment by the laser radar;
the interpolation unit includes:
And the interpolation subunit is used for carrying out interpolation calculation on the adjusted point cloud and the image pixels according to the time stamp of the point cloud, the trigger time of the camera shutter and the exposure time so as to estimate the point cloud position corresponding to the exposure completion time of the camera.
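A compact, assumed sketch of the adjustment performed before this interpolation, shifting each point of the moving target by the distance it travels between the point's time stamp and the exposure completion time under a constant-velocity model (all names are illustrative), is:

```python
import numpy as np

def compensate_to_exposure_completion(points, point_ts, velocity_vec,
                                      cam_trigger_t, exposure_s):
    """Shift points belonging to a moving target by the distance it travels
    between each point's time stamp and the exposure completion time, so the
    adjusted cloud matches the scene at the instant the image is formed.

    velocity_vec : (3,) target velocity built from the motion attributes
                   (motion speed and motion direction) of the analysis step
    """
    t_done = cam_trigger_t + exposure_s
    dt = (t_done - point_ts)[:, None]           # per-point time gap to t_done
    return points + velocity_vec[None, :] * dt  # constant-velocity compensation

# Example: a target moving north at 60 km/h (about 16.67 m/s along +y).
pts = np.array([[10.0, 5.0, 0.0]])
ts = np.array([0.150])
adjusted = compensate_to_exposure_completion(pts, ts,
                                             np.array([0.0, 60 / 3.6, 0.0]),
                                             cam_trigger_t=0.150,
                                             exposure_s=0.050)
print(adjusted)  # [[10.0, 5.833, 0.0]] -> shifted about 0.83 m north
```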
The implementation process of the functions and roles of each unit in the above device is specifically shown in the implementation process of the corresponding steps in the above method, and will not be described herein again.
For the device embodiments, since they essentially correspond to the method embodiments, reference is made to the description of the method embodiments for the relevant points. The apparatus embodiments described above are merely illustrative, in that the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the disclosed solution. Those of ordinary skill in the art can understand and implement them without undue burden.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, in the form of random access memory (RAM) and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
User information (including but not limited to user equipment information, user personal information, etc.) and data (including but not limited to data used for analysis, stored data, displayed data, etc.) referred to in this disclosure are information and data authorized by the user or fully authorized by all parties. The collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions, and corresponding operation portals are provided for the user to choose whether to authorize or refuse.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The foregoing has described certain embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the one or more embodiments of the disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the disclosure. As used in one or more embodiments of the present disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that while the terms first, second, third, etc. may be used in one or more embodiments of the present disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments of the present disclosure. The term "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The foregoing description of the preferred embodiment(s) of the present disclosure is merely intended to illustrate the embodiment(s) of the present disclosure, and any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the embodiment(s) of the present disclosure are intended to be included within the scope of the present disclosure.