
Efficient signal detection using adaptive identification of noise floor

Info

Publication number: CN111868561A
Application number: CN201980020319.1A
Authority: CN (China)
Prior art keywords: signal, determining, samples, examples, light
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN111868561B (en)
Inventors: S. S. Subasingha, R. Andrews, T. Karadeniz
Current assignee: Zoox Inc (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Panosense Inc
Application filed by Panosense Inc
Publication of CN111868561A (application publication); publication of CN111868561B (grant)
Current legal status: Active


Abstract

An apparatus can accurately distinguish a valid pulse from noise by setting a dynamic noise floor that is adjusted according to environmental conditions. For example, the device may distinguish a light pulse emitted by a light emitter of the system and reflected from an object to the light sensor from noise, such as sunlight glare, and identify the light pulse as a valid pulse by determining a dynamic noise floor and identifying at least a portion (a threshold number of samples) of the received signal that exceeds the dynamic noise floor as a valid pulse. The dynamic noise floor may be determined, for example, using a moving average of the received signal and/or by shifting or scaling the noise floor based on other characteristics of the returned signal.

Description

Efficient signal detection using adaptive identification of noise floor
Cross Reference to Related Applications
This PCT international application claims priority to U.S. Patent Application No. 15/925,770, filed on March 20, 2018, which is incorporated herein by reference.
Background
Light detection and ranging, or "LIDAR," refers to a technique for measuring distance to a visible surface by emitting light and measuring light reflection characteristics. LIDAR systems have a light emitter and a light sensor. The light emitter may comprise a laser that directs light into the environment. When the emitted light is incident on the surface, a portion of the light is reflected and received by the light sensor, thereby converting the light intensity into a corresponding electrical signal.
The LIDAR system has a signal processing component that analyzes the reflected light signal to determine the distance to the surface that reflected the emitted laser light. For example, the system may measure the travel time of an optical signal as it travels from the laser emitter to the surface and back to the optical sensor. The distance is then calculated based on the time of flight and the known speed of light.
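As a rough illustration of this calculation (not part of the patent disclosure), the round-trip time of flight maps to distance as d = c·t/2, which is also why the 1-nanosecond sensitivity figure cited in the following paragraph works out to roughly 15 centimeters:

```python
# Minimal sketch (not from the patent): converting a measured round-trip
# time of flight into a distance estimate, and showing why a 1 ns timing
# error corresponds to roughly 15 cm of range error.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def tof_to_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflecting surface; divide by 2 for the round trip."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

if __name__ == "__main__":
    print(tof_to_distance_m(100e-9))   # 100 ns round trip -> ~14.99 m
    # Sensitivity: a 1 ns shift in return time changes the estimate by ~0.15 m.
    print(tof_to_distance_m(101e-9) - tof_to_distance_m(100e-9))
```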
Distortion of the reflected light signal caused by a variety of factors may result in the inability of conventional LIDAR systems to accurately determine when the reflected light returns to the light sensor. For example, a 1 nanosecond change in return signal time may correspond to a change in estimated distance of approximately 15 centimeters. Some factors that may cause distortion of the reflected light signal may include the surface being highly reflective, the surface being in close proximity to the LIDAR unit, etc.
Since LIDAR systems may take thousands or even millions of measurements per second, such small changes in the return time of the reflected light signal are not easy to detect. In many cases the problem is even harder to find because the change in return time is not detected at all: the LIDAR system simply registers a delayed return and therefore measures the distance to the object inaccurately.
Furthermore, under certain high-noise or low-noise conditions, it is difficult to distinguish the return signal from noise. For example, clear, sunny conditions may produce strong noise power, thereby obscuring the return signal. Conventional LIDAR systems set a threshold and filter out any signal below the threshold. This effectively filters out noise, but also filters out weaker returns that fall below the threshold. Furthermore, setting a high threshold significantly reduces the range of the LIDAR system, since return signals from more distant objects are weaker.
Drawings
The detailed description is made with reference to the accompanying drawings. In the drawings, the left-most digit(s) of a reference number identifies the drawing in which the reference number first appears. The use of the same reference symbols in different drawings indicates similar or identical items.
Fig. 1 shows a block diagram of components that may be used in a LIDAR channel of an example LIDAR system.
Fig. 2A shows an example signal diagram of an unsaturated return signal.
Fig. 2B shows an example signal diagram of a saturated return signal.
Fig. 3 shows a flow chart of an example process for selecting a light detector from a plurality of light detectors to determine a distance to an object.
Fig. 4A illustrates a block diagram of an additional or alternative example architecture of a classifier and a detector, where the classifier selects the output of one of the plurality of detectors to output as a distance measurement.
Fig. 4B illustrates an example architecture for calibrating the light detector output based on the non-linearities of the components of the LIDAR system.
Fig. 5 illustrates example waveforms received and generated in accordance with an example process for correlating a return signal with a reference signal.
Fig. 6A-6F illustrate example signal diagrams of an example process for detecting a rising edge of a saturated return signal.
Fig. 7A-7C illustrate an example received signal and static thresholds used to determine valid pulses.
Fig. 7D illustrates an example received signal and a dynamic noise floor for determining valid pulses and/or for classifying the signal as a saturated signal.
Fig. 8 illustrates a block diagram of an example autonomous vehicle that may incorporate the LIDAR system discussed herein.
Detailed Description
LIDAR systems typically have at least one light emitter and a corresponding light sensor, where a pair of light emitter and light sensor is commonly referred to as a channel. The light emitter may include a laser, such as an Injection Laser Diode (ILD), that directs highly coherent light in the direction of the object or surface. The light sensor may include a photodetector, such as a photomultiplier tube or an Avalanche Photodiode (APD), that converts the light intensity at the light sensor into a corresponding electrical signal. Optical elements such as lenses or mirrors may be used to focus and direct light in the light transmission and reception paths.
Some LIDAR devices may measure distances of multiple surface points in a scene. For each surface point, the LIDAR system may determine the distance of the surface point and its angular orientation relative to the device. This function may be used to create a point cloud comprising three-dimensional coordinates of a plurality of surface points.
However, factors such as high-reflectivity objects, objects that are spatially close to the LIDAR device, the temperature of the light sensor, and non-linearities of the light emitter and/or the light sensor may cause distortion of the electrical signal generated by the light sensor when the light sensor detects the return signal. Since this return signal is used to measure the distance to the object surface, and since a shift of only a nanosecond in the return signal may correspond to a difference of approximately 15 centimeters in the distance measurement, these disturbances may greatly reduce the accuracy of conventional LIDAR devices. In some examples, an accuracy of 5 centimeters or less may be desired, which requires that the time at which the return signal is received can be accurately ascertained (with an error as low as one-third of a nanosecond or less).
For example, the light emitter may emit light pulses that reflect off of a high reflectivity object such as a retroreflector, a street sign, or a mirror. When the light sensor receives this pulse of light, the intensity of the light received at the sensor, caused by the reflectivity (or proximity) of the object, may exceed the ability of the light sensor to generate an electrical signal that varies proportionally with the intensity of the pulse. In other words, the intensity of the reflected light pulse corresponds to a higher value of the electrical signal than the optical sensor is capable of producing. In some examples, the electrical signal generated by the light sensor may be converted to a digital signal by an analog-to-digital converter (ADC). Worse still, in some cases, the electrical signal generated by the photosensor may exceed the maximum dynamic range of the ADC, and therefore, similar to the photosensor problem, the digital signal generated by the ADC may not reflect a sufficiently high photosensor amplitude corresponding to the electrical signal generated by the photosensor. This type of signal may be referred to as a "saturated signal" -for example, where the amplitude of the returned signal is limited by the maximum capability of the photosensor and/or ADC. In some examples, the occurrence of saturation signals may be reduced by reducing the optical transmit power and/or scaling the ADC output. However, in some examples, this is still insufficient to prevent a saturated signal from being received.
The saturation signal reduces the accuracy of some methods for determining the delay between the transmission of a light pulse and the reception of a light pulse because the time corresponding to the peak of the return pulse cannot be directly measured because the peak is truncated. For example, some methods include cross-correlating a peak of the return signal with a peak of the reference signal. When the return signal is saturated, the time corresponding to the signal peak cannot be easily ascertained, and cross-correlating the saturated signal with the reference signal can result in an erroneous determination of the delay, and thus the distance determined by the delay.
The techniques (e.g., machines, programs, processes) discussed herein improve the accuracy of determining a delay (e.g., time delay of arrival (TDOA)) between transmitting a light pulse and receiving a reflected light pulse at a light sensor, for example, by classifying a type of return signal as saturated or unsaturated, and selecting an output of one of a plurality of detectors as an estimated distance output based at least in part on the type. In some examples, classifying the type of return signal may include determining a height of the return signal (i.e., an indication of amplitude, which is used equivalently with the term "amplitude" herein), a width of the return signal, and/or whether one or more samples in the return signal exceeds a threshold amplitude for a predetermined number of samples (e.g., a consecutive number of samples). In this way, the accuracy of the distance determined from the delay can be improved. When these distances are used to control an autonomous vehicle, improving the accuracy of the time at which the detected light pulses are reflected onto the light detector may be able to save lives. These techniques may not only improve the safety of autonomous vehicles, but may also improve the accuracy of robot movements and/or three-dimensional maps generated from LIDAR data.
In some examples, the LIDAR device may include multiple detectors, which in some examples may receive the received signals substantially simultaneously (i.e., within a technical tolerance of simultaneous reception), and may then generate an output from one or more of the multiple detectors. The classifier may classify the received signal as a particular type (e.g., saturated or unsaturated), and may select one of the outputs of the one or more detectors to output as a distance measurement by the LIDAR device based at least in part on the type. In another example, the classifier may select a detector based at least in part on the type, and then may provide the received signal to the detector to cause the detector to generate an output.
In some examples, the plurality of detectors may include an unsaturated signal detector that determines the TDOA based at least in part on correlating the received signal with a reference signal. The plurality of detectors may additionally or alternatively include a saturated signal detector that determines the TDOA based at least in part on detecting a rising edge of the received signal. In some examples, the plurality of detectors may include additional detectors that may use various techniques to determine TDOA, distinguish a valid signal from noise, or other characteristics of a signal. For example, the plurality of detectors may include a cross-correlation detector, a leading edge detector, a deconvolution detector, a frequency domain analysis detector, and the like. In some examples, valid signal discrimination may be performed by a classifier.
The techniques discussed herein may also distinguish a valid return signal (e.g., a portion of an electrical signal generated by a light sensor that corresponds to the return signal) from pure noise. For example, the techniques discussed herein may include determining a dynamic noise floor for identifying valid return signals. That is, the noise floor may be dynamically adjusted based on characteristics of the received signal. For example, the noise floor may be adjusted based at least in part on a moving average of the received signal. In some examples, the technique may include identifying samples of the received signal associated with an amplitude that exceeds a dynamic noise floor as valid pulses. In additional or alternative examples, a valid pulse may be identified from a threshold number of consecutive samples that exceed the dynamic noise floor (e.g., 3 consecutive samples exceed the dynamic noise floor). In some examples, the dynamic noise floor may be scaled and/or translated based at least in part on additional characteristics of the received signal (e.g., a height and/or width of the received signal). Increasing the dynamic noise floor during periods of greater noise (e.g., in the case of sunny days) may reduce false positive identification of noise as a valid return signal and false negative identification of a valid return signal as no valid return signal, thereby increasing the accuracy of LIDAR distance measurements and increasing the safety and accuracy of machines (e.g., autonomous vehicles, robotic accessories) that rely on distance measurements. By dynamically reducing the noise floor during low noise periods (e.g., nighttime), the LIDAR system is able to distinguish valid return signals of low intensity, thereby increasing the range of the LIDAR system.
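As a concrete illustration of the idea, the following sketch (not taken from the patent; the window length, scale factor, and three-sample rule are assumed example values) shows one way a dynamic noise floor could be derived from a moving average of the received signal and used to flag a valid pulse:

```python
# Illustrative sketch only (not the patented implementation): a dynamic noise
# floor tracked as a scaled moving average of the received samples, with a
# pulse declared "valid" once a threshold number of consecutive samples
# exceed the floor.
from typing import Optional
import numpy as np

def dynamic_noise_floor(received: np.ndarray, window: int = 32, scale: float = 1.5) -> np.ndarray:
    """Noise floor as a scaled moving average of the received signal."""
    kernel = np.ones(window) / window
    moving_avg = np.convolve(received, kernel, mode="same")
    return scale * moving_avg

def find_valid_pulse(received: np.ndarray, min_consecutive: int = 3) -> Optional[int]:
    """Return the index of the first sample of a valid pulse, or None if only noise."""
    floor = dynamic_noise_floor(received)
    above = received > floor
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= min_consecutive:
            return i - min_consecutive + 1
    return None
```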
Example LIDAR System
Fig. 1 illustrates a block diagram of components of an example LIDAR system 100 that may be used to perform range measurements.
In some examples, the example LIDAR system 100 may include a channel that includes a light emitter 102 and a corresponding light sensor 104. The channel is used to emit laser pulses and to measure the reflection characteristics of the pulses, as described below. Fig. 1 depicts a single measurement channel (e.g., a light emitter/light sensor pair), but it is contemplated that the example LIDAR system 100 may include multiple channels. Those skilled in the art will appreciate that the number of light emitters and light sensors may be multiplied beyond the single laser emitter and light sensor depicted. The term "channel" may also encompass support circuitry associated with the emitter/sensor pair, and at least some of the support circuitry may be shared between multiple channels (e.g., ADC, detector, classifier).
In some examples, the optical transmitter 102 may include a laser transmitter that generates light having wavelengths between 600 and 1000 nanometers. In additional or alternative examples, the wavelength of the emitted light may be in a range between 10 microns and 250 nm. The optical transmitter 102 may transmit optical pulses (e.g., laser pulses) of different powers and/or wavelengths. For example, some of the laser emitters of the example LIDAR system 100 may emit light at 905 nanometers, while others of the laser emitters may emit light at 1064 nanometers. Laser emitters of different wavelengths may then be used alternately so that the emitted light alternates between 905 nanometers and 1064 nanometers. The light sensors may be similarly configured to be sensitive to the respective wavelengths and to filter out other wavelengths.
Activating or turning on the emitter may be referred to as "firing" the emitter. In some examples, the light emitter 102 may be fired to create a light pulse having a short duration. Moreover, to conserve power, the LIDAR system 100 may reduce the power of the transmitted light pulses based at least in part on conditions of the environment into which the light pulses are to be transmitted (e.g., low light/low noise conditions).
For a single distance measurement, the laser transmitter 102 may be controlled to transmit a beam of laser pulses (i.e., one or more) along an outward path 108 through the lens 106. The beam of laser pulses is reflected by a surface 110 in the environment of the LIDAR along a return path 114, passes through the lens 112, and reaches the light sensor 104. In some examples, the LIDAR may include a plurality of laser emitters positioned within the chassis to project laser light outward through one or more lenses. In some examples, the LIDAR may also include multiple light sensors such that light from any particular emitter is reflected through one or more lenses to the corresponding light sensor.
In some examples, lens 106 and lens 112 are the same lens, redundantly depicted for clarity. In other examples, the lens 112 is a second lens designed such that beams from laser emitters 102 at different physical locations within the housing of the LIDAR are directed outward at different angles. In particular, the first lens 106 is designed to direct light from a particular channel of laser emitters 102 in a corresponding and unique direction. The second lens 112 is designed such that the corresponding light sensor 104 of that channel receives reflected light from the same direction.
In some examples, the laser transmitter 102 may be controlled by a controller 116, the controller 116 implementing control and analysis logic for multiple channels. The controller 116 may be implemented in part by a field programmable gate array ("FPGA"), a microprocessor, a digital signal processor ("DSP"), or a combination of one or more of these and/or other control and processing elements, and may have associated memory for storing associated programs and data. To initiate a single range measurement using a single channel, the controller 116 may generate a trigger signal. The trigger signal may be received by a pulse generator, which may generate a pulse train signal 118 in response to the trigger signal. In some examples, the burst signal 118 may include a pair of sequential pulses that indicate the time at which the laser transmitter 102 should be activated or turned on. In some examples, the rising edge of the pulse may be used to indicate the time at which the laser emitter 102 should be activated (fired), although any other characteristic of the burst signal 118 may be used to activate the laser emitter 102 (e.g., the falling edge). In some examples, the pulse generator may be part of the controller 116.
In embodiments where the burst signal 118 includes a pair of sequential pulses 120, the burst signal 118 may be received by the optical transmitter 102 and cause the optical transmitter 102 to transmit a pair of sequential laser pulses. The optical transmitter 102 may transmit light 120 corresponding in time to the pulses of the burst signal 118. Although depicted as two pulses in fig. 1 for illustrative purposes, any number of pulses (e.g., one or more) are contemplated. In some examples, the trigger signal, the burst signal 118, and/or the signal generated by the optical transmitter 102 may be used to determine the TDOA. For example, a time corresponding to the emission of light (e.g., a sample number of a clock signal generated by the controller 116) may be recorded based on one or more of these signals. Subsequent components (e.g., detectors and/or classifiers) can use the time to determine the TDOA.
Assuming that the emitted laser light is reflected from the surface 110 of an object, the light sensor 104 may receive the reflected light and generate a return signal 122 (or light sensor output signal). The return signal 122 may have substantially the same shape as the light pulses 120 emitted by the light emitters 102, although it may differ due to noise, interference, crosstalk between different emitter/sensor pairs, interfering signals from other LIDAR devices, etc. The return signal 122 will also be delayed with respect to the optical pulses 120 by an amount corresponding to the round-trip propagation time of the emitted laser pulse train.
In some examples, the light sensor 104 may include an avalanche photodiode ("APD") and/or any other suitable component for generating a signal based on light detected at the light sensor 104. In some examples, the light sensor 104 may also include an amplifier, which may include a current-to-voltage converter amplifier (e.g., a transimpedance amplifier ("TIA")). Regardless, the amplifier may be any amplifier configured to convert the return signal so that a downstream component (such as an ADC) reading the signal may accurately read the signal.
In some examples, ADC 124 may receive the return signal 122 and digitize the return signal 122 to generate the received signal 126. The received signal 126 may include a stream of digital values indicative of the magnitude of the return signal 122 over time. In some examples, ADC 124 may be programmed to sample the return signal 122 at a frequency that matches a clock signal generated by the controller 116 to simplify TDOA determinations. As used herein, a "sample" of the received signal 126 includes a representation of the magnitude of the return signal 122 at a discrete sample number. These discrete sample numbers may be associated with an analog time that may be used to determine the TDOA (e.g., by reference to a sampling frequency to determine a delay time).
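The mapping from sample numbers to time and distance can be illustrated as follows; the 1 GHz sampling rate and the sample indices are illustrative assumptions, not values from the patent:

```python
# Hedged sketch: mapping ADC sample numbers to a TDOA and then a distance.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def samples_to_tdoa_s(emit_sample: float, return_sample: float, sample_rate_hz: float) -> float:
    """TDOA implied by two (possibly fractional) sample indices."""
    return (return_sample - emit_sample) / sample_rate_hz

def tdoa_to_distance_m(tdoa_s: float) -> float:
    return SPEED_OF_LIGHT_M_PER_S * tdoa_s / 2.0

# Example: a 1 GHz ADC, pulse emitted at sample 10, return detected at sample 110.
print(tdoa_to_distance_m(samples_to_tdoa_s(10, 110, 1e9)))  # ~14.99 m
```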
The representation of the magnitude of the discrete samples may be based at least in part on the scale of ADC 124. For example, the ADC 124 may have a 16-bit output, and thus may represent the current or voltage of the return signal 122 as a 16-bit value. The highest value of the output of ADC 124 may be referred to as the maximum dynamic range of ADC 124. In some examples, the scale of ADC 124 may be set based at least in part on the power of the emitted light 120 and/or detected environmental conditions (e.g., signal-to-noise ratio (SNR), noise floor). However, a high-reflectivity surface and/or a surface very close to the light emitter/light sensor may reflect more light onto the light sensor 104 than expected, such that the light sensor 104 outputs a return signal 122 that exceeds the maximum dynamic range of the ADC 124. In other words, in this case, ADC 124 will output the maximum possible value (e.g., the unsigned 16-bit integer output "65535"), but this value will be "not high enough" to accurately reflect the return signal 122, and/or the received signal cannot be resolved by the ADC because the range between the noise floor and the received signal is not high enough. In additional or alternative examples, light reflected by the object onto the light sensor 104 may similarly exceed the ability of the light sensor 104 to produce a current or voltage that accurately reflects the intensity of light received at the light sensor 104.
These conditions are referred to herein as "saturation" of the photosensor 104 and ADC 124. Regardless of whether one or both of the photosensor 104 or the ADC 124 is saturated in the manner described above, the received signal 126 resulting from saturation of the photosensor 104 and/or the ADC 124 may be referred to as a saturated signal.
In some examples, the detectors 128(1)-(N) receive the received signal 126 and determine therefrom the distances d1...dn (130(1)-(N)). For example, detector 128(1) may receive the received signal 126 and may determine distance 130(1) based at least in part on the received signal 126 and on the programming and/or circuit layout of the detector. In some examples, the detectors 128(1)-(N) may additionally or alternatively receive a clock signal from the controller 116, an indication of the time at which the light pulse 120 was emitted by the light emitter 102, and/or any other indication sufficient to determine the TDOA from which the detector may calculate a distance (e.g., light sensor temperature, light emission power).
For example, detectors 128(1)-(N) may include a detector for determining the TDOA of an unsaturated signal, a detector for determining the TDOA of a saturated signal, a detector for determining the TDOA based on light sensor temperature and/or the transmitter, and/or combinations thereof. In some examples, the distances 130(1)-(N) determined by the different detectors 128(1)-(N) may vary based on differences in the programming of the detectors and/or the arrangement of their circuitry. For example, the unsaturated signal detector may determine the TDOA based on programming/circuitry that correlates the received signal 126 to a reference signal, while the saturated signal detector may determine the TDOA based on programming/circuitry that detects the rising edge of the received signal 126. The detectors 128(1)-(N) may determine the distances 130(1)-(N) based at least in part on the TDOA and the speed of light. In some examples, these distances (or TDOAs) may be modified by calibration techniques discussed in more detail with respect to fig. 4B.
Fig. 1 illustrates one potential configuration of the example system 100, while fig. 4A illustrates an additional or alternative configuration of the example system 100. For example, in fig. 1, classifier 132 may receive distances 130(1)-(N) and/or other determinations (e.g., an indication of the width and/or height of the received signal 126) from detectors 128(1)-(N), and may select one of distances 130(1)-(N) for output as the selected distance 134 based at least in part on the distances 130(1)-(N), other data (e.g., width, height) determined by detectors 128(1)-(N), signals received from controller 116, and/or the received signal 126 itself. In other examples, such as in fig. 4A, detectors 128(1)-(N) and classifier 132 may receive the received signal 126 simultaneously, classifier 132 may determine a type of the received signal 126, and, based at least in part on the type, classifier 132 may select one of the distances determined by one of detectors 128(1)-(N) for output. In some examples, classifier 132 may receive the received signal 126, classify the received signal 126 as a certain type, and select one of detectors 128(1)-(N) to transmit the received signal 126 thereto based at least in part on the type.
In some examples, the detectors 128(1)-(N) and/or the classifier 132 may be implemented at least in part by an FPGA, a microprocessor, a DSP board, or the like. In some examples, the selected distance 134 may be output to a perception engine for inclusion in a point cloud or for rendering a representation of an environment surrounding the example LIDAR system 100. In some examples, the point cloud and/or other representation of the environment may be used to determine control signals for operating an autonomous vehicle, robotic accessories, video game system outputs, and the like.
Note that fig. 1 shows the logic components and signals in a simplified manner for the purpose of describing the general features. In a practical implementation, various types of signals may be generated and used to excite the laser emitter 102 and measure the TDOA between the output of the laser emitter 102 and the reflected light sensed by the light sensor 104.
Example Received Signal
Fig. 2A shows an example signal diagram of an unsaturated return signal 200, and fig. 2B shows an example signal diagram of a saturated return signal 202. Note that the unsaturated return signal 200 has an identifiable maximum amplitude 204 (or, equivalently, height), which maximum amplitude 204 can be used to cross-correlate (or otherwise compare) with the reference signal to identify the sample number corresponding to the TDOA, while the saturated return signal 202 is prominent for its "flat top" that has no discernible maximum. This "flat top" is due to saturation of the ability of the ADC and/or photosensor to produce an output of increasing magnitude as the intensity of light incident on the photosensor continues to increase. As discussed in the sections above, the intensity of light incident on the light sensor is a function of the transmitted power (i.e., the power of the pulse emitted from the light emitter), the proximity of the surface from which the emitted pulse is reflected to the light sensor, the reflectivity of the surface, and the like. It is not sufficient to merely estimate the maximum amplitude of the saturated return signal 202 as halfway between the rising edge 206 and the falling edge 208, because the sample number corresponding to the halfway point does not always correspond to the actual maximum value. As can be observed in fig. 2B, sometimes the falling edge 208 of the saturated signal may include a longer tail than the rising edge 206, indicating a non-Gaussian feature that may be introduced by high reflectivity or very close objects.
Fig. 2B also shows a threshold amplitude 210, a first width 212, and a second width 214. The first width 212 is a width (e.g., a number of samples, a time) spanned by the samples whose magnitudes lie within some variation of the maximum magnitude of the saturated return signal 202. For example, although the signal diagrams in figs. 2A and 2B depict "smooth" signals, in practice, due to noise, the signals are more likely to be "jagged" and may contain outliers. Thus, the first width 212 may be calculated for samples associated with magnitudes that lie within some deviation of the average maximum height associated with the "flat top" and/or the maximum height of the leftmost sample. The second width 214 is the width calculated between the point at which the rising edge 206 reaches the threshold amplitude 210 and the point at which the falling edge 208 reaches the threshold amplitude 210. These widths are discussed in more detail below. In some examples, the height of the unsaturated signal may be used to calibrate the unsaturated signal detector output, and the width of the saturated signal may be used to calibrate the saturated signal detector and/or to classify the saturated signal as saturated.
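A simple sketch of how the two widths could be computed from a digitized received signal is shown below; the tolerance used for the "flat top" and the threshold value are assumptions for illustration only:

```python
# Rough sketch (thresholds and tolerances are assumed, not patent values):
# two width measures for a saturated pulse, mirroring the "first width" of
# near-maximum samples and the "second width" between threshold crossings.
import numpy as np

def first_width(received: np.ndarray, tolerance: float = 5.0) -> int:
    """Number of samples whose magnitude is within `tolerance` of the maximum."""
    return int(np.count_nonzero(received >= received.max() - tolerance))

def second_width(received: np.ndarray, threshold: float) -> int:
    """Number of samples between the first and last crossing of `threshold`."""
    above = np.flatnonzero(received >= threshold)
    return int(above[-1] - above[0] + 1) if above.size else 0
```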
Example Procedure
Fig. 3 shows a flow diagram of an example process for selecting a detector to determine a distance to an object using a LIDAR system that includes multiple light detectors. In operation 302, the example process 300 may include emitting a light pulse in accordance with any of the techniques discussed herein. In some examples, this may include activating a laser emitter that emits one or more laser pulses into an environment in which the laser emitter is located.
In operation 304, the example process 300 may include receiving a signal indicative of receipt of the reflected light pulse in accordance with any of the techniques discussed herein. In some examples, this may include receiving light from an object in the environment that reflects at least a portion of the light pulses to the light sensor. As described above, the photosensor may include an avalanche photodiode that converts the intensity of light incident on the photosensor into a current. In some examples, as described above, this current may be amplified, converted, and/or sampled and ultimately received by a classifier and/or detector as a received signal. In some examples, the received signal comprises a digital signal that includes, at each discrete sample, an indication of the magnitude of the current generated by the light sensor. As used herein, the relative magnitude of this indication is referred to as the "height" or "amplitude" of the received signal, although those skilled in the art will understand that the value of the received signal is a representation of the magnitude, rather than the actual value of the intensity of the light at the sensor.
In operation 306, the example process 300 may include detecting that the received signal includes a valid pulse in accordance with any of the techniques discussed herein. In some examples, this may include the classifier classifying the received signal as a valid pulse based at least in part on the dynamic noise floor, as discussed in more detail below. For example, the classifier may continuously determine the dynamic noise floor and classify samples associated with magnitudes that do not reach the dynamic noise floor as noise (and then return to operation 304) and classify samples associated with magnitudes that exceed the noise floor as valid pulses (and then continue to operation 308). In some examples, to be classified as a valid pulse, the classifier may further require that a threshold number of samples exceed the dynamic noise floor before classifying those samples and subsequent samples that exceed the dynamic noise floor as valid pulses.
In operation 308, the example process 300 may include classifying the received signal as being of a certain type according to any of the techniques discussed herein. For example, the classifier may classify the signal into types including the following: unsaturated; saturated; noisy (e.g., associated with SNR values that exceed an SNR threshold); a valid signal (i.e., a return pulse corresponding to a transmitted light pulse); noise (e.g., not a valid signal); combinations thereof; and the like. For example, the example process 300 may include: determining a noise floor and, for those samples that exceed the noise floor and/or for a threshold number of consecutive samples that exceed the noise floor, classifying the received signal as a valid signal based at least in part on the height of the valid signal exceeding the noise floor. In additional or alternative examples, the example process 300 may include: determining that the received signal is a saturated signal based at least in part on a width of the signal, a maximum dynamic range of the ADC, and/or determining that a threshold number of samples are associated with a height that exceeds a threshold amplitude.
In some examples, in operation 308, if the classifier determines that the received signal is associated with noise rather than a return pulse, the example process 300 may return to operation 304. For example, the classifier may determine that the received signal does not exceed the dynamic noise floor, as discussed in more detail below.
In operation 310(A)/310(B), the example process 300 may include selecting a detector from a plurality of detectors based at least in part on the type, according to any of the techniques discussed herein. For example, the classifier may select a detector to transmit the received signal thereto, or, in some arrangements, the classifier allows the received signal to pass to the selected detector (e.g., via a switch controlling the detector) to cause the selected detector to process the received signal. In additional or alternative examples, when the received signal is received by the classifier, multiple detectors may receive the received signal at the same time, and in operation 310, the classifier may select an output of one of the multiple detectors (e.g., via controlling a multiplexer or switch that receives the outputs of the multiple detectors as inputs).
For example, in operation 310(A), the example process 300 may include: based at least in part on classifying the received signal as having a type that includes an indication that the received signal is unsaturated, selecting the first detector to determine the TDOA from the received signal, e.g., by cross-correlation. In additional or alternative examples, any method may be used to determine the TDOA. For example, a direct delay calculation may be used to determine the delay between a peak in the reference signal and a peak in the received signal. In some examples, when it is determined that the first detector is to be selected, the type may additionally include an indication that the received signal is a "valid pulse." In operation 310(B), the example process 300 may include: based at least in part on classifying the received signal as having a type that includes an indication that the received signal is saturated, selecting a second detector to determine the TDOA from the received signal by rising edge detection. In some examples, when it is determined that the second detector is to be selected, the type may additionally include an indication that the received signal is a "valid pulse."
In operation 312, the example process 300 may include calculating a distance to the object that reflected the light pulse based at least in part on the TDOA, in accordance with any of the techniques discussed herein. In some examples, this may include calculating the distance based at least in part on the speed of light and the TDOA. In some examples, the selected detector may perform the calculation, or a downstream component may perform the calculation.
Example Detector/Classifier Architecture
Fig. 4A depicts a block diagram of an example architecture 400 for classifying a received signal and selecting a detector output from a plurality of detector outputs as an estimated distance output. For simplicity, the example architecture 400 depicted in fig. 4A includes two detectors, an unsaturated signal detector 402 and a saturated signal detector 404. It is contemplated that more than two detectors may be employed in the example architecture 400, but for simplicity the discussion is limited to these two detectors. Other detectors may include the following: a noisy signal detector; a detector for a particular temperature range (e.g., a detector that determines distance and/or TDOA based at least in part on non-linearity of the light sensor when the light sensor and/or LIDAR system is within the particular temperature range); a detector for a particular power range (e.g., a detector that determines distance and/or TDOA based at least in part on non-linearities of the light sensor and/or light emitter when the emitted power is within a particular power range); and the like. Detectors 402 and 404 may represent two of the detectors 128(1)-(N).
The example architecture 400 may also include a classifier 406; the classifier 406 may be representative of the classifier 132 and may receive the received signal 408, which may be representative of the received signal 126. In some examples, the classifier 406 may be programmed and/or include circuitry arranged to: distinguish a valid pulse from noise; classify the received signal 408 as a certain type; and/or select a detector and/or a detector output based at least in part on the type. Fig. 4A depicts an example in which classifier 406 generates a selection 412 to select the output of one of detectors 402 or 404 as the selected distance 410 output.
For example, the unsaturated signal detector 402 may perform a cross-correlation of the received signal 408 with a reference signal (as discussed in more detail with respect to fig. 5) to determine a TDOA, from which the unsaturated signal detector 402 may determine the first distance 414. In some examples, the unsaturated signal detector 402 may additionally or alternatively determine a height 416 (e.g., a maximum value) of the received signal 408. The saturated signal detector 404 may be programmed and/or include circuitry arranged to: detect a rising edge of the received signal 408; associate the rising edge with a time, from which the saturated signal detector 404 determines the TDOA; and determine a second distance 418 from the TDOA. In some examples, the saturated signal detector 404 may also determine a width 420 of the received signal 408 (e.g., a number of samples associated with the same average maximum of the received signal 408, a number of samples from a rising edge to a falling edge of the received signal 408).
In some examples, the unsaturated signal detector 402, the saturated signal detector 404, and the classifier 406 may receive the received signal 408, and the classifier 406 may classify the received signal 408 as a certain type; based at least in part on the type, one of the outputs of the detectors 402 or 404 may be selected for delivery as the selected distance 410. In some examples, depending on the detector selected, the height 416 may also be passed if the unsaturated signal detector 402 is selected, or the width 420 may be passed if the saturated signal detector 404 is selected. In some examples, the outputs of the detectors may be inputs to a multiplexer 422, and the classifier 406 may generate a selection 412 that controls the multiplexer to output the signal corresponding to the selection 412. Regardless of the actual implementation employed, the selection 412 may include a control signal generated by the classifier 406 for selecting the output of at least one detector as the final estimated distance output to downstream components and/or for modification by the calibrator 428.
For example, where the received signal is a saturated signal, the unsaturated signal detector 402, the saturated signal detector 404, and the classifier 406 may receive the received signal 408, and the classifier 406 may generate a selection 412 that identifies the output of the saturated signal detector (i.e., the second distance 418 and, in some examples, also the width 420). In examples where the saturated signal detector 404 also determines the width 420, the multiplexer 422 may receive the selection 412 and cause the output of the unsaturated signal detector 402 to be blocked, while causing the second distance 418 to be passed as the selected distance 410 and the width 420 to be passed.
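The selection path of fig. 4A can be summarized with the following sketch, in which a classifier callable stands in for classifier 406 and a return value of None stands in for the blocked state of multiplexer 422; the function names and interfaces are hypothetical, not part of the patent:

```python
# Minimal sketch of the Fig. 4A selection path: both detectors process the
# received signal, and the classifier's selection gates which distance is
# passed downstream (standing in for multiplexer 422).
def select_output(received_signal, classifier, unsaturated_detector, saturated_detector):
    signal_type = classifier(received_signal)        # e.g. "saturated", "unsaturated", or None
    first_distance, height = unsaturated_detector(received_signal)
    second_distance, width = saturated_detector(received_signal)
    if signal_type == "saturated":
        return second_distance, width                # block the unsaturated output
    if signal_type == "unsaturated":
        return first_distance, height
    return None                                      # no valid pulse detected yet
```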
In some examples, detectors 402 and 404 may additionally or alternatively include other detectors (e.g., cross-correlation detectors, leading edge detectors, deconvolution detectors, frequency domain analysis detectors, etc.). For example, these other detectors may be used as part of detectors 402 and/or 404, and/or may be completely independent detectors into which the received signal is passed and from which the TDOA is determined separately. In some examples, the received signal 408 may be filtered before it is received by any of the detectors and/or the classifier 406, and/or the received signal 408 may be filtered at any point in the operation of the detectors and/or the classifier 406. For example, the received signal 408 may be passed through a low pass filter to smooth the signal.
In some examples, additional and/or alternative detectors may include a detector for processing a split beam, which may occur when a transmitted pulse strikes an object that splits the reflected light pulse into two pulses (in time) (e.g., a step; or first a reflection from a window, then a reflection from an object behind the window). For example, the deconvolution detector may determine a Wiener deconvolution to recover pulse delays from the optical transmitter to the optical sensor when the beam is split, and/or the frequency domain detector may perform optimal filtering and/or frequency domain analysis to recover the split beam reflections. In some examples, the deconvolution detector can deconvolve the received signal based at least in part on the transmitted pulse and the received signal. In this example, the deconvolution detector may select two peaks that are adjacent to each other and/or closest to each other to perform the determination of the TDOA. In some examples, the distance from each of the one or more peaks may be recovered. In additional or alternative examples, distances associated with some of the plurality of peaks that are less than a threshold distance (e.g., which may be due to reflections by the LIDAR sensor itself) may be detected and discarded.
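For illustration, one possible form of such a frequency-domain approach is a Wiener deconvolution of the received signal by the emitted pulse, sketched below; the constant noise-to-signal ratio nsr is an assumed simplification, not a parameter described in the patent:

```python
# Speculative sketch of frequency-domain (Wiener) deconvolution for split
# returns, as one way such a detector could recover two overlapping pulse
# delays.
import numpy as np

def wiener_deconvolve(received: np.ndarray, emitted: np.ndarray, nsr: float = 0.01) -> np.ndarray:
    """Estimate the impulse response (reflection delays) of the scene."""
    n = len(received)
    H = np.fft.rfft(emitted, n)              # spectrum of the emitted pulse
    Y = np.fft.rfft(received, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.fft.irfft(Y * G, n)            # peaks mark candidate pulse delays
```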
In some examples, classifier 406 may classify the received signal 408 as a valid pulse based at least in part on determining that the received signal 408 includes more than a threshold number of samples (e.g., more than one, more than three, increasing as the sampling rate increases) that exceed the dynamic noise floor determined by classifier 406, which is discussed in more detail below. In some examples, the classifier 406 may control the multiplexer 422 to remain off (i.e., not pass an output) until the classifier 406 classifies the received signal 408 as a valid pulse. For example, the classifier 406 may continuously determine the dynamic noise floor and compare the magnitude of the received signal 408 to the dynamic noise floor, and may output a selection 412 that does not allow any detector output to be passed until the classifier 406 determines that more than three samples of the received signal 408 are associated with a magnitude that exceeds the noise floor. At that time, the classifier 406 may also classify the received signal as being of a certain type and change the selection 412 to indicate which detector output to pass. Further, although described as a dynamic noise floor, any other way of distinguishing valid pulses (e.g., a fixed threshold, a number of received points, etc.) is contemplated.
In some examples, classifier 406 may classify the received signal 408 as a saturated signal based at least in part on a threshold number of samples exceeding the dynamic noise floor (e.g., three or more, ten or more), a threshold number of samples associated with magnitudes within a certain deviation from each other (e.g., ± 5 units, depending on the scale of the ADC), a threshold number of samples exceeding a threshold magnitude, a width of the received signal 408, and/or a combination thereof (collectively referred to herein as a threshold magnitude). In some examples, classifier 406 may classify the received signal 408 as a valid pulse based at least in part on more than three samples exceeding a threshold amplitude, and classify the received signal 408 as a saturated pulse based at least in part on more than 126 samples exceeding the threshold amplitude. In some examples, classifier 406 may classify the received signal 408 as an unsaturated signal if the number of samples of the received signal 408 that exceed the threshold amplitude is greater than three but less than 126. Although this example uses the number 126, the number of samples used to distinguish between valid unsaturated pulses and valid saturated pulses may vary based at least in part on the sampling frequency of the ADC.
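Expressed as a sketch, the sample-count rule described above might look like the following; the values 3 and 126 are the example thresholds mentioned in this paragraph and would change with the ADC sampling frequency:

```python
# Illustrative classification rule following the sample counts mentioned
# above; not the patented implementation.
import numpy as np

def classify_received(received: np.ndarray, threshold_amplitude: float,
                      valid_min: int = 3, saturated_min: int = 126) -> str:
    above = int(np.count_nonzero(received > threshold_amplitude))
    if above <= valid_min:
        return "noise"          # not enough samples above threshold for a valid pulse
    if above >= saturated_min:
        return "saturated"      # wide flat top implies saturation
    return "unsaturated"
```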
In some examples, the classifier 406 may include a decision tree or any arrangement thereof, such as a random forest and/or a boosted ensemble of decision trees; a directed acyclic graph (DAG) (e.g., where the nodes are organized as a Bayesian network); a deep learning algorithm; and the like. In some examples, classifier 406 may include: programming and/or circuitry for determining a dynamic noise floor; a comparator for comparing the amplitude of the received signal 408 with the dynamic noise floor; and logic for driving a pin state to indicate the selection 412.
In some examples, the selected distance 410 may be used by downstream components as the final estimated distance, e.g., for constructing a point cloud representation of the environment. In some examples, before being used by downstream components as the final estimated distance, the selected distance 410 may be modified by an offset distance 424 to determine a modified distance 426, as shown in fig. 4B. In some examples, the calibrator 428 may receive the selected distance 410, the height 416, and/or the width 420, and/or the power of the emitted light (transmission power 430). In some examples, the calibrator 428 may include a lookup table of experimental values that includes an offset distance 424, the offset distance 424 being associated with at least the height 416 and the transmit power 430 (for unsaturated signals), or the width 420 and the transmit power 430 (for saturated signals). In some examples, the offset distance may additionally be a function of the selected distance 410, the temperature of the light sensor and/or the LIDAR system, and the like. In some examples, the table may be populated by recording differences between actual distances to an object and the distances estimated by the detectors while varying the transmission power of the light emitter and the reflectivity of the surface (e.g., using a neutral density filter) to generate received signals having different heights and widths. For example, the calibrator 428 may determine that the selected distance 410 should be adjusted by -5 millimeters for an unsaturated signal with the light sensor at 75 degrees Fahrenheit, a transmit power 430 of 35 milliwatts, and a received signal power (i.e., height) of 32 milliwatts. The calibrator 428 may apply this offset to the selected distance 410 and provide the modified distance 426 to the downstream components as the final estimated distance. The calibrator 428 may thus account for non-linearities in the light emitters and/or light sensors, thereby further improving the accuracy of the estimated distance.
In some examples, the calibrator 428 may include a look-up table that maps the experimental transmit power and the experimental received height and/or width of the received signal to a distance offset determined by taking the difference between a measured distance to a test object and the distance estimated from the received signal. In some examples, to determine the offset distance online, calibrator 428 may perform bilinear and/or bicubic interpolation over the actual transmit power and the height and/or width of the received signal to determine the distance offset. In some examples, to account for temperature fluctuations that vary over time while the look-up table is being populated, the distance to the object may be kept constant and estimated by the system at different operating temperatures. A curve of temperature versus estimated distance (and/or versus the difference between the estimated and measured distance) may be fitted. In some examples, the curve may be a straight line. Thus, the calibrator 428 may adjust the distance offset by the change in distance specified by the curve. In so doing, the lookup table need not include a temperature dimension, since the distance offset can be adjusted for temperature based on the curve or line.
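An illustrative sketch of such a calibration lookup is shown below, using SciPy's RegularGridInterpolator for the bilinear interpolation; the grid axes and offset values are invented placeholders, not measured calibration data:

```python
# Hedged sketch of the calibration lookup described above: a 2-D table of
# distance offsets indexed by transmit power and received height, queried
# online with bilinear interpolation.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

tx_power_mw = np.array([20.0, 35.0, 50.0])           # experimental transmit powers
rx_height_mw = np.array([10.0, 32.0, 60.0])          # experimental received heights
offset_mm = np.array([[-2.0, -4.0, -6.0],            # measured minus estimated distance
                      [-3.0, -5.0, -7.0],
                      [-4.0, -6.0, -9.0]])

lookup = RegularGridInterpolator((tx_power_mw, rx_height_mw), offset_mm, method="linear")

def calibrated_distance_mm(selected_distance_mm: float, tx_mw: float, height_mw: float) -> float:
    return selected_distance_mm + float(lookup([[tx_mw, height_mw]])[0])

print(calibrated_distance_mm(12_000.0, 35.0, 32.0))  # ~11995 mm for the example -5 mm offset
```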
Example Unsaturated Signal Detector
Fig. 5 depicts example waveforms of an emitted light pulse, an unsaturated received signal indicative of a reflected light pulse, and a cross-correlation between the emitted light pulse and the received signal. In some examples, the unsaturated signal detector may determine the TDOA according to the following discussion.
Fig. 5 shows a first waveform 502 representing the time and intensity of light emitted by a laser emitter. The light for a single distance measurement is emitted in the form of a sequence or burst of multiple pulses, in this example comprising a pair of pulses 504(A) and 504(B), each having a width of about 5 to 50 nanoseconds. However, in other examples, a burst sequence or pulse train having more than two pulses of longer or shorter duration may be used. In the example shown, the pair of pulses may be spaced apart from each other by a duration t1. In one embodiment, the duration of this time interval varies between 20 and 50 nanoseconds. The pulses are generated by the discharge of a capacitor through the laser emitter and thus have a Gaussian shape.
Fig. 5 illustrates a second waveform 506, which represents the magnitude of reflected light received and detected by the light sensor (such as may be indicated by received signals 126 and/or 408). The second waveform 506 includes a pair of pulses 508(A) and 508(B) corresponding to pulses 504(A) and 504(B), respectively. However, the pulses of the second waveform 506 are delayed with respect to the first waveform 502 by a time t2. The timing relationship between the pulses of the second waveform 506 should be the same as the timing relationship of the transmitted pulses 504.
Fig. 5 shows a third waveform 510 representing the cross-correlation between the first waveform 502 and the second waveform 506. The highest peak 512 of the third waveform 510 corresponds in time to t2, the time difference between the transmitted first waveform 502 and the detected second waveform 506. It is this time t2 that cannot be identified for a saturated pulse, because the flat top of the saturated signal does not produce a peak that correlates precisely at time t2. Thus, according to the configuration of fig. 4A, if the received signal 408 is a saturated signal, the first distance 414 determined by the unsaturated signal detector may be inaccurate.
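A minimal sketch of the cross-correlation step for an unsaturated return is shown below; the lag of the highest correlation peak gives the delay t2, and the sampling rate is an assumed parameter:

```python
# Sketch of the cross-correlation TDOA estimate for unsaturated returns
# (Fig. 5). Pulse shapes and the sampling rate are illustrative assumptions.
import numpy as np

def tdoa_by_cross_correlation(reference: np.ndarray, received: np.ndarray,
                              sample_rate_hz: float) -> float:
    corr = np.correlate(received, reference, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(reference) - 1)  # lag of the highest peak
    return lag_samples / sample_rate_hz
```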
Example Saturated Signal Edge Detection
Fig. 6A-6F depict a technique for determining the arrival time of a reflected pulse at a photosensor for a saturated received signal. In some examples, the time of arrival may be used to determine the TDOA for use in determining the second distance 418. For example, the saturated signal detector 404 may include programming and/or circuitry for performing the techniques described in fig. 6A-6F. In some examples, the technique includes detecting an edge of a received signal (which may represent received signals 126 and/or 408). As described below, this may include determining a particular position on the rising edge (referred to herein as an "intermediate position") and identifying a sample number corresponding to the intermediate position (e.g., which may be a fractional sample, as an integer sample number may not exactly correspond to the position).
Fig. 6A depicts a received signal 600 that may represent received signals 126 and/or 408. In some examples, the classifier may have classified the received signal 126 as a "saturated signal" type and passed the received signal 600 to the saturated signal detector, or, in another example, the saturated signal detector may continuously receive the received signal 126 and determine an output, allowing the classifier to determine when to pass the output of the saturated signal detector (e.g., when the classifier has classified the received signal 126 as indicating a valid pulse and as saturated).
In operation 602, the saturated signal detector may determine a first maximum 604 of the received signal 600 in time, i.e., the maximum value associated with the lowest sample number (e.g., the first sample associated with a saturation value from, for example, the ADC). The sample associated with this value may be referred to as the leftmost sample 606 (i.e., the earliest in the time/sample sequence) indicated in fig. 6A, and is also referred to herein as the largest sample. In some examples, the first maximum may be detected from an unfiltered version of the received signal, and in some examples, subsequent operations may be performed on a filtered version of the received signal. For example, in operation 602, the detector may identify the first maximum value from the unfiltered received signal, filter the received signal (e.g., using a low pass filter, or using other filters or operations depending on additional detector functionality, such as Fourier transforming the received signal to identify frequency domain components of the received signal), and then perform operation 608. In some examples, determining the first maximum may include using a maximum location technique, including a technique that incorporates variance to account for noise. For example, the variance may be set based at least in part on the SNR, the noise power, the dynamic noise floor discussed below, or other indicators of current noise.
Fig. 6B depicts operation 608. In operation 608, the saturated signal detector may fit a first polynomial curve 610 to the leftmost sample 606 and at least two previous samples 612 and 614. In some examples, the first polynomial curve 610 may include a second- or third-order polynomial function. Any suitable curve-fitting technique may be employed. For example, the saturated signal detector may determine the first polynomial curve 610 using a least-squares regression analysis and/or a non-linear least-squares regression analysis (e.g., a Gauss-Newton algorithm with a damping factor based at least in part on noise power, the dynamic noise floor, and/or SNR).
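As an illustrative sketch only (an assumed NumPy implementation, not code from the disclosure), operations 602 and 608 might be expressed as follows:

```python
import numpy as np

def leftmost_maximum_index(samples):
    """Operation 602: index of the earliest sample carrying the maximum
    (e.g., saturated) amplitude of the received signal."""
    return int(np.argmax(samples))  # argmax returns the first occurrence

def fit_first_polynomial(samples, leftmost_index, degree=2):
    """Operation 608: fit a polynomial to the leftmost maximum sample and
    at least two samples immediately preceding it."""
    x = np.arange(leftmost_index - 2, leftmost_index + 1, dtype=float)
    y = np.asarray(samples, dtype=float)[leftmost_index - 2:leftmost_index + 1]
    return np.polyfit(x, y, degree)  # coefficients, highest degree first
```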
Fig. 6C depicts operation 616; for clarity, a portion of received signal 600 has been removed. In operation 616, the saturated signal detector may define an intermediate threshold amplitude value 618 based at least in part on the first polynomial curve 610. For example, the saturated signal detector may determine a composite maximum 620 from the first polynomial curve 610 and define the intermediate threshold amplitude 618 as a value that is a predetermined percentage of the composite maximum 620 (e.g., 60% of the maximum, any percentage between 50% and 80%, or as low as 40%, although with reduced accuracy of the results). In some examples, the saturated signal detector defines the intermediate threshold amplitude 618 as 60% of the composite maximum 620.
In some examples, the saturated signal detector may determine the composite maximum 620 by identifying a maximum (e.g., a local maximum or a global maximum, depending on the order of the polynomial) of the first polynomial curve 610. Note that although fig. 6C depicts the first polynomial curve 610 as a straight line segment, in practical implementations the first polynomial curve 610 may include at least a local maximum located near the leftmost sample 606. In additional or alternative examples, the saturated signal detector may determine the composite maximum 620 by evaluating the polynomial curve at the sample number corresponding to the leftmost sample 606 (e.g., "plugging" the sample number into the polynomial). In some examples, operation 616 may also include checking the coefficients of the polynomial curve to ensure that the polynomial curve is concave and/or to ensure that the coefficients indicate that the polynomial curve is a second-, third-, or higher-order polynomial. This may be done before the composite maximum 620 is determined, to ensure that a maximum can be found and to ensure the accuracy of subsequent operations.
Operation 616 may additionally or alternatively include determining a point 622 at which the first polynomial curve 610 intersects the intermediate threshold amplitude value 618.
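Continuing the illustrative sketch (operation 616), the intermediate threshold and the crossing point 622 might be derived as below; the concavity check and the root selection are assumptions about one reasonable implementation.

```python
import numpy as np

def intermediate_threshold(coefficients, leftmost_index, fraction=0.6):
    """Operation 616: intermediate threshold amplitude as a fraction
    (e.g., 60%) of the composite maximum of the first polynomial curve."""
    coefficients = np.asarray(coefficients, dtype=float)
    if len(coefficients) == 3 and coefficients[0] >= 0:
        # A second-order fit should open downward near the leftmost sample.
        raise ValueError("fit is not concave; edge estimate would be unreliable")
    composite_maximum = float(np.poly1d(coefficients)(leftmost_index))
    return fraction * composite_maximum

def first_crossing(coefficients, threshold):
    """Point 622: where the first polynomial curve crosses the intermediate
    threshold amplitude on the rising edge."""
    shifted = np.asarray(coefficients, dtype=float).copy()
    shifted[-1] -= threshold
    roots = np.roots(shifted)
    real = np.sort(roots[np.isreal(roots)].real)
    return float(real[0])  # earliest crossing, in (fractional) sample units
```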
Fig. 6D depicts operation 624. In operation 624, the saturated signal detector may determine at least three samples of received signal 600 that are closest to point 622 (i.e., the intersection of the first polynomial curve and the intermediate threshold amplitude). In some examples, the closest six samples of received signal 600 may be found, as shown by 626(1)-(6) in fig. 6D.
Fig. 6E depicts operation 628; for clarity, a portion of received signal 600 has been removed. In operation 628, the saturated signal detector may fit a second polynomial curve 630 to the three or more samples (six samples 626(1)-(6) in fig. 6E), which are referred to herein as "intermediate samples." Again, any suitable fitting algorithm may be used to fit the second polynomial curve 630 to the intermediate samples, and the second polynomial curve 630 may be a second- or third-order polynomial. In some examples, instead of using the polynomial curve 630, a straight line may be fit to the intermediate samples closest to the intersection of the first polynomial curve and the intermediate threshold amplitude. In such an example, the saturated signal detector may fit a straight line to two or more samples in operation 628.
Fig. 6F depicts operation 632. In operation 632, the saturated signal detector may determine a second intersection point 634 of the second polynomial curve 630 and the intermediate threshold amplitude 618. This point is referred to herein as the intermediate point 634. In some examples, intermediate point 634 is an indication of the rising edge of received signal 600. In some examples, the saturated signal detector can determine a sample number 636 (referred to herein as a sample index) corresponding to the intermediate point 634. The sample number 636 may be a fractional sample number (e.g., it may be interpolated between two samples). In some examples, the saturated signal detector may determine the TDOA using the sample number 636 corresponding to the intermediate point 634. For example, the saturated signal detector may receive a reference signal from the controller and count from the sample number corresponding to emission of the light pulse from the light emitter up to the sample number 636. Converting this to a TDOA may include determining the fractional number of samples between the sample number of the emitted light pulse and the sample number 636 (i.e., the sample number corresponding to the arrival time estimated by the edge detection technique) and converting that sample delay to a time delay using the frequency of the reference signal (which may match or correlate to the sampling rate of the ADC). In some examples, the saturated signal detector may use the TDOA to determine a distance (e.g., second distance 418) based at least in part on the speed of light. In some examples, the saturated signal detector may output the distance.
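A hedged sketch of the remaining steps (operations 624-632 and the conversion to a distance) follows; the straight-line variant of the second fit and the sample rate are assumptions made here for brevity.

```python
import numpy as np

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def rising_edge_sample(samples, crossing, threshold, num_neighbors=6):
    """Operations 624-632: fit a curve to the samples nearest the first
    crossing (point 622) and return the fractional sample number (636)
    where that curve meets the intermediate threshold amplitude."""
    samples = np.asarray(samples, dtype=float)
    indices = np.arange(len(samples))
    nearest = np.sort(np.argsort(np.abs(indices - crossing))[:num_neighbors])
    # Straight-line variant of the second fit described in the text.
    slope, intercept = np.polyfit(nearest, samples[nearest], 1)
    return (threshold - intercept) / slope  # intermediate point 634

def distance_from_edge(edge_sample, emit_sample, sample_rate_hz):
    """Convert the fractional sample delay into a TDOA and a distance
    (e.g., second distance 418), halving for the round trip."""
    tdoa = (edge_sample - emit_sample) / sample_rate_hz
    return 0.5 * SPEED_OF_LIGHT * tdoa
```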
In some examples, if the sample number 636 is outside of a predetermined range (e.g., a range of sample numbers), the saturated signal detector can disregard the intermediate point 634 as an invalid edge detection. For example, and depending on the sampling frequency of the ADC, the saturated signal detector may disregard sample number 636 if it is below sample number 2 or above sample number 5. In another example, the saturated signal detector may disregard the sample number 636 if it is below sample number 1 or above sample number 6. The latter is a very broad range (a range of 2 to 5 is typically safe), but it can help ensure that edge detections of true valid pulses are not discarded.
In additional or alternative examples, the saturated signal detector may output the width of the received signal 600. For example, the saturated signal detector may mirror the process described for finding the intermediate point 634 to find a right intermediate point (e.g., by finding the rightmost sample associated with the maximum amplitude and at least two samples following the rightmost sample, fitting a first polynomial to those samples, etc.), and may measure the width between the left intermediate point 634 and the right intermediate point (e.g., the number of fractional samples between them, or the time between them). In additional or alternative examples, the width of the "flat top" may be used as the width, or any other method may be used (e.g., the number of fractional samples within a variance of the maximum amplitude of received signal 600).
Example valid pulse detection
Figs. 7A-7C show signal diagrams of example received signals 700, 702, and 704, each including a valid pulse 706 (depicted as a saturated signal). The remainder of each of the received signals 700, 702, and 704 is pure noise. Received signals 700, 702, and 704 may represent any of the received signals discussed herein. A "valid pulse" is the true-positive portion of the signal corresponding to the reflection of the transmitted light pulse. Figs. 7A-7C each depict a static threshold amplitude 708 and illustrate potential failures of a system that uses such a static threshold to accurately identify valid pulses and/or classify signals as saturated signals.
For example, in fig. 7A, the valid pulse 706 portion of received signal 700 may be accurately identified by determining that samples associated with the valid pulse region of received signal 700 exceed the threshold amplitude 708. However, this approach may also identify a spike 710 in the noise as a valid signal, which is a false positive. Such a noise spike 710 can be attributed to direct sunlight glancing into the light sensor (e.g., a reflection from a reflective surface), headlights at night, and the like. In some examples, this may be prevented by identifying as a valid pulse only a number of samples (a number of consecutive samples) associated with amplitudes exceeding the threshold amplitude 708.
However, in some cases, this is not sufficient to prevent false positives and false negatives. For example, fig. 7B may depict a received signal 702 received under nighttime conditions, where the transmission power of the light emitter may be reduced to conserve energy, and/or where sudden changes in noise conditions and/or the reflectivity of objects (e.g., thick foliage) reduce the total received power. In this example, since no portion of the received signal 702 exceeds the threshold amplitude 708, the valid pulse 706 will not be identified as a valid pulse. Instead, the valid pulse 706 will be identified as noise, which is a false negative.
Furthermore, under bright conditions or other high-noise conditions, the received signal 704 may well exceed the threshold amplitude 708, as shown in fig. 7C. In this example, the entire received signal 704 would be identified as a valid pulse, which is a true positive for the valid pulse portion but a false positive for the noise portion of the signal.
Although, in examples employing an ADC, the ADC may scale its output according to the total power, thereby normalizing the received signal, this may not be sufficient to avoid the problems described above with identifying valid pulses using a static threshold.
Fig. 7D shows: an example received signal 712 (solid line) that includes a valid pulse 714 and a noise spike 716; a dynamic noise floor 718 (dashed line); and an adjusted dynamic noise floor 720 (heavy dashed line). Received signal 712 may represent any received signal discussed herein. In some examples, the classifier discussed herein may determine the dynamic noise floor and/or the adjusted dynamic noise floor to classify at least a portion of the received signal 712 as a valid pulse (i.e., to distinguish a valid pulse from pure noise) and/or to classify the received signal 712 (e.g., a valid pulse portion of the received signal 712) as a certain type. In some examples, this may effectively distinguish a valid pulse (i.e., a reflection, from an object, of light emitted by the light emitter) from pure noise. Noise may be introduced into received signal 712 by noise in the photodiode, background light in the environment (e.g., light in the field of view of the light sensor that is not due to reflection of the emitted light), infrared radiation, solar noise, electrical and/or thermal noise, and the like.
In some examples, the classifier may determine the dynamic noise floor 718 based at least in part on computing a moving average of the received signal 712. In some examples, the moving average may be based at least in part on the last moving average and the current value (e.g., magnitude, height) of received signal 712. In some examples, the moving average may be a simple moving average, a weighted moving average, an exponential moving average, or the like. In some examples, the last moving average may be given more weight than the current value of received signal 712. For example, the moving average at the current n-th sample may be calculated as: mavg_n = 0.99 * mavg_(n-1) + 0.01 * M_n, where M_n is the amplitude of the n-th sample of the received signal 712. This is the equation used to generate the dynamic noise floor 718 in fig. 7D.
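The recurrence above maps directly to a short routine; the sketch below is an assumed NumPy implementation (the seed value and names are choices made here, not specified by the description).

```python
import numpy as np

def dynamic_noise_floor(amplitudes, previous_weight=0.99, current_weight=0.01):
    """Exponentially weighted moving average used as the dynamic noise
    floor 718: mavg_n = 0.99 * mavg_(n-1) + 0.01 * M_n."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    floor = np.empty_like(amplitudes)
    mavg = amplitudes[0]  # assumed seed; the description does not specify one
    for n, amplitude in enumerate(amplitudes):
        mavg = previous_weight * mavg + current_weight * amplitude
        floor[n] = mavg
    return floor
```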
In some examples, the classifier may modify the dynamic noise floor 718 to obtain an adjusted dynamic noise floor 720. In some examples, the classifier may modify the dynamic noise floor 718 based at least in part on characteristics of the received signal 712 and/or the valid pulse. For example, the classifier may translate and/or scale the dynamic noise floor 718 into the adjusted dynamic noise floor 720 based at least in part on the width and/or height of the received signal 712. In some examples, the classifier may scale the dynamic noise floor by a scaling factor based at least in part on the maximum amplitude of the received signal 712.
In some examples, the classifier may additionally or alternatively adjust the noise floor based at least in part on a temperature of the light sensor and/or the LIDAR system, a transmission power, an SNR, a noise power, a comparison of the transmission power to the received signal power, and/or the like. For example, the classifier may translate the dynamic noise floor 718 upward based at least in part on determining that at least a portion of the received signal 712 has a power that exceeds the transmission power (e.g., sunlight may have been directed at the light sensor). In some examples, the dynamic noise floor 718 is translated by a determined factor such that at least some of the amplitudes of the true-positive valid pulse 714 are between 2 and 3 times the amplitude of the adjusted dynamic noise floor 720. The factor may be determined based at least in part on historical valid pulses and/or the transmission power.
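One way the scaling described above could be sketched is shown below; the specific ratio-based rule is an assumption (the description only requires that the adjustment depend on characteristics such as maximum amplitude and place a true valid pulse roughly 2 to 3 times above the adjusted floor).

```python
import numpy as np

def adjust_noise_floor(floor, amplitudes, target_ratio=2.5):
    """Scale the dynamic noise floor 718 into an adjusted floor 720 so
    that the peak of a true valid pulse sits roughly 2-3x above it."""
    floor = np.asarray(floor, dtype=float)
    peak_amplitude = float(np.max(amplitudes))
    peak_floor = max(float(np.max(floor)), 1e-12)
    # Never scale below the unadjusted floor; the 2.5x target is illustrative.
    scale = max(1.0, peak_amplitude / (target_ratio * peak_floor))
    return scale * floor
```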
In some examples, the classifier may classify at least a portion of the received signal 712 as a valid pulse 714 based at least in part on a first threshold number (e.g., three or more) of samples exceeding the dynamic noise floor 718 and/or the adjusted dynamic noise floor 720. In some examples, the classifier may additionally or alternatively classify at least a portion of the received signal 712 as a saturated signal based at least in part on a second threshold number of samples exceeding the dynamic noise floor 718 and/or the adjusted dynamic noise floor 720, and/or on determining that a third threshold number of consecutive samples exceeding the dynamic noise floor 718 and/or the adjusted dynamic noise floor 720 are within a variance of each other. For example, the classifier may determine that three samples exceed the adjusted noise floor, and may therefore identify those samples, and each subsequent sample, as a valid pulse until the classifier identifies a sample that does not exceed the adjusted noise floor. Among the samples identified as a valid pulse, the classifier may determine that the valid pulse is a saturated pulse based at least in part on determining that the number of samples making up the valid pulse equals or exceeds five samples, and/or based at least in part on determining that the samples making up the valid pulse include at least three consecutive samples associated with amplitudes within ±2 of each other.
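A minimal sketch of that classification logic follows; the thresholds of three and five samples and the ±2 variance come from the example above, while the run-finding approach, the names, and the interpretation of "within ±2 of each other" are assumptions.

```python
import numpy as np

def classify_pulse(amplitudes, adjusted_floor,
                   valid_threshold=3, saturated_threshold=5, variance=2):
    """Classify samples above the adjusted dynamic noise floor as a valid
    pulse, then decide whether that pulse appears saturated."""
    amplitudes = np.asarray(amplitudes, dtype=float)
    above = amplitudes > np.asarray(adjusted_floor, dtype=float)

    # Collect runs of consecutive samples that exceed the adjusted floor.
    runs, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(above)))

    for begin, end in runs:
        if end - begin < valid_threshold:
            continue  # too short to count as a valid pulse
        pulse = amplitudes[begin:end]
        # "Flat top": three consecutive samples within +/- variance of each
        # other, interpreted here as a peak-to-peak spread of 2 * variance.
        flat_top = any(np.ptp(pulse[i:i + 3]) <= 2 * variance
                       for i in range(len(pulse) - 2))
        saturated = (end - begin) >= saturated_threshold and flat_top
        return {"valid": True, "saturated": saturated, "span": (begin, end)}
    return {"valid": False, "saturated": False, "span": None}
```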
Unless explicitly defined as a static threshold amplitude, references herein to a "threshold amplitude" may include an amplitude defined by the dynamic noise floor 718 and/or the adjusted dynamic noise floor 720.
Example system architecture
Fig. 8 is a block diagram of an example architecture 800 that includes an example vehicle system 802 for controlling operation of at least one vehicle (e.g., an autonomous vehicle) using distances determined by a LIDAR system in accordance with any of the techniques discussed herein.
In some examples, the vehicle system 802 may include a processor 804 and/or a memory 806. These elements are shown in combination in fig. 8, but it should be understood that they may be separate elements of the vehicle system 802, and in some examples, components of the system may be implemented as hardware and/or software.
The processor 804 may include: a single-processor system including one processor; or a multi-processor system including several processors (e.g., two, four, eight, or another suitable number). The processor 804 may be any suitable processor capable of executing instructions. For example, in various embodiments, the processors 804 may be general-purpose or embedded processors implementing any of a variety of Instruction Set Architectures (ISAs), such as the x86, PowerPC, SPARC, or MIPS ISAs, or any other suitable ISA. In a multi-processor system, each processor 804 may commonly, but not necessarily, implement the same ISA. In some examples, the processor 804 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), an FPGA, an Application Specific Integrated Circuit (ASIC), or a combination thereof. In some examples, one or more of the classifiers and/or detectors discussed herein may be implemented using any of these processor architectures. For example, the classifier and/or one or more of the detectors may be an FPGA.
The example vehicle system 802 may include a memory 806. In some examples, the memory 806 may include a non-transitory computer-readable medium configured to store executable instructions/modules, data, and/or data items accessible to the processor 804. In various embodiments, the non-transitory computer-readable medium may be implemented using any suitable memory technology, such as Static Random Access Memory (SRAM), synchronous dynamic RAM (SDRAM), non-volatile/flash-type memory, or any other type of memory. In the illustrated example, program instructions and data implementing desired operations as described above are shown stored in a non-transitory computer-readable memory. In other embodiments, program instructions and/or data may be received, transmitted, or stored on different types of computer-accessible media (such as non-transitory computer-readable media) or on similar media separate from non-transitory computer-readable media. In general, the non-transitory computer-readable memory may include storage or memory media, such as flash memory (e.g., solid-state memory) or magnetic or optical media (e.g., magnetic disks), coupled to the example vehicle system 802 via an input/output ("I/O") interface 808. Program instructions and data stored via a non-transitory computer-readable medium may be transmitted by a transmission medium or signal, such as an electrical, electromagnetic, or digital signal, which may be conveyed via a communication medium such as a network and/or a wireless link, such as may be implemented via network interface 810.
Further, while shown as a single unit in fig. 8, it is to be understood that the processor 804 and the memory 806 may be distributed among multiple computing devices of a vehicle and/or among multiple vehicles, data centers, remote operation centers, and the like. In some examples, the processor 804 and the memory 806 may perform at least some of the techniques discussed herein, and the processor 804 and the memory 806 may include the processor and memory of the LIDAR system discussed herein.
In some examples, an input/output ("I/O") interface 808 may be configured to coordinate I/O traffic between the processor 804, the memory 806, the network interface 810, the sensors 812, the I/O devices 814, the drive system 816, and/or any other hardware of the vehicle system 802. In some examples, the I/O devices 814 may include external and/or internal speakers, displays, passenger input devices, and the like. In some examples, the I/O interface 808 may perform protocol, timing, or other data transformations to convert data signals from one component (e.g., a non-transitory computer-readable medium) into a format suitable for use by another component (e.g., a processor). In some examples, the I/O interface 808 may include support for devices attached through various types of peripheral buses (e.g., the Peripheral Component Interconnect (PCI) bus standard, the Universal Serial Bus (USB) standard, or variants thereof). In some embodiments, the functionality of the I/O interface 808 may be split into two or more separate components (e.g., a north bridge and a south bridge). Also, in some examples, some or all of the functionality of the I/O interface 808 (e.g., an interface to the memory 806) may be incorporated directly into the processor 804 and/or one or more other components of the vehicle system 802.
The example vehicle system 802 may include a network interface 810 configured to establish a communication link (i.e., a "network") between the vehicle system 802 and one or more other devices. For example, the network interface 810 may be configured to allow data exchange between the vehicle system 802 and another vehicle 818 via the first network 820 and/or between the vehicle system 802 and a remote computing system 822 via the second network 824. For example, the network interface 810 may enable wireless communication with another vehicle 818 and/or a remote computing device 822. In various implementations, the network interface 810 may support communication via a wireless general-purpose data network (e.g., a Wi-Fi network) and/or a telecommunications network (e.g., a cellular communication network, a satellite network, etc.). In some examples, the sensor data discussed herein (e.g., a received signal, a TDOA, a selected distance, an estimated distance, a height and/or width of a received signal, etc.) may be received at a first vehicle and transmitted to a second vehicle. In some examples, at least some components of the LIDAR may be located at different devices. For example, a first vehicle may include a light emitter and a light sensor and may generate a received signal, but may transmit the received signal to a second vehicle and/or a remote computing device on which one or more of the classifiers and/or detectors are additionally or alternatively located.
The example vehicle system 802 may include sensors 812 configured, for example, to localize the vehicle system 802 in an environment, to detect one or more objects in the environment, to sense movement of the example vehicle system 802 through its environment, to sense environmental data (e.g., ambient temperature, pressure, and humidity), and/or to sense conditions inside the example vehicle system 802 (e.g., occupant count, interior temperature, noise level). The sensors 812 may include, for example: one or more LIDAR sensors 818, which may represent the example system 100 and/or components thereof; one or more cameras (e.g., RGB cameras; intensity (grayscale) cameras; infrared cameras; depth cameras; stereo cameras); one or more magnetometers; one or more radar sensors; one or more sonar sensors; one or more microphones for sensing sound; one or more IMU sensors (e.g., including accelerometers and gyroscopes); one or more GPS sensors; one or more Geiger counter sensors; one or more wheel encoders; one or more drive system sensors; a speed sensor; and/or other sensors related to the operation of the example vehicle system 802.
In some examples, although the LIDAR 818 is depicted as a discrete sensor in fig. 8, at least one of the components of the LIDAR 818 (e.g., the components discussed in figs. 1, 4, etc.) may be separate from the LIDAR 818. For example, as discussed herein, the processor 804 and/or the memory 806 may include programming and/or circuitry for a classifier and/or one or more detectors.
In some examples, the example vehicle system 802 may include a perception engine 826 and a planner 830.
The perception engine 826 may include instructions stored in the memory 806 that, when executed by the processor 804, configure the processor 804 to receive sensor data from the sensors 812 as input (which may include an estimated distance and/or a selected distance determined by the LIDAR system discussed herein) and to output representative data, for example. In some examples, the perception engine 826 may include instructions stored in the memory 806 that, when executed by the processor 804, configure the processor 804 to determine a LIDAR point cloud based at least in part on an estimated distance and/or a selected distance determined in accordance with any of the techniques discussed herein. In some examples, the perception engine 826 may use the LIDAR point cloud to determine one or more of: a representation of the environment surrounding the example vehicle system 802, a pose (e.g., position and orientation) of an object in the environment surrounding the example vehicle system 802, an object trajectory associated with the object (e.g., a historical position, speed, acceleration, and/or heading of the object over a period of time (e.g., 5 seconds)), and/or an object classification associated with the object (e.g., pedestrian, vehicle, bicycle, etc.). In some examples, the perception engine 826 may be configured to predict more than one predicted trajectory for one or more objects. For example, the perception engine 826 may be configured to predict trajectories of multiple objects based on, for example, probabilistic determinations or multi-modal distributions of predicted positions, trajectories, and/or velocities associated with objects detected from a LIDAR point cloud.
In some examples, the planner 830 may receive the LIDAR point cloud and/or any other additional information (e.g., object classification, object trajectory, vehicle pose) and use this information to generate a trajectory for controlling the motion of the vehicle 802.
Example clauses
A. A non-transitory computer-readable medium storing instructions that, when executed, cause one or more processors to perform operations comprising: causing a light emitter to emit a light pulse; distinguishing a return pulse from noise of a signal received at a light sensor, the return pulse comprising reflected light from the light pulse and noise; and determining a distance to an object that reflected at least a portion of the light pulse to the light sensor based at least in part on a delay between a time associated with emitting the light pulse and receiving the return pulse at the light sensor, wherein distinguishing the return pulse from the noise of the received signal comprises: determining a noise floor, the determining the noise floor comprising: determining a moving average of the amplitude of the received signal; and at least one of translating or scaling the moving average based at least in part on at least one characteristic of the received signal; determining a number of samples of the received signal associated with an amplitude that exceeds the noise floor; determining that the number of samples exceeds a threshold number; and indicating that the samples that exceed the noise floor represent the return pulse based at least in part on determining that the number of samples exceeds the threshold number.
B. The non-transitory computer-readable medium of paragraph A, wherein the threshold number is a first threshold number and the operations further comprise: determining that the number of samples exceeds a second threshold number, the second threshold number being greater than the first threshold number; and identifying the return pulse as a saturated signal based at least in part on determining that the number of samples exceeds the second threshold number.
C. The non-transitory computer-readable medium of paragraph A or B, the operations further comprising: determining, based at least in part on the identifying, the distance based at least in part on a rising edge of the return pulse.
D. The non-transitory computer-readable medium of any of paragraphs A through C, wherein the threshold number is a first threshold number, and the operations further comprise: determining that the number of samples does not reach a second threshold number; and identifying the return pulse as an unsaturated signal based at least in part on determining that the number of samples is less than the second threshold number.
E. The non-transitory computer-readable medium of any one of paragraphs A-D, the operations further comprising: determining the distance based at least in part on correlating the return pulse with a reference signal based at least in part on the identifying.
F. A computer-implemented method, comprising: receiving a signal indicative of light received at a light sensor, the signal being discretized into a series of samples; determining a noise floor based at least in part on determining a moving average of the signal; determining a number of samples of the signal that exceed the noise floor; and determining that the signal includes a valid return pulse or only noise based at least in part on the number of samples.
G. The computer-implemented method of paragraph F, further comprising: determining a distance based at least in part on the signal and determining that the signal includes a valid return pulse.
H. The computer-implemented method of paragraph F or G, further comprising: determining that one or more samples of the signal do not exceed the noise floor; and identifying the one or more samples as noise.
I. The computer-implemented method of any of paragraphs F through H, wherein determining that the signal includes a valid return pulse further comprises determining that the number of samples exceeds a threshold number.
J. The computer-implemented method of any of paragraphs F through I, wherein the threshold number is a first threshold number, and the method further comprises: determining that the signal is saturated based at least in part on determining that the number of samples exceeds a second threshold number, the second threshold number being greater than the first threshold number; or determining that the signal is not saturated based at least in part on determining that the number of samples does not reach the second threshold number.
K. The computer-implemented method of any of paragraphs F through J, wherein determining the moving average comprises determining a sum of 10% of a current amplitude and 90% of a previous moving average.
L. The computer-implemented method of any of paragraphs F through K, wherein determining the noise floor additionally comprises: at least one of vertically translating the moving average or scaling the moving average based at least in part on at least one characteristic of the signal.
M. The computer-implemented method of paragraph L, wherein the at least one characteristic comprises a magnitude of the signal.
N. The computer-implemented method of paragraph F, further comprising: discarding the signal based at least in part on determining that the signal includes only noise.
O. A system, comprising: a light sensor; one or more processors; and one or more computer-readable media storing instructions executable by the one or more processors, wherein the instructions program the one or more processors to: receive a signal indicative of light received at the light sensor, the signal being discretized into a series of samples; generate a noise floor based at least in part on the signal; determine that a number of samples associated with magnitudes exceeding the noise floor exceeds a threshold number; determine a time-of-arrival delay from the signal; and determine a distance based at least in part on the time-of-arrival delay.
P. The system of paragraph O, wherein generating the noise floor comprises determining a moving average.
Q. The system of paragraph O or P, wherein the moving average gives a greater weight to the previous magnitude than to the current magnitude.
R. The system of any of paragraphs O to Q, wherein generating the noise floor additionally comprises: at least one of vertically translating the moving average or scaling the moving average based at least in part on at least one characteristic of the signal.
S. The system of any of paragraphs O to R, wherein the at least one characteristic comprises an amplitude of the signal.
T. The system of any of paragraphs O to S, wherein generating the noise floor additionally comprises: scaling the moving average based at least in part on a magnitude of the signal, the scaling increasing in proportion to the magnitude.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claims.
The modules described herein represent instructions that may be stored in any type of computer-readable medium and may be implemented in software and/or hardware. All of the above described methods and processes may be embodied in, and fully automated by, software code modules and/or computer-executable instructions executed by one or more computers or processors, hardware, or some combination thereof. Some or all of these methods may alternatively be embodied in dedicated computer hardware.
Conditional language (e.g., "may", "might") should be understood in the context of this disclosure to mean that some examples include certain features, elements, and/or steps, while other examples do not include certain features, elements, and/or steps, unless expressly stated otherwise. Thus, such conditional language is not generally intended to imply that one or more examples require certain features, elements, and/or steps in any way or that one or more examples necessarily include logic for determining, with or without user input or prompting, whether certain features, elements, and/or steps are included or are to be performed in any particular example.
Unless explicitly stated otherwise, conjunctive language such as the phrase "at least one of X, Y, or Z" should be understood to mean that an item, term, etc. can be X, Y, or Z, or any combination thereof (including multiples of each element). Unless explicitly described as singular, the use of "a" or "an" is inclusive of both the singular and the plural.
Any routine descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the drawings should be understood as potentially representing modules, segments, or portions of code that include one or more computer-executable instructions for implementing particular logical functions or elements in the routine. Alternative implementations are included within the scope of the examples described herein, in which elements or functions may be deleted or executed out of the order shown or discussed (including substantially simultaneously, in reverse order, with additional operations, or with operations omitted), depending on the functionality involved, as would be understood by those skilled in the art.
It should be emphasized that many variations and modifications may be made to the above-described examples, and elements thereof should be understood to be elements of other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
