
Unevenly distributed illumination for depth sensors

Info

Publication number
CN120188067A
Authority
CN
China
Prior art keywords
light
depth sensor
degrees
lidar
fov
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202380078118.3A
Other languages
Chinese (zh)
Inventor
王浩森
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taida Intelligent American Co ltd
Original Assignee
Taida Intelligent American Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from U.S. patent application 18/389,406 (published as US20240159518A1)
Application filed by Taida Intelligent American Co ltd
Publication of CN120188067A
Legal status: Pending

Abstract

A depth sensor is provided. The depth sensor includes one or more light sources configured to provide a plurality of light beams and one or more optical structures coupled to the one or more light sources. The one or more optical structures are configured to receive a plurality of light beams. At least one of the one or more light sources or the one or more optical structures is configured to unevenly distribute the plurality of light beams in a vertical field of view (FOV) such that the vertical FOV includes dense and sparse regions. The dense region of the vertical FOV has a higher beam density than the sparse region of the vertical FOV, and the depth sensor does not include mechanically movable parts configured to scan light.

Description

Unevenly distributed illumination for depth sensors
Cross Reference to Related Applications
The present application claims priority from U.S. patent application Ser. No. 18/389,406, entitled "Unevenly Distributed Illumination for Depth Sensors," filed on November 14, 2023, and U.S. provisional patent application Ser. No. 63/425,644, entitled "Unevenly Distributed Illumination for Depth Sensors," filed on November 15, 2022. The contents of both applications are hereby incorporated by reference in their entirety for all purposes.
Technical Field
The present disclosure relates generally to depth sensors and, more particularly, to unevenly distributed illumination of depth sensors.
Background
Light detection and ranging (LiDAR) systems use light pulses to create an image or point cloud of an external environment. The LiDAR system may be a scanning or non-scanning system. Some typical scanning LiDAR systems include a light source, a light emitter, a light steering system, and a light detector. The light source generates a beam of light that, when emitted from the LiDAR system, is directed in a particular direction by the light steering system. When the emitted light beam is scattered or reflected by an object, a portion of the scattered or reflected light returns to the LiDAR system to form a return light pulse. The light detector detects the return light pulse. Using the difference between the time that the return light pulse is detected and the time that the corresponding light pulse in the beam is emitted, the LiDAR system may determine the distance to the object based on the speed of light. This technique of determining distance is known as the time-of-flight (ToF) technique. The light steering system may direct the light beams along different paths to allow the LiDAR system to scan the surrounding environment and produce an image or point cloud. A typical non-scanning LiDAR system illuminates the entire field of view (FOV) rather than scanning through it.
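As an illustration of the ToF relationship described above, the following minimal Python sketch (with hypothetical numbers, not taken from the patent) converts a measured round-trip time into a distance using the speed of light:

```python
# Illustrative direct time-of-flight (ToF) distance calculation.
# The measured time covers the round trip to the object and back, so divide by 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_time_s: float) -> float:
    """Distance to the object given the measured round-trip time."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a return pulse detected 1.33 microseconds after emission
# corresponds to an object roughly 200 meters away.
print(f"{tof_distance_m(1.33e-6):.1f} m")  # ~199.4 m
```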
One example of a non-scanning LiDAR system is flash LiDAR, which may also use ToF technology to measure distance to an object. LiDAR systems may also use techniques other than time-of-flight and scanning to measure the surrounding environment.
Disclosure of Invention
A depth sensor, also known as a depth camera or 3D sensor, is a device that can capture spatial information of objects in its field of view. These sensors are designed to measure the distance from the sensor to different points in the environment, thereby creating a three-dimensional representation of the scene. The depth sensor may measure distance (also referred to as depth) using a direct time-of-flight (dToF) method, and is thus a dToF sensor. The depth sensor may also be an indirect time-of-flight (iToF) sensor that measures distance using an iToF method. A solid state depth sensor is a type of depth sensor that is capable of outputting three-dimensional (3D) depth measurements of an external environment, while there are no mechanically movable parts inside the sensor. For example, it may be a flash LiDAR, which may use a Vertical Cavity Surface Emitting Laser (VCSEL) as the light source and a Single Photon Avalanche Diode (SPAD) array as the light detector. The absence of mechanically movable parts is an advantage of solid state depth sensors. When the solid state depth sensor is operating, the laser source emits laser light toward the field of view, and the photodetector captures reflected or scattered light (also referred to as return light) from the object. In the following disclosure, depth sensors, flash LiDAR, and iToF sensors may also be referred to as LiDAR. Flash LiDAR is also known as a dToF sensor. The present disclosure thus uses LiDAR as an example of a depth sensor. However, it should be understood that the depth sensor may be a ToF sensor, a structured light sensor (e.g., using a known light pattern to measure depth based on light distortion), a stereoscopic vision sensor (e.g., using two or more cameras to measure depth), or a LiDAR system.
The present disclosure provides a novel method for optimizing the emitted light distribution of a solid state depth sensor. With this novel method, the detection range distribution of the depth sensor can be optimized, and the power consumption can be reduced.
In one embodiment, a depth sensor is provided. The depth sensor includes one or more light sources configured to provide a plurality of light beams and one or more optical structures coupled to the one or more light sources. The one or more optical structures are configured to receive the plurality of light beams. At least one of the one or more light sources or the one or more optical structures is configured to unevenly distribute the plurality of light beams in a vertical field of view (FOV) such that the vertical FOV includes dense and sparse regions. The dense region of the vertical FOV has a higher beam density than the sparse region of the vertical FOV, and the depth sensor does not include mechanically movable parts configured to scan light.
Drawings
The application may best be understood by reference to the following description of an embodiment taken in conjunction with the accompanying drawings, in the several figures of which like parts may be designated by like numerals.
FIG. 1 illustrates one or more exemplary LiDAR systems disposed or included in a motor vehicle.
FIG. 2 is a block diagram illustrating interactions between an exemplary LiDAR system and a plurality of other systems including a vehicle perception and planning system.
FIG. 3 is a block diagram illustrating an exemplary LiDAR system.
FIG. 4 is a block diagram illustrating an exemplary semiconductor-based laser source.
FIGS. 5A-5C illustrate an exemplary LiDAR system that uses pulsed signals to measure distance to objects disposed in a field of view (FOV).
FIG. 6 is a block diagram illustrating an exemplary apparatus for implementing the systems, apparatuses, and methods in various embodiments.
FIG. 7 is a block diagram illustrating an exemplary depth sensor according to some embodiments.
FIG. 8 is a block diagram illustrating another exemplary depth sensor according to some embodiments.
FIG. 9 illustrates an exemplary depth sensor providing an unevenly distributed light beam in the vertical direction of the FOV, according to some embodiments.
FIG. 10 is a block diagram illustrating a change in detection range requirements as a function of transmitted light angle in a vertical FOV, according to some embodiments.
FIG. 11 is a block diagram illustrating providing an uneven distribution of light beams by unevenly placing VCSEL elements in a VCSEL laser array, according to some embodiments.
FIG. 12 is a block diagram illustrating providing an uneven distribution of a light beam through the use of an optical diffuser, according to some embodiments.
FIG. 13 is a block diagram illustrating providing a non-uniform distribution of a light beam by using a semiconductor wafer with a microlens array, according to some embodiments.
FIG. 14 is a flow chart illustrating a method of unevenly distributing a plurality of light beams using a depth sensor, according to some embodiments.
Detailed Description
The following description sets forth numerous specific details, such as specific configurations, parameters, examples, etc., in order to provide a more thorough understanding of the various embodiments of the present invention. It should be recognized, however, that such description is not intended as a limitation on the scope of the present invention, but is instead intended to provide a better description of the exemplary embodiments.
Throughout the specification and claims, the following terms have the meanings explicitly associated herein, unless the context clearly dictates otherwise:
As used herein, the phrase "in one embodiment" does not necessarily refer to the same embodiment, although it may. Accordingly, as described below, various embodiments of the present invention may be readily combined without departing from the scope or spirit of the present disclosure.
As used herein, the term "or" is an inclusive "or" operator and is equivalent to the term "and/or" unless the context clearly indicates otherwise.
The term "based on" is not exclusive and allows for being based on additional factors not described, unless the context clearly indicates otherwise.
As used herein, unless the context indicates otherwise, the term "coupled to" is intended to include both direct coupling (where two elements coupled to each other are in contact with each other) and indirect coupling (where at least one additional element is located between the two elements). Thus, the terms "coupled to" and "coupled with" are used synonymously. In the context of a networked environment in which two or more components or devices are capable of exchanging data, the terms "coupled to" and "coupled with" are also used to mean "communicably coupled with" possibly via one or more intermediary devices. The component or device may be an optical, mechanical and/or electrical device.
Although the following description uses the terms "first," "second," etc. to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, the first detection range may be referred to as a second detection range, and similarly, the second detection range may be referred to as a first detection range, without departing from the scope of the various described examples. The first detection range and the second detection range may both be detection ranges, and in some cases may be separate, different detection ranges.
In addition, throughout the specification, the meaning of "a", "an", and "the" includes plural references, and the meaning of "in" may include "in" and "on".
While some of the various embodiments presented herein constitute a single combination of inventive elements, it should be understood that the inventive subject matter is considered to include all possible combinations of the disclosed elements. Thus, if one embodiment includes elements A, B and C, while another embodiment includes elements B and D, the inventive subject matter is also considered to include other remaining combinations of A, B, C or D, even if not explicitly discussed herein. Furthermore, the transitional term "comprising" means having or including the recited components or members. As used herein, the transitional term "comprising" is inclusive or open-ended and does not exclude additional, unrecited elements or method steps.
As used in the description herein and throughout the claims that follow, when a system, engine, server, device, module, or other computing element is described as being configured to perform or execute a function on data in memory, the meaning of "configured to" or "programmed to" is defined as one or more processors or cores of the computing element being programmed by a set of software instructions stored in the memory of the computing element to perform the set of functions on target data or data objects stored in memory.
It should be noted that any language for a computer should be understood to include any suitable combination of computing devices or network platforms (including servers, interfaces, systems, databases, proxies, peers, engines, controllers, modules, or other types of computing devices operating alone or in concert).
It should be appreciated that the computing device includes a processor configured to execute software instructions stored on a tangible, non-transitory computer-readable storage medium (e.g., hard drive, FPGA, PLA, solid state drive, RAM, flash memory, ROM, or any other volatile or non-volatile storage device). The software instructions configure or program the computing device to provide roles, responsibilities, or other functions, as discussed below with respect to the disclosed apparatus. Furthermore, the disclosed techniques may be embodied as a computer program product comprising a non-transitory computer-readable medium storing software instructions that cause a processor to perform the disclosed steps associated with the implementation of a computer-based algorithm, process, method, or other instruction. In some embodiments, the various servers, systems, databases, or interfaces exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web services APIs, known financial transaction protocols, or other electronic information exchange methods. The data exchange between the devices may be through a packet switched network, the internet, LAN, WAN, VPN or other type of packet switched network, a circuit switched network, a cell switched network, or other type of network.
A solid state depth sensor is a sensor capable of outputting three-dimensional (3D) depth measurements of a field of view, while having no mechanically movable parts inside the sensor. The solid state depth sensor may be a semiconductor based sensor. One type of solid state sensor is a flash LiDAR. When flash LiDAR operates, the entire FOV is typically illuminated by a single pulse or a single shot of a widely divergent laser beam. Unlike scanning LiDAR (e.g., a LiDAR system with an optical steering mechanism), flash LiDAR may not have mechanically movable optics for scanning the FOV. Thus, without the use of scanning components, flash LiDAR may be more compact than scanning LiDAR. Eliminating mechanically movable parts also makes flash LiDAR (and other solid state depth sensors) more robust, durable, and reliable.
The solid state depth sensor may use a Vertical Cavity Surface Emitting Laser (VCSEL) as a light source. VCSELs are a type of semiconductor laser diode whose laser beam emission is perpendicular to the wafer surface or mounting surface. In contrast, edge-emitting semiconductor lasers (EELs) propagate laser light in a direction along or parallel to the wafer surface of a semiconductor chip. For edge-emitting semiconductor lasers, the laser light is typically reflected or coupled out at the cleaved edge of the wafer. VCSELs can provide higher beam quality and therefore better performance than EELs. VCSELs tend to have lower output power than EELs. In addition, testing VCSELs is generally easier than testing EELs. For example, testing of VCSELs may use lower-cost and simpler wafer probes, which are readily available in the semiconductor industry.
Solid state depth sensors may use a Single Photon Avalanche Diode (SPAD) array as a photodetector for detecting return light. As described above, the return light is light that is formed in the FOV when the transmitted beam from the depth sensor is scattered or reflected by one or more objects in the FOV. Single photon avalanche diodes or SPADs are solid state photodetectors based on reverse biased semiconductor p-n junctions, such as photodiodes and Avalanche Photodiodes (APDs). Unlike conventional photodiodes, SPADs operate in a mode known as a "geiger mode" in which a single incident photon can generate electron-hole pairs that are amplified sufficiently to produce a measurable current. Thus SPADs are inherently capable of detecting single photons with very high temporal resolution. A key component of SPADs is the region within the diode, called the depletion region. This region is designed to have a high electric field, which enables it to function as a high gain avalanche photodiode. When a single photon interacts with the depletion region, it generates electron-hole pairs. The high electric field across the depletion region causes electrons and holes to accelerate, resulting in a process known as impact ionization, where each electron or hole can gain enough energy to generate another electron-hole pair, resulting in an avalanche effect. The avalanche process rapidly amplifies the initial signal, converting the weak optical signal from single photons into a detectable electrical pulse. The plurality of SPADs may be arranged to form a 1-dimensional array, a 2-dimensional array, or a 3-dimensional array.
In addition to the light source and the light detector, the depth sensor may have other components, such as optics and control circuitry, which will be described in more detail below. The depth sensor may measure distance (also referred to as depth) using a direct time-of-flight (dToF) method, and is thus a dToF sensor. The depth sensor may also be an indirect time-of-flight (iToF) sensor that measures distance using an iToF method. The dToF method includes directly measuring the time of flight between the time light is emitted from the depth sensor and the time the depth sensor detects return light. Using the time of flight, the distance between the depth sensor and the target object (using the well-known speed of light) can be calculated. The iToF method measures distance by collecting return light and measuring the phase offset between the emitted light and the return light. The iToF method is particularly effective in high-speed, high-resolution 3D imaging of objects located at short and long distances. The indirect ToF sensor emits continuously modulated light and measures the phase of the return light to calculate the distance to the target object.
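As a concrete illustration of the iToF principle just described, the sketch below (the modulation frequency and phase values are hypothetical, not from the patent) converts a measured phase offset into a distance and shows the unambiguous range set by the modulation frequency:

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def itof_distance_m(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase offset between emitted and return light (iToF)."""
    return SPEED_OF_LIGHT * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range_m(mod_freq_hz: float) -> float:
    """Maximum distance before the measured phase wraps around (2*pi ambiguity)."""
    return SPEED_OF_LIGHT / (2.0 * mod_freq_hz)

# Example: 20 MHz continuous modulation, measured phase shift of pi/2 radians.
print(f"{itof_distance_m(math.pi / 2, 20e6):.2f} m")  # ~1.87 m
print(f"{unambiguous_range_m(20e6):.2f} m")           # ~7.49 m
```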
As mentioned above, the solid state depth sensor may not have any mechanically movable parts. The absence of movable parts is an advantage of solid state depth sensors. When the solid state depth sensor is operating, a laser source (e.g., a VCSEL) emits laser light toward the FOV and a detector (e.g., a SPAD array) captures return light formed by objects in the FOV. In this disclosure, depth sensors, flash LiDAR, iToF sensors, and dToF sensors may also be referred to as LiDAR. Thus, the present disclosure uses LiDAR as an example, and LiDAR and depth sensors may be used interchangeably. The present disclosure provides a novel method for optimizing the emitted light distribution of a solid state depth sensor. With the novel method, the detection range distribution of the depth sensor can be optimized. The power consumption of the depth sensor may also be reduced.
In one embodiment, a depth sensor is provided. The depth sensor includes one or more light sources configured to provide a plurality of light beams and one or more optical structures coupled to the one or more light sources. The one or more optical structures are configured to receive a plurality of light beams. At least one of the one or more light sources or the one or more optical structures is configured to unevenly distribute the plurality of light beams in a vertical field of view (FOV) such that the vertical FOV includes dense and sparse regions. The dense region of the vertical FOV has a higher beam density than the sparse region of the vertical FOV, and the depth sensor does not include mechanically movable parts configured to scan light to the FOV.
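To make the dense/sparse allocation concrete, here is a minimal Python sketch of one way emission angles could be distributed unevenly across a vertical FOV. The FOV bounds, beam count, and dense-region fraction are hypothetical and are not taken from the patent:

```python
import numpy as np

def uneven_vertical_angles(total_beams: int,
                           fov_deg=(-12.0, 12.0),
                           dense_region_deg=(-2.0, 2.0),
                           dense_fraction: float = 0.6) -> np.ndarray:
    """Hypothetical allocation of beam angles in a vertical FOV.

    A larger share of the beams is packed into a narrow dense region
    (e.g., near the horizon, where long detection range matters most),
    and the remainder is spread over the sparse parts of the FOV.
    """
    n_dense = int(total_beams * dense_fraction)
    n_sparse = total_beams - n_dense
    dense = np.linspace(dense_region_deg[0], dense_region_deg[1], n_dense)
    lower = np.linspace(fov_deg[0], dense_region_deg[0],
                        n_sparse // 2, endpoint=False)
    upper = np.linspace(dense_region_deg[1], fov_deg[1],
                        n_sparse - n_sparse // 2 + 1)[1:]
    return np.sort(np.concatenate([lower, dense, upper]))

angles = uneven_vertical_angles(64)
in_dense = np.sum((angles >= -2.0) & (angles <= 2.0))
print(f"{in_dense} of {angles.size} beams within +/-2 degrees")  # 38 of 64
```

In this hypothetical allocation, more than half of the beams fall within the narrow dense region, which is where the longest detection range is typically required.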
FIG. 1 illustrates one or more exemplary LiDAR systems 110 and 120A-120I disposed or included in a motor vehicle 100. The vehicle 100 may be an automobile, sport utility vehicle (SUV), truck, train, bicycle, motorcycle, tricycle, bus, motor scooter, tram, ship, watercraft, underwater vehicle, aircraft, helicopter, unmanned aerial vehicle (UAV), spacecraft, or the like. The motor vehicle 100 may be a vehicle having any level of automation. For example, the motor vehicle 100 may be a partially automated vehicle, a highly automated vehicle, a fully automated vehicle, or an unmanned vehicle. The partially automated vehicle may perform some driving functions without human driver intervention. For example, the partially automated vehicle may perform blind spot monitoring, lane keeping and/or lane changing operations, automatic emergency braking, intelligent cruising, and/or traffic following, etc. Certain operations of a partially automated vehicle may be limited to a particular application or driving scenario (e.g., limited to highway driving only). Highly automated vehicles can typically perform all of the operations of a partially automated vehicle, but with fewer limitations. Highly automated vehicles can also detect their own limits when operating the vehicle and, if necessary, require the driver to take over the control of the vehicle. Fully automated vehicles can perform all vehicle operations without driver intervention, but can also detect their own limits and require the driver to take over if necessary. The unmanned vehicle may operate by itself without any driver intervention.
In a typical configuration, the motor vehicle 100 includes one or more LiDAR systems 110 and 120A-120I. Each of the LiDAR systems 110 and 120A-120I may be a scanning-based LiDAR system and/or a non-scanning LiDAR system (e.g., flash LiDAR). A scanning-based LiDAR system scans one or more light beams in one or more directions (e.g., horizontal and vertical directions) to detect objects in a field of view (FOV). Non-scanning based LiDAR systems emit laser light to illuminate the FOV without scanning. For example, flash LiDAR is one type of non-scanning based LiDAR system. Flash LiDAR may emit laser light that illuminates the FOV simultaneously with a single light pulse or a single shot of light.
LiDAR systems are common sensors for at least partially automated vehicles. In one embodiment, as shown in FIG. 1, a motor vehicle 100 may include a single LiDAR system 110 (e.g., without LiDAR systems 120A-120I) disposed at a highest location of the vehicle (e.g., at the vehicle roof). Locating the LiDAR system 110 at the roof of the vehicle facilitates 360 degree scanning around the vehicle 100. In some other embodiments, the motor vehicle 100 may include multiple LiDAR systems, including two or more of the systems 110 and/or 120A-120I. As shown in FIG. 1, in one embodiment, multiple LiDAR systems 110 and/or 120A-120I are attached to a vehicle 100 at different locations of the vehicle. For example, LiDAR system 120A is attached to vehicle 100 at the right front corner, LiDAR system 120B is attached to vehicle 100 at the front center location, LiDAR system 120C is attached to vehicle 100 at the left front corner, LiDAR system 120D is attached to vehicle 100 at the right side rearview mirror, LiDAR system 120E is attached to vehicle 100 at the left side rearview mirror, LiDAR system 120F is attached to vehicle 100 at the rear center location, LiDAR system 120G is attached to vehicle 100 at the right rear corner, LiDAR system 120H is attached to vehicle 100 at the left rear corner, and/or LiDAR system 120I is attached to vehicle 100 at the center toward the rear end (e.g., the rear end of the vehicle roof). It should be appreciated that one or more LiDAR systems may be distributed and attached to a vehicle in any desired manner, and FIG. 1 illustrates only one embodiment. As another example, LiDAR systems 120D and 120E may be attached to the B-pillar of vehicle 100 instead of a rearview mirror. As another example, the LiDAR system 120B may be attached to a windshield of the vehicle 100, rather than a front bumper.
In some embodiments, liDAR systems 110 and 120A-120I are stand-alone LiDAR systems with their respective laser sources, control electronics, transmitters, receivers, and/or steering mechanisms. In other embodiments, some of the LiDAR systems 110 and 120A-120I may share one or more components, forming a distributed sensor system. In one example, an optical fiber is used to deliver laser light from a centralized laser source to all LiDAR systems. For example, the system 110 (or another system positioned in the center or anywhere of the vehicle 100) includes a light source, an emitter, and a light detector, but no steering mechanism. The system 110 may distribute the transmitted light to each of the systems 120A-120I. The transmitted light may be distributed via an optical fiber. Optical connectors may be used to couple optical fibers to each of the systems 110 and 120A-120I. In some examples, one or more of the systems 120A-120I include a steering mechanism, but no light source, emitter, or light detector. The steering mechanism may include one or more movable mirrors, such as one or more polygonal mirrors, one or more single plane mirrors, one or more multi-plane mirrors, and the like. Embodiments of the light source, emitter, steering mechanism and light detector are described in more detail below. Via a steering mechanism, one or more of the systems 120A-120I scan light into one or more respective FOVs and receive corresponding return light. The return light is formed by scattering or reflecting the transmitted light by one or more objects in the FOV. The systems 120A-120I may also include collection lenses and/or other optics to focus and/or direct the return light into an optical fiber that delivers the received return light to the system 110. The system 110 includes one or more light detectors for detecting the received return light. In some examples, the system 110 is disposed inside a vehicle such that it is in a temperature controlled environment, and one or more of the systems 120A-120I may be at least partially exposed to an external environment.
FIG. 2 is a block diagram 200 illustrating interactions between an on-board LiDAR system 210 and a plurality of other systems including a vehicle perception and planning system 220. LiDAR system 210 may be installed on or integrated into a vehicle. LiDAR system 210 includes a sensor that scans a laser into the surrounding environment to measure the distance, angle, and/or velocity of an object. Based on the scattered light returned to the LiDAR system 210, it may generate sensor data (e.g., image data or 3D point cloud data) representative of the perceived external environment.
LiDAR system 210 may include one or more of a short range LiDAR sensor, a medium range LiDAR sensor, and a long range LiDAR sensor. Short range LiDAR sensors measure objects up to about 20-50 meters from the LiDAR sensor. Short range LiDAR sensors may be used, for example, to monitor nearby moving objects (e.g., pedestrians crossing roads in a school zone), parking assistance applications, and the like. The medium range LiDAR sensor measures objects up to about 70-200 meters from the LiDAR sensor. Mid-range LiDAR sensors may be used, for example, to monitor road intersections, assist in merging into or exiting highways, and so forth. Long range LiDAR sensors measure objects located at 200 meters and above. Long range LiDAR sensors are typically used when the vehicle is traveling at high speed (e.g., on a highway), such that the vehicle's control system may only have a few seconds (e.g., 6-8 seconds) to respond to any conditions detected by the LiDAR sensor. As shown in FIG. 2, in one embodiment, LiDAR sensor data may be provided to a vehicle perception and planning system 220 via a communication path 213 for further processing and control of vehicle operation. The communication path 213 may be any wired or wireless communication link capable of transmitting data.
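The few-seconds figure above follows directly from range and speed. The sketch below (the speed and range values are illustrative, not from the patent) computes the time budget between first detection and reaching a stationary object:

```python
def reaction_time_budget_s(detection_range_m: float, speed_kmh: float) -> float:
    """Seconds between first detection and reaching a stationary object,
    assuming constant vehicle speed."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return detection_range_m / speed_ms

# Example: an object first detected at 250 m while traveling at 120 km/h
# leaves roughly 7.5 s to respond, consistent with the 6-8 s figure above.
print(f"{reaction_time_budget_s(250.0, 120.0):.1f} s")
```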
Still referring to FIG. 2, in some embodiments, other onboard sensors 230 are configured to provide additional sensor data alone or in conjunction with LiDAR system 210. Other in-vehicle sensors 230 may include, for example, one or more cameras 232, one or more radars 234, one or more ultrasonic sensors 236, and/or other sensors 238. The camera 232 may take images and/or video of the external environment of the vehicle.
The camera 232 may capture High Definition (HD) video having, for example, millions of pixels per frame. Cameras include image sensors that facilitate the production of monochrome or color images and video. Color information may be important in interpreting data in certain situations, such as interpreting an image of a traffic light. Color information may not be available from other sensors, such as LiDAR or radar sensors. The cameras 232 may include one or more of a narrow focal length camera, a wider focal length camera, a lateral camera, an infrared camera, a fisheye camera, and the like. The image and/or video data generated by the camera 232 may also be provided to the vehicle perception and planning system 220 via the communication path 233 for further processing and control of vehicle operation. Communication path 233 may be any wired or wireless communication link capable of transmitting data. The camera 232 may be mounted or integrated onto the vehicle at any location (e.g., rear view mirror, post, front grille, and/or rear bumper, etc.).
Other in-vehicle sensors 230 may also include radar sensors 234. The radar sensor 234 uses radio waves to determine the distance, angle and speed of an object. The radar sensor 234 generates electromagnetic waves in the radio or microwave spectrum. The electromagnetic waves are reflected by the object and some of the reflected waves return to the radar sensor, providing information about the object's position and velocity. Radar sensor 234 may include one or more of short range radar, medium range radar, and long range radar. Short range radar measures objects about 0.1-30 meters from the radar. Short range radar is useful in detecting objects located near a vehicle, such as other vehicles, buildings, walls, pedestrians, cyclists, etc. Short range radar may be used to detect blind spots, assist lane changes, provide rear-end warnings, assist parking, provide emergency braking, etc. The mid-range radar measures objects about 30-80 meters from the radar. Long range radar measures objects located at about 80-200 meters. Mid-range and/or long range radar may be used for traffic tracking, adaptive cruise control, and/or highway automatic braking, for example. Sensor data generated by radar sensor 234 may also be provided to vehicle perception and planning system 220 via communication path 233 for further processing and control of vehicle operation. The radar sensor 234 may be mounted or integrated onto the vehicle at any location (e.g., rear view mirror, post, front grille, and/or rear bumper, etc.).
Other in-vehicle sensors 230 may also include ultrasonic sensors 236. The ultrasonic sensor 236 uses sound waves or pulses to measure objects located outside the vehicle. The acoustic waves generated by the ultrasonic sensor 236 are emitted to the surrounding environment. At least some of the emitted waves are reflected by the object and return to the ultrasonic sensor 236. Based on the return signal, the distance of the object can be calculated. The ultrasonic sensor 236 may be used, for example, to check blind spots, identify parking spaces, provide lane change assistance in traffic, and the like. Sensor data generated by ultrasonic sensor 236 may also be provided to vehicle perception and planning system 220 via communication path 233 for further processing and control of vehicle operation. The ultrasonic sensor 236 may be mounted or integrated onto the vehicle at any location (e.g., rear view mirror, post, front grille, and/or rear bumper, etc.).
In some embodiments, one or more other sensors 238 may be attached in the vehicle, and may also generate sensor data. Other sensors 238 may include, for example, a Global Positioning System (GPS), an Inertial Measurement Unit (IMU), and the like. Sensor data generated by other sensors 238 may also be provided to the vehicle perception and planning system 220 via communication path 233 for further processing and control of vehicle operation. It should be appreciated that the communication path 233 may include one or more communication links to communicate data between the various sensors 230 and the vehicle perception and planning system 220.
In some embodiments, as shown in FIG. 2, sensor data from other onboard sensors 230 may be provided to the onboard LiDAR system 210 via communication path 231. LiDAR system 210 may process sensor data from other onboard sensors 230. For example, sensor data from the cameras 232, radar sensors 234, ultrasonic sensors 236, and/or other sensors 238 may be correlated or fused with the sensor data generated by LiDAR system 210 to at least partially offload the sensor fusion process performed by the vehicle perception and planning system 220. It should be appreciated that other configurations may also be implemented to transmit and process sensor data from the various sensors (e.g., the data may be transmitted to a cloud or edge computing service provider for processing, and then the processing results may be transmitted back to the vehicle perception and planning system 220 and/or LiDAR system 210).
Still referring to FIG. 2, in some embodiments, sensors on other vehicles 250 are used to provide additional sensor data alone or in conjunction with LiDAR system 210. For example, two or more nearby vehicles may have their own LiDAR sensors, cameras, radar sensors, ultrasonic sensors, and the like. Nearby vehicles may communicate and share sensor data with each other. Communication between vehicles is also known as V2V (vehicle-to-vehicle) communication. For example, as shown in FIG. 2, sensor data generated by other vehicles 250 may be communicated to vehicle perception and planning system 220 and/or on-board LiDAR system 210 via communication path 253 and/or communication path 251, respectively. Communication paths 253 and 251 may be any wired or wireless communication link capable of transmitting data.
Sharing sensor data facilitates better perception of the environment outside of the vehicle. For example, a first vehicle may not sense a pedestrian behind a second vehicle but approaching the first vehicle. The second vehicle may share sensor data related to the pedestrian with the first vehicle such that the first vehicle may have additional reaction time to avoid collisions with pedestrians. In some embodiments, similar to the data generated by the sensors 230, the data generated by the sensors on the other vehicles 250 may be correlated or fused with the sensor data generated by the LiDAR system 210 (or other LiDAR systems located in other vehicles), thereby at least partially offloading the sensor fusion process performed by the vehicle perception and planning system 220.
In some embodiments, the intelligent infrastructure system 240 is used to provide sensor data alone or in conjunction with the LiDAR system 210. Some infrastructures may be configured to communicate with vehicles to communicate information, and vice versa.
The communication between the vehicle and the infrastructure is commonly referred to as V2I (vehicle to infrastructure) communication. For example, the intelligent infrastructure system 240 may include intelligent traffic lights that may communicate their status to approaching vehicles in messages such as "yellow after 5 seconds". The intelligent infrastructure system 240 may also include its own LiDAR system installed near an intersection so that it can communicate traffic monitoring information to vehicles. For example, a vehicle turning left at an intersection may not have sufficient sensing capability because some of its own sensors may be blocked by traffic in the opposite direction. In this case, the sensors of the intelligent infrastructure system 240 may provide useful data to the vehicle turning left. Such data may include, for example, traffic conditions, object information in the direction of vehicle turn, traffic light status and predictions, and the like. These sensor data generated by intelligent infrastructure system 240 may be provided to vehicle awareness and planning system 220 and/or on-board LiDAR system 210 via communication paths 243 and/or 241, respectively.
Communication paths 243 and/or 241 may include any wired or wireless communication links capable of transmitting data. For example, sensor data from the intelligent infrastructure system 240 may be transmitted to the LiDAR system 210 and correlated or fused with sensor data generated by the LiDAR system 210, thereby at least partially offloading the sensor fusion process performed by the vehicle perception and planning system 220. The V2V and V2I communications described above are examples of vehicle-to-X (V2X) communications, where "X" represents any other device, system, sensor, infrastructure, etc. that may share data with the vehicle.
Still referring to FIG. 2, via various communication paths, the vehicle perception and planning system 220 receives sensor data from one or more of the LiDAR system 210, other onboard sensors 230, other vehicles 250, and/or the intelligent infrastructure system 240. In some embodiments, different types of sensor data are correlated and/or fused by the sensor fusion subsystem 222. For example, the sensor fusion subsystem 222 may generate a 360 degree model using multiple images or videos captured by multiple cameras disposed at different locations of the vehicle. The sensor fusion subsystem 222 obtains sensor data from different types of sensors and uses the combined data to more accurately perceive the environment. For example, the onboard camera 232 may not be able to capture a clear image because it is directly facing the sun or light source (e.g., the headlights of another vehicle at night). LiDAR system 210 may not be too affected and thus sensor fusion subsystem 222 may combine sensor data provided by camera 232 and LiDAR system 210 and use the sensor data provided by LiDAR system 210 to compensate for the unclear image captured by camera 232. As another example, radar sensor 234 may work better than camera 232 or LiDAR system 210 in rainy or foggy weather. Accordingly, the sensor fusion subsystem 222 may use sensor data provided by the radar sensor 234 to compensate for sensor data provided by the camera 232 or LiDAR system 210.
In other examples, sensor data generated by other onboard sensors 230 may have a lower resolution (e.g., radar sensor data), and thus may need to be correlated and validated by LiDAR system 210, which typically has a higher resolution. For example, the radar sensor 234 may detect a manhole cover (also referred to as a manway cover) as an object that the vehicle is approaching. Due to the low resolution nature of radar sensor 234, vehicle perception and planning system 220 may not be able to determine whether the object is an obstacle that the vehicle needs to avoid. Thus, the high-resolution sensor data generated by the LiDAR system 210 can be used to correlate and confirm that the object is a manhole cover and that it poses no danger to the vehicle.
The vehicle perception and planning system 220 further includes an object classifier 223. Using raw sensor data and/or correlation/fusion data provided by the sensor fusion subsystem 222, the object classifier 223 may use any computer vision technique to detect and classify objects and estimate the position of the objects. In some embodiments, object classifier 223 may use machine learning based techniques to detect and classify objects. Examples of machine learning based techniques include algorithms that utilize techniques such as Region-based Convolutional Neural Networks (R-CNN), Fast R-CNN, Faster R-CNN, Histograms of Oriented Gradients (HOG), Region-based Fully Convolutional Networks (R-FCN), Single Shot Detector (SSD), spatial pyramid pooling (SPP-net), and/or You Only Look Once (YOLO).
The vehicle perception and planning system 220 further includes a road detection subsystem 224. The road detection subsystem 224 locates roads and identifies objects and/or markers on the roads. For example, based on raw or fused sensor data provided by radar sensor 234, camera 232, and/or LiDAR system 210, road detection subsystem 224 may construct a 3D model of a road based on machine learning techniques (e.g., pattern recognition algorithms for recognizing lanes). Using a 3D model of the road, the road detection subsystem 224 may identify objects (e.g., obstacles or debris on the road) and/or markers (e.g., lane lines, turn markers, crosswalk markers, etc.) on the road.
The vehicle perception and planning system 220 further includes a positioning and vehicle pose subsystem 225. Based on the raw or fused sensor data, the positioning and vehicle pose subsystem 225 may determine the position of the vehicle and the pose of the vehicle. For example, using sensor data from the LiDAR system 210, the camera 232, and/or GPS data, the positioning and vehicle pose subsystem 225 may determine the precise location of the vehicle on the road and six degrees of freedom of the vehicle (e.g., whether the vehicle is moving forward or backward, upward or downward, left or right). In some embodiments, high Definition (HD) maps are used for vehicle positioning. HD maps can provide a very detailed three-dimensional computer map to pinpoint the position of the vehicle. For example, using HD maps, the positioning and vehicle pose subsystem 225 may accurately determine the current location of the vehicle (e.g., on which lane of the road the vehicle is currently on, how close it is to the roadside or the sidewalk) and predict the future location of the vehicle.
The vehicle perception and planning system 220 further includes an obstacle predictor 226. The objects identified by the object classifier 223 may be stationary (e.g., light poles, road signs) or dynamic (e.g., moving pedestrians, bicycles, another car). For moving objects, predicting their path of movement or future position is important to avoid collisions. The obstacle predictor 226 may predict an obstacle trajectory and/or alert a driver or a vehicle planning subsystem 228 of a potential collision. For example, if the likelihood that the trajectory of the obstacle intersects the current path of travel of the vehicle is high, the obstacle predictor 226 may generate such a warning. The obstacle predictor 226 may use various techniques to make such predictions. These techniques include, for example, constant velocity or acceleration models, constant turn rate and velocity/acceleration models, Kalman filter-based and extended Kalman filter-based models, Recurrent Neural Network (RNN)-based models, Long Short-Term Memory (LSTM)-based neural network models, encoder-decoder RNN models, and the like.
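As a minimal illustration of the simplest of the prediction techniques listed above, the sketch below applies a constant-velocity model to extrapolate an obstacle's future positions; the positions, velocity, and time horizon are hypothetical examples, not values from the patent:

```python
import numpy as np

def predict_constant_velocity(position: np.ndarray,
                              velocity: np.ndarray,
                              horizon_s: float,
                              dt: float = 0.1) -> np.ndarray:
    """Future positions under a constant-velocity motion model."""
    steps = int(round(horizon_s / dt))
    times = np.linspace(dt, horizon_s, steps)       # prediction time steps
    return position + np.outer(times, velocity)     # one row per time step

# Example: a pedestrian at (10 m ahead, 3 m to the side) walking toward the
# vehicle's lane at 1.5 m/s; predict positions over the next 2 seconds.
trajectory = predict_constant_velocity(np.array([10.0, 3.0]),
                                        np.array([0.0, -1.5]),
                                        horizon_s=2.0)
print(trajectory[-1])  # position ~2 s ahead: [10.  0.]
```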
Still referring to fig. 2, in some embodiments, the vehicle perception and planning system 220 further includes a vehicle planning subsystem 228. The vehicle planning subsystem 228 may include one or more planners, such as a route planner, a driving behavior planner, and a movement planner. The route planner may plan a route of the vehicle based on current location data of the vehicle, target location data, traffic information, and the like. The driving behavior planner uses the obstacle predictions provided by the obstacle predictor 226 to adjust timing and planned movement based on how other objects may move. The motion planner determines the particular operations that the vehicle needs to follow. The planning results are then communicated to the vehicle control system 280 via the vehicle interface 270. Communication may be performed through communication paths 227 and 271, which include any wired or wireless communication link over which data may be transmitted.
The vehicle control system 280 controls steering mechanisms, throttle, brakes, etc. of the vehicle to operate the vehicle according to the planned route and movement. In some examples, the vehicle awareness and planning system 220 may further include a user interface 260 that provides a user (e.g., driver) with access to the vehicle control system 280, for example, to override or take over control of the vehicle when necessary. The user interface 260 may also be separate from the vehicle perception and planning system 220. The user interface 260 may communicate with the vehicle perception and planning system 220, for example, to obtain and display raw or fused sensor data, identified objects, vehicle position/pose, and the like. These displayed data may help the user to better operate the vehicle. The user interface 260 may communicate with the vehicle awareness and planning system 220 and/or the vehicle control system 280 via communication paths 221 and 261, respectively, including any wired or wireless communication links that may transmit data. It should be appreciated that the various systems, sensors, communication links, and interfaces in fig. 2 may be configured in any desired manner and are not limited to the configuration shown in fig. 2.
FIG. 3 is a block diagram illustrating an exemplary LiDAR system 300. LiDAR system 300 may be used to implement LiDAR systems 110, 120A-120I, and/or 210 shown in FIGS. 1 and 2. In one embodiment, liDAR system 300 includes a light source 310, an emitter 320, an optical receiver and light detector 330, a steering system 340, and control circuitry 350. These components are coupled together using communication paths 312, 314, 322, 332, 342, 352, and 362. These communication paths include communication links (wired or wireless, bi-directional or uni-directional) between the various LiDAR system components, but are not necessarily physical components themselves. Although the communication path may be implemented by one or more wires, buses, or optical fibers, the communication path may also be a wireless channel or a free-space optical path, such that no physical communication medium exists. For example, in one embodiment of LiDAR system 300, communication path 314 between light source 310 and emitter 320 may be implemented using one or more optical fibers. Communication paths 332 and 352 may represent optical paths implemented using free-space optics and/or optical fibers. And communication paths 312, 322, 342, and 362 may be implemented using one or more wires carrying electrical signals. The communication paths may also include one or more of the types of communication media described above (e.g., they may include optical fibers and free space optics, or include one or more optical fibers and one or more wires).
In some embodiments, the LiDAR system 300 may be a coherent LiDAR system. Frequency Modulated Continuous Wave (FMCW) LiDAR is one example. Coherent LiDAR detects an object by mixing return light from the object with light from a coherent laser transmitter.
Thus, as shown in FIG. 3, if LiDAR system 300 is a coherent LiDAR, it may include a route 372 that provides a portion of the transmitted light from emitter 320 to optical receiver and light detector 330. Route 372 may include one or more optics (e.g., optical fibers, lenses, mirrors, etc.) for providing light from emitter 320 to optical receiver and light detector 330. The transmitted light provided by the emitter 320 may be modulated light and may be split into two parts. One portion is transmitted to the FOV and a second portion is transmitted to the optical receiver and photodetector of the LiDAR system. The second part is also called light that is kept Local (LO) to the LiDAR system. The transmitted light is scattered or reflected by various objects in the FOV and at least a portion thereof forms return light. The return light is then detected and recombined with the second portion of the transmitted light that remains localized. Coherent LiDAR provides a mechanism to optically sense the range of objects and their relative velocity along a line of sight (LOS).
LiDAR system 300 may also include other components not shown in FIG. 3, such as a power bus, power supply, LED indicators, switches, and the like. Additionally, there may be other communication connections between the components, such as a direct connection between the light source 310 and the optical receiver and light detector 330, to provide a reference signal so that the time from transmitting a light pulse until the return light pulse is detected may be accurately measured.
The light source 310 outputs laser light for illuminating an object in a field of view (FOV). The laser may be infrared light having a wavelength in the range of 700 nm to 1 mm. The light source 310 may be, for example, a semiconductor-based laser (e.g., a diode laser) and/or a fiber-based laser. The semiconductor-based laser may be, for example, an edge-emitting laser (EEL), a Vertical Cavity Surface Emitting Laser (VCSEL), an external cavity diode laser, a Distributed Feedback (DFB) laser, a Distributed Bragg Reflector (DBR) laser, an interband cascade laser, a quantum well laser, a double heterostructure laser, or the like. An optical fiber-based laser is one in which the active gain medium is an optical fiber doped with rare earth elements such as erbium, ytterbium, neodymium, dysprosium, praseodymium, thulium, and/or holmium. In some embodiments, the fiber laser is based on a double-clad fiber, where the gain medium forms the core of the fiber surrounded by two cladding layers. Double-clad fibers allow the core to be pumped with a high power beam, thereby enabling the laser source to be a high power fiber laser source.
In some embodiments, light source 310 includes a master oscillator (also referred to as a seed laser) and a power amplifier (MOPA). The power amplifier amplifies the output power of the seed laser.
The power amplifier may be a fiber amplifier, a bulk amplifier, or a semiconductor optical amplifier. The seed laser may be a diode laser (e.g., a Fabry-Perot cavity laser, a distributed feedback laser), a solid-state bulk laser, or an external cavity tunable diode laser. In some embodiments, the light source 310 may be an optically pumped microchip laser. Microchip lasers are alignment-free monolithic solid state lasers in which the laser crystal is in direct contact with the end mirror of the laser resonator. Microchip lasers are typically pumped (directly or using fiber) by a laser diode to obtain the desired output power. Microchip lasers may be based on neodymium-doped yttrium aluminum garnet (Y3Al5O12) laser crystals (i.e., Nd:YAG), or neodymium-doped vanadate (i.e., Nd:YVO4) laser crystals. In some examples, light source 310 may have multiple amplification stages to achieve high power gain so that the laser output may have high power, thereby enabling a LiDAR system to have a long scan range. In some examples, the power amplifier of light source 310 may be controlled such that the power gain may be varied to achieve any desired laser output power.
FIG. 4 is a block diagram illustrating an exemplary semiconductor-based laser source 400. Semiconductor-based laser source 400 is an example of light source 310 shown in FIG. 3. In the example shown in FIG. 4, the laser source 400 is a Vertical Cavity Surface Emitting Laser (VCSEL), which is a type of semiconductor laser diode with a unique structure that allows it to emit light vertically from the chip surface, rather than through the edges of the chip as in an Edge Emitting Laser (EEL) diode. VCSELs have advantages of high-speed operation and easy integration into semiconductor devices. FIG. 4 shows a cross-sectional view of an exemplary VCSEL 400. In this example, the VCSEL 400 includes a metal contact layer 402, an upper Bragg reflector 404, an active region 406, a lower Bragg reflector 408, a substrate 410, and another metal contact layer 412. In the VCSEL 400, metal contact layers 402 and 412 are used to make electrical contact so that current and/or voltage can be provided to the VCSEL 400 to generate laser light. The substrate layer 410 is a semiconductor substrate, which may be, for example, a gallium arsenide (GaAs) substrate. The VCSEL 400 uses a laser resonator that includes two Distributed Bragg Reflectors (DBRs) (i.e., an upper Bragg reflector 404 and a lower Bragg reflector 408) with an active region 406 sandwiched between the DBR reflectors. Active region 406 includes one or more quantum wells, for example, for laser generation. The planar DBR reflector may be a mirror having alternating high and low refractive index layers. Each layer has a thickness of one quarter of the laser wavelength in the material, resulting in an intensity reflectivity higher than, for example, 99%. The high reflectivity mirrors in the VCSEL compensate for the short axial length of the gain region. In one example of the VCSEL 400, the upper DBR reflector 404 and the lower DBR reflector 408 can be doped with a p-type material and an n-type material to form a diode junction. In another example, the p-type region and the n-type region may be embedded between the reflectors, requiring more complex semiconductor processes to make electrical contact with the active region, but eliminating electrical power loss in the DBR structure. The active region 406 is sandwiched between DBR reflectors 404 and 408 of the VCSEL 400. The active region is where the laser light is generated. Active region 406 typically has a quantum well or quantum dot structure that contains a gain medium responsible for optical amplification.
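The quarter-wave relationship mentioned above is easy to quantify: each DBR layer is one quarter of the laser wavelength in the material, i.e., the vacuum wavelength divided by the refractive index, divided by four. The sketch below illustrates this; the wavelength and refractive indices are hypothetical examples, not values from the patent:

```python
def quarter_wave_thickness_nm(wavelength_nm: float, refractive_index: float) -> float:
    """Thickness of one DBR layer: a quarter of the laser wavelength in the material."""
    return wavelength_nm / (4.0 * refractive_index)

# Example: a 940 nm VCSEL with alternating layers of assumed indices 3.5 and 3.0.
print(f"{quarter_wave_thickness_nm(940.0, 3.5):.1f} nm")  # ~67.1 nm
print(f"{quarter_wave_thickness_nm(940.0, 3.0):.1f} nm")  # ~78.3 nm
```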
When a current is applied to the active region 406, it generates photons by stimulated emission. The distance between the upper DBR reflector 404 and the lower DBR reflector 408 defines the cavity length of the VCSEL 400. The cavity length in turn determines the wavelength of the emitted light and affects the performance characteristics of the laser. When current is applied to the VCSEL, it generates light that bounces between the DBR reflector 404 and the DBR reflector 408 and exits the VCSEL 400 through the DBR reflector 408, thereby generating a highly coherent and vertically emitted laser beam 414. The VCSEL 400 can provide improved beam quality, low threshold current, and the ability to produce single-mode or multi-mode outputs.
In some variations, the VCSEL 400 may be controlled (e.g., by the control circuitry 350) to generate pulses of different amplitudes. The communication path 312 couples the VCSEL 400 to the control circuit 350 (as shown in fig. 3) such that components of the VCSEL 400 may be controlled by or otherwise in communication with the control circuit 350. Alternatively, the VCSEL 400 may include its own dedicated controller. Instead of the control circuit 350 communicating directly with the components of the VCSEL 400, a dedicated controller of the VCSEL 400 communicates with the control circuit 350 and controls and/or communicates with the components of the VCSEL 400. The VCSEL 400 may also include other components not shown, such as one or more power connectors, power supplies, and/or power lines.
The VCSEL 400 may be used to generate laser pulses or Continuous Wave (CW) lasers. To generate laser pulses, the control circuit 350 modulates the current provided to the VCSEL 400. By switching the power supply current on and off quickly, a laser pulse can be generated. The duration, repetition rate and shape of the pulses can be controlled by adjusting the modulation parameters. As another example, the VCSEL 400 may also be a mode-locked VCSEL that uses a combination of current modulation and optical feedback to obtain ultra-short pulses. Mode-locked VCSELs can also be controlled to synchronize the phases of the laser modes to produce very short and high intensity pulses. As another example, the VCSEL 400 may use Q-switching techniques, in which an optical switch in the laser cavity temporarily blocks lasing and allows energy to accumulate in the cavity. When the switch is opened, a high intensity pulse is emitted. As another example, the VCSEL 400 may also have external modulation performed by an external modulator, such as an electro-optic or acousto-optic modulator. External modulation may be used in conjunction with the VCSEL itself to produce the pulsed output. An external modulator may be used to control the pulse duration and repetition rate. The type of VCSEL used as at least a portion of the light source 310 depends on the application and the desired pulse characteristics, such as pulse duration, repetition rate, and peak power. Referring to FIG. 3, typical operating wavelengths for light source 310 include, for example, about 850nm, about 905nm, about 940nm, about 1064nm, and about 1550nm. For laser safety, the upper limit of the maximum available laser power is set by the U.S. Food and Drug Administration (FDA) regulations. The optical power limit at 1550nm is much higher than the optical power limit at the other wavelengths described above. Furthermore, at 1550nm, the optical power loss in the optical fiber is very low.
These characteristics of the 1550nm wavelength make it more advantageous for long-range LiDAR applications. The amount of optical power output from light source 310 may be characterized by its peak power, average power, pulse energy, and/or pulse energy density. Peak power is the ratio of pulse energy to pulse width (e.g., full width at half maximum or FWHM). Thus, for a fixed amount of pulse energy, a smaller pulse width may provide a larger peak power. The pulse width may be in the range of nanoseconds or picoseconds. The average power is the product of the pulse energy and the Pulse Repetition Rate (PRR). As described in more detail below, PRR represents the frequency of the pulsed laser. In general, the smaller the time interval between pulses, the higher the PRR. The PRR is also related to the maximum range that the LiDAR system can measure. The light source 310 may be configured to pulse at a high PRR to meet a desired number of data points in a point cloud generated by the LiDAR system. The light source 310 may also be configured to generate pulses at a medium or low PRR to meet a desired maximum detection distance. Wall Plug Efficiency (WPE) is another factor in assessing overall power consumption and may be a useful indicator of laser efficiency. For example, as shown in fig. 1, multiple LiDAR systems may be attached to a vehicle, which may be an electric vehicle or a vehicle with limited fuel or battery power supply. Thus, high WPE and intelligent ways of using laser power are often important considerations when selecting and configuring the light source 310 and/or designing a laser delivery system for an in-vehicle LiDAR application.
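As a brief numerical illustration of the relationships above, the following Python sketch computes peak power and average power from pulse energy, pulse width, and PRR. The numeric values are hypothetical and do not correspond to any particular light source 310.

# Sketch of the pulse-power relationships described above.
# All numeric values are hypothetical examples.
pulse_energy_j = 2e-6    # pulse energy: 2 microjoules
pulse_width_s = 5e-9     # pulse width (FWHM): 5 nanoseconds
prr_hz = 500e3           # pulse repetition rate: 500 kHz

peak_power_w = pulse_energy_j / pulse_width_s    # ratio of pulse energy to pulse width
average_power_w = pulse_energy_j * prr_hz        # product of pulse energy and PRR

print(f"Peak power: {peak_power_w:.0f} W")        # 400 W
print(f"Average power: {average_power_w:.2f} W")  # 1.00 W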
It should be appreciated that the above description provides a non-limiting example of light source 310. The light source 310 may be configured to include many other types of light sources (e.g., laser diodes, short cavity fiber lasers, solid state lasers, and/or external cavity tunable diode lasers) configured to generate one or more optical signals at various wavelengths. In some examples, light source 310 includes an amplifier (e.g., a pre-amplifier and/or a boost amplifier), which may be a doped fiber amplifier, a solid-state bulk amplifier, and/or a semiconductor optical amplifier. The amplifier is configured to receive and amplify the optical signal with a desired gain.
Referring back to FIG. 3, LiDAR system 300 further includes an emitter 320 (also referred to as a transmitter 320). The light source 310 provides laser light (e.g., in the form of a laser beam) to the emitter 320. The laser light provided by the light source 310 may be an amplified laser light having a predetermined or controlled wavelength, pulse repetition rate, and/or power level. The emitter 320 receives the laser light from the light source 310 and emits the laser light to the steering mechanism 340 with low divergence. In some embodiments, the emitter 320 may include, for example, optical components (e.g., lenses, optical fibers, mirrors, etc.) for emitting one or more laser beams to a field of view (FOV) either directly or via the steering mechanism 340. Although fig. 3 illustrates the emitter 320 and steering mechanism 340 as separate components, in some embodiments they may be combined or integrated into one system. The steering mechanism 340 will be described in more detail below.
The laser beam provided by the light source 310 may diverge as it propagates to the emitter 320. Accordingly, the emitter 320 generally includes a collimating lens configured to collect the diverging laser beam and produce a more parallel beam with reduced or minimal divergence. The collimated beam may then be further directed through various optics, such as mirrors and lenses. The collimating lens may be, for example, a single plano-convex lens or a lens group. The collimating lens may be configured to achieve any desired characteristics, such as beam diameter, divergence, numerical aperture, focal length, and the like. The beam propagation ratio or beam quality factor (also known as the M2 factor) is used to measure the laser beam quality.
In many LiDAR applications, it is important to have good laser beam quality in the resulting emitted laser beam. The M2 factor represents the degree to which the beam deviates from an ideal Gaussian beam. Thus, the M2 factor reflects how well a collimated laser beam can be focused onto a small spot, or how well a divergent laser beam can be collimated. Accordingly, the light source 310 and/or the emitter 320 may be configured to meet, for example, scanning resolution requirements, while maintaining a desired M2 factor.
One or more of the beams provided by the emitter 320 are scanned to the FOV by the steering mechanism 340. Steering mechanism 340 scans the beam in multiple dimensions (e.g., in the horizontal and vertical dimensions) to facilitate LiDAR system 300 in mapping an environment by generating a 3D point cloud. The horizontal dimension may be a dimension parallel to the horizon or a surface associated with a LiDAR system or vehicle (e.g., a road surface). The vertical dimension is perpendicular to the horizontal dimension (i.e., the vertical dimension forms a 90-degree angle with the horizontal dimension). The steering mechanism 340 will be described in more detail below. The laser light scanned into the FOV may be scattered or reflected by objects in the FOV. At least a portion of the scattered or reflected light forms return light that returns to the LiDAR system 300. Fig. 3 further illustrates an optical receiver and photodetector 330 configured to receive the return light. The optical receiver and photodetector 330 includes an optical receiver configured to collect return light from the FOV. The optical receiver may include optics (e.g., lenses, optical fibers, mirrors, etc.) for receiving, redirecting, focusing, amplifying, and/or filtering the return light from the FOV. For example, optical receivers typically include a collection lens (e.g., a single plano-convex lens or lens group) to collect and/or focus the collected return light onto a photodetector.
The photodetector detects the return light focused by the optical receiver and generates a current and/or voltage signal proportional to the incident intensity of the return light. Based on such current and/or voltage signals, depth information of the object in the FOV may be derived. One exemplary method for deriving such depth information is based on direct TOF (time of flight), which will be described in more detail below. The light detector may be characterized by its detection sensitivity, quantum efficiency, detector bandwidth, linearity, signal-to-noise ratio (SNR), overload resistance, interference immunity, etc. The light detector may be configured or customized to have any desired characteristics, depending on the application. For example, the optical receiver and light detector 330 may be configured such that the light detector has a large dynamic range while having good linearity. Photodetector linearity indicates the ability of a detector to maintain a linear relationship between the input optical signal power and the detector output. Detectors with good linearity can maintain a linear relationship over a large dynamic input optical signal range.
The structure of the light detector and/or the material system of the detector may be configured or customized to achieve desired detector characteristics. Various detector configurations may be used for the light detector. For example, the photodetector structure may be a PIN-based structure having an undoped intrinsic semiconductor region (i.e., an "I" region) between a p-type semiconductor and an n-type semiconductor region. Other photodetector structures include, for example, APD (avalanche photodiode) based structures, PMT (photomultiplier tube) based structures, siPM (silicon photomultiplier tube) based structures, SPAD (single photon avalanche diode) based structures, and/or quantum wires. For the material system used in the photodetector, si, inGaAs and/or Si/Ge based materials may be used. It should be appreciated that many other detector structures and/or material systems may be used in the optical receiver and light detector 330.
The light detector (e.g., APD-based detector) may have an internal gain such that the input signal is amplified when the output signal is generated. However, due to the internal gain of the photo detector, noise may also be amplified. Common noise types include signal shot noise, dark current shot noise, thermal noise, and amplifier noise. In some embodiments, the optical receiver and photodetector 330 may comprise a Low Noise Amplifier (LNA) pre-amplifier. In some embodiments, the preamplifier may further include a transimpedance amplifier (TIA) that converts the current signal to a voltage signal. For linear detector systems, the input equivalent noise or Noise Equivalent Power (NEP) measures the sensitivity of the photodetector to weak signals. They can therefore be used as indicators of overall system performance. For example, the NEP of the photodetector specifies the power of the weakest signal that can be detected, and thus it in turn specifies the maximum range of the LiDAR system. It should be appreciated that a variety of light detector optimization techniques may be used to meet the requirements of the LiDAR system 300. Such optimization techniques may include selecting different detector structures, materials, and/or implementing signal processing techniques (e.g., filtering, noise reduction, amplification, etc.). For example, coherent detection may be used for the light detector in addition to or instead of direct detection using a return signal (e.g., by using ToF). Coherent detection allows the amplitude and phase information of the received light to be detected by interfering the received light with a local oscillator. Coherent detection can improve detection sensitivity and noise immunity.
FIG. 3 further illustrates that LiDAR system 300 includes a steering mechanism 340. As described above, the steering mechanism 340 directs the beam from the emitter 320 to scan the FOV in multiple dimensions. The steering mechanism is also referred to as a raster mechanism, a scanning mechanism, or simply an optical scanner. Scanning the beam in multiple directions (e.g., in the horizontal and vertical directions) facilitates mapping an environment by generating images or 3D point clouds. The steering mechanism may be based on mechanical scanning and/or solid state scanning. Mechanical scanning uses a rotating mirror to steer, or physically rotates, the LiDAR transmitters and receivers (collectively referred to as transceivers) to scan the laser beam. The solid state scan directs the laser beam through the FOV to various locations without mechanically moving any macroscopic components, such as a transceiver. Solid state scanning mechanisms include, for example, optical phased array based steering and flash LiDAR based steering. In some embodiments, because the solid state scanning mechanism does not physically move macroscopic components, the steering performed by the solid state scanning mechanism may be referred to as effective steering. LiDAR systems that use solid state scanning may also be referred to as non-mechanically scanning or simply non-scanning LiDAR systems (flash LiDAR systems are exemplary non-scanning LiDAR systems).
Steering mechanism 340 may be used with transceivers (e.g., emitter 320 and optical receiver and light detector 330) to scan the FOV for generating an image or a 3D point cloud. As an example, to implement steering mechanism 340, a two-dimensional mechanical scanner may be used with a single point or several single point transceivers. A single point transceiver transmits a single beam or a small number of beams (e.g., 2-8 beams) to the steering mechanism. Two-dimensional mechanical steering mechanisms include, for example, polygonal mirrors, oscillating mirrors, rotating prisms, rotating tilting mirrors, single or multi-plane mirrors, or combinations thereof. In some embodiments, steering mechanism 340 may include a non-mechanical steering mechanism, such as a solid state steering mechanism. For example, steering mechanism 340 may be based on tuning the wavelength of a laser in combination with a refraction effect, and/or based on a reconfigurable grating/phased array. In some embodiments, the steering mechanism 340 may implement two-dimensional scanning using a single scanning device, or using a combination of multiple scanning devices.
As another example, to implement steering mechanism 340, a one-dimensional mechanical scanner may be used with an array or a large number of single-point transceivers. In particular, the transceiver array may be mounted on a rotating platform to achieve a 360 degree horizontal field of view. Alternatively, the static transceiver array may be combined with a one-dimensional mechanical scanner. The one-dimensional mechanical scanner includes a polygonal mirror, an oscillating mirror, a rotating prism, a rotating tilt mirror, or a combination thereof, for obtaining a forward looking horizontal field of view. Steering mechanisms using mechanical scanners can provide robustness and reliability in mass production for automotive applications.
As another example, to implement the steering mechanism 340, a two-dimensional transceiver may be used to directly generate a scanned image or a 3D point cloud. In some embodiments, stitching or micro-displacement methods may be used to increase the resolution of the scanned image or the field of view being scanned. For example, using a two-dimensional transceiver, signals generated in one direction (e.g., horizontal direction) and signals generated in another direction (e.g., vertical direction) may be integrated, interleaved, and/or matched to generate a higher or full resolution image or 3D point cloud representing the scanned FOV.
Some implementations of the steering mechanism 340 include one or more optical redirecting elements (e.g., mirrors or lenses) that steer the return light signal along a receiving path (e.g., by rotation, vibration, or steering) to direct the return light signal to the optical receiver and light detector 330. The optical redirection element that directs the optical signal along the transmit path and the receive path may be the same component (e.g., shared), separate components (e.g., dedicated), and/or a combination of shared and separate components. This means that in some cases the transmit and receive paths are different, although they may overlap partially (or in some cases substantially or completely).
Still referring to FIG. 3, LiDAR system 300 further includes control circuitry 350. The control circuitry 350 may be configured and/or programmed to control various portions of the LiDAR system 300 and/or to perform signal processing. In a typical system, the control circuitry 350 may be configured and/or programmed to perform one or more control operations including, for example, controlling the light source 310 to obtain a desired laser pulse timing, pulse repetition rate, and power; controlling the steering mechanism 340 (e.g., controlling speed, direction, and/or other parameters) to scan the FOV and maintain pixel registration and/or alignment; controlling the optical receiver and light detector 330 (e.g., controlling sensitivity, noise reduction, filtering, and/or other parameters) so that it is in an optimal state; and monitoring overall system health/functional safety states (e.g., monitoring laser output power and/or safety of steering mechanism operating states).
The control circuitry 350 may also be configured and/or programmed to signal process raw data generated by the optical receiver and light detector 330 to obtain distance and reflectivity information and to package and communicate with the vehicle perception and planning system 220 (shown in fig. 2). For example, the control circuitry 350 determines the time it takes from transmitting a light pulse to receiving a corresponding return light pulse, determines when a return light pulse of the transmitted light pulse has not been received, determines the direction (e.g., horizontal and/or vertical information) of the transmitted light pulse/return light pulse, determines an estimated range in a particular direction, derives the reflectivity of objects in the FOV, and/or determines any other type of data related to the LiDAR system 300.
LiDAR system 300 may be disposed in a vehicle that may operate in many different environments, including hot or cold weather, rough road conditions that may cause strong vibrations, high or low humidity, dusty areas, and so forth. Thus, in some embodiments, the optical and/or electronic components of LiDAR system 300 (e.g., the optics in emitter 320, optical receiver and light detector 330, and steering mechanism 340) are arranged and/or configured in a manner that maintains long-term mechanical and optical stability. For example, components in LiDAR system 300 may be fixed and sealed such that they may operate under all conditions that a vehicle may encounter. As an example, a moisture-resistant coating and/or hermetic seal may be applied to the optical components of the emitter 320, the optical receiver and photodetector 330, and the steering mechanism 340 (as well as other moisture-susceptible components). As another example, a housing, enclosure, fairing, and/or window may be used in the LiDAR system 300 to provide desired characteristics such as hardness, ingress protection (IP), self-cleaning capability, chemical resistance, impact resistance, and the like. In addition, an efficient and economical method for assembling LiDAR system 300 can be used to meet LiDAR operational requirements while maintaining low cost.
It will be appreciated by those of ordinary skill in the art that fig. 3 and the above description are for illustrative purposes only, and that the LiDAR system may include other functional units, blocks, or segments, and may include variations or combinations of the above functional units, blocks, or segments. For example, LiDAR system 300 may also include other components not shown in FIG. 3, such as a power bus, power supply, LED indicators, switches, and the like. Additionally, there may be other connections between components, such as a direct connection between the light source 310 and the optical receiver and light detector 330, so that the light detector 330 may accurately measure the time from the emission of a light pulse by the light source 310 to the detection of a return light pulse by the light detector 330.
These components shown in fig. 3 are coupled together using communication paths 312, 314, 322, 332, 342, 352, and 362. These communication paths represent communications (bi-directional or uni-directional) between the various LiDAR system components, but are not necessarily physical components themselves. Although the communication path may be implemented by one or more wires, buses, or optical fibers, the communication path may also be a wireless channel or an open air optical path, such that no physical communication medium exists. For example, in one exemplary LiDAR system, communication path 314 includes one or more optical fibers, communication path 352 represents an optical path, and communication paths 312, 322, 342, and 362 are all wires carrying electrical signals. The communication paths may also include more than one of the types of communication media described above (e.g., they may include optical fibers and optical paths, or one or more optical fibers and one or more wires).
As described above, some LiDAR systems use the time of flight (ToF) of an optical signal (e.g., an optical pulse) to determine a distance to an object in an optical path. For example, referring to FIG. 5A, an exemplary LiDAR system 500 includes a laser light source (e.g., a fiber laser), a steering mechanism (e.g., a system of one or more moving mirrors), and a light detector (e.g., a photodetector with one or more optics). LiDAR system 500 may be implemented using, for example, LiDAR system 300 described above. LiDAR system 500 emits light pulse 502 along an optical path 504 defined by a steering mechanism of LiDAR system 500.
In the depicted example, the light pulse 502 generated by the laser light source is a short pulse of laser light. Further, the signal steering mechanism of LiDAR system 500 is a pulsed signal steering mechanism. However, it should be appreciated that LiDAR systems may operate by generating, transmitting, and detecting non-pulsed light signals, and using techniques other than time of flight to derive distance to objects in the surrounding environment. For example, some LiDAR systems use frequency modulated continuous waves (i.e., "FMCW"). It should also be appreciated that any of the techniques described herein for a time-of-flight based system using pulsed signals may also be applied to LiDAR systems that do not use one or both of these techniques.
Referring back to FIG. 5A (e.g., illustrating a time-of-flight LiDAR system using light pulses), when light pulse 502 reaches object 506, light pulse 502 is scattered or reflected to form return light pulse 508. The return light pulse 508 may return to the system 500 along an optical path 510. The time from when the emitted light pulse 502 leaves the LiDAR system 500 to when the return light pulse 508 returns to the LiDAR system 500 may be measured (e.g., by a processor or other electronic device within the LiDAR system, such as the control circuitry 350). This knowledge of time of flight in combination with the speed of light can be used to determine the range/distance from LiDAR system 500 to the portion of object 506 from which light pulse 502 was scattered or reflected.
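As a minimal sketch of this direct time-of-flight calculation, the following Python snippet converts a measured round-trip time into a range using the speed of light; the round-trip time used in the example is a hypothetical value.

# Sketch of the direct time-of-flight range calculation described above.
C_M_PER_S = 299_792_458.0  # speed of light in vacuum

def range_from_round_trip(round_trip_time_s: float) -> float:
    # The light travels to the object and back, so the one-way
    # distance is half the round-trip distance.
    return C_M_PER_S * round_trip_time_s / 2.0

# Hypothetical example: a return pulse detected 1 microsecond after emission.
print(f"{range_from_round_trip(1e-6):.1f} m")  # ~149.9 m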
By directing many light pulses, as depicted in FIG. 5B, the LiDAR system 500 scans the external environment (e.g., by directing light pulses 502, 522, 526, 530 along light paths 504, 524, 528, 532, respectively). As depicted in FIG. 5C, LiDAR system 500 receives return light pulses 508, 542, 548 (corresponding to emitted light pulses 502, 522, 530, respectively). The return light pulses 508, 542, and 548 are formed by one of the objects 506 and 514 scattering or reflecting the emitted light pulses. Return light pulses 508, 542, and 548 can return to LiDAR system 500 along light paths 510, 544, and 546, respectively. Based on the direction of the emitted light pulses (as determined by LiDAR system 500) and the calculated distance from LiDAR system 500 to the portion of the object that scattered or reflected the light pulses (e.g., portions of objects 506 and 514), the external environment within the detectable range (e.g., the field of view between paths 504 and 532, inclusive) can be accurately mapped (e.g., by generating a 3D point cloud or image).
If no corresponding light pulse is received for a particular emitted light pulse, LiDAR system 500 may determine that there is no object within the detectable range of LiDAR system 500 (e.g., that the object is outside the maximum scanning distance of LiDAR system 500). For example, in fig. 5B, the light pulse 526 may not have a corresponding return light pulse (as illustrated in fig. 5C) because the light pulse 526 may not generate a scattering event along its transmission path 528 within a predetermined detection range. LiDAR system 500, or an external system (e.g., a cloud system or service) in communication with LiDAR system 500, may interpret the lack of a return light pulse as an indication that no object is disposed along light path 528 within the detectable range of LiDAR system 500.
In fig. 5B, light pulses 502, 522, 526, and 530 may be transmitted in any order, serially, in parallel, or based on other timing relative to each other. Additionally, while FIG. 5B depicts the emitted light pulses as being directed in one dimension or plane (e.g., the plane of the paper), the LiDAR system 500 may direct the emitted light pulses along other dimensions or planes. For example, LiDAR system 500 may also direct the emitted light pulses in a dimension or plane perpendicular to the dimension or plane shown in FIG. 5B, thereby forming a 2-dimensional transmission of the light pulses. Such 2-dimensional transmission of the light pulses may be point-by-point, line-by-line, all at once in a single shot, or otherwise. That is, LiDAR system 500 may be configured to perform a point scan, a line scan, a one-shot illumination without scanning, or a combination thereof. A point cloud or image from a 1-dimensional transmission of light pulses (e.g., a single horizontal line) may generate 2-dimensional data (e.g., (1) data from a horizontal transmission direction and (2) range or distance to an object). Similarly, a point cloud or image from 2-dimensional transmission of light pulses may generate 3-dimensional data (e.g., (1) data from a horizontal transmission direction, (2) data from a vertical transmission direction, and (3) range or distance to an object). Typically, LiDAR systems that perform n-dimensional transmission of light pulses generate (n+1)-dimensional data. This is because the LiDAR system can measure the depth of or distance to an object, which provides an additional data dimension.
Thus, a 2D scan by a LiDAR system may generate a 3D point cloud that is used to map the external environment of the LiDAR system.
The density of the point cloud refers to the number of measurements (data points) per area performed by the LiDAR system. The point cloud density is related to the LiDAR scanning resolution.
Generally, at least for a region of interest (ROI), a greater point cloud density is desired, and thus a higher resolution is required. The point density in the point cloud or image generated by the LiDAR system is equal to the number of pulses divided by the field of view. In some embodiments, the field of view may be fixed. Thus, in order to increase the density of points generated by a set of transmit-receive optics (or transceiver optics), a LiDAR system may need to generate pulses more frequently. In other words, the light source in the LiDAR system may have a higher Pulse Repetition Rate (PRR). On the other hand, by generating and transmitting pulses more frequently, the furthest distance that a LiDAR system can detect may be limited. For example, if a return signal from a distant object is received after the system transmits the next pulse, the return signal may be detected in a different order than the order in which the corresponding signals were transmitted, resulting in ambiguity if the system is unable to properly correlate the return signal with the transmitted signal.
For illustration, consider an exemplary LiDAR system that can emit laser pulses with a pulse repetition rate between 500kHz and 1 MHz. Based on the time it takes for a pulse to return to the LiDAR system, and to avoid confusion between return pulses from consecutive emitted pulses in typical LiDAR designs, the furthest distances that the LiDAR system can detect are 300 meters and 150 meters for 500kHz and 1 MHz, respectively. The point density of a LiDAR system with a repetition rate of 500kHz is half that of a system operating at 1 MHz. Thus, this example shows that increasing the repetition rate from 500kHz to 1MHz (and thus increasing the point density of the system) may reduce the detection range of the system if the system cannot properly correlate out-of-order arriving return signals. Various techniques are used to mitigate the tradeoff between higher PRRs and limited detection range. For example, multiple wavelengths may be used to detect objects in different ranges. Optical and/or signal processing techniques (e.g., pulse coding techniques) are also used to correlate between the emitted optical signal and the return optical signal.
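The 300-meter and 150-meter figures above follow from the requirement, in this typical design, that a return pulse arrive before the next pulse is emitted. A minimal Python sketch of that relationship:

# Maximum unambiguous range when each return must arrive before the next pulse.
C_M_PER_S = 299_792_458.0

def max_unambiguous_range_m(prr_hz: float) -> float:
    return C_M_PER_S / (2.0 * prr_hz)

for prr_hz in (500e3, 1e6):
    print(f"PRR {prr_hz / 1e3:.0f} kHz -> ~{max_unambiguous_range_m(prr_hz):.0f} m")
# PRR 500 kHz -> ~300 m
# PRR 1000 kHz -> ~150 m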
The various systems, apparatus, and methods described herein may be implemented using digital electronic circuitry, or using one or more computers employing well known computer processors, memory units, storage devices, computer software, and other components. Generally, a computer includes a processor for executing instructions and one or more memories for storing instructions and data. A computer may also include, or be coupled to, one or more mass storage devices, such as one or more magnetic disks, internal hard and removable magnetic disks, magneto-optical disks, and the like.
The various systems, apparatuses, and methods described herein may be implemented using a computer operating in a client-server relationship. Typically, in such systems, the client computer is located remotely from the server computer and interacts via a network. The client-server relationship may be defined and controlled by computer programs running on the respective client and server computers. Examples of client computers may include desktop computers, workstations, portable computers, cellular smartphones, tablet computers, or other types of computing devices.
The various systems, apparatuses, and methods described herein may be implemented using a computer program product tangibly embodied in an information carrier, e.g., in a non-transitory machine-readable storage device, for execution by a programmable processor, and the method processes and steps described herein (including one or more steps of at least some of fig. 1-13) may be implemented using one or more computer programs executable by such a processor. A computer program is a set of computer program instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
FIG. 6 illustrates a simplified block diagram of an exemplary apparatus that may be used to implement the systems, apparatuses, and methods described herein. Apparatus 600 includes a processor 610 operatively coupled to a persistent storage device 620 and a main memory device 630. The processor 610 controls the overall operation of the apparatus 600 by executing computer program instructions defining these operations. The computer program instructions may be stored in persistent storage 620 or other computer-readable medium and loaded into main memory device 630 when execution of the computer program instructions is desired. For example, the processor 610 may be used to implement one or more of the components and systems described herein, such as the control circuitry 350 (shown in fig. 3), the vehicle perception and planning system 220 (shown in fig. 2), and the vehicle control system 280 (shown in fig. 2).
Accordingly, the method steps of at least some of fig. 1-13 may be defined by computer program instructions stored in the main memory device 630 and/or the persistent storage device 620 and controlled by the processor 610 executing the computer program instructions. For example, the computer program instructions may be implemented as computer executable code programmed by a person skilled in the art to perform an algorithm defined by the method steps discussed herein in connection with at least some of fig. 1-13. Accordingly, by executing computer program instructions, the processor 610 executes the algorithms defined by the method steps of these previous figures. The apparatus 600 also includes one or more network interfaces 680 for communicating with other devices via a network. The apparatus 600 may also include one or more input/output devices 690 that enable a user to interact with the apparatus 600 (e.g., display, keyboard, mouse, speakers, buttons, etc.).
Processor 610 may include both general purpose microprocessors and special purpose microprocessors, and may be the only processor or one of a plurality of processors of apparatus 600. The processor 610 may include one or more Central Processing Units (CPUs) and one or more Graphics Processing Units (GPUs), which may, for example, operate separately from and/or perform multiple tasks with the one or more CPUs to speed up processing, e.g., for the various image processing applications described herein. Processor 610, persistent storage 620, and/or main memory 630 may include or be supplemented by, or incorporated in, one or more application-specific integrated circuits (ASICs) and/or one or more field-programmable gate arrays (FPGAs).
Persistent storage 620 and main memory 630 each include tangible, non-transitory computer-readable storage media. The persistent storage 620 and the main memory 630 may each include high-speed random access memory, such as Dynamic Random Access Memory (DRAM), static Random Access Memory (SRAM), double-rate synchronous dynamic random access memory (DDR RAM), or other random access solid state memory devices, and may include non-volatile memory, for example, one or more magnetic disk storage devices, such as internal hard disks and removable magnetic disks, magneto-optical disk storage devices, flash memory devices, semiconductor memory devices (such as Erasable Programmable Read Only Memory (EPROM), electrically Erasable Programmable Read Only Memory (EEPROM)), compact disk read only memory (CD-ROM), digital versatile disk read only memory (DVD-ROM) disks, or other non-volatile solid state memory devices.
Input/output devices 690 may include peripheral devices such as printers, scanners, display screens, and the like. For example, input/output devices 690 may include a display device (such as a Cathode Ray Tube (CRT), plasma or Liquid Crystal Display (LCD) monitor, keyboard) for displaying information to a user, and a pointing device (such as a mouse or trackball) by which a user may provide input to apparatus 600.
Any or all of the functions of the systems and devices discussed herein may be performed by the processor 610 and/or incorporated into a device or system, such as the LiDAR system 300. Further, the LiDAR system 300 and/or apparatus 600 may utilize one or more neural networks or other deep learning techniques performed by the processor 610 or other systems or apparatuses discussed herein.
Those skilled in the art will recognize that an actual computer or implementation of a computer system may have other structures and may contain other components as well, and that FIG. 6 is a brief representation of some of the components of such a computer for purposes of illustration.
Fig. 7 is a block diagram illustrating an exemplary depth sensor 700 according to some embodiments. The depth sensor 700 includes a light source 710, an emitter 720, an optical receiver and light detector 730, and a control circuit 750. These components may be substantially the same as or similar to the light source 310, the emitter 320, the optical receiver and light detector 330, and the control circuit 350, respectively, described above with reference to fig. 3. In fig. 7, communication paths 712, 722, 732, 752, 762, and 772 may also be substantially the same or similar to paths 312, 322, 332, 352, 362, and 372, respectively, as described above, and thus the description will not be repeated.
In one embodiment, the light source 710 may include a semiconductor-based laser source (e.g., a VCSEL), an optical fiber-based laser source (e.g., a rare-earth-doped fiber for lasing), a liquid-based laser source (e.g., dye lasers using dyes such as sodium fluorescein, rhodamine B, and rhodamine 6G), a solid-state laser source (e.g., lasers using neodymium-doped crystals such as neodymium-doped yttrium aluminum garnet (Nd:YAG), yttrium orthovanadate (Nd:YVO4), or yttrium lithium fluoride (Nd:YLF)), and/or a gas-based laser source (e.g., carbon dioxide (CO2)-based lasers, argon-based lasers, or helium-neon-based lasers). In the examples described below, VCSELs are used for illustration. It should be understood that other types of laser sources may be used.
The optical receiver and photodetector 730 may include any type of photodetector, such as photodiodes, Avalanche Photodiodes (APDs), SPADs, phototransistors, Charge Coupled Devices (CCDs), CMOS Image Sensors (CIS), and/or photomultiplier tubes (PMTs). In the examples described below, high-sensitivity photodetectors such as SPAD arrays are used as illustrative examples.
In contrast to LiDAR system 300, depth sensor 700 does not have a steering mechanism or any other mechanically movable scanning optics. Thus, the depth sensor 700 eliminates any mechanically movable parts configured to scan light. The depth sensor 700 may thus be more compact, robust, durable, and reliable. In one example, depth sensor 700 is a flash LiDAR that emits laser light to illuminate the entire FOV in a single pulse or single shot. Depth sensor 700 may be a solid state LiDAR device configured to perform electronic scanning. In contrast to optical scanning, electronic scanning does not use a mechanically movable optical system to scan light. Instead, solid state LiDAR devices may use phase-based scanning, which emits a constant laser beam at multiple phases and then compares the phase shift of the returned laser energy. The laser scanner determines the distance using a phase-shift algorithm that is based on the unique properties of each individual phase and on the following formula: time of flight = phase offset / (2π × modulation frequency). Phase-based scanners can collect data at a much faster rate than time-of-flight scanners using mechanical scanning, but their effective detection range may be shorter. In one implementation, the light source and detector may be optically matched to each other.
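As a brief illustration of the phase-based formula above, the following Python sketch converts a measured phase offset into a distance; the modulation frequency and phase offset are hypothetical example values.

import math

# Sketch of the phase-based distance calculation:
# time of flight = phase offset / (2*pi * modulation frequency),
# and the one-way distance is half the round trip.
C_M_PER_S = 299_792_458.0

def distance_from_phase_m(phase_offset_rad: float, mod_freq_hz: float) -> float:
    time_of_flight_s = phase_offset_rad / (2.0 * math.pi * mod_freq_hz)
    return C_M_PER_S * time_of_flight_s / 2.0

# Hypothetical example: a phase offset of pi/2 at a 10 MHz modulation frequency.
print(f"{distance_from_phase_m(math.pi / 2, 10e6):.2f} m")  # ~3.75 m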
In some embodiments, depth sensor 700 may be a flash LiDAR. As described above, when flash LiDAR operates, the entire field of view is illuminated with a widely divergent laser beam in a single pulse. In scanning LiDAR (e.g., liDAR system 300 shown in FIG. 3), a collimated laser beam scanned by steering mechanism 340 irradiates a single point in the FOV at a time, and the beam is raster scanned to irradiate the FOV point-by-point and row-by-row. The flash LiDAR illumination method requires a different detection scheme than the scanning LiDAR illumination method. In both scanning and flash LiDAR systems, a light detector and a time-of-flight engine are used to collect and process data related to the three-dimensional position and intensity of return light incident on the light detector in each frame.
In scanning LiDAR, the light detector may comprise a spot sensor, while in flash LiDAR, the light detector comprises a one-or two-dimensional sensor array, each pixel in which collects three-dimensional position and intensity information. In both cases, the depth information is calculated using a time-of-flight engine based on the emitted laser pulses and the return light (i.e., the time it takes each laser pulse to hit the target object and return to the sensor). The result is a point cloud comprising distance information of the target object. In contrast to scanning LiDAR, flash LiDAR is particularly advantageous when the sensor, FOV, or both are moving, because the entire FOV is illuminated at the same time.
In some embodiments, the depth sensor 700 may be an indirect time-of-flight (iToF) sensor that measures the distance of a target object using the iToF method. The iToF method measures distance by collecting return light and determining the phase offset between the emitted light and the return light. The iToF method is particularly effective for high-speed, high-resolution 3D imaging of objects located at short and long distances. An indirect ToF-based depth sensor emits continuously modulated light and measures the phase of the return light to calculate the distance to the target object.
As shown in fig. 7, in some embodiments, a light source 710 emits a light beam to an emitter 720 that provides light to illuminate the FOV. The emitter 720 may include one or more optical structures configured to distribute the light beam to the FOV. In some embodiments, the light source 710 may directly illuminate the FOV, and thus the depth sensor 700 may not include the emitter 720. Fig. 8 is a block diagram illustrating an exemplary depth sensor 800 according to some embodiments. Depth sensor 800 may be used to implement depth sensor 700 shown in fig. 7. Referring to fig. 8, in this example, a depth sensor 800 includes a VCSEL laser array 810 as a light source, a SPAD array 830 as one or more light detectors, a time of flight engine 850, transmit optics 820, and receive optics 840. Other components are omitted from fig. 8 for simplicity. It should be appreciated that the VCSEL laser array 810 and SPAD array 830 are for illustration purposes, and that other types of light sources and detectors may be used in the depth sensor 800.
As shown in fig. 8, in one embodiment, VCSEL laser array 810 emits a laser beam 832 to emission optics 820, which may include one or more of a lens, a lens group, a mirror, a prism, a microlens, a diffuser, or any other optical device. The emission optics 820 may include optical structures configured to receive the light beam emitted from the VCSEL laser array 810 and transmit the light beam as a transmitted light beam 832 to the FOV. The transmitting optic 820 may be part of the transmitter 720 shown in fig. 7. Examples of optical structures are described in more detail below. In one example, the receiving optics 840 may include a collection lens or lens group, an optical fiber or array of optical fibers, an optical filter, one or more converging lenses, one or more beamsplitters or other light splitting devices, or a combination thereof, for collecting and directing the return light 852 to the SPAD array 830 in the depth sensor 800. As described above, SPAD array 830 may include highly sensitive photodetectors configured to convert photons detected in return light 852 into electrical signals. The electrical signals may be provided to a time-of-flight engine 850 for processing. For example, the time-of-flight engine 850 may use time and/or phase information associated with the return light 852 and time and/or phase associated with the transmitted light beam 832 (or reference light beam) to calculate the distance of the target object 870. The time-of-flight engine 850 may be part of the control circuit 750 (or the control circuit 350) described above. It may include one or more processors and programs for calculating the distance of the target object 870 based on the dToF method and/or iToF method described above. In one embodiment, depth sensor 800 is a flash LiDAR or iToF sensor.
Fig. 9 illustrates an exemplary depth sensor 900 that provides an unevenly distributed light beam in the vertical direction of the FOV, according to some embodiments. In one example, the depth sensor 900 may include a light source 902, an optical structure 904, and a receiver 906. The light source 902 may be substantially the same as or similar to the light source 710 or the VCSEL laser array 810. The optical structure 904 may include optical components such as one or more optical diffusers, microlens arrays, and/or other optical components (e.g., lenses, mirrors, prisms, etc.). The optical structure 904 may be used to form the above-described emission optics 820, and will be described in more detail below. Receiver 906 can include receiving optics (e.g., substantially the same as or similar to receiving optics 840) and a light detector (e.g., a SPAD array). The receiver 906 is configured to receive and detect return light formed by scattering or reflecting the transmitted beam by objects in the FOV.
As shown in fig. 9, at least one of the one or more light sources 902 or the one or more optical structures 904 is configured to unevenly distribute the plurality of light beams in a vertical field of view (FOV) such that the vertical FOV includes dense and sparse regions. The dense region of vertical FOV has a higher beam density than the sparse region of vertical FOV.
For example, fig. 9 illustrates that the light beam provided by light source 902 and/or optical structure 904 includes light beams 932 and 942. Beam 932 is more densely distributed than beam 942. The light beam 932 is used to illuminate and detect objects located in the forward direction, which are within a vertical angle range of at least one of, for example, about -5 degrees to 0 degrees or -5 degrees to +5 degrees. One such object 970 is shown in fig. 9. The light beam 942 is used to illuminate and detect objects located in the forward direction, which are in the vertical angle range of at least one of, for example, -90 degrees to -5 degrees or +5 degrees to +90 degrees. The sparse region shown in fig. 9 is a portion of the area covered by beam 942. For example, FIG. 9 does not show the complete -90 degree to -5 degree vertical angle range and the +5 degree to +90 degree range, which are also part of the sparse region. The vertical FOV covered by both beam 932 and beam 942 may thus be from -90 degrees to +90 degrees.
As shown in fig. 9, the light beams are unevenly distributed in at least a portion of the vertical angular range of the FOV. In the dense region (e.g., -5 degrees to 0 degrees, or -5 degrees to +5 degrees), the beams are dense. Within the sparse region (e.g., -90 degrees to -5 degrees, or +5 degrees to +90 degrees), the beams are sparser (i.e., less dense) than the beams in the dense region. The non-uniform distribution of the light beams optimizes the light distribution and object detection in different detection ranges with a satisfactory resolution. The optimization of the detection range with non-uniform light distribution is described in more detail using fig. 10.
Fig. 10 is a block diagram illustrating a change in distance detection requirements according to transmitted light angle in a vertical FOV according to some embodiments. As shown in fig. 10, two exemplary depth sensors 1000A and 1000B are mounted on a vehicle 1090. The depth sensor 1000A or 1000B may be implemented using any of the depth sensors 700, 800, and 900 described above. Fig. 10 uses a depth sensor 1000B as an illustration of the distance detection requirement. The depth sensor 1000B is mounted on the vehicle 1090 at a height above the road surface 1002 of about, for example, 0.84m to 1.1 m. As shown in fig. 10, if the beam is transmitted at a vertical angle of 0 degrees (or in the range of about -5 degrees to +5 degrees), the detection range of the beam may reach a long distance, for example, 50m to 150m. Such a 0-degree vertical angle beam propagates in a direction substantially parallel to the road surface 1002. The vertical angle of the light beam emitted by the depth sensor refers to the angle of the light beam in the vertical direction (e.g., the angle between the light beam and a line parallel to the road surface or the mounting surface of the depth sensor). The vertical direction is generally perpendicular to the road surface 1002 and the horizontal direction is generally parallel to the road surface 1002.
As further shown in fig. 10, if the beam is transmitted at a vertical angle of -5 degrees, the detection range of the beam is significantly reduced to about 11.4m in the horizontal direction before the beam reaches the road surface. Similarly, if the beam is transmitted at -15, -30, -45, and -60 degrees, the detection range is further reduced to 3.9m, 2m, 1.4m, and 1.2m (corresponding to 3.7m, 1.7m, 1.0m, and 0.6m in the horizontal direction), respectively, before the beam reaches the road surface 1002. As can be seen from fig. 10, as the vertical angle of the light beam emitted from the depth sensor 1000B becomes larger (in the negative vertical direction), the detection range of the depth sensor 1000B in the horizontal direction becomes rapidly shorter. In other words, as the vertical angle of the beam becomes greater (e.g., from -5 degrees to -90 degrees), the distance that the beam travels before reaching the road surface decreases. Thus, light beams emitted at certain vertical angles (e.g., between -5 degrees and -90 degrees) cannot, and need not, be used for long-range detection, as compared to light beams emitted between -5 degrees and +5 degrees.
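The distances above follow directly from the mounting geometry. The following Python sketch assumes, purely for illustration, a 1.0m mounting height (within the 0.84m to 1.1m range mentioned above) and computes where a downward-pointing beam intersects the road surface.

import math

MOUNT_HEIGHT_M = 1.0  # assumed mounting height above the road surface (illustrative)

def road_intersection(vertical_angle_deg: float) -> tuple[float, float]:
    # Returns (slant distance along the beam, horizontal distance)
    # at which a downward-pointing beam reaches the road surface.
    a = math.radians(abs(vertical_angle_deg))
    return MOUNT_HEIGHT_M / math.sin(a), MOUNT_HEIGHT_M / math.tan(a)

for angle_deg in (-5, -15, -30, -45, -60):
    slant_m, horizontal_m = road_intersection(angle_deg)
    print(f"{angle_deg:>4} deg: slant ~{slant_m:.1f} m, horizontal ~{horizontal_m:.1f} m")
# -5 deg: slant ~11.5 m, horizontal ~11.4 m
# -15 deg: slant ~3.9 m, horizontal ~3.7 m, and so on.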
Although not shown in fig. 10, light beams emitted at vertical angles between +5 and +90 degrees also cannot be used for long-range object detection, because these light beams would typically be directed toward the sky, producing no or minimal reflection.
Referring back to fig. 9, light beams emitted at different vertical angles may be used to detect objects located within different detection ranges. For example, the light beam 932 transmitted in the dense region is directed to detect objects located in a first detection range, and the light beam 942 transmitted in the sparse region is directed to detect objects located in a second detection range. The first detection range may include, for example, distances of 50 meters or more measured from the depth sensor 900. The second detection range may include, for example, distances of 0 meters to 20 meters measured from the depth sensor 900. The vertical angle at which the beam 932 is transmitted may be in the range of, for example, -5 degrees to +5 degrees, and the vertical angle at which beam 942 is emitted may be, for example, from -5 degrees to -90 degrees. As described above, and as shown in fig. 9, the light beam 932 is used to detect an object (e.g., object 970) that may be located a significant distance (e.g., greater than 50 meters) from the depth sensor 900. In order to detect such objects that are far away from the depth sensor 900, the sensing resolution of the depth sensor 900 may need to be high. Resolution refers to the level of detail that depth sensor 900 can capture. For LiDAR systems (or other depth sensors), resolution may be expressed in terms of the number of points in a point cloud per unit of space, or the number of pixels in a unit area. Thus, the higher the number of points or pixels, the higher the resolution of the depth sensor.
When an object (e.g., object 970) is positioned away from the depth sensor 900 (e.g., 50 meters to 200 meters or more), the depth sensor 900 needs to have a high resolution to detect the object because the object appears very small from a distance. Thus, if the light beam used to detect such a distant object is sparse, the object may not be detected or may have low resolution detection because no or few light beams may hit the object. Thus, there may be no return light, or there may be little return light. Thus, in order to detect such distant objects, the depth sensor 900 needs to emit a light beam having a high beam density. Beam density refers to the number of beams per unit vertical angle (e.g., 1 degree) or per unit area/volume. The higher the beam density, the greater the number of beams per vertical angle or per area/volume. In fig. 9, the beam density in the dense area is greater than that in the sparse area. The absolute value of the beam density in the dense and sparse areas may depend on the detection distance and the object size. As shown in fig. 9, the light beam 932 has a high beam density and thus can be used to detect objects located in a far detection range (e.g., 50 meters or more). In contrast, the depth sensor 900 need not have a high beam density to detect objects (e.g., object 972) located in its vicinity with good resolution. As shown in fig. 9, even if beams 942 have a lower beam density than beams 932, they can be used to detect objects located near depth sensor 900 (e.g., within 0 meters to 20 meters). This is because an object located near the depth sensor 900 appears large, and thus, even if the light beam 942 is sparse, the object can be detected.
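One way to see why far objects demand a denser beam pattern is to compare the angle an object subtends at different distances. The following Python sketch, using a hypothetical 0.5-meter-wide object, estimates the minimum number of beams per degree needed so that at least one beam hits the object.

import math

def angular_size_deg(object_size_m: float, distance_m: float) -> float:
    # Angle subtended by an object of the given size at the given distance.
    return math.degrees(2.0 * math.atan(object_size_m / (2.0 * distance_m)))

def min_beams_per_degree(object_size_m: float, distance_m: float) -> float:
    # Beam density needed so the object subtends at least one beam spacing.
    return 1.0 / angular_size_deg(object_size_m, distance_m)

for distance_m in (10, 50, 150):
    print(f"0.5 m object at {distance_m:>3} m: "
          f"{angular_size_deg(0.5, distance_m):.2f} deg, "
          f">= {min_beams_per_degree(0.5, distance_m):.1f} beams/deg")
# The far object subtends a much smaller angle, so the dense region must
# pack many more beams per degree than the sparse region.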
Fig. 9 and 10 show that in order to detect an object located at a far detection range corresponding to a dense region in a vertical angle range, a dense light beam should be used, and in order to detect an object located at a near detection range corresponding to a sparse region in a vertical angle range, a sparse light beam may be used. Accordingly, the depth sensor 900 may be configured to emit light beams unevenly distributed in the vertical field of view such that the vertical FOV has a dense area and a sparse area, wherein the dense area of the vertical FOV has a higher beam density than the sparse area of the vertical FOV. The non-uniform distribution of the light beam along the vertical FOV can optimize the detection range and reduce the power consumption while still meeting the detection resolution requirements of objects in different detection ranges. Depth sensors configured to provide an uneven distribution of the light beam improve overall system efficiency and performance compared to using an evenly distributed light beam.
As described above, the light beam provided by the depth sensor (e.g., sensor 800, 900) may be provided directly by one or more light sources, or by a combination of light sources and one or more optical structures. Fig. 11 is a block diagram illustrating an example of providing non-uniform distribution of light beams by non-uniformly placing VCSEL elements in a VCSEL laser array 1110 according to some embodiments. In this embodiment, as shown in fig. 11, no optical structure may be required to provide an uneven distribution of the light beam. The VCSEL laser array 1110 includes a plurality of VCSEL elements 1115A-1115N and 1117A-1117M. These VCSEL elements can form an array (e.g., a 1D or 2D array, or matrix). VCSEL elements 1115A-1115N may be arranged close to each other (e.g., forming a densely arranged array at a predetermined distance) such that light beam 1132 emitted by VCSEL elements 1115A-1115N has a high beam density. Beam 1132 is thus distributed in a dense region of the vertical FOV. In contrast, the VCSEL elements 1117A-1117M may be sparsely arranged with respect to one another (e.g., arranged at another predetermined distance to form a sparsely arranged array). Thus, the light beams 1142 emitted by the elements 1117A-1117M are distributed in sparse areas of the vertical FOV. Each of the VCSEL elements 1115A-1115N and 1117A-1117M may also have a predetermined orientation such that they direct a corresponding beam of light at a corresponding vertical angle. For example, element 1117A may be tilted at some vertical angle such that its beam has a vertical angle of-5 degrees in the vertical FOV. The element 1117B may be tilted at another vertical angle such that its beam has a vertical angle of-10 degrees, and so on.
In the configuration shown in FIG. 11, by differently arranging VCSEL elements 1115A-1115N and 1117A-1117M, an uneven distribution of beams 1132 and 1142 may be obtained, where beam 1132 has a higher beam density than beam 1142. The VCSEL laser array 1110 may not require additional optical structures to provide non-uniform distribution. In some embodiments, the emission optics 1120 may be coupled to the VCSEL laser array 1110 to further shape the beam, or to further assist or fine tune the non-uniform distribution of the beam. Such emission optics 1120 may be, for example, a lens or a lens group. In one example, the emission optics 1120 may be a collimating lens.
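The following Python sketch illustrates, under assumed angular spacings, how vertical pointing angles could be assigned to emitter elements so that beams are dense between -5 and +5 degrees and sparse below -5 degrees, analogous to elements 1115A-1115N and 1117A-1117M; the 0.5-degree and 5-degree spacings are illustrative only, not values from the embodiment.

def uneven_vertical_angles(dense_step_deg: float = 0.5, sparse_step_deg: float = 5.0):
    # Dense region: -5 to +5 degrees; sparse region: -90 to -10 degrees (illustrative).
    dense = [-5.0 + i * dense_step_deg for i in range(int(10 / dense_step_deg) + 1)]
    sparse = [-90.0 + i * sparse_step_deg for i in range(int(85 / sparse_step_deg))]
    return sorted(dense + sparse)

angles = uneven_vertical_angles()
print(f"{len(angles)} beams: 0.5-degree spacing in the dense region, "
      f"5-degree spacing in the sparse region")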
Fig. 11 above illustrates that non-uniform distribution of the light beams is provided by a non-uniform arrangement of light source elements (e.g., VCSEL elements). Fig. 12 is a block diagram illustrating providing non-uniform distribution of a light beam through the use of an optical diffuser 1224, according to some embodiments. As shown in fig. 12, the depth sensor 1200 includes a light source 1220. The light source 1220 has an array of uniformly distributed elements (e.g., VCSEL elements). The light beams 1222 emitted by these elements of the light source 1220 are thus also evenly distributed. The optical diffuser 1224 forms an optical structure that may be used to create an uneven distribution of the light beams 1222, thereby forming unevenly distributed light beams 1226. The optical diffuser 1224 may be a device or element for scattering or diffusing light. Its primary purpose here is to create an uneven distribution of light by spreading at least a portion of the incident beams 1222 into a wider and lower-intensity illumination pattern. As shown in fig. 12, the optical diffuser 1224 may redirect some of the light beams more than others and/or reshape some of the light beams to create an uneven distribution of the light beams. An optical diffuser can diffuse light by disrupting the wavefront of the light and reducing its spatial coherence. Thus, the optical diffuser may introduce variations in the optical phase across different parts of the incident beam profile. The optical diffuser 1224 may be made from a variety of materials, including glass, plastic, and film. It may be fabricated with a specific pattern or texture that scatters the incident light. The complexity of these patterns varies from a simple roughened surface to a more complex microstructured design. The optical diffuser 1224 may thus include a surface having micro-optical structures configured to receive the uniformly distributed light beams and form an uneven distribution of the light beams. The formation of the non-uniform beam distribution can be precisely controlled by using micro-optical patterns. The optical diffuser 1224 may thus achieve a specific illumination effect and improve the illumination quality of the depth sensor. The choice of diffuser type and design depends on the specific requirements of the depth sensor, including the desired level of diffusion and the desired lighting effect.
Fig. 13 is a block diagram illustrating the provision of non-uniform distribution of light beams by using a semiconductor wafer 1323 with a microlens array 1324, according to some embodiments. Similar to the configuration in fig. 12, the depth sensor 1300 shown in fig. 13 includes a light source 1320 having an array of uniformly distributed elements (e.g., VCSEL elements). The light beams 1322 emitted by these elements of the light source 1320 are thus also uniformly distributed. The depth sensor 1300 shown in fig. 13 also includes a semiconductor wafer 1323 having a microlens array 1324. Microlens array 1324 is configured to unevenly distribute a plurality of light beams 1322 in a vertical FOV, forming unevenly distributed light beams 1326.
Semiconductor wafer 1323, also referred to simply as wafer 1323, is a thin, flat, generally circular slice of semiconductor material, such as silicon, that is used as a substrate for fabricating electrical and/or optical devices (e.g., microlens arrays). Wafer 1323 may be silicon-based (e.g., silicon carbide) or based on other semiconductor materials (e.g., gallium nitride). The semiconductor wafer 1323 is transparent to light beams 1322 of a particular wavelength or range of wavelengths, such that the light beams 1322 can pass through the wafer 1323 and into the microlens array 1324. In other words, the light beams 1322 may enter from the back side of the wafer 1323 and exit from the front side through the microlens array 1324. This configuration is also known as a back-lit technique. For example, a silicon-based wafer is transparent to a light beam having a wavelength of 905 nm. In some embodiments, elements of the light source 1320 (e.g., VCSEL elements) may also be disposed on one surface (e.g., the back surface) of the wafer 1323, and the microlens array 1324 may be disposed on the other surface (e.g., the front surface) of the wafer 1323. In this way, the depth sensor 1300 is highly integrated and can be very compact. In other embodiments, the elements of the light source 1320 may be separate and distinct from the wafer 1323.
The microlenses in array 1324 are lenses having very small dimensions, typically on the order of micrometers (µm) or even smaller. The microlenses can thus be much smaller than conventional lenses and can be easily arranged on a semiconductor wafer, making the entire sensor very compact. The microlenses in array 1324 can be made from a variety of materials, including glass, polymer, or semiconductor materials. The choice of material depends on the type of wafer 1323 and the specific optical requirements. As shown in fig. 13, the microlenses in array 1324 can shape and redirect the light beams 1322. A beam may or may not change its direction as it passes through a particular microlens in array 1324. For example, as the topmost beam 1322 passes through the microlens 1324A, the beam may maintain its direction. As the other beams pass through their respective microlenses 1324B-1324N, their directions can be changed to form one set of beams in the dense region of the vertical FOV and another set of beams in the sparse region of the vertical FOV.
Thus, in one embodiment, each microlens may be configured differently to bend its respective incident beam toward an intended direction (or corresponding vertical angle). For example, in fig. 13, the microlens 1324B is designed to bend the light beam slightly downward, so the output light beam from the microlens 1324B is directed at a vertical angle of, for example, -5 degrees. In contrast, the microlens 1324N is designed to bend the beam significantly downward, such that the output beam from the microlens 1324N is oriented at a vertical angle of, for example, -45 degrees.
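One common way to estimate how much a microlens must be offset (decentered) relative to its incident beam to achieve a given deflection is the thin-lens relation, deflection ≈ arctan(offset / focal length). The text does not specify how microlenses 1324B and 1324N achieve their bending, so the sketch below is only a paraxial back-of-the-envelope estimate under an assumed focal length, using the example output angles of -5 and -45 degrees.

```python
# Illustrative sketch: thin-lens (paraxial) estimate of the lateral offset between
# a collimated beam and a microlens axis that yields a target deflection angle.
# The focal length is an assumed example value; the -5 and -45 degree targets
# match the example angles discussed for microlenses 1324B and 1324N.
import math

def offset_for_deflection(target_deg: float, focal_length_um: float = 100.0) -> float:
    """Lateral offset (um) so that the beam exits at target_deg,
    using deflection = arctan(offset / focal_length)."""
    return focal_length_um * math.tan(math.radians(target_deg))

for target in (-5.0, -45.0):
    d = offset_for_deflection(target)
    print(f"target {target:6.1f} deg -> offset ~ {d:7.1f} um "
          f"(paraxial estimate; less accurate at large angles)")
```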
While fig. 13 illustrates each of the microlenses 1324A-1324N being configured to distribute one of the light beams 1322, it should be understood that in other embodiments, a subset of the microlenses of array 1324 may be configured to distribute one light beam. For example, a set of two, three, four, or more microlenses 1324 can be arranged together to receive one beam 1322 and redistribute it to provide one output beam 1326. These microlenses may form sub-arrays (1D or 2D) or sub-groups placed in positions such that each sub-array or sub-group of microlenses can receive an incident light beam.
Microlens array 1324 can be fabricated on semiconductor wafer 1323 via a variety of semiconductor processing techniques. In one example, the surface of the semiconductor wafer 1323 may be processed to form microlens array 1324 by removing material from the surface to form the microlenses. Removal of material (e.g., silicon, oxide, metal, etc.) from wafer 1323 may be performed via photolithography (e.g., for patterning), chemical etching (e.g., dry etching or wet etching), and/or precision machining (e.g., chemical mechanical polishing). In another example, the surface of the semiconductor wafer 1323 is processed to form microlens array 1324 by depositing material onto the surface to form the microlenses. The deposited material may include, for example, a polymer material, a silicon material, a glass material, a plastic material, and the like. Deposition techniques may include physical vapor deposition (PVD), chemical vapor deposition (CVD), atomic layer deposition (ALD), electrochemical deposition, spin coating, sputtering, chemical solution deposition, and the like. As one example, tiny droplets of polymer may be deposited onto the surface of wafer 1323 and formed into microlenses through a subsequent thermal process.
Fig. 14 illustrates a method 1400 of unevenly distributing a plurality of light beams using a depth sensor, in accordance with some embodiments. The depth sensor does not include a mechanically movable part for scanning the light beams. Method 1400 includes step 1402, in which one or more light sources emit a plurality of light beams. In step 1404, the plurality of light beams are received by one or more optical structures coupled to the one or more light sources. In step 1406, at least one of the one or more light sources or the one or more optical structures unevenly distributes the plurality of light beams in a vertical field of view (FOV) such that the vertical FOV includes dense and sparse regions. The dense region of the vertical FOV has a higher beam density than the sparse region of the vertical FOV. In some embodiments, when the light beams are unevenly distributed, the light beams in the dense region of the vertical FOV are directed to detect objects located in a first detection range, and the light beams in the sparse region of the vertical FOV are directed to detect objects located in a second detection range. The first detection range is greater than the second detection range. In some examples, the first detection range includes distances of 50 meters or more from the depth sensor, and the second detection range includes distances of 0 meters to 20 meters from the depth sensor. In some examples, the dense region of the vertical FOV corresponds to a vertical angle range of -5 degrees to 0 degrees or -5 degrees to +5 degrees, and the sparse region of the vertical FOV corresponds to a vertical angle range of at least one of -90 degrees to -5 degrees or +5 degrees to +90 degrees.
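To make step 1406 and the example numbers above concrete, the sketch below classifies a handful of beam vertical angles into the dense or sparse region and reports the associated example detection range. The input angles are arbitrary sample values; the region boundary (-5 to 0 degrees) and the 50-meter and 0-to-20-meter ranges simply reuse the examples given in this paragraph.

```python
# Illustrative sketch: classify beam vertical angles into the dense and sparse
# regions of the vertical FOV and report the associated example detection range.
# The region boundaries and ranges reuse the example values in the text above;
# the input angles are arbitrary example data.

DENSE_REGION = (-5.0, 0.0)          # degrees, example dense region
DENSE_RANGE = "50 m or more"        # example detection range for dense region
SPARSE_RANGE = "0 m to 20 m"        # example detection range for sparse region

def classify(angle_deg: float) -> str:
    lo, hi = DENSE_REGION
    if lo <= angle_deg <= hi:
        return f"dense region, detection range {DENSE_RANGE}"
    return f"sparse region, detection range {SPARSE_RANGE}"

if __name__ == "__main__":
    for a in (-2.5, -0.5, -10.0, -45.0, -80.0):
        print(f"beam at {a:6.1f} deg -> {classify(a)}")
```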
The foregoing description is to be understood as being in all respects illustrative and exemplary, rather than limiting, and the scope of the invention disclosed herein is not to be determined from the description, but rather from the claims as interpreted according to the full breadth permitted by the patent laws. It is to be understood that the embodiments shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention. Various other combinations of features may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
