SPECIFICATION

Optical radar system for vehicles

Background of the invention

This invention relates to radar systems for vehicles, and more particularly to an optical radar system which locates and profiles objects in a road along which the vehicle travels.
One conventional optical radar system uses a pulse generator which produces a pulse which activates a light emitting element to transmit a search beam toward objects in the road along which the vehicle is travelling, and an optical receiver which receives light reflected by the objects and transduces the light into a corresponding electrical signal. The pulse generator also produces a trigger signal simultaneously with the production of the pulse signal. The transduced receiver signal and the trigger signal are inputted to a distance determiner which derives the distances to the objects from the time lag between the trigger signal and the received signal in a well-known manner.
With such radar systems, however, only the presence of the objects and the distances to the objects can be sensed. It is impossible to determine the azimuths and profiles of the objects, so that it is impossible to determine with certainty whether the objects are obstacles which may obstruct the detecting vehicle.
Summary of the invention

It is an object of this invention to provide an optical radar system which can determine the ranges, azimuths and profiles of detected objects.
This invention provides an optical radar system which comprises an optical transmitter for transmitting a pulsed search beam toward objects present in its detection region, and an optical receiver for receiving light reflected by the objects. The receiver includes a photoelectrical transducer for transducing the light into a corresponding electrical signal, the transducer including a plurality of photodetectors arranged in a matrix. A distance determiner determines the distances to the objects on the basis of the propagation time of the search beam from transmission of the search beam to reception of the light reflected by the objects. The azimuths of the objects with respect to the orientation of the detecting vehicle are derived from the positions of photodetectors within the matrix of said transducer which have received the reflected light. In addition, the contours of the objects can be profiled on the basis of the mutual positional relationships among photodetectors of said transducer which have received the reflected light.
The above and other objects, features and advantages of this invention will be made apparent in the following description of a preferred embodiment thereof, taken in conjunction with the accompanying drawings.
Brief description of the drawings

In the drawings:

Figure 1 is a block diagram of a prior art optical radar system with the transmitter and receiver being shown in cross-section;
Figure 2 is a block diagram of a distance determiner of Figure 1;
Figure 3 is a timing chart of the main output waveforms of the system of Figure 1;
Figure 4 illustrates the basic concept of this invention;
Figure 5 is a diagram, similar to Figure 1, of one embodiment of an optical radar system according to this invention;
Figure 6 is a timing chart of the main output waveforms of the system of Figure 5;
Figure 7 is a schematic circuit diagram of a pulse drive unit of Figure 5;
Figure 8 is a block diagram of a matrix control unit and a photoelectrical transducer of Figure 5;
Figure 9 illustrates an array of photodiodes which constitute the photoelectrical transducer, and a group of analog switches connected thereto;
Figure 10 is a schematic circuit diagram illustrating the identification of a single photodiode by means of column and row analog switches;
Figure 11 is a diagram illustrating the operation of the matrix control unit;
Figure 12 is a timing chart of the relationship between the transmission timing signal and gate signals;
Figure 13 is a timing chart of a second timing signal and a second set of gate signals;
Figure 14 illustrates the range of detection of the radar system according to this invention;
Figure 15 illustrates phantom planes, one for each of the objects present in the range of detection;
Figure 16 illustrates reception of the planar image of an object by the radar system according to this invention;
Figure 17 shows the image of an object projected on the photoelectrical transducer of the receiver;
Figure 18 illustrates the relation between the detection region and the memory area of the radar system according to this invention;
Figures 19, 20 and 21 illustrate the data storage in the memory area prompted by the detected objects of Figure 14;
Figure 22 illustrates the principle of determining the azimuths of objects;
Figure 23 illustrates another memory area;
Figure 24 is a flowchart of a routine for storing the determined distance data into the memory area of Figure 23;
Figure 25 is a flowchart of a program for preventing collision on the basis of the data stored in the memory area of Figure 23;
Figure 26 is a diagram of a region monitored for possible collision between obstacles and the detecting vehicle;
Figure 27 is a perspective view of an array of photoelectrical cells which receive light reflected by the objects; and
Figure 28 is a view, similar to Figure 27, of an array of photoelectrical cells.
Detailed description of the invention

In order to facilitate understanding of this invention, the prior art optical radar system mentioned above will be described in more detail. The radar system is mounted on the vehicle and includes an optical transmitter 1, an optical receiver 2 and a control unit 3. Transmitter 1 includes a light emitting element 4, such as a semiconductor laser, which is supplied with a drive signal A having a period Tp and a pulse width Tw, such as shown in Figure 3(a), by a pulse generator 6 of control unit 3 and which produces a coherent optical pulse Lt, i.e. laser light, having a wavelength λ, such as shown in Figure 3(b), pulse-modulated by drive signal A. Optical pulse Lt is shaped by a convex lens 9 into a search beam having an angle of divergence θt and then transmitted toward objects ahead of the vehicle (Figures 1 to 3).
The light Lr reflected by the objects is weak as shown in Figure 3(c), even after being concentrated by a lens 10 of optical receiver 2.
Background light such as solar light, artificial illumination, etc., is filtered out by an interference filter 11.
The filtered light is supplied to an electro-optical transducer 5. Then, transducer 5 produces an electrical signal B as shown in Figure 3(d) which is supplied to control unit 3. Signal B is amplified and shaped by a pulse detector 7 and then supplied to distance determiner 8. In shaping signal B, the leading edge of signal B is detected and shaped into a trigger signal D, as shown in Figure 3(f).
Distance determiner 8 is supplied with a trigger signal C, as shown in Figure 3(e), occurring in synchronism with drive signal A, in addition to trigger signal D. Thus, determiner 8 determines the time lag T between trigger signal D and trigger signal C and outputs a signal E indicative of the corresponding distance travelled at the speed of light.
Distance determiner 8 includes a flip-flop 81 which is set and reset by trigger signal C and reflection pulse signal D, a high-frequency oscillator 82, an AND gate 83 and a high-speed counter 84, as shown in Figure 2.
The output F of flip-flop 81, as shown in Figure 3(g), opens AND gate 83 for the duration of time lag T, so that a signal H (Figure 3(i)), containing the number of pulses of the output G of oscillator 82 (Figure 3(h)) corresponding to time lag T, is supplied to counter 84. Thus, counter 84 outputs a signal E indicative of the distance proportional to time lag T. Signal E is supplied to an arithmetic unit of a microcomputer, for example, which calculates the distance R to the object by R = (speed of light × T)/2.
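As an illustration of this relationship only (it is not part of the original disclosure), the following sketch converts a pulse count from a counter such as counter 84 into a range value; the function name and the example clock period are assumptions.

```python
# Illustrative sketch: converting the pulse count gated into counter 84 during
# time lag T into a range, assuming the oscillator period Th is known.
C = 3.0e8  # approximate speed of light in m/s

def range_from_count(pulse_count: int, clock_period_s: float) -> float:
    """R = (speed of light * T) / 2, with T = pulse_count * clock_period_s."""
    time_lag = pulse_count * clock_period_s   # round-trip time T
    return C * time_lag / 2.0                 # halve for the one-way distance

# Example: 200 pulses of a 10 ns clock give T = 2 us, i.e. R = 300 m.
print(range_from_count(200, 10e-9))
```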
The basic concept of an optical radar system according to this invention will be described with respect to Figure 4.
An optical transmitter transmits a search pulse beam toward objects. An optical receiver receives light reflected by the objects. The receiver includes an electro-optical transducer which in turn includes a plurality of photoelectrical cells arrayed in a matrix.
The ranging of the objects is performed on the basis of the time lag between transmission of the search beam and reception of light reflected by the objects.
Determination of the azimuths of the objects is performed on the basis of the positions of photoelectrical cells which receive the reflected light.
A profile determination of the objects is performed on the basis of the mutual positions of photoelectrical cells of the transducer which receive the reflected light.
One embodiment of the radar system according to this invention will be described in more detail with respect to Figure 5. Transmitter 21 includes a laser diode 24, a pulse drive unit 23 and a convex lens 25. When pulse drive unit 23 is supplied with a trigger signal S2 having a period Tp (= 10 µs), a pulse width Tt (= 200 ns) and a peak value Ip (= 40 A) (Figure 6), laser diode 24 emits a corresponding laser light pulse having a pulse width Tw (> 40 ns), as shown in Figure 6, and a peak value Pp of 10 W, which is transmitted forward via lens 25 as a search beam having an angle of divergence θt (= 100 mrad).
Pulse drive unit 23 has a structure as shown in Figure 7 and operates as follows: when a transistor Q1 is triggered by signal S2, it becomes conductive and allows the electrical charge stored across a capacitor C1 (having a capacitance of about 0.01 µF) at a high voltage Va (of up to 200 volts) to be supplied to laser diode 24.
A clock pulse generator 33 supplies a clock signal S1 having a period Tp, as shown in Figure 6, to a monostable multivibrator 34 which produces trigger signal S2 in response to the rising edge of signal S1.
Clock pulse generator 33 also produces a timing signal S1, the logical inverse of the clock signal, which is supplied to matrix control unit 30, a distance determiner 35 and a microprocessor 37 via an interface 36.
Optical receiver 22 includes a convex lens 27 having a diameter equal to or greater than 70 mm, fixed to an opening of a hollow cylindrical housing, which collects the part Lr of search beam Lt reflected by the objects; a filter 28 which filters background light out of light Lr; and a photoelectric transducer 29, positioned at the focal point of lens 27, which transduces the received light Lr into a corresponding electrical signal. Receiver 22 further includes matrix control unit 30 which controls transducer 29, and a preamplifier 31 which amplifies the electrical signal by about 30 dB at the output side of transducer 29. The signal S3 (Figure 6) from preamplifier 31 is amplified to a predetermined level (to a peak value of about 5 volts) by a wide-band amplifier 32, and supplied as a timing signal S4 having a pulse width Te (≤ 50 ns) (Figure 6) to distance determiner 35, which has the same structure as the prior art determiner shown in Figure 2. In this case, the signals S1, S4 and Dr of Figure 5 correspond to the signals C, D and E, respectively, of Figure 2. The high-frequency oscillator of the distance determiner, shown at 82 in Figure 2, generates a clock signal H having a period Th of 10 ns, i.e. a frequency of 100 MHz. The fact that the period of the clock signal is 10 ns results in a measurement accuracy or resolution of ±1.5 m in this particular embodiment.
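The short sketch below (illustrative only; the truncation to whole clock periods is an assumed behaviour) shows how the 10 ns clock leads to the stated ±1.5 m resolution, one clock period corresponding to c·Th/2 of range.

```python
# Sketch of the range quantization implied by the 10 ns counter clock.
C = 3.0e8            # approximate speed of light in m/s
TH = 10e-9           # clock period Th = 10 ns (100 MHz)
STEP = C * TH / 2.0  # one clock period corresponds to 1.5 m of range

def quantized_range_m(true_range_m: float) -> float:
    """The counter registers whole clock periods only, so the reported range moves in 1.5 m steps."""
    counts = int((2.0 * true_range_m / C) / TH)   # whole periods counted during the round trip
    return counts * STEP

print(STEP)                     # 1.5
print(quantized_range_m(30.2))  # 30.0
```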
Microprocessor 37, which includes an I/O interface 36, a ROM 39 and a RAM 38, each connected to a CPU 40, uses the trigger signal S1, the distance data Dr and signals from the matrix control unit 30, to be described below, to derive the range, bearing and profile of objects within a detection region of the search beam to be described below. If the range and bearing of detected objects fall within specified ranges, the microprocessor 37 activates an external alarm device 41 to notify the driver that an obstacle lies in the path of the vehicle.
Figure 8 illustrates transducer 29 and matrix control unit 30 in more detail. Transducer 29 includes a matrix of 150 photodiodes arrayed in 15 columns and 10 rows, each of which corresponds to a unique combination of one of the column analog switches SA and one of the row analog switches SB. The switches SA are controlled from matrix control unit 30 by the combination of a hexadecimal counter 303 and a hexadecimal decoder 304 and, similarly, the switches SB are controlled by a decade counter 301 and a decimal decoder 302, as will be described in more detail later.
The switches SA are supplied in parallel with a bias voltage Vb. The switches SB are grounded in parallel via a load resistor Rd. The junction between resistor Rd and the common connection to switches SB is tapped to provide a signal Sd, which is the input of preamplifier 31.
Figure 9 illustrates the actual connections among the 150 photodiodes and the analog switches SA and SB of transducer 29. Each of the analog switches SA applies the bias voltage Vb to the anodes of the photodiodes in one of the columns of the matrix in response to the corresponding output g1-g15 (gi) of hexadecimal decoder 304. On the other hand, each of the analog switches SB grounds the cathodes of the photodiodes in one of the rows of the matrix in response to the corresponding output g21-g30 (gj) of decimal decoder 302.
Decade counter 301 counts the pulses of timing signal S1 whereas hexadecimal counter 303 is incremented by gate signal g21, which is the "10" output of decimal decoder 302. Counter 303 is reset by g16, which is the "16" output of hexadecimal decoder 304, so that it functions essentially as a penta-decimal counter.
Figure 10 illustrates the details of the circuitry connecting one of the analog switches SA, one of the analog switches SB and the photodiode selected by the selected switches SA and SB. The switches SA, SB consist predominantly of single FETs, which are connected with the same polarity on opposite sides of the photodiode. The column gate signal gi is supplied to the gate electrode of the SA FET via an RC circuit (C1, r). The row gate signal gj is connected to the gate of the SB FET in parallel with a bias resistor r3.
When the reflected light strikes a photodiode, the photodiode produces an electric current Id corresponding to the intensity of the incident light. Current Id flows through load resistor Rd via analog switch SB, thereby outputting a signal indicative of the reflected light.
The resistors r1-r4 used in the analog switches SA and SB each have a resistance of about 200 kΩ; capacitor C1 has a capacitance of about 1 pF; the FETs have a threshold of about 1 volt; and bias voltage Vb should be tens of volts when the photodiodes are of the ordinary type whereas Vb should be hundreds of volts when the photodiodes are of the avalanche type.
Figure 11 illustrates the naming convention for the photodiode array. Each diode is identified in the format "PD(i,j)", where i and j are the hexadecimal and decimal switch numbers, respectively.
Figure 12 is a timing chart of timing signal S1 and the outputs g21-g30 of decimal decoder 302, whereas Figure 13 is a timing chart of gate signal g21 and the outputs g1-g15 of hexadecimal decoder 304. As will be seen from the timing charts of Figures 12 and 13, the 150 photodiodes of transducer 29 are turned on in sequence in response to timing signal S1, i.e. at a period of Tp (= 10 µs): PD(1,1) → PD(1,2) → PD(1,3) → . . . → PD(1,10) → PD(2,1) → . . . → PD(15,10) → PD(1,1) → . . .. That is, matrix control unit 30 cycles through all of the photodiodes every 150 × Tp, i.e. every 1.5 ms.
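This scanning order can be modelled in software as follows; the sketch is an illustrative model of the counter and decoder behaviour described above (the generator name is an assumption), not a description of the actual circuit.

```python
# Illustrative model of the scan produced by matrix control unit 30: the decade
# counter advances on each timing pulse S1 and selects a row switch SB1-SB10;
# each time it wraps, the 15-state counter advances and selects the next
# column switch SA1-SA15.
def matrix_scan(pulses: int):
    """Yield the (column i, row j) of the photodiode PD(i, j) enabled on each S1 pulse."""
    column = 1                        # state of the penta-decimal counter (1..15)
    for n in range(pulses):
        row = (n % 10) + 1            # state of the decade counter (1..10)
        yield column, row             # PD(column, row) is biased and read out
        if row == 10:                 # the decade counter wraps ...
            column = column % 15 + 1  # ... so the column counter advances (and resets after 15)

# First selections: (1,1), (1,2), ..., (1,10), (2,1), ...; all 150 diodes repeat every 150 x Tp.
print(list(matrix_scan(12)))
```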
As shown in Figure 12, the rising edge of gate signals g21-g30 is offset by half of its period relative to the timing of search light Lt in order to eliminate the influence of the response delay time (≤ 1 µs) required when each of the analog switches SA1-SA15 and SB1-SB10 is switched on or off.
It is assumed that, as shown in Figure 14, a vehicle (referred to as "our own vehicle" below) with the radar system being mounted on its front end is travelling along a straight road and that there are another vehicle J1 at a distance R1, a telephone booth J2 at a distance R2 and an omnibus J3 at a distance R3 in front of our own vehicle.
Search beam Lt is transmitted in the form of a pulse covering a diverging frusto-conical volume in front of our vehicle. The axial length ℓ of each pulse volume is constant throughout its propagation path and is equal to the speed of light times the trigger pulse width Tt.
The light reception field of optical receiver 22, or the detection area of the radar system, is determined by the focal distance f of lens 27 and the size, shape and sensitivity of transducer 29, i.e. the pyramid defined by the projection through the center O of convex lens 27 of the corners of transducer 29. The optical axis Z of lens 27 is designed to intersect the optical axis Zt of optical transmitter 21 at a great distance. The detection area and the area searched by beam Lt are designed to be substantially commensurate.
As shown in Figure 15, the optical axis Z of receiver 22 is designated the Z-axis of a three-dimensionalCartesian coordinate system, the X-axis of which is selected to be parallel to the plane of the road surface, i.e.
in the widthwise direction of the vehicle, and the Y-axis of which is selected to be normal to the road surface.
A phantom plane parallel to the plane of transducer 29 and including the rear end plane of vehicle J1 is designated Pa. Similarly, a phantom plane including the near surface of telephone booth J2 is designated Pb, and a phantom plane including the rear end surface of bus J3 is designated Pc.
Since the distance Ra from convex lens 27 of receiver 22 to vehicle J1 is much greater than the focal distance f of lens 27, the light Lr reflected from vehicle J1 in plane Pa forms a vertically and horizontally reversed real image of the rear end surface of vehicle J1 on the light receiving surface of transducer 29, as shown in Figure 16.
As shown in Figure 16, plane Pa can be divided into 150 (15 columns and 10 rows) subsections which are designated Pa(1,1)-Pa(15,10). Subsection Pa(1,1) corresponds to photodiode PD(1,1) of transducer 29, Pa(1,2) to PD(1,2), . . . and likewise Pa(15,10) to PD(15,10). The rear end face of vehicle J1 occupies subsections Pa(1,5), Pa(1,6), Pa(1,7), Pa(2,5), Pa(2,6), Pa(2,7), Pa(3,5), . . ., Pa(6,5), Pa(6,6) and Pa(6,7) of plane Pa, so that the light Lr reflected from the rear end face of vehicle J1 falls on photodiodes PD(1,5), . . ., PD(6,7) (Figure 17).
The inter-vehicle distance R1 to vehicle J1 can be derived from the time lag T1 between transmission of search beam Lt and reception of reflected light Lr by the photodiodes; the azimuth of vehicle J1 can be derived from the positions of the photodiodes onto which the reflected light Lr has fallen; and a rough profile of vehicle J1 is formed by the pattern of arrangement of the photodiodes onto which the reflected light Lr has fallen. The processing of the azimuth and profile of vehicle J1 is performed by microprocessor 37.
Figure 18 illustrates the preferred method of assembling a rudimentary 3-dimensional map of possible obstacles in RAM 38.
RAM 38 includes 100 memory groups Am (m = 1-100), each group including 150 memory cells Am(i,j) (i = 1-15, j = 1-10), for a total of 15,000 memory cells which correspond to the three-dimensionally divided subsections of the detection area of the radar system. Specifically, the length of the detection area in the Z-axis direction is taken to be 150 m, and is equally divided into 100 sublengths or intervals of 1.5 meters to correspondingly form 100 phantom planes P1-P100. The reason the planes P1-P100 are spaced at intervals of 1.5 meters is that the Z-axis resolution of the radar system is 1.5 meters.
Microprocessor 37 identifies which photodiode of transducer 29 is on in accordance with timing signal S1 and the two gate signals g1 and g21 supplied by matrix control unit 30. When CPU 40 is supplied with distance data Dr from distance determiner 35, it selects the memory group Am corresponding to the phantom plane Pm present at the distance Rm indicated by the distance data Dr, and stores a "1" in the memory cell Am(i,j) corresponding to photodiode PD(i,j). For example, if the distance Ra to vehicle J1 is 30 meters, a "1" is stored in 18 memory cells A20(1,5), A20(1,6), . . ., A20(6,6), A20(6,7) of the twentieth memory group A20, as shown in Figure 19. Similarly, as shown in Figure 20, a "1" is stored in 6 memory cells Ab(14,4), Ab(14,5), Ab(14,6), Ab(15,4), Ab(15,5) and Ab(15,6) of memory group Ab corresponding to phantom plane Pb selected on the basis of the distance to telephone booth J2. As shown in Figure 21, a "1" is stored in 4 memory cells Ac(8,5), Ac(8,6), Ac(9,5) and Ac(9,6) of memory group Ac corresponding to imaginary plane Pc selected on the basis of the distance to omnibus J3.
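An illustrative rendering of this storage scheme is given below; the array layout, the rounding of Dr to the nearest plane and the function name are assumptions made for the sketch, since the specification only gives the number and spacing of the planes.

```python
# Sketch of the memory organisation of Figures 18-21: 100 groups of 15 x 10 cells,
# one group per phantom plane spaced 1.5 m apart along the Z-axis.
PLANE_SPACING_M = 1.5
N_PLANES, N_COLS, N_ROWS = 100, 15, 10

# memory[m-1][i-1][j-1] plays the role of cell Am(i, j); all cells start at 0.
memory = [[[0] * N_ROWS for _ in range(N_COLS)] for _ in range(N_PLANES)]

def store_detection(distance_m: float, i: int, j: int) -> None:
    """Mark cell Am(i, j) of the group whose phantom plane lies at the measured distance."""
    m = round(distance_m / PLANE_SPACING_M)    # e.g. Dr = 30 m selects group A20
    if 1 <= m <= N_PLANES:
        memory[m - 1][i - 1][j - 1] = 1

# Vehicle J1 at 30 m, seen by PD(1,5)..PD(6,7), sets the 18 cells of group A20:
for i in range(1, 7):
    for j in range(5, 8):
        store_detection(30.0, i, j)
```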
Microprocessor 37 then determines the azimuths of the objects on the basis of the information stored in the memory area, as will be described below. As shown in Figure 22, it is assumed that there is an object P in an arbitrary cell Pm(i,j) of phantom plane Pm at a distance Rm from our own vehicle. Thus, "1" is stored in memory cell Am(i,j).
In Figure 22, the projection of point P onto the Y-axis is labelled Py and is a distance Ym away from the projection Om of the optical axis onto plane Pm. Similarly, the projection of point P onto the X-axis is labelled Px and is a distance Xm away from point Om. The angles subtended at optical center O by the sections Xm and Ym are labelled θx and θy respectively and represent the azimuth and altitude, respectively, of the object.
Assuming that the light receiving surface of transducer 29 has a height h and a width w, the height Hm and width Wm of plane Pm are given by

Wm = Rm·w/f ..... (1)
Hm = Rm·h/f ..... (2)

where f is the focal length of lens 27. The distance Xm between Px and Om and the distance Ym between Py and Om are given by

Xm = Wm·(8 - i)/15 ..... (3)
Ym = Hm·(5 - j + 0.5)/10 ..... (4)

The relationship between Xm, Ym, Rm and the angles θx and θy is as follows:

θx = Xm/Rm ..... (5)
θy = Ym/Rm ..... (6)

Thus, from formulae (1), (3) and (5),

θx = (8 - i)·w/(15·f) ..... (7)

and from formulae (2), (4) and (6),

θy = (5.5 - j)·h/(10·f) ..... (8)

If the position of the memory cell (i,j) in which "1" is stored is known, then from formulae (7) and (8) the azimuth and altitude of the object can be determined.
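As a worked illustration of formulae (7) and (8) (not part of the original disclosure), the sketch below computes θx and θy for a given cell; the transducer dimensions and focal length used in the example are placeholder values.

```python
# Sketch of formulae (7) and (8): angles in radians for the cell (i, j),
# given the transducer width w, height h and the focal length f of lens 27.
def azimuth_rad(i: int, w: float, f: float) -> float:
    """theta_x = (8 - i) * w / (15 * f)  -- formula (7)."""
    return (8 - i) * w / (15.0 * f)

def altitude_rad(j: int, h: float, f: float) -> float:
    """theta_y = (5.5 - j) * h / (10 * f)  -- formula (8)."""
    return (5.5 - j) * h / (10.0 * f)

# Placeholder dimensions: w = 15 mm, h = 10 mm, f = 50 mm.
print(azimuth_rad(1, 0.015, 0.050), altitude_rad(5, 0.010, 0.050))
```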
In addition, a rough profile of the object can be determined from the positional relationship of memory cells in which "1"'s are stored.
An alternative method of obtaining the bearing and profile of objects, executed by microprocessor 37, is shown in Figures 23 through 26. An array of 150 memory cells An (n = 1-150) is defined in RAM 38 in correspondence with the 150 photodiodes PD(i,j) composing the light receiving surface of transducer 29, with each memory cell An storing the smallest distance value Dr outputted by distance determiner 35 for the corresponding photodiode.
The flowchart for this process is shown in Figure 24 and consists of three continuously repeated loops. In step (1), the outer loop starts by setting the row number to one (j = 1). The middle loop starts at subsequent step (2) by setting the column number to zero (i = 0). The inner loop then starts at step (3) by incrementing the column number (i = i + 1). Upon receipt of the next trigger signal S1 in step (4), the inner loop continues through steps (5), (6) and (7), in which first the number N of the current photodiode and memory cell is derived from the current row and column numbers (N = (j - 1) × 15 + i), then the distance value Dr yielded by the reflected light Lr at photodiode N is stored in the appropriate cell AN and finally, in step (7), the column number i is checked for the last column (i = 15?). If the inner loop has not yet been performed for all of the columns of the current row (NO at step (7)), the inner loop is repeated from step (3). If YES at step (7), i.e.
when the current row has been completed, the row number j is checked at step (8) to see if the last row has been completed (j = 10?). If so, the outer loop is repeated from step (1); otherwise, the row number is incremented (j = j + 1) at step (9) and the middle loop is repeated from step (2).
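A compact rendering of this loop structure is sketched below; the read_distance() helper is a purely hypothetical stand-in for the hardware read-out of determiner 35, and the routine keeps the smallest value per cell as stated above.

```python
# Illustrative sketch of the Figure 24 routine: fill cells A1..A150 with the
# smallest distance Dr observed at the corresponding photodiode PD(i, j).
import math

N_COLS, N_ROWS = 15, 10
A = [math.inf] * (N_COLS * N_ROWS + 1)      # A[1]..A[150]; index 0 unused

def read_distance(i: int, j: int) -> float:
    """Hypothetical stand-in for the distance Dr delivered by distance
    determiner 35 while photodiode PD(i, j) is selected."""
    return math.inf                          # here: no reflection received

def scan_once() -> None:
    for j in range(1, N_ROWS + 1):           # steps (1), (8), (9): row loop
        for i in range(1, N_COLS + 1):       # steps (2), (3), (7): column loop
            n = (j - 1) * 15 + i             # step (5): cell number N
            dr = read_distance(i, j)         # steps (4), (6): wait for S1, read Dr
            A[n] = min(A[n], dr)             # keep the smallest distance seen

scan_once()
```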
When the distance data stored in adjacent addresses are almost equal, these data concern the same object, so that the positional relationship of the addresses in which the almost equal data are stored enables rough recognition of the profile of the object.
The process for prevention of collision of vehicles effected on the basis of data stored in memory area An will be described below.
Figure 25 is the flowchart of a program for prevention of collision.
As shown in Figure 26, it is assumed that the vehicle on which the radar system is mounted is regarded as a rectangular prism which includes a safety margin surrounding the vehicle. This composite structure J0 is 2W0 wide and (H1 + H2) high, where H1 denotes the height of the portion of vehicle J0 above the optical axis of the radar system and H2 denotes the height of the portion of vehicle J0 below the optical axis.
If the braking distance corresponding to the current vehicle speed V0 is designated Rs, a rectangular prism Cs having width 2W0, height (H1 + H2) and length Rs will be designated the collision precaution region, as shown in the Figure. That is, if there are any objects in the collision precaution region Cs, there is a danger of collision with vehicle J0.
In the flowchart of Figure 25, the distance data Dr stored in memory cells N are accessed individually in sequence at step (13) to determine whether the detected objects are within the braking distance Rs, after steps (11) and (12) have ensured that the memory address N is within the relevant range.
A braking distance value Ds is calculated on the basis of the vehicle speed sensed by a vehicle speed sensor in step (14), and the calculated distance Ds is compared to the distance data Dr at step (15).
If distance Rm to object P is greater than braking distance Rs as reflected in the digital values Dr and Ds respectively, there is no danger of collision whereas if distance Rm is less than Rs, there is a danger of collision. In the latter case, the azimuth of the object must be checked to determine whether the object is within precaution region Cs.
Specifically, in step (16), the absolute distances X, Y from the optical axis are calculated using the following equations:

X = θx·Rm ..... (9)
Y = θy·Rm ..... (10)

where Rm is the distance to the object and θx and θy are derived from equations (7) and (8), respectively.
In step (17), the absolute value of X is compared to W0, half the width of the collision region Cs, and if it is outside of, i.e. greater than, W0, collision is not imminent and the routine continues to the next memory cell. Similarly, if Y is higher (greater) than H1 or lower (less) than -H2, the routine progresses without further action to the next memory cell. However, if Dr, X and Y are all within the ranges corresponding to the collision precaution region Cs, then an alarm is generated to alert the driver to the presence of a dangerously close obstacle. In either case, the final step (21) increments memory address N and returns control to step (11).
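The decision sequence of steps (13) to (21) can be summarised by the following sketch; the half-width, heights and braking-distance model are placeholder assumptions, and only the order of the comparisons follows the flowchart.

```python
# Sketch of the collision-precaution test of Figures 25 and 26 for one memory cell.
W0 = 1.2           # assumed half-width of region Cs in meters
H1, H2 = 0.8, 1.0  # assumed heights of Cs above / below the optical axis in meters

def braking_distance_m(speed_mps: float, decel_mps2: float = 6.0) -> float:
    """Step (14): braking distance Ds for the sensed speed (simple v^2/(2a) model,
    an assumption; the specification only says Ds is derived from the vehicle speed)."""
    return speed_mps ** 2 / (2.0 * decel_mps2)

def collision_danger(dr_m: float, theta_x: float, theta_y: float, speed_mps: float) -> bool:
    ds = braking_distance_m(speed_mps)
    if dr_m > ds:                  # step (15): object beyond the braking distance
        return False
    x = theta_x * dr_m             # equation (9):  X = theta_x * Rm
    y = theta_y * dr_m             # equation (10): Y = theta_y * Rm
    if abs(x) > W0:                # step (17): outside the width of Cs
        return False
    if y > H1 or y < -H2:          # above or below region Cs
        return False
    return True                    # inside Cs: alarm device 41 should be activated
```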
Scanning the entire memory area by executing the program of Figure 25 continuously will ensure that the driver is alerted to any possible collision throughout the detection region of the radar system.
The actual structure of transducer 29 is shown in Figures 27 and 28.
Figure 27 illustrates the structure of a transducer which includes regular photodiodes arrayed in a matrix.
As shown in the Figure, a plurality of P-type patch layers 51 are formed by diffusion of boron into the upper surface of an N-type Si substrate 50, and then a like number of similar N-type patch layers 52 are formed by diffusion of phosphorus into layers 51, thereby resulting in P-N junctions.
SiO2 insulating films 53 formed by thermal oxidization divide the surface of the resulting product into a grid and also separate a number of parallel aluminum electrodes 54.
Electrodes 54 are provided with windows which define a grid-like light receiving surface consisting of 15 columns by 10 rows of photodiodes PD(1,1), PD(1,2), . ., PD(15,10).
The P-type layers 51 are connected to corresponding analog switches SA1-SA15 via corresponding terminals at one end of substrate 50, whereas the respective electrodes 54 are connected to corresponding analog switches SB1-SB10.
Reflection preventive films are deposited onto the light receiving surfaces of the photodiodes to protect the surfaces. Substrate 50 is impressed with a bias voltage Vb to isolate P-type layers 51 from each other.
Figure 28 illustrates a transducer 29 comprising a plurality of avalanche photodiodes (referred to as APDs hereinafter).
An APD has a photosensitivity (= photocurrent A / incident light intensity W) at least ten times that of a regular photodiode and an inter-terminal capacitance of several pF, which is an order of magnitude less than that of a regular photodiode, so that it has a better response to high-speed light pulses, thereby greatly increasing system performance (sensitivity, measuring accuracy).
The light receiving surface of transducer 29 shown in Figure 28 has a layered semiconductor structure which includes a P-type Si substrate 60; a plurality of parallel high-concentration N+ layer strips 61 formed by diffusion of As into the upper surface of substrate 60 and isolated longitudinally from each other; N-type layers 62 deposited over substrate 60 and layers 61; P-type layer patches 63 formed by diffusion of boron into the upper surface of layers 62 and mutually isolated; and P+ layer patches 64 formed by diffusion of higher-concentration boron into the upper surface of layers 63.
Longitudinal P+ layers 65 divide N-type layer 62 into a number of parallel strips.
A SiO2 insulating film 66 is formed to divide the upper surface of the element into a grid, leaving only layers 64 exposed. Aluminum electrode strips 67 overlie film 66 and connect layers 64 into rows of 15, perpendicular to the longitudinal layers 65.
The lower surface of substrate 60 has a plurality of electrode strips 68 parallel to but offset from layers 65, which are formed by exposing portions of the lower surfaces of N+ layers 61 by etching techniques and then depositing aluminum onto the exposed surfaces.
Longitudinally arranged electrodes 68 are connected to corresponding analog switches SA1-SA15 whereas crosswise arranged electrodes 67 are connected to corresponding analog switches SB1-SB10.
P-type substrate 60 is grounded, so that N+ type layers 61 are electrically isolated from each other and N-type layers 62 are electrically isolated from each other.
In the above embodiment, the light receiving surface of transducer 29 is shown as including 150 photodiodes arrayed in a 15-column, 10-row matrix. However, if the number of rows and columns of the matrix is increased and the size or dimensions of each photodiode are decreased, optical resolution will be enhanced without changing the size of the light receiving surface of the transducer, thereby improving the accuracy of the azimuths and profiles of detected objects.
While this invention has been shown and described in terms of preferred embodiments thereof, it is noted that this invention should not be limited to the shown embodiments. Various changes and modifications could be made without departing from the scope of this invention as set forth in the attached claims.