This application is based on application No. H11-233760 filed in Japan on Aug. 20, 1999, the entire contents of which are hereby incorporated by reference.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to an image input apparatus that forms an image through a plurality of minute-image-formation optical systems.
2. Description of the Prior Art
In recent years, with the advent of the highly information-oriented society brought about by the development of communications media, there has been a keen demand for acquiring various kinds of information effectively and in a timely manner. Such information includes a very large proportion of image information, and therefore recording and saving of image information are essential for conducting advanced information processing activities. Conventionally, apparatuses such as photographic cameras and video cameras have been used for recording and saving image information. However, such apparatuses cannot be miniaturized beyond a certain limit simply by making their constituent components smaller; therefore, to realize apparatuses compact enough to be carried at all times, it is necessary, and in fact expected, to develop a compact image input apparatus based on a novel construction.
An arrangement conventionally known to help miniaturize an image input apparatus makes use of a lens array composed of a plurality of microlenses combined together. This is an application of the compound eye seen in the visual system of insects, and helps realize an optical system that occupies less volume, offers a wider angle of view, and is brighter than a “single-eye” image formation system.
As a conventional image input apparatus adopting this arrangement, for example, Japanese Published Patent Application No. S59-50042 discloses an image input apparatus composed of a microlens array, a pinhole array, and an image surface. Here, the microlenses form a reduced image of an object, and the pinholes, paired one to one with the microlenses, sample different parts of this reduced image, forming together an image of the object on the image surface.
As another example, Japanese Laid-Open Patent Application No. H5-100186 discloses an image input apparatus composed of a microlens array, a pinhole array, and a photosensitive element array. Here, the microlenses, pinholes, and photosensitive elements are grouped into units each composed of a microlens, a pinhole, and a photosensitive element, and the individual units convert optical signals received from different parts of an object (subject) into electric signals that together represent image information.
As still another example, Japanese Laid-Open Patent Application No. H10-107975 discloses an image input apparatus in which a plurality of photosensitive elements are arranged for each of the microlenses. Here, the image of an object formed by the microlenses is not sampled by pinholes that individually sample different parts thereof, but is directly read by a two-dimensionally extending photosensitive element array that yields signals of minute images. Here, an aperture stop is arranged on the object side of the photosensitive element array and the microlens array, and the individual microlenses observe different parts of the object with no overlap among the signals obtained.
However, the image input apparatuses disclosed in Japanese Published Patent Application No. S59-50042 and Japanese Laid-Open Patent Application No. H5-100186 mentioned above both have basically the same optical system, and suffer from unsatisfactory resolution in the image signals obtained. Specifically, in a lens-pinhole type compound-eye optical system, the number of units each including a microlens and a photosensitive element coincides with the number of dots, i.e. the resolution, of the obtained image; it is therefore inevitable to increase the number of units to obtain a high-resolution image. However, even a microlens needs to have a certain size to function satisfactorily as a lens, and therefore, in this compound-eye optical system, it is impossible to increase the number of units beyond a certain limit even if the microlenses are arranged in a closest-packed structure. As a result, it is difficult to obtain a high-resolution image.
On the other hand, the arrangement disclosed in Japanese Laid-Open Patent Application No. H10-107975 mentioned above solves the problem of unsatisfactory resolution in the image signals obtained, but, to obtain the best optical characteristics in this optical system, it is necessary to arrange the microlens array and the photosensitive element array on a spherical surface having its center at the aperture stop, and thus this arrangement is not fit for the purpose of making the entire image input apparatus satisfactorily compact. In particular, the individual photosensitive elements need to be arranged at discrete locations on a curved surface, i.e. a spherical surface in this case. This makes the photosensitive elements difficult to arrange in desired positions, and thus makes the entire image input apparatus difficult to manufacture.
In another invention, no aperture stop is used, and a microlens array and a photosensitive element array are each arranged on a flat plane. However, here, to prevent interference between the optical signals from adjacent microlenses, the photosensitive element array needs to be arranged at a certain distance from the microlens array, and thus the photosensitive element array requires an unduly large fitting area. In addition, this arrangement is specialized for narrow object angles at the cost of optical characteristics, and therefore exhibits various drawbacks in practical use.
SUMMARY OF THE INVENTION
An object of the present invention is to provide an image input apparatus that has a simple construction, that is more compact than ever, and that offers high-resolution images.
To achieve the above object, according to the present invention, an image input apparatus is provided with: a photoelectric converter element having a flat photosensitive surface; and an image formation unit array having a plurality of image formation units arranged in an array. Here, the plurality of image formation units individually receive light beams substantially from an identical area and focus the received light beams on different regions of the photosensitive surface of the photoelectric converter element to form images thereon.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other objects and features of this invention will become clear from the following description, taken in conjunction with the preferred embodiments, with reference to the accompanying drawings, in which:
FIG. 1 is an exploded perspective view schematically showing an image input apparatus embodying the invention;
FIG. 2 is a vertical sectional view schematically showing the optical systems of two adjacent units in an image input apparatus embodying the invention;
FIGS. 3A to 3D are perspective views schematically showing the manufacturing process of diffractive optical lenses;
FIGS. 4A to 4D are diagrams schematically showing the manufacturing process of refractive lenses;
FIG. 5 is a diagram showing the multiple reduced images formed by the lens array;
FIG. 6 is a diagram showing an image reconstructed by gathering the signals obtained from the pixels at the center of the units;
FIG. 7 is a diagram showing an image reconstructed by using the signals obtained from all the photosensitive elements;
FIG. 8 is a perspective view schematically showing a signal separation polarizing filter;
FIGS. 9A and 9B are perspective views schematically showing the configuration of the signal processing system for processing optical signals;
FIG. 10 is a diagram schematically illustrating the relationship between the photosensitive area and the minute view angle;
FIG. 11 is a diagram schematically showing the relationship between the object and the view angle of the image input apparatus;
FIGS. 12A and 12B are diagrams schematically showing the principle of making the view angle of the image input apparatus wider;
FIG. 13 is a diagram schematically illustrating deflection caused by decentering of the image formation elements;
FIG. 14 is a diagram schematically illustrating deflection caused by a diffraction grating; and
FIG. 15 is a perspective view schematically showing the construction of an image input apparatus employing dispersive elements.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, embodiments of the present invention will be described with reference to the drawings. This invention provides a slim image input apparatus in which a single-plane photosensitive element array is divided into areas each corresponding to one microlens, so that each area includes a plurality of photosensitive elements; in addition, partition walls are provided to prevent interference among the optical signals from the individual microlenses.
FIG. 1 is an exploded perspective view schematically showing an image input apparatus embodying the invention.
In FIG. 1, reference numeral 1 represents a microlens array having microlenses 1a arranged in a two-dimensional array, in a square grid for example, and reference numeral 3 represents a photosensitive element array disposed below the microlens array 1 so as to face it and having photosensitive elements 3a similarly arranged in a two-dimensional array, in a square grid for example. Reference numeral 2 represents a partition wall layer disposed between the microlens array 1 and the photosensitive element array 3 and composed of partition walls 2a that are arranged below the boundaries of the individual microlenses 1a of the microlens array 1 so as to form a grid-like structure.
As shown in FIG. 1, one microlens 1a of the microlens array 1 corresponds to a plurality of photosensitive elements 3a of the photosensitive element array 3, and to one compartment formed between them in the partition wall layer 2. As the imaginary square prism drawn with broken lines indicates, these together form a signal processing unit U. The individual units are separated from one another by the partition walls 2a to prevent the optical signal from one microlens 1a from entering the adjacent units; that is, the optical path through each unit is restricted. As the photosensitive element array 3, it is possible to use a solid-state image-sensing device such as a CCD. This helps reduce the number of components needed and thereby simplify the construction of the image input apparatus.
FIG. 2 is a vertical sectional view schematically showing the optical systems of two adjacent units. In this figure, when an image of an object (not shown) located above the apparatus is focused within the target unit U1, the light beam L1 coming from the object as a legitimate optical signal strikes the microlens 1a belonging to the unit U1 by traveling parallel to its optical axis X1, and is then condensed by that microlens 1a onto the photosensitive element array 3 belonging to the unit U1. At this time, if a light beam L2, as an unnecessary optical signal, enters the adjacent unit U2 by traveling obliquely, from the direction opposite to the target unit U1, at a large angle relative to the optical axis X2 of the microlens 1a belonging to the unit U2, this unnecessary light beam L2 is also focused on the photosensitive element array 3 belonging to the target unit U1.
It is for the purpose of preventing such interference that the partition walls 2a, as shown in FIG. 2, are provided between the units. Needless to say, interference is completely prevented if the partition walls are formed so as to extend all the way from the microlenses 1a to the photosensitive element array 3. However, even partial partition walls effectively reduce unnecessary signals. When the partition walls 2a are formed so as to extend vertically downward from the microlenses 1a, as shown in FIG. 2, the following formula (1) holds for each unit:
x=(a−c)d/2c (1)
where
- x represents the width of the area struck by an optical signal coming from an adjacent unit and having half the amount of light per microlens;
- a represents the distance between the microlens and the photosensitive element;
- c represents the height of the partition walls;
- d represents the width of the unit.
In this case, it is not possible to perfectly prevent interference; however, the higher the partition walls are made, the less interference occurs. How much the interference among signals affects the image information eventually obtained can be evaluated practically by simulation or the like.
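As a rough numerical sketch (not part of the original disclosure; the dimensions used are hypothetical), formula (1) can be evaluated as follows:

```python
def interference_width(a, c, d):
    """Formula (1): x = (a - c) * d / (2 * c).

    a: distance between the microlens and the photosensitive element
    c: height of the partition walls
    d: width of the unit
    Returns the width x of the area struck by an optical signal
    leaking in from an adjacent unit (all in the same length unit).
    """
    return (a - c) * d / (2.0 * c)

# Hypothetical dimensions in micrometers: the higher the walls, the
# narrower the area affected by interference.
print(interference_width(250.0, 125.0, 250.0))  # walls reach halfway down
print(interference_width(250.0, 250.0, 250.0))  # walls reach the sensor: 0.0
```

As the second call shows, when the wall height c equals the lens-to-sensor distance a, the leakage width x vanishes, consistent with the statement above that full-height walls prevent interference completely.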
Regarding the microlens array mentioned above, the sizes of the whole microlens array and of each unit, the number of microlenses, the focal length, and the manufacturing process are presented below. Here, two types of microlens array, one using diffractive optical lenses and the other using refractive lenses, are presented. Moreover, for the type using diffractive optical lenses, two examples of different sizes are presented.
TABLE 1
<<Diffractive Optical Lens Type>>

Unit pitch      176 μm              250 μm
Overall size    8.8 mm × 6.7 mm     15 mm × 15 mm
                (50 × 38 units)     (60 × 60 units)
Focal length    176 μm              250 μm
Manufacturing Process
The manufacturing process of diffractive optical lenses is shown in FIGS. 3A to 3D. First, as shown in FIG. 3A, a resist R is applied to the surface of a glass substrate G. As this resist R, an electron resist is used. Next, as shown in FIG. 3B, the desired pattern Pn is drawn on the resist R by using an electron beam drawing machine. Then, as shown in FIG. 3C, the drawn pattern is developed. Finally, as shown in FIG. 3D, the drawn and developed pattern is transferred onto the glass substrate G by using an etching machine.
The drawn pattern consists of Fresnel zone plates of either a binary or a multilevel type. Prototype lens arrays currently being manufactured have a minimum line width of 0.4 μm with binary-type zone plates and 0.2 μm with multilevel-type zone plates. When the minimum line width is as wide as about 1 μm, a laser beam can be used to draw the pattern.
TABLE 2
<<Refractive Lens Type>>

Unit pitch      176 μm              250 μm
Overall size    8.8 mm × 6.7 mm     15 mm × 15 mm
                (50 × 38 units)     (60 × 60 units)
Focal length    about 230 μm        about 350 μm
Manufacturing Process
FIGS. 4A to 4D show the manufacturing process of refractive lenses. A so-called thermal reflow method is applied here. First, as shown in FIG. 4A, a glass substrate G having a photoresist PR applied to its surface is covered with a mask M having a pattern drawn thereon. Then, the glass substrate G is exposed to light from above as indicated by arrows W. The pattern is thus transferred onto the photoresist PR by so-called photolithography. Here, unlike for diffractive optical lenses, the pattern consists not of Fresnel zone plates but of an array of circles that is used as a mask.
After exposure and then development, as shown in FIG. 4B, the resist remains as a pattern of cylindrical patches. As shown in FIG. 4C, when these remaining resist patches are post-baked by using a hotplate or an oven, they melt and form into the shapes of lenses under their own surface tension. As shown in FIG. 4D, by etching these resist patches, the resist pattern is transferred onto the glass substrate G. Quartz is the typical material of the glass substrate G. The thermal reflow method mentioned above is described in: “Micro-optics” by Hans Peter Herzig, pp. 132–136, published 1997 by Taylor & Francis.
The partition walls mentioned above need to meet the following requirements:
- 1. The partition walls should be as thin as possible.
- 2. The partition walls should preferably extend from the microlenses to the photosensitive element surface.
- 3. The partition walls should be opaque and should reflect or scatter as little light as possible.
In practice, the partition walls are produced, for example, by cutting a metal with a laser, or by three-dimensionally molding a photo-setting resin. As an example produced by cutting a metal with a laser, prototype partition walls having a thickness of 20 μm and arranged at intervals of 250 μm are produced from a stainless steel plate having a thickness of 200 μm. Here, to prevent reflection, the surfaces of the partition walls are blackened.
As an example produced by three-dimensionally molding a photo-setting resin, prototype partition walls are produced by scanning with a laser beam a resin called pentaerythritol triacrylate (PETA), which has a highly self-focusing property for light entering it, with 3% of benzil added thereto as a photopolymerization initiator. It is confirmed that the partition walls thus produced have a thickness of about 56 μm and a height of about 500 μm.
Now, the image obtained by using the image input apparatus of this embodiment will be described. FIG. 5 is a diagram of the multiple reduced images formed by the lens array. The lens array used here is an array of gradient-index planar microlenses produced by ion exchange that have a focal length of 650 μm, have an aperture diameter of 250 μm, and are arranged at intervals of 250 μm. The photosensitive element used here is a CCD image-sensing device with 739 × 575 elements, each having an aperture of 11 μm × 11 μm. In this case, approximately 22.7 × 22.7 elements constitute one unit. Although the number of elements constituting one unit is not an integer here, this does not affect the final image.
FIG. 6 is a diagram showing an image reconstructed by gathering the signals obtained from the pixels at the centers of the units. As shown in this figure, an erect image of the object is obtained. FIG. 7 is a diagram showing an image reconstructed by using the signals obtained from all the photosensitive elements by an inverse matrix method. As shown in this figure, a satisfactory image is obtained.
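The centre-pixel reconstruction underlying FIG. 6 can be sketched as follows. This is only an illustration, not the original implementation: the helper is hypothetical and assumes the unit pitch is an integer number of pixels, unlike the 22.7-element pitch of the prototype above.

```python
import numpy as np

def reconstruct_from_centers(sensor, unit_px):
    """Pick the centre pixel of every unit_px x unit_px unit of the raw
    sensor data, yielding one pixel per unit; the result is an erect,
    low-resolution image of the object (one dot per microlens)."""
    off = unit_px // 2
    return sensor[off::unit_px, off::unit_px]

# Tiny 6 x 6 "sensor" with 3 x 3-pixel units -> a 2 x 2 reconstructed image.
sensor = np.arange(36).reshape(6, 6)
print(reconstruct_from_centers(sensor, 3))
```

Because each minute image is inverted but the units themselves are laid out in object order, sampling one pixel per unit directly produces an erect image, as stated above.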
Separation of adjacent signals as described above can also be achieved with a combination of polarizing filters, instead of the partition walls described above. FIG. 8 is a perspective view schematically showing such polarizing filters. As shown in this figure, this polarizing filter array 4 is divided into blocks 4a each corresponding to one unit described above, and has polarized-light-transmitting filters arranged in a checkered pattern such that every two adjacent filters have mutually perpendicular polarization directions. Two such polarizing filter arrays are prepared and arranged, one at the microlens array surface and the other at the photosensitive element surface shown in FIG. 1, with each polarizing filter aligned with the corresponding unit.
Here, with respect to a given unit, the filter blocks 4a adjacent to it above, below, to the left, and to the right transmit only light polarized perpendicularly to the polarization direction of the filter block of the given unit. Thus, interference among the optical signals entering these adjacent units is prevented. By selectively using the signals obtained through units of a particular polarization direction, it is possible to realize an image input apparatus that exhibits different sensitivities in different polarization directions.
As those polarizing filters, either diffractive optical elements or hybrid elements, i.e. diffractive optical elements and refractive optical elements combined together, are used. The polarizing filters, when composed of diffractive optical elements, are produced in the same manner as the diffractive lens array described above. That is, a resist pattern drawn by using an electron beam drawing machine is transferred onto a glass substrate by etching. Here, the pattern of the polarizing filters is basically of a binary type consisting of lines and spaces. A minimum line width of approximately 0.2 μm can be obtained by using currently available electron beam drawing machines. As with the microlens array, when a relatively large minimum line width suffices, a laser beam drawing machine can be used.
Now, the use of this image input apparatus as one that exhibits different sensitivities in different polarization directions will be described. In general, on the basis of polarization information of an object, it is possible to know physical properties, such as the dielectric constant, of the object. To conduct measurement with high sensitivity, accurate alignment and the like are necessary. However, even with a comparatively simple image input apparatus, it is possible to obtain general information regarding the physical properties of an object. For example, when reflection occurs on the surface of a dielectric substance, the surface exhibits varying reflectances toward differently polarized light components. Thus, by observing polarization information, it is possible to know the angle of the reflecting surface relative to the image input apparatus.
For example, this technique can be applied in a measuring instrument for measuring the angle of a reflecting surface such as a glass plate. As other examples, this technique can be applied to the measurement of stress distribution inside a transparent object such as a plastic plate, or in an image input apparatus, such as a camera fitted with a polarizing filter, which acquires an object image by separating the images reflected from and transmitted through a glass plate.
The total thickness of an image input apparatus embodying the present invention is determined by the sum of the following parameters:
(Thickness of the Glass Substrate) + (Focal Length of the Microlenses) + (Thickness of the Photosensitive Elements)
FIGS. 9A and 9B are perspective views schematically showing the configuration of the signal processing system for processing the optical signals acquired by an image input apparatus of the invention. As shown in FIG. 9A, assume that the distance between the object 5 and the microlens array 1 is A, the distance between the microlens array 1 and the photosensitive element array 3 is B, and the interval at which the individual units are arranged is D. Reference symbol O represents the output signals obtained from the photosensitive element array 3. Moreover, suppose that the column and row numbers of a unit of the image input apparatus are represented by p and q, and that, as shown in FIG. 9B, the minute image in the unit [p, q] (i.e. the input signal entering the unit) is expressed as Ip,q(x, y) and the choice function of the input signals is expressed as fp,q(x, y); then the output signal Op,q is given by formula (2) below.
Op,q = ∫∫ Ip,q(x, y) fp,q(x, y) dx dy (2)
In FIG. 9A, in the individual regions E on the photosensitive element array 3 (photoelectric converter element), images of the area (object 5) are formed as object images 5a seen from different view points. The images obtained in the individual units are object images 5a shifted by a predetermined distance from one another on the photosensitive element array 3 in accordance with the arrangement of the object 5, the microlens array 1, and the photosensitive element array 3. For simplicity's sake, assume that the microlens array 1 exerts no deflecting effect. Then the relative shift amount Δ in each unit is given by formula (3) below.
Δ=BD/A (3)
Accordingly, using the input signal I0,0(x, y) of the unit [0, 0], the output signal of each unit is given by formula (4) below.
Op,q = ∫∫ I0,0(x − pΔ, y − qΔ) fp,q(x, y) dx dy (4)
Here, by manipulating the choice function fp,q(x, y) of each unit, it is possible to obtain various effects. For example, when only the signal at the unit origin is chosen, then fp,q(x, y) = δ(x, y), and hence the output signal is given by formula (5) below.
Op,q=I0,0(−pΔ,−qΔ) (5)
This formula represents the image obtained by enlarging I0,0(x, y) by a factor of A/(BD). Since I0,0(x, y) represents an inverted image of the object scaled by a factor of B/A, the aggregate of the output signals Op,q from the individual units is equivalent to an erect image of the object scaled by a factor of 1/D.
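As a numerical illustration of formula (3) and the resulting magnification (the geometry values below are hypothetical, not from the original):

```python
def relative_shift(A, B, D):
    """Formula (3): the relative shift between the minute images of
    adjacent units is delta = B * D / A, where A is the object-to-lens
    distance, B the lens-to-sensor distance, and D the unit interval."""
    return B * D / A

# Hypothetical geometry in micrometers: object 10 cm away,
# 250 um lens-to-sensor distance, 250 um unit pitch.
A, B, D = 100000.0, 250.0, 250.0
print(relative_shift(A, B, D))  # shift between adjacent minute images
print(1.0 / D)                  # overall scale factor of the erect image
```

The shift Δ is what the centre-pixel readout of formula (5) samples at each unit, so the aggregate image is the object scaled by 1/D, as stated above.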
Now, as another case, assume that formula (6) below is used as the unit choice function.
fp,q(x,y)=δ(x−αp,y−βq) (6)
In this case, formula (2) above yields formula (7) below.
Op,q=I0,0{−p(Δ−α),−q(Δ−β)} (7)
This formula indicates that an image of the object scaled by a factor of Δ/((Δ−α)D) in the x direction and by a factor of Δ/((Δ−β)D) in the y direction is obtained.
Now, as still another case, assume that formula (8) below is used as the unit choice function.
fp,q(x,y)=δ{x−(P−p)Δ(cos θ−1)−(Q−q)Δ sin θ,y+(P−p)Δ sin θ−(Q−q)Δ(cos θ−1)} (8)
In this case, the output signal is given by formula (9) below.
Op,q = IP,Q{(P−p)Δ cos θ + (Q−q)Δ sin θ, −(P−p)Δ sin θ + (Q−q)Δ cos θ} (9)
This formula represents an image equivalent to an image of the object rotated through an angle θ counter-clockwise about the unit [P, Q].
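The sampling offsets encoded by the delta function in formula (8) can be written out explicitly. The following sketch is illustrative only (the helper name is not from the original); it returns the (x, y) offset at which unit [p, q] is sampled to produce a rotation through θ about unit [P, Q]:

```python
import math

def rotation_offsets(p, q, P, Q, delta, theta):
    """Offsets (x0, y0) of the delta function in formula (8); sampling
    each unit's minute image at these offsets yields the object image
    rotated counter-clockwise through theta (radians) about unit [P, Q]."""
    x0 = (P - p) * delta * (math.cos(theta) - 1.0) + (Q - q) * delta * math.sin(theta)
    y0 = -(P - p) * delta * math.sin(theta) + (Q - q) * delta * (math.cos(theta) - 1.0)
    return x0, y0

# With theta = 0 every offset vanishes, so the image is unchanged.
print(rotation_offsets(0, 0, 1, 1, 1.0, 0.0))
```

Note that the offsets also vanish at the pivot unit [P, Q] itself, regardless of θ, consistent with a rotation about that unit.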
Now, as a further case, assume that formula (10) below is used as the unit choice function.
fp,q(x, y) = δ(x) rect(y/ly) (10)
In this case, input signals spreading over a range ly in the y direction are chosen, and the output signal is given by formula (11) below.
Op,q = ∫[−ly/2, +ly/2] I0,0(−pΔ, y − qΔ) dy (11)
This response characteristic can be used to extract the information of a line segment extending in the y direction.
As described above, by varying the definition of the unit choice function fp,q(x, y), the obtained image signals can be processed in various ways. In an image input apparatus of the present invention, this can be achieved by specifying the addresses of the signals read out from the photosensitive element array. In other words, by calculating, using the signals from the photosensitive elements at the addresses corresponding to the choice function used, the output signals of the units they belong to, it is possible to obtain output signals corresponding to formula (2).
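On sampled data, the integral of formula (2) reduces to a weighted sum over the pixels of each unit. A minimal sketch (the names are hypothetical, for illustration only):

```python
import numpy as np

def unit_output(minute_image, choice_mask):
    """Discrete analogue of formula (2): the output O_{p,q} of a unit is
    the sum of its minute image I_{p,q} weighted by the sampled choice
    function f_{p,q} (here a mask of the same shape as the image)."""
    return float(np.sum(minute_image * choice_mask))

# A mask that is 1 only at the unit origin mimics the delta function of
# formula (5) and reads out a single pixel; a single-column mask would
# extract a vertical line segment as in formula (11).
image = np.arange(9, dtype=float).reshape(3, 3)
mask = np.zeros((3, 3))
mask[1, 1] = 1.0
print(unit_output(image, mask))
```

In the apparatus itself, as stated above, this corresponds to reading out only the photosensitive-element addresses selected by the choice function and summing their signals.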
Note that, in an image input apparatus of the present invention, even if the constituent components are aligned imperfectly with one another in assembly, it is possible to obtain correct signals by processing the signals in an appropriate manner. Specifically, the procedure for achieving this is as follows. First, the partition walls are aligned with and then bonded to the microlens array, and the resulting integral unit is arranged in close contact with the surface of the photosensitive element array. In this state, the apparatus is already usable, but, to achieve more efficient use of the photosensitive element array and to facilitate post-processing, the above-mentioned integral unit is aligned with the photosensitive element array such that the partitions of the former are parallel with the regions of the latter.
This is achieved by observing the moiré fringes that are formed by the shadow of the partition walls and the photosensitive regions when light is shone into the apparatus. In this adjustment procedure, the pixels that output bright-state signals are regarded as effective pixels from which to read actual optical signals, and therefore, by using the positions of these pixels, it is possible to know the correspondence between the signals of the individual units and the addresses at which to read the outputs of the photosensitive elements. For example, unnecessary signals can easily be excluded by masking all signals other than those of the effective pixels after acquiring the data from all pixels.
Incidentally, Ip,q corresponds to the signal obtained from the photosensitive elements of the unit [p, q]. Thus, when the partition walls reach the photosensitive element surface, Ip,q represents the signals from the photosensitive elements except those located at the partition walls. When the partition walls do not reach the photosensitive element surface, Ip,q represents the signals from the photosensitive elements excluding the photosensitive regions where interference occurs.
In an actual image input apparatus, fp,q has a finite sampling range and is thus a “rect” function, i.e. a function that gives “1” within the range corresponding to the regions (minute view angles) participating in image sensing and “0” outside that range. In FIG. 10, the relationship between the width η of a pixel that senses light on the photosensitive element surface S and the minute view angle β is given by formula (c) below.
β=2 tan−1(η/2f) (c)
where f represents the focal length of an image formation element IF. The range of the rect function is determined by using this formula.
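Formula (c) can be evaluated directly. With the 11 μm pixels and 650 μm focal length of the prototype described earlier, the minute view angle comes out just under one degree (a sketch, not part of the original text):

```python
import math

def minute_view_angle(eta, f):
    """Formula (c): beta = 2 * atan(eta / (2 * f)), where eta is the
    width of a light-sensing pixel and f is the focal length of the
    image formation element (same length units); result in radians."""
    return 2.0 * math.atan(eta / (2.0 * f))

beta = minute_view_angle(11.0, 650.0)  # prototype values, in micrometers
print(math.degrees(beta))              # roughly 0.97 degrees
```

This β fixes the width of the rect function used as the choice function fp,q, as stated above.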
FIG. 11 is a diagram schematically showing the relationship between the object and the field of view of the image input apparatus. This figure shows a case in which an image of an object 5 occupying an area extending over a length X is formed on the photosensitive element array 3 through the microlenses Li,j. Inverted, reduced images of the entire object 5 are formed through the individual microlenses of the microlens array 1 on the corresponding regions of the photosensitive element array 3. Here, since the individual lenses of the microlens array 1 have different positional relationships with the object 5, the images formed on the individual regions of the photosensitive element array 3 have different information from one another. This is exploited to reconstruct an image.
Specifically, an image of the entire object 5, of width X, is focused through each microlens Li,j of the microlens array 1 so as to form an image Z(Li,j) on the corresponding region of the photosensitive element array 3. In other words, as many inverted, reduced images of the object surface 5b of the object 5 as there are microlenses in the microlens array 1 are formed on the image formation surface 3b of the photosensitive element array 3. Here, since the individual regions of the photosensitive element array, corresponding one to one to the individual lenses, have different positional relationships with the object 5, the images formed on the individual regions differ in the position of the image within the region, the intensity of the signal, and the like. This is exploited to reconstruct the image; for details, refer to the descriptions of FIGS. 9A and 9B above.
FIGS. 12A and 12B are diagrams schematically showing the principle of making the field of view of the image input apparatus wider. As shown in FIG. 12A, each microlens Li,j of the microlens array 1 images a length Xi,j on the object plane 5b of the object 5 onto the image formation surface 3b of the photosensitive element array 3. Since the individual microlenses Li,j are located at different positions, they form images of different areas on the object surface 5b. As shown in FIG. 12B, such differences are increased by the use of a deflecting member, which thus helps make the overall field of view wider.
Thus, when shooting needs to be performed with a wider view angle, a lens array 6 with a deflection function, i.e. a functional optical element having both deflecting and image-forming functions, is used instead of the microlens array 1. This lens array 6 has deflecting elements 6a provided one for each unit. Thus, the fields of view V of the individual units combine to cover a wider area on the object 5.
Deflecting elements having such a deflecting function can be realized not only with prisms but also with diffractive optical elements. Their material, manufacturing process, size, and other particulars are common to the diffractive lenses described earlier. Specifically, one method is to decenter the center of a pattern, such as a Fresnel zone plate, that has an image-formation function in the direction in which the incident light is to be diffracted. Another method is to overlay a one-dimensional diffraction grating on an image-forming diffractive optical element. Both methods produce the same effects. When the deflection angle is relatively narrow, the former method is recommendable; when the deflection angle is relatively wide, the latter method is recommendable. How the field of view varies as a result of the introduction of deflecting elements can be controlled by manipulating the parameters of each deflecting element.
FIG. 13 is a diagram schematically illustrating an example of the construction based on the former method described above. In this diagram, ξ represents the amount of decentering of the Fresnel zone plate F, α represents the view angle, and f represents the focal length. FIG. 14 is a diagram schematically illustrating an example of the construction based on the latter method described above. In this diagram, d represents the grating constant of the one-dimensional diffraction grating DG, α represents the view angle, and λ represents the wavelength of the incident light. In both diagrams, l1 represents the incident light and l2 represents the diffracted light. Here, the decentering amount ξ and the grating constant d function as the above-mentioned parameters of each deflecting element. The view angle α that each unit offers is equal to the deflection angle through which the incident light is deflected. Thus, FIGS. 13 and 14 give formulae (a) and (b) below.
α=tan−1(ξ/f) (a)
α=sin−1(mλ/d) (b)
where m represents an integer. When a diffraction grating is used, diffracted light of multiple orders, corresponding to different values of m, appears. However, by giving the diffraction grating blaze-shaped grooves, for example, it is possible to obtain only one order of diffracted light.
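Formulae (a) and (b) can be sketched as follows (a simple illustration with hypothetical values, not taken from the original):

```python
import math

def deflection_by_decentering(xi, f):
    """Formula (a): alpha = atan(xi / f), the deflection produced by
    decentering a Fresnel zone plate by xi at focal length f."""
    return math.atan(xi / f)

def deflection_by_grating(m, wavelength, d):
    """Formula (b): alpha = asin(m * wavelength / d), the deflection of
    order m produced by a grating of grating constant d (wavelength and
    d in the same units)."""
    return math.asin(m * wavelength / d)

# Hypothetical example: a grating with d = 1.0 um illuminated by 0.5 um
# light deflects the first order by 30 degrees.
print(math.degrees(deflection_by_grating(1, 0.5, 1.0)))
```

These two functions make concrete why the former method suits narrow deflection angles (atan grows slowly with ξ) while the grating method readily reaches wide angles.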
FIG. 15 is a perspective view schematically showing the construction of an image input apparatus employing a functional optical element to which a dispersive property is added. As shown in this diagram, which shows only a portion corresponding to one unit, a diffraction grating 7 as a functional optical element is arranged above a photosensitive element array 3 so as to face it. Independent spectroscopes are thus formed one for each unit, so that the spectral data of different parts of the sensed object can be obtained. This diffraction grating 7, having different grating constants in the x and y directions indicated by arrows in the diagram, deflects the incoming light L at different angles according to the wavelength, and then leads the resulting optical signal as spectral light La to the photosensitive elements 3a located at different locations on the photosensitive element array 3.
Here, it is not necessary to provide a special filter for selecting particular wavelengths on the side of the photosensitive element array 3, and therefore a so-called single-plane photosensitive element can be used as it is. The photosensitive elements 3a constituting each unit are arranged in a square shape, and by deflecting the optical signals of the spectral light La in such a way as to cover the whole area of that square, it is possible to make efficient use of the photosensitive elements 3a.
The spectroscopes mentioned above are manufactured as diffractive optical elements just like the polarizing filters and deflecting elements described earlier. Therefore, they are produced from a quartz glass substrate by a similar manufacturing process. The pattern formed here is designed to have different predetermined grating constants in the x and y directions so that incident light is deflected at varying angles according to the wavelength independently in each direction.
Images obtained as a set of spectral data in this way have the following advantages and uses. Generally, an object is observed through a primary color filter when it is handled as visual information; however, more versatile information about the object can be obtained by using its spectral data. For example, different materials have different spectral absorption coefficients, and therefore it is possible to identify the materials constituting the object and to investigate their distribution by measuring the spectrum absorbed by the object. Furthermore, this technique can be applied to an apparatus that assesses the comfort level, mental state, and the like of a subject by measuring the person's perspiration using the wavelength information of the light absorbed by water. For details, refer to Japanese Laid-Open Patent Application No. H8-184555.