This application is a divisional of the invention patent application No. 200880015174.8, entitled "Image Capturing Apparatus and Method".
Detailed Description
Figs. 1 to 4 show an image capturing device according to a first embodiment of the invention. The image capturing device 2 has a sensor 4 provided with a working area comprising picture elements (pixels) 6 arranged in a square array. The sensor 4 may be a CCD, a CMOS device or a similar device. In this embodiment, the sensor 4 is a one-megapixel CCD device comprising a square array 1000 pixels wide by 1000 pixels high. The sensor may be larger or smaller than this and have a different aspect ratio, and the pixels may be arranged in a variety of different patterns.
Located directly in front of the sensor 4 is an electronic shutter device 8 having an array of shutter elements 10 equal in size to the pixels. The shutter elements 10 are arranged in a pattern matching the pixels 6 of the sensor 4, so that each shutter element controls the exposure of the pixel directly behind it. Thus, the shutter device 8 of the present embodiment has one million shutter elements 10 arranged in a 1000 x 1000 array, each shutter element 10 being drivable either individually or together with one or more other shutter elements to expose the pixel 6 located behind it.
In the present embodiment, the shutter device 8 is a ferroelectric liquid crystal device comprising an array of liquid crystal cells, each of which can be made transparent or opaque under voltage control, allowing the shutter elements 10 to operate quickly. The mechanism is reliable in operation since it has no moving parts. Of course, any other shutter mechanism that can be electronically controlled and provides the necessary pixel-level control may be used with the present invention.
In general, the shutter device 8 has A shutter elements divided into N modules, designated module 1, module 2, ..., module N, each module having A/N shutter elements. In the present embodiment, the shutter device has one million shutter elements (A = 1,000,000) and four modules (N = 4), each having 250,000 shutter elements. The pixels 6 located behind the shutter elements 10 of a module form a corresponding pixel module.
The shutter elements 10 of different modules are arranged to form shutter groups 12, each having one shutter element per module. As shown in Fig. 2, in the present embodiment each shutter group 12 has four shutter elements 10 arranged in a square: shutter element 10A of module 1 at the upper left, shutter element 10B of module 2 at the upper right, shutter element 10C of module 3 at the lower left, and shutter element 10D of module 4 at the lower right. The shutter device 8 has 250,000 such shutter groups. The pixels are similarly arranged to form pixel modules and pixel groups, each pixel group including one pixel of each pixel module. The pixel groups cover substantially all of the active surface of the sensor 4, so each pixel module contains pixels drawn from substantially the whole active area of the sensor 4.
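By way of illustration only (the original disclosure contains no code), the following Python sketch shows one way the mapping from pixel coordinates to the four modules of the 2 x 2 shutter groups could be expressed; the function name is a hypothetical helper, not part of the patent.

```python
# Hypothetical helper: which of the four modules controls the pixel at
# (row, col), given the 2 x 2 group layout of Fig. 2 (module 1 upper left,
# module 2 upper right, module 3 lower left, module 4 lower right).
def module_index(row: int, col: int) -> int:
    return 2 * (row % 2) + (col % 2) + 1

for r in range(2):
    for c in range(2):
        print(f"pixel ({r},{c}) -> module {module_index(r, c)}")
# pixel (0,0) -> module 1, (0,1) -> module 2, (1,0) -> module 3, (1,1) -> module 4
```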
The shutter elements of each module are electrically connected to each other so that all shutter elements of the module open or close simultaneously under the control of a control circuit. In the present embodiment, the four shutter modules open sequentially in the order shown in Fig. 3.
It can be seen that shutter module 1 is open for the first 0.25 seconds, module 2 for the next 0.25 seconds, module 3 for the third 0.25 seconds, and module 4 for the fourth 0.25 seconds. The total working time Ti of all the modules is therefore 1.0 second, and the exposure time of each pixel is Ti/N (0.25 seconds in this embodiment).
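A minimal sketch of this timing, assuming the abutting schedule of Fig. 3 (function and parameter names are illustrative assumptions):

```python
# Module n (1-based) is open during the n-th interval of length Ti / N.
def exposure_window(n: int, Ti: float = 1.0, N: int = 4) -> tuple[float, float]:
    t = Ti / N
    return ((n - 1) * t, n * t)

for n in range(1, 5):
    start, end = exposure_window(n)
    print(f"module {n}: {start:.2f} s to {end:.2f} s")
# module 1: 0.00 to 0.25, module 2: 0.25 to 0.50,
# module 3: 0.50 to 0.75, module 4: 0.75 to 1.00
```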
During each exposure time, the pixels behind the opened shutter elements are exposed to light. The pixels are not exposed simultaneously, however, but sequentially as the corresponding shutter elements open and close. Thus, the pixels behind the shutter elements of module 1 are exposed in the first 0.25 seconds, followed in turn by the pixels behind the shutter elements of modules 2, 3 and 4.
During its exposure time, each pixel generates an amount of charge determined by the photons incident on its surface. After the total on-time Ti, the charge on all pixels is digitized and the digitized image data is transferred from the sensor 4 to a storage device.
The stored image data may be displayed as a moving image (movie) or as a still image. When displayed as a movie, the image data of each pixel module forms a separate image. Thus, as shown in Fig. 4, image 1 is formed from the image data captured by all pixels of pixel module 1, and represents the light incident on the sensor during the first 0.25 seconds. Image 2 is formed from the image data captured by all pixels of pixel module 2 and represents the light incident during the second 0.25 seconds, and images 3 and 4 are formed similarly. The four images are displayed in sequence like a four-frame movie. The position of each pixel within the displayed low-resolution image shifts slightly from frame to frame, because the positions at which the data are acquired change slightly. Each frame of the movie has 250,000 pixels, one quarter of the sensor's full resolution; we refer to these as low-resolution (lo-res) images to distinguish them from the high-resolution (hi-res) image formed using all pixels.
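A sketch of this extraction, assuming the 2 x 2 group layout of Fig. 2 and NumPy-style strided slicing (the data layout and names are assumptions, not part of the disclosure):

```python
# Pull one pixel per 2 x 2 group out of a full sensor read-out, giving the
# four embedded low-resolution frames described above.
import numpy as np

def split_into_frames(full: np.ndarray) -> list[np.ndarray]:
    """Return [image 1..4]: image n holds the pixels of module n."""
    return [full[0::2, 0::2],   # module 1 (upper left of each group)
            full[0::2, 1::2],   # module 2 (upper right)
            full[1::2, 0::2],   # module 3 (lower left)
            full[1::2, 1::2]]   # module 4 (lower right)

full = np.arange(1000 * 1000).reshape(1000, 1000)  # stand-in sensor data
frames = split_into_frames(full)
print([f.shape for f in frames])  # four 500 x 500 low-resolution images
```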
If continuous video is desired, the above operations may be repeated one or more times, capturing one set of data every Ti seconds and playing the captured low-resolution images in sequence.
If a still image is to be displayed, the data from all the pixels on the sensor is combined into a full-frame high-resolution image. In this embodiment, the image has one million pixels, obtained by combining the data of the four low-resolution images into one high-resolution image.
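The corresponding combination step might look as follows; again a sketch under the same assumed layout, not the patent's own implementation:

```python
import numpy as np

def combine_frames(frames: list[np.ndarray]) -> np.ndarray:
    """Interleave four low-res module images into one hi-res image."""
    h, w = frames[0].shape
    full = np.empty((2 * h, 2 * w), dtype=frames[0].dtype)
    full[0::2, 0::2] = frames[0]  # module 1 (upper left of each group)
    full[0::2, 1::2] = frames[1]  # module 2 (upper right)
    full[1::2, 0::2] = frames[2]  # module 3 (lower left)
    full[1::2, 1::2] = frames[3]  # module 4 (lower right)
    return full

frames = [np.full((500, 500), n) for n in range(1, 5)]  # stand-in lo-res data
print(combine_frames(frames).shape)  # (1000, 1000)
```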
The pixels of each module need not be arranged in the regular pattern shown in Fig. 2; a digital pattern may instead be applied to randomize the arrangement of the pixels within each shutter group. The digital pattern may be generated by a random number generator: the user inputs a seed (source) parameter, and the generator rearranges the exposure slot of each pixel in the exposure group. As shown in Fig. 5, consider a 6 x 6 pixel array in which each pixel group has four pixels arranged in a square (nine groups in total). The digital pattern generates a rearranged sequence in which all pixels of each group are still exposed within the aforementioned total on-time, but the order of their exposure slots is shuffled.
In Fig. 2, the non-reordered sequence ((1,2,3,4), (1,2,3,4), (1,2,3,4), ...) indicates that the exposure time of pixel 6A, in the upper left corner of the first shutter group, is 0 to 0.25 seconds, that of pixel 6B, in the upper right corner of the first shutter group, is 0.25 to 0.5 seconds, and so on. Since the operation repeats according to this fixed pattern, the exposure timing is the same for every pixel group.
In contrast, the reordered sequence of Fig. 5 ((3,2,1,4), (4,1,2,3), (2,3,1,4), ...) indicates that the exposure time of pixel 6A in the upper left corner of the first shutter group is 0.5 to 0.75 seconds, that of pixel 6B in the upper right corner is 0.25 to 0.5 seconds, that of pixel 6C in the lower left corner is 0 to 0.25 seconds, and that of pixel 6D in the lower right corner is 0.75 to 1.0 seconds. The second exposure group has a different sequence of exposure times: the upper left pixel of the second shutter group is exposed between 0.75 and 1.0 seconds, the upper right pixel between 0 and 0.25 seconds, and so on. Furthermore, these timings need not repeat over successive frames: the exposure pattern used from 0 to 1 second can differ from that used from 1 to 2 seconds, and likewise for each later frame, depending on the length of the reordering sequence.
The use of a random shutter pattern has two advantages. First, the positions of the pixels forming each low-resolution image are randomly distributed, even though the average distance between adjacent pixels remains equal. The advantages of a random distribution of pixels over a regular arrangement in image restoration applications are described in detail in U.S. Patent No. 4,574,311 to Resnikoff, Poggio and Sims, entitled "Random Array Sensing Devices". Second, the correct order of the low-resolution images can be recovered only by someone who knows the rearrangement sequence and its timing. Since the rearrangement sequence is generated by an algorithm that uses a key to seed the random number generator, only a viewer who enters the key can restore the image sequence, which allows the sequence to be encrypted against unauthorized viewing.
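A sketch of this keyed rearrangement (all names and the use of Python's `random` module are illustrative assumptions): a key seeds a random number generator, which produces one permutation of the exposure slots per shutter group per frame. A viewer holding the same key can regenerate the permutations and restore the correct image order; without the key the sequence stays scrambled.

```python
import random

def exposure_orders(key: str, n_groups: int, group_size: int = 4) -> list[list[int]]:
    """One permutation of slots 1..group_size for each shutter group."""
    rng = random.Random(key)  # deterministic: same key -> same permutations
    orders = []
    for _ in range(n_groups):
        order = list(range(1, group_size + 1))
        rng.shuffle(order)
        orders.append(order)
    return orders

print(exposure_orders("user-key", 3))
# permutations are reproducible from the key; the exact values depend on it
```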
In addition to the regular square arrangement of Figs. 2 and 5, the shutter groups may be arranged in irregular polygonal shapes, so that they cover only the region of interest and exclude unwanted pixels. One particular application is in the life sciences: a user wishing to monitor the activity of a cell under a microscope may draw an irregular shape around a small portion of the cell, and an algorithm divides the pixels within that shape into groups that are exposed at different times.
If the object imaged by the sensor moves during exposure, the image will exhibit "motion blur". The degree of motion blur is typically higher than that produced by a conventional still camera sensor, since all pixels of a conventional sensor are exposed simultaneously, whereas the total exposure time of the present invention is longer than the exposure time of any single pixel.
However, if the object moves little or not at all, the quality of the image formed will be substantially comparable to that of a conventional sensor.
Of course, the image capture device may operate in different modes, for example a "movie/still" mode in which the shutter elements act sequentially as described above, or a "full-still" mode in which all shutter elements act simultaneously. In movie/still mode, the captured low-resolution images may be played continuously like a movie, or combined to form one high-resolution image (possibly with motion blur). In full-still mode, motion blur is comparable to a conventional sensor, but the sensor cannot capture time-separated low-resolution images, so no movie can be played.
Of course, some pixels may be used to obtain a high-resolution image without blur, while others are used to obtain a series of low-resolution images as described above. The user can select the proportion of pixels used to produce the high-resolution blur-free image according to the image quality requirements; for example, 50% of the pixels, distributed throughout the array as one module group, are exposed simultaneously at a moment and for an optimal exposure time set by the user. The remaining pixels are divided into N - 1 modules as described above and exposed sequentially to form a movie sequence. The pixels used to form the high-resolution image may be distributed regularly (for example, the second pixel of each group), randomly, or semi-randomly (so that the average spacing of adjacent pixels within a given sub-region is equal, but each pixel is randomly placed within a known regular pattern of sub-regions). Missing pixels in the high-resolution image may be repaired using various known image processing techniques. As noted above, the advantages of a random distribution of pixels over a regular arrangement in image restoration applications are described in U.S. Patent No. 4,574,311.
In addition to using shutter groups each having the same number of pixels, shutter groups of different sizes may be used within the same image to generate regions of different resolution. For example, the top half of the detector may use shutter groups of size 4 (so four low-resolution images are formed per full-frame capture), while the bottom half uses shutter groups of size 9 (so nine low-resolution images are formed per full-frame capture).
In principle, the number of different frame sets can be freely defined according to the user's needs; a user may need to monitor multiple moving objects in the same frame and choose optimal settings (frame rate and resolution) for each object. The size and shape of the shutter groups can also be changed dynamically, frame by frame, to meet the needs of different parts of the picture.
In certain circumstances, shutter elements of different sizes may be chosen so that each shutter element controls the exposure of more than one pixel; the effective area of each shutter element is then greater than the area of one pixel, allowing the number of shutter elements in the shutter device to be less than the number of pixels. Conversely, the area of each shutter element may be smaller than the area of one pixel, so that the exposure of each pixel is controlled by several shutter elements. This can help with focusing in certain circumstances, since different shutters can be used to compensate for the curvature of field introduced by the lens, and by opening different sets of shutters the spatial resolution of the sensor frames can also be increased.
The application of the invention in a single-lens reflex camera with an 8.2-megapixel sensor of 2340 x 3500 pixels is described below as a specific example. A comparable conventional camera can currently capture full-frame images at five frames per second. If the camera employs the technique of the present invention, each full-frame image can be divided into ten low-resolution captures of 0.82 megapixels (738 x 1108 pixels per frame), which can be played sequentially at fifty frames per second in movie mode. Each set of ten low-resolution images may of course be combined to form an 8.2-megapixel high-resolution still image.
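The arithmetic behind these figures can be checked directly (illustrative only; the quoted frame size of 738 x 1108 is an approximation of one tenth of the full frame):

```python
# Quick check of the figures quoted above.
full_pixels = 2340 * 3500              # 8,190,000, i.e. about 8.2 megapixels
sub_images = 10
per_frame = full_pixels // sub_images  # 819,000, i.e. about 0.82 megapixels
print(per_frame, 738 * 1108)           # 819,000 vs 817,704 (~ 738 x 1108)
# five full frames per second x ten embedded sub-images = fifty frames/s
```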
It follows that the more low-resolution images are embedded in each full-frame image, the higher the frame rate obtained. Conversely, if a lower frame rate is sufficient, higher-resolution images may be acquired.
As mentioned above, the exposure interval between two consecutive low-resolution images equals the duration of each exposure, i.e. each exposure starts when the previous one ends, and the total exposure time is Ti = Nt, where N is the number of pixel modules and t is the exposure time per pixel. The interval between adjacent exposures and the duration of each exposure can, however, be adjusted so that the exposures overlap or are separated, which helps the user compensate for motion blur or shoot in low-light environments. For example, the exposure time may be reduced from the 0.25 seconds shown in Fig. 3 to 0.15 seconds. The exposure timing for each pixel module is then as follows:
pixel module 1 is exposed from 0 to 0.15 seconds, module 2 from 0.25 to 0.4 seconds, module 3 from 0.5 to 0.65 seconds, and module 4 from 0.75 to 0.9 seconds; the total exposure time is then less than Nt.
In another example, the exposure time may be increased to 0.4 seconds, giving the following exposure timing: pixel module 1 from 0 to 0.4 seconds, module 2 from 0.25 to 0.65 seconds, module 3 from 0.5 to 0.9 seconds, and module 4 from 0.75 to 1.15 seconds. The exposure times of the pixel modules then overlap (1 with 2, 2 with 3, 3 with 4, and 4 with module 1 of the next frame). The exposure interval may be adjusted in this way according to how quickly the object moves or changes.
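A sketch of this adjustable timing (names are assumptions): the start times stay at multiples of Ti/N, while the exposure duration t is shortened to separate the exposures or lengthened to overlap them.

```python
def windows(Ti: float, N: int, t: float) -> list[tuple[float, float]]:
    """Exposure window of each pixel module; start times are n * Ti / N."""
    return [(n * Ti / N, n * Ti / N + t) for n in range(N)]

print(windows(1.0, 4, 0.25))  # abutting, as in Fig. 3
print(windows(1.0, 4, 0.15))  # separated, as in the first example above
print(windows(1.0, 4, 0.40))  # overlapping, as in the second example
```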
Alternatively, rather than exposing each pixel module sequentially for a fraction of the total exposure time, each module may be exposed for almost the whole total exposure time Ti, minus a brief light-blocking (occlusion) interval. The occlusion interval is rotated between the pixel modules, and the pixel values during the occluded intervals can be recovered by solving N linear equations in N unknowns. For example, the occlusion interval may be Ti/N, where Ti is the total exposure time and N is the number of pixel modules, so that the exposure time of each pixel is Ti - Ti/N; when N is large, the exposure time of each pixel approaches the total exposure time Ti. If a low-resolution image contains intensity errors, the intensity of a pixel can also be inferred from neighbouring pixels. The advantage of this scheme is that the brightness of a full-resolution frame captured in low light approaches that of a full frame captured by a conventional sensor.
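A sketch of the recovery step, under the assumption (ours, for illustration) that Ti is split into N equal slices and module k is occluded only during slice k, so its reading is the sum of the other N - 1 slice intensities. This particular system of N equations in N unknowns has a simple closed form:

```python
# Recover per-slice light values from "rotating occlusion" readings.
import numpy as np

def recover_slices(readings: np.ndarray) -> np.ndarray:
    """readings[k] = sum of slice intensities except slice k (N values)."""
    N = len(readings)
    total = readings.sum() / (N - 1)  # sum of all slice intensities
    return total - readings           # intensity collected in each slice

true_slices = np.array([3.0, 5.0, 2.0, 4.0])  # hypothetical intensities
readings = true_slices.sum() - true_slices    # what each module would measure
print(recover_slices(readings))               # -> [3. 5. 2. 4.]
```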
Figs. 1, 6, 7 and 8 show a number of different practical arrangements of the sensor device. In Fig. 1, a randomly accessible LCD pixel-level shutter array 8 lies on the surface of a sensor array 4, the sensor array 4 being a device such as a CCD, CMOS or EMCCD. The shutter array 8, for example a ferroelectric shutter device whose transparency can be changed rapidly, controls the exposure of the pixels behind it to incident light.
As shown in Fig. 6, a liquid crystal on silicon (LCOS) device may be used to reflect light onto the pixels. Light from the object 14 is focused by the objective lens 16 and directed by a polarizing filter (or beam splitter) 18 into the LCOS device 20, which reflects the polarized beam out of the LCOS device 20 according to a user-selected pattern. The beam passes back through the polarizing filter 18 and is focused by an eyepiece 22 onto the sensor 4, which contains a CCD detector.
Of course, if a reflective pixel-level shutter (LCOS or DMD) is used, the light from the blocked pixels can be focused into a second CCD detector opposite the first, ensuring that most of the incident light is captured during imaging.
The light intensity values recorded for a given pixel can then be aggregated to produce a bright high-resolution image.
In the arrangement shown in Fig. 7, the shutter array 8 is placed at a distance from the CCD sensor device 4 and two lens groups are provided: a first lens group 24 in front of the shutter array 8 focuses the image of the object 14 onto the shutter array, and a second lens group 26 between the shutter array and the CCD sensor 4 focuses the image formed at the shutter array onto the CCD sensor 4. The shutter array 8, for example a ferroelectric LCD shutter array, blocks light or allows it to pass to the sensor array as required.
In the arrangement shown in Fig. 8, a high-speed digital micromirror device (DMD) 28 is disposed at the focal plane of a pair of objective lenses 30, which focus an image of the object 14 onto the surface of the DMD 28. The DMD 28 comprises a randomly accessible array of micromirrors 32 that tilt back and forth under the control of a drive voltage. A typical 0.7-inch DMD array has 1024 x 768 bistable micromirrors and can readily produce more than 16,000 full-array mirror patterns per second.
Each micromirror 32 may be set at a first angle, reflecting incident light to the CCD sensor array 34, or at a second angle, reflecting the light into an optical trap 36. A second lens group 38 between the DMD 28 and the sensor 34 focuses the image formed at the DMD surface onto the sensor 34. In this way the DMD 28 controls the exposure of each pixel to incident light.
The embodiments described above all use dynamic masking techniques, in which light is physically blocked in front of the pixels by, for example, an LCD shutter device or a DMD array. The invention can also use a static on-chip masking technique that simulates physical masking: the charge on a pixel is transferred in sequence to a masked, unexposed area elsewhere on the chip. Fig. 9 is a schematic diagram of an embodiment of a sensor using this technique.
In Fig. 9, the sensor 40 is divided into columns of working pixels 42 and separate columns of masked pixels 44. The masked pixels 44 are shielded from incident light by an opaque mask and therefore take no part in image capture; instead they serve as charge storage, each column of masked pixels 44 being connected to an adjacent column of working pixels 42. In operation, charge is transferred, one column at a time, from an unmasked column of working pixels to the adjacent masked column. The sensor may be configured with four pixel modules as shown in Fig. 9, designated pixel module 1, pixel module 2, pixel module 3 and pixel module 4. The pixel modules are arranged in exposure groups 46, each containing one column of pixels from each module.
The pixels of each module are exposed to light during their exposure time; the charge on the pixels is then transferred to the adjacent column of masked pixels and digitized. Each pixel module repeats this process in sequence, forming four time-separated low-resolution images within each high-resolution full-frame period. These embedded low-resolution images can be played sequentially like a movie or combined to form one high-resolution image.
Once its charge has been transferred to the masked pixels, an unmasked pixel is immediately exposed again to capture the next image.
A static on-chip masking process using a frame-transfer structure will now be described.
Assume a frame-transfer CCD whose image area has M columns, the exposed part of the chip generating one high-resolution image every Ti seconds. Each exposure group consists of N designated adjacent columns, and the charge of the nth column of every group is transferred to the masked area simultaneously.
The charge accumulated on each nth column is transferred to the masked area at time (n x Ti)/N, which ensures that all N columns of each group are transferred within Ti seconds. The process then repeats, so that the total exposure time of each column of pixels is Ti seconds, but the exposures of the columns are staggered.
The charge on any pixel of the nth column in any sub-region can then be calculated by subtracting the values of the adjacent column of pixels from those of the previous column.
An example of the above scheme is as follows. Assume an image area 1000 pixels wide, divided into 250 exposure groups of four columns each, with a full-frame exposure time of 1 second. The first column of each exposure group is transferred to the masked area at time t = 0.25 seconds, the second column at t = 0.5 seconds, the third at t = 0.75 seconds, and the fourth at t = 1.0 seconds. Once the charge in a column has been transferred, the pixels of that column continue to be exposed and transfer to the masked area again 1 second later (i.e. the first column transfers at t = 1.25 seconds, 2.25 seconds, etc., and the second column at t = 1.5 seconds, 2.5 seconds, etc.).
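A sketch of this column-transfer schedule (function and parameter names are assumptions): column c of each group transfers at t = c x Ti/N and then every Ti seconds thereafter.

```python
def transfer_times(col_in_group: int, n_frames: int = 3,
                   Ti: float = 1.0, N: int = 4) -> list[float]:
    """Times (s) at which the given column (1..N) of every group transfers."""
    first = col_in_group * Ti / N
    return [first + k * Ti for k in range(n_frames)]

for c in range(1, 5):
    print(f"column {c} of each group: {transfer_times(c)}")
# column 1: [0.25, 1.25, 2.25], column 2: [0.5, 1.5, 2.5], ...
```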
Of course, the brief occlusion of light can also be achieved by moving charge back and forth between masked and working pixels; another approach is to use a CMOS detector to store the charge generated by the pixels at different times during the capture of one high-resolution image.
Fig. 10 shows another way of implementing the invention. In this embodiment, the sensor device 50 has a 640 x 640 pixel array, and the pixels 52 and their corresponding shutter elements (not shown) are divided into 6,400 pixel groups 54, each having sixty-four pixels arranged in an 8 x 8 array. In use, the sixty-four pixels 52 within each group 54 are exposed sequentially, for example in the order labelled 1 to 64 in Fig. 10. The order of exposure is the same in every pixel group 54, and the arrangement ensures that successively exposed pixels are spatially separated from one another: in this embodiment, the first exposed pixel is in the first column and first row, the second in the fifth column and first row, the third in the first column and fifth row, and so on.
By outputting different combinations of the pixel signals, the trade-off between frame rate and resolution when the sensor device captures images can be adjusted, balancing temporal against spatial resolution. For example, each full-frame image captured by the sensor device may be displayed as one 640 x 640 pixel high-resolution image or as sixty-four sequentially played 80 x 80 pixel low-resolution images. Alternatively, pixels 1 to 4 can be merged to form a block (since they are exposed in adjacent time periods), pixels 5 to 8 can likewise be merged, and so on, so that the sixty-four pixels of each group are merged into sixteen blocks of four. Applying the same operation to every pixel group yields a sixteen-frame sequence of 160 x 160 pixel images.
Of course, pixels with adjacent exposure times can be merged further, giving further combinations of temporal and spatial resolution. For example, as shown in Fig. 10, pixels 1-16 (dot-filled) are merged to form a first module, pixels 17-32 (stripe-filled) a second module, pixels 33-48 (mesh-filled) a third module, and pixels 49-64 (unfilled) a fourth module. The low-resolution images formed from these merged modules yield a four-frame sequence of 320 x 320 pixel images. In general, a square sensor whose shutter device comprises A shutter elements can form an image sequence of 4^m frames whenever the equation 4^m x D^2 = A holds, where D is a positive integer equal to the width (and height) in pixels of each image in the sequence and m is a positive integer. Provided the original shutter groups have enough shutter elements, this allows the user to choose the appropriate spatial and temporal resolution of the low-resolution images after image capture. Of course, since the shutter elements of a merged module are not exposed simultaneously, the images in the merged sequence show some distortion; for example, the merged four-frame 320 x 320 pixel sequence of Fig. 10 may be less sharp than the four-frame sequence obtained with the arrangement of Figs. 2 to 4.
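The valid frame-count/frame-size pairs can be enumerated directly from the relation 4^m x D^2 = A; a sketch for the 640 x 640 sensor of this embodiment (names are illustrative):

```python
import math

def valid_sequences(A: int) -> list[tuple[int, int]]:
    """All (frame count 4**m, frame width D) with 4**m * D**2 == A."""
    results = []
    m = 1
    while 4 ** m < A:
        D2 = A // 4 ** m
        D = math.isqrt(D2)
        if 4 ** m * D2 == A and D * D == D2:
            results.append((4 ** m, D))
        m += 1
    return results

print(valid_sequences(640 * 640))
# [(4, 320), (16, 160), (64, 80), (256, 40), (1024, 20), (4096, 10), (16384, 5)]
```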
If the pixel groups are large enough to ensure sufficient distance between sequentially exposed pixels, variations of the above scheme can also achieve a random placement of pixels. In theory, the group size could be increased until all pixels form a single group, which would allow the user to merge whichever pixels were exposed at adjacent times to obtain the desired image sequence.
The present invention may be used in a variety of applications, some of which are described below.
Consumer cameras
The invention can be applied in cameras used mainly for capturing still images, while also capturing movies at as high a resolution and frame rate as possible. For example, as previously described, such a camera may capture continuous 8.2-megapixel images at five frames per second, or a 0.82-megapixel movie at fifty frames per second.
The advantage for the consumer is that the original high-resolution images can be converted while the data storage requirement (image size in memory) remains the same as for a conventional digital camera. Specific applications include video acquisition and surveillance, where the user can capture a high-resolution image of a detail in the scene and multiple low-resolution images simultaneously.
A block diagram of the basic elements of the camera is shown in Fig. 11. The camera 60 has a lens 62 that focuses the image onto a CCD sensor 64, and an LCD shutter array 66 directly in front of the sensor 64 controls the exposure of each pixel to incident light. The operation of the LCD shutter array 66 is controlled by a central processing unit (CPU) 68, which is also connected to the sensor 64 to read the data it outputs. The data is stored in a storage device 70 such as a flash memory card. The camera includes a shutter release 72 and control switches 74 connected to the CPU 68, for example for setting the operating mode of the camera. These modes may include a "movie/still" mode, in which the camera captures multiple time-separated low-resolution images that may be played like a movie or combined to form a high-resolution still image, and a "full-still" mode, in which all pixels are exposed simultaneously to form a high-resolution image with less motion blur. The camera may also include conventional components such as a mirror, a display unit for checking settings or viewing captured images, lens controls for the aperture, focal length or focus of the lens 62, a flash memory unit, a data output port, and so on.
Scientific imaging
Specialized advanced camera systems place special demands on high-temporal-resolution detectors, which currently offer either very low spatial resolution (e.g., the Marconi CCD39, which operates at 1 kHz but has only 80 x 80 pixels) or a reduced dynamic range (e.g., intensified CCDs or EMCCDs, which use gain mechanisms to overcome the increased read noise caused by high frame rates, at the cost of a drastically reduced dynamic range). The invention enables a conventional low-noise, high-resolution CCD to capture high-speed images.
Other advantages are discussed below.
At present, no imaging modality offers both high temporal resolution and high spatial resolution, yet such a capability is needed in the life sciences. For example, one may need to monitor the movement of the heart muscle at high spatial resolution while simultaneously monitoring cardiac electrical wave activity at high temporal resolution.
A high-resolution megapixel scientific CCD generates one frame in 0.1 to 1 second, depending on the bit depth of the data and the camera's internal circuitry. Most scientific-grade high-resolution CCD and EMCCD systems increase read-out speed using a conventional technique called on-chip binning, which groups adjacent pixels on the chip so that they are read out together at high speed. N x N binning can increase the speed by a factor of N, but this still falls short of what the technique described above can achieve. Furthermore, a binned image contains no high-spatial-resolution data.
The new technique allows exposure groups of arbitrary (including irregular) shape to be defined, letting the researcher choose the appropriate speed and resolution for the specific features in the scene that need to be imaged.
If low spatial resolution is acceptable, a higher frame rate can be achieved: the technique can image at high dynamic range with frame times below a millisecond, far faster than current scientific detectors with low noise and high dynamic range.
Improved signal-to-noise ratio (S/N): another important advantage of the new technique is that, at the same effective frame rate, the full-frame read-out can be slower than with existing on-chip binning. This greatly improves the signal-to-noise ratio at higher speeds, since slower read-out substantially reduces the extent to which read noise degrades the signal.
It is further contemplated that the technique can improve spatial and temporal resolution under new sampling protocols. One possible application is to increase temporal resolution by analysing pixel intensities sampled at irregular times (for example with the Lomb periodogram), which is not possible with current image processing techniques.
Security and machine vision applications
Cameras used to monitor changing three-dimensional scenes, as in general surveillance or robotic/machine vision applications, face several problems. One is that when an object moves relative to the camera, the size of its image on the sensor is inversely proportional to its distance from the camera, so objects close to the camera show significant motion blur that obscures necessary detail. For example, when monitoring people walking along a street or through a building lobby, their faces will be blurred if they come too close to the camera. Reducing the exposure time of the entire image degrades the imaging of distant objects, because less light is collected from them.
The present invention solves these problems by varying the shutter-group size (and hence the temporal resolution, spatial resolution and total exposure time) within a frame, allowing imaging to be optimized for multiple objects in the scene. An object near the camera appears enlarged, which relaxes its spatial-resolution requirement and allows its temporal resolution to be increased. The shutter-group size may be chosen from knowledge of the scene (e.g., for a highway camera mounted high and facing the traffic, objects at the top of the image appear smaller, and gradually varying shutter-group sizes can keep the average number of pixels per vehicle constant). The shutter-group size may also be selected dynamically, by algorithmic methods (e.g., deriving optical flow from motion blur in still images, as described in Horn and Schunck, "Determining Optical Flow", MIT, 1980) or by range-finding devices (e.g., laser rangefinders).
A particular region of interest may also be imaged at a different resolution from the shutter-group pattern used over most of the image. For example, a traffic camera may monitor vehicle motion at low spatial resolution while a regularly shaped group of pixels, with a suitably chosen exposure time, captures a high-resolution image of each vehicle's license plate.
Such a system can monitor both a vehicle's speed and its license plate without a radar device: the camera continuously tracks the vehicle's motion at low spatial resolution (using sufficiently large shutter groups), a computer algorithm calculates the vehicle's speed and dynamically adjusts the shutter groups, and a panoramic image or a high-spatial-resolution image of the license plate is formed as required.