This application claims the benefit of U.S. Provisional Application No. 61/676,289, filed July 26, 2012, U.S. Provisional Application No. 61/790,487, filed March 15, 2013, U.S. Provisional Application No. 61/790,719, filed March 15, 2013, and U.S. Provisional Application No. 61/791,473, filed March 15, 2013, all of which are hereby incorporated by reference in their entirety, including but not limited to those portions that specifically appear hereinafter, with the following exception: in the event that any portion of the above-referenced applications is inconsistent with this application, this application supersedes the above-referenced applications.
Detailed Description
The present disclosure relates to methods, systems, and computer-based products directed to digital imaging that may be primarily applicable to medical applications. In the following description of the present disclosure, reference is made to the accompanying drawings which form a part hereof, and in which is shown by way of illustration embodiments in which the disclosure may be practiced. It is to be understood that other embodiments may be utilized and structural changes may be made without departing from the scope of the present invention.
The luminance-chrominance based color space dates back to the color television era, when color image transmission was required to be compatible with older monochrome cathode ray tubes (CRTs). The luminance component corresponds to the brightness (color-free) aspect of the image data, while the color information is carried in the remaining two chrominance channels. The separation of image data into luminance and chrominance components remains an important process in today's digital imaging systems because it closely parallels the human visual system.
The human retina contains an array of two basic photoreceptor cell types: rod cells and cone cells. Rod cells provide brightness information, and their total spatial density is roughly a factor of 20 greater than that of cone cells. Cone cells are less sensitive and come in three basic types, with peak responses at three different wavelengths. The spectral response of the rod cells, which peaks in the green region, is the basis for calculating the luminance color-space conversion coefficients. Since rod cells have the greater density, the spatial resolution of an image representation matters more for the luminance component than for either chrominance component. Camera designers and image processing engineers seek to exploit this fact in a number of ways, such as by spatially filtering the chrominance channels to reduce noise and by allotting greater relative system bandwidth to the luminance data.
In describing and claiming the subject matter of the present disclosure, the following terminology will be used in accordance with the definitions set out below.
It must be noted that, as used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise.
As used herein, the terms "comprises," "comprising," "includes," "including," "characterized by," and grammatical equivalents thereof are inclusive or open-ended terms that do not exclude additional, unrecited elements or method steps.
As used herein, the phrase "consisting of … …" and its grammatical equivalents excludes any elements or steps not specified in the claims.
As used herein, the phrase "consisting essentially of … …" and its grammatical equivalents limit the scope of the claims to specified substances or steps as well as those substances or steps that do not materially affect the basic and novel characteristics or features of the disclosure as claimed.
As used herein, the term "proximal" shall refer broadly to the concept of the portion closest to the origin.
As used herein, the term "distal" shall generally refer to the opposite of proximal, and thus refers to the concept of a portion further away or farthest away from the origin, depending on the context.
Referring now to the drawings, FIG. 1 illustrates the basic timing of a single frame captured by a conventional CMOS sensor. Co-pending U.S. patent application serial No. 13/952,518, entitled "CONTINUOUS VIDEO IN A LIGHT DEFICIENT ENVIRONMENT," is incorporated by this reference into this disclosure as if fully set forth herein. It should be understood that the x-direction corresponds to time, and the diagonal lines represent the activity of an internal pointer that reads out the data of each frame one row at a time. The same pointer is responsible for resetting each row of pixels for the next exposure period. The net integration time for each row is equivalent, but the rows are staggered in time with respect to one another because of the sequential reset and read process. Therefore, for any scheme in which adjacent frames are required to represent different compositions of light, the only option for keeping each row consistent is to pulse the light during the period between the two readout cycles. More specifically, the maximum available period corresponds to the sum of the blanking time and any time during which optical black or optically blind (OB) rows are serviced at the start or end of the frame.
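As a rough illustration of the constraint just described, the following sketch computes the maximum light-pulse window from hypothetical line and row counts; the parameter names and values are assumptions for illustration only, not figures taken from the disclosure.

```python
# Minimal sketch (hypothetical parameters): with a rolling-shutter readout, the
# largest window in which light can be pulsed without creating row-to-row
# exposure differences is the vertical blanking time plus the time spent
# servicing optical black / optically blind (OB) rows.

def max_pulse_window_us(line_time_us, blanking_lines, ob_lines_start, ob_lines_end):
    """Return the maximum light-pulse window, in microseconds."""
    return line_time_us * (blanking_lines + ob_lines_start + ob_lines_end)

# Illustrative numbers only.
print(max_pulse_window_us(line_time_us=15.0, blanking_lines=30,
                          ob_lines_start=4, ob_lines_end=4))  # -> 570.0
```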
One example illumination sequence is a repeating pattern of four frames (R-G-B-G). This provides greater luminance detail than chrominance, as in a Bayer-pattern color filter. The method is implemented by pulsing the scene with a laser or light-emitting-diode source at high speed, under the control of the camera system, together with a specially designed CMOS sensor with high-speed readout. The main benefit is that the sensor can achieve the same spatial resolution with significantly fewer pixels than a conventional Bayer or 3-sensor camera, so the physical space occupied by the pixel array may be reduced. The actual pulse periods may vary within the repeating pattern, as illustrated in fig. 2. This is useful, for example, for allocating more time to components that require more optical energy or to components with weaker sources. As long as the average captured frame rate is an integer multiple of the required final system frame rate, the data can simply be buffered in the signal processing chain as appropriate; a short sketch of this bookkeeping follows.
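The sketch below illustrates the point about variable pulse periods; the colors and relative period values are hypothetical and only demonstrate that the average captured frame period (and hence rate) is preserved over the repeating pattern.

```python
# Illustrative only: within a repeating R-G-B-G pattern, more pulse time can be
# allocated to components with weaker sources, provided the average captured
# frame rate remains an integer multiple of the required final frame rate.

pattern = [("R", 1.0), ("G", 0.8), ("B", 1.4), ("G", 0.8)]  # relative frame periods

avg_period = sum(t for _, t in pattern) / len(pattern)
print(f"average relative frame period: {avg_period:.2f}")   # 1.00 -> rate preserved
```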
The facility of reducing the CMOS sensor chip area, to the extent allowed by combining all of these methods, is particularly attractive for small-diameter (3-10 mm) endoscopy. In particular, it allows for endoscope designs in which the sensor is located in the space-constrained distal end, thereby greatly reducing the complexity and cost of the optics while providing high-definition video. A consequence of this method is that reconstruction of each final full-color image requires fusing data from three separate snapshots in time. Since the edges of objects appear at slightly different locations within each captured component, any motion within the scene, relative to the frame of reference of the endoscope, will generally degrade the perceived resolution. In the present disclosure, means are described to reduce this problem by exploiting the fact that spatial resolution is more important for luminance information than for chrominance information.
The essence of the approach is that, instead of emitting light of a single color during each frame, combinations of the three wavelengths are used to provide all of the luminance information within a single frame. The chrominance information is derived from separate frames, using a repeating pattern such as Y-Cb-Y-Cr. While it is possible to provide pure luminance data by judicious choice of the pulse ratios, the same is not true of chrominance; however, a solution to this is proposed in the present disclosure.
In an embodiment, as illustrated in fig. 3A, the endoscope system 300a may include a pixel array 302a having uniform pixels, and the system 300a may be operable to receive Y (luminance) pulses 304a, Cb (chrominance blue) pulses 306a, and Cr (chrominance red) pulses 308a.
In an embodiment, as illustrated in fig. 3B, the endoscope system 300B may include a pixel array 302B having uniform pixels, and the system may be operable to receive Y (luminance) pulses 304B, λY + Cb (modulated chrominance blue) pulses 306B, and δY + Cr (modulated chrominance red) pulses 308B.
In an embodiment, as illustrated in fig. 3C, the endoscope system 300C may include a pixel array 302C having pixels of alternating kinds arranged in a checkerboard pattern, and the system may be operable to receive Y (luminance) pulses 304C, λY + Cb (modulated chrominance blue) pulses 306C, and δY + Cr (modulated chrominance red) pulses 308C. Within the luminance frames, two exposure periods are used for the purpose of extending the dynamic range (YL and YS, corresponding to the long exposure and the short exposure).
Fig. 4 illustrates the overall timing relationship between the pulse mixing of three wavelengths and the readout period of a monochrome CMOS sensor over a 4-frame period.
In essence, the arrangement comprises three monochromatic pulsed light sources and a specially designed monochrome CMOS image sensor under fast camera control, capable of delivering a final progressive video rate of 60 Hz or more. A periodic sequence of monochromatic red, green, and blue frames (e.g., an R-G-B-G pattern) is captured and assembled into sRGB images in the image signal processor (ISP) chain. The light pulse and sensor readout timing relationship is shown in fig. 5. To provide pure luminance information within a single frame, all three sources are pulsed in unison, with their optical energies adjusted according to the color transformation coefficients that convert from RGB space to YCbCr (in accordance with the ITU-R BT.709 high definition standard), as given in the equations below.
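To make the energy-apportionment idea concrete, the following is a minimal sketch that splits a hypothetical total optical energy budget across the red, green, and blue sources in proportion to the BT.709 luma coefficients quoted below; the function name and the normalization choice are assumptions for illustration, not part of the disclosure.

```python
# Sketch: during a luminance (Y) frame, all three sources are pulsed together
# with optical energies proportional to the ITU-R BT.709 luma coefficients, so
# that the monochrome sensor integrates luminance directly.

BT709_Y = {"R": 0.183, "G": 0.614, "B": 0.062}

def luminance_pulse_energies(total_energy):
    """Apportion a total optical energy budget across R, G, B per the Y weights."""
    norm = sum(BT709_Y.values())
    return {color: total_energy * weight / norm for color, weight in BT709_Y.items()}

print(luminance_pulse_energies(total_energy=1.0))
# {'R': 0.213..., 'G': 0.714..., 'B': 0.072...}
```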
it should be understood that other color space conversion standards may be implemented by the present disclosure, including, but not limited to, the ITU-R BT.709 high definition standard, the ITU-R BT.601 standard, and the ITU-R BT.2020 standard.
If white balance is being performed in the illumination domain, then this modulation is applied in addition to the white balance modulation.
The two chrominance components must also be provided in order to complete a full color image. However, the same algorithm that is applied for luminance cannot be applied directly for chrominance, because chrominance is signed, as reflected by the fact that some of the RGB coefficients are negative. The solution is to add luminance of sufficient magnitude that all of the resulting pulse energies become positive. As long as the color fusion process in the ISP knows the composition of the chrominance frames, they can be decoded by subtracting the appropriate amount of luminance taken from a neighboring frame. The pulse energies are given by:
Y=0.183·R+0.614·G+0.062·B
Cb=λ·Y-0.101·R-0.339·G+0.439·B
Cr=δ·Y+0.439·R-0.399·G-0.040·B
where λ and δ are the fractions of luminance added to the Cb and Cr pulses, respectively, chosen so that all of the resulting pulse energies are non-negative.
the timing for the overall situation is shown in fig. 6A. As a result, if the factor λ is equal to 0.552, both the red and green components are strictly cancelled, where the Cb information can be provided with pure blue light. Similarly, setting δ to 0.650 cancels out the blue and green components of Cr for becoming pure red. This particular example is illustrated in fig. 6B, which also depicts λ and δ as 1/28Integer multiples of. This is a convenient approximation for digital frame reconstruction (see discussion below).
Referring to fig. 7, the overall timing for this process is illustrated. The exposure periods of the two pixel flavors are controlled by two internal signals within the image sensor (depicted in the figure as TX1 and TX2). This can in fact be done while simultaneously extending the dynamic range of the luminance frames, where it is most desirable, because the two integration times can be adjusted on a frame-by-frame basis (see figs. 3A to 3C). The advantage is that color motion artifacts are less of a problem when all of the data is derived from two frames rather than three. There is, of course, a consequent loss of spatial resolution for the chrominance data, but for the reasons discussed earlier that has a negligible impact on image quality.
An inherent property of a monochrome wide-dynamic-range array is that the pixels with long integration times must integrate a superset of the light seen by the short-integration-time pixels. Co-pending U.S. patent application No. 13/952,564, entitled "WIDE DYNAMIC RANGE USING MONOCHROMATIC SENSOR," is incorporated by reference into this disclosure as if fully set forth herein. For regular wide-dynamic-range operation in the luminance frames, that property is desirable. For the chrominance frames, it means that the pulsing must be controlled in conjunction with the exposure periods so as to provide, for example, λY + Cb from the start of the long exposure, and to switch to δY + Cr at the point at which the short-exposure pixels are turned on (both pixel types having their charges transferred at the same time). The two flavors are then separated during color fusion, as described later. Fig. 8 shows a timing diagram for this particular example.
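The timing sketch below restates that relationship in arbitrary units; the specific start and stop times are hypothetical, and only the ordering (λY + Cb from the start of the long exposure, δY + Cr once the short-exposure pixels turn on) reflects the description above.

```python
# Hedged timing sketch (arbitrary time units): long-exposure pixels integrate
# both illumination mixtures, short-exposure pixels integrate only the second.

long_exposure  = (0.0, 10.0)   # hypothetical start/stop of the long integration
short_exposure = (6.0, 10.0)   # hypothetical start/stop of the short integration

pulses = [
    ("lambda*Y + Cb", long_exposure[0], short_exposure[0]),
    ("delta*Y + Cr",  short_exposure[0], long_exposure[1]),
]
for name, t_start, t_stop in pulses:
    print(f"{name:>14}: {t_start:4.1f} .. {t_stop:4.1f}")
# Long pixels therefore hold (lambda*Y + Cb) + (delta*Y + Cr); short pixels hold
# only (delta*Y + Cr), which is the basis of the separation described later.
```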
A typical ISP chain first applies any necessary sensor and optical corrections (such as defective pixel elimination and lens shading correction), followed by white balance, demosaic/color fusion, and color correction.
There may typically also be some operations (e.g., edge enhancement) and/or adjustments (e.g., saturation) performed in an alternative color space, such as YCbCr or HSL, before gamma is finally applied to place the data in the standard sRGB space. Fig. 9 depicts a basic ISP core suitable for the R-G-B-G pulsing scheme. In this example, the data is converted to YCbCr in order to apply edge enhancement in the luminance plane and filtering of the chrominance, and is then converted back to linear RGB.
In the case of the Y-Cb-Y-Cr pulsing scheme, the image data is already in YCbCr space following color fusion. Therefore, in this case it makes sense to perform the luminance- and chrominance-based operations up front, before converting back to linear RGB to perform the color correction. See fig. 10.
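To summarize the ordering difference between figs. 9 and 10, the lists below sketch the two pipelines as plain stage labels; the exact placement of white balance and of the individual corrections is an assumption for illustration rather than a reading of the figures.

```python
# Illustrative stage orderings only; names are labels, not functions of any ISP.

isp_rgbg = [
    "sensor/optical corrections", "white balance", "demosaic",
    "color correction", "RGB -> YCbCr", "edge enhancement / chroma filtering",
    "YCbCr -> linear RGB", "gamma", "sRGB output",
]

isp_ycbycr = [
    "sensor/optical corrections", "color fusion (data already YCbCr)",
    "edge enhancement / chroma filtering", "YCbCr -> linear RGB",
    "color correction", "gamma", "sRGB output",
]

print(" -> ".join(isp_rgbg))
print(" -> ".join(isp_ycbycr))
```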
The color fusion process is more straightforward than the demosaic process required by a Bayer pattern, since there is no spatial interpolation. It does, however, require buffering of frames so that all of the necessary information is available for each pixel, as represented in fig. 11. FIG. 12A shows an overview of the data pipeline for the Y-Cb-Y-Cr pattern, in which a full color image is produced for every two raw captured frames. This is achieved by using each chrominance sample twice. FIG. 12B shows a specific example in which a 120 Hz frame capture rate provides 60 Hz final video.
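A minimal sketch of that cadence follows: each luminance frame is combined with the most recently captured Cb and Cr frames, so that, once the pipeline is primed, one full-color output is produced for every two captured frames. The frame labels and the simple "most recent sample" policy are assumptions for illustration.

```python
# Sketch of the Y-Cb-Y-Cr reconstruction cadence: each chrominance sample is
# reused twice, giving one output frame for every two captured frames
# (e.g., 120 Hz capture -> 60 Hz video) once Cb and Cr are both available.

captured = ["Y0", "Cb0", "Y1", "Cr0", "Y2", "Cb1", "Y3", "Cr1"]

latest = {"Cb": None, "Cr": None}
outputs = []
for frame in captured:
    kind = frame.rstrip("0123456789")
    if kind in latest:
        latest[kind] = frame                      # update the chrominance cache
    elif latest["Cb"] and latest["Cr"]:           # a Y frame completes an output
        outputs.append((frame, latest["Cb"], latest["Cr"]))

print(outputs)   # [('Y2', 'Cb0', 'Cr0'), ('Y3', 'Cb1', 'Cr0')]
```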
The linear Y, Cb and Cr components for each pixel may be calculated by:
Y_i = x_{i,n-1} - K
Cb_i = 2^(m-1) + (x_{i,n} - K) - λ·(x_{i,n-1} - K), when frame n is a 'Cb' frame
Cr_i = 2^(m-1) + (x_{i,n} - K) - δ·(x_{i,n-1} - K), when frame n is a 'Cr' frame
where x_{i,n} is the input data for pixel i in frame n, m is the pipeline bit width of the ISP, and K is the ISP black offset level (if available) at the input to the color fusion block. Since chrominance is signed, it is conventionally centered at 50% of the digital dynamic range, i.e., at 2^(m-1).
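A minimal per-pixel sketch of this arithmetic follows. It mirrors the reconstructed equations above, so the exact offsets should be treated as assumptions; λ and δ default to the 1/2^8-quantized approximations of 0.552 and 0.650 mentioned earlier, and the sample values are arbitrary.

```python
# Per-pixel color fusion sketch: recover (Y, C) from a chrominance frame sample
# and the co-sited sample of the preceding luminance frame. All offsets follow
# the reconstructed equations above and are assumptions, not a fixed ISP spec.

def fuse_pixel(x_chroma, x_luma_prev, kind, m=12, K=64,
               lam=141 / 256, delta=166 / 256):
    """kind is 'Cb' or 'Cr'; m is the pipeline bit width, K the black offset."""
    y = x_luma_prev - K
    factor = lam if kind == "Cb" else delta
    c = 2 ** (m - 1) + (x_chroma - K) - factor * y   # chrominance centred mid-range
    return y, c

print(fuse_pixel(x_chroma=620, x_luma_prev=800, kind="Cb"))
```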
As described earlier, if two exposures are used to provide the two chrominance components in the same frame, the two pixel flavors are separated into two buffers. The empty pixels are then filled in using, for example, linear interpolation. At this point, one buffer contains a full image of δY + Cr data and the other contains δY + Cr + λY + Cb. The δY + Cr buffer is subtracted from the second buffer to give λY + Cb. The appropriate proportion of luminance data taken from the Y frames is then subtracted for each.
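The NumPy sketch below walks through that separation on a toy checkerboard frame; the checkerboard layout, the simple neighbour-averaging fill (standing in for "linear interpolation"), and all numeric values are assumptions for illustration.

```python
# Two-exposure chrominance separation: short-exposure pixels hold delta*Y + Cr,
# long-exposure pixels hold delta*Y + Cr + lambda*Y + Cb. Missing checkerboard
# positions are filled by averaging the valid 4-neighbours, then the buffers
# are subtracted to isolate lambda*Y + Cb.

import numpy as np

def split_and_fill(frame, long_mask):
    """Split a dual-exposure chrominance frame into two full-resolution buffers."""
    long_buf = np.where(long_mask, frame, np.nan)
    short_buf = np.where(~long_mask, frame, np.nan)
    for buf in (long_buf, short_buf):
        missing = np.isnan(buf)
        padded = np.pad(buf, 1, mode="edge")
        vals = np.nan_to_num(padded)                 # NaN -> 0 for the sums
        ok = (~np.isnan(padded)).astype(float)       # 1 where a real sample exists
        up, dn = (slice(None, -2), slice(1, -1)), (slice(2, None), slice(1, -1))
        lf, rt = (slice(1, -1), slice(None, -2)), (slice(1, -1), slice(2, None))
        total = vals[up] + vals[dn] + vals[lf] + vals[rt]
        count = ok[up] + ok[dn] + ok[lf] + ok[rt]
        buf[missing] = (total / np.maximum(count, 1.0))[missing]
    return long_buf, short_buf

frame = np.arange(16, dtype=float).reshape(4, 4)            # toy captured frame
long_mask = np.indices((4, 4)).sum(axis=0) % 2 == 0         # checkerboard layout
long_buf, short_buf = split_and_fill(frame, long_mask)

lamY_plus_Cb = long_buf - short_buf    # remove the delta*Y + Cr content
print(lamY_plus_Cb)
```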
Embodiments of the present disclosure may comprise or utilize a special purpose or general purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed in greater detail below. Embodiments within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example and not limitation, embodiments of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) include RAM, ROM, EEPROM, CD-ROM, solid state drives ("SSDs") (e.g., RAM-based), flash memory, phase-change memory ("PCM"), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
A "network" is defined as one or more data chains capable of transporting electronic data between computer systems and/or modules and/or other electronic devices. In an embodiment, the sensor and camera control unit are connected in a network for communication with each other and with other components connected to them by the network to which they are connected. When information is transferred or provided over a network or another communications connection (either hardwired or wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data chains which can be used to carry out desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Further, upon reaching various computer system components (such as those illustrated in FIG. 13), program code means in the form of computer-executable instructions or data structures can be transferred automatically from transmission media to computer storage media (devices) (or vice versa). For example, computer-executable instructions or data structures received over a network or data link can be buffered in RAM within a network interface module (e.g., a "NIC"), and then eventually transferred to computer system RAM and/or to non-volatile computer storage media (devices) at a computer system. RAM can also include solid state drives (SSDs or PCIx-based real-time memory tiered storage, such as FusionIO). Thus, it should be understood that computer storage media (devices) can be included in computer system components that also (or even primarily) utilize transmission media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer-executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined herein is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementation.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including personal computers, desktop computers, portable computers, message processors, control units, camera control units, hand-held devices, cell phones, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, personal digital assistants (PDAs), tablets, pagers, routers, switches, and the like. It should be noted that any of the above-mentioned computing devices may be provided by or located within a brick and mortar location. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Further, the functions described herein may be performed in one or more of hardware, software, firmware, digital components, or analog components, as appropriate. For example, one or more Application Specific Integrated Circuits (ASICs) or field programmable gate arrays may be programmed to perform one or more of the systems and procedures described herein. Throughout the following description, certain terms are used to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function.
Fig. 13 is a block diagram illustrating an exemplary computing device 100. Computing device 100 may be used to execute a variety of programs such as those discussed herein. The computing device 100 may act as a server, a client, or any other computing entity. The computing device may perform various monitoring functions as discussed herein and may execute one or more application programs such as the application programs discussed herein. The computing device 100 may be any of a variety of types of computing devices, such as a desktop computer, a notebook computer, a server computer, a handheld computer, a camera control unit, a tablet computer, and so forth.
Computing device 100 includes one or more processors 102, one or more memory devices 104, one or more interfaces 106, one or more mass storage devices 108, one or more input/output (I/O) devices 110, and a display device 130, all of which are coupled to a bus 112. The processor 102 includes one or more processors or controllers that execute instructions stored in the memory device 104 and/or the mass storage device 108. The processor 102 may also include various types of computer-readable media, such as cache memory.
The memory device(s) 104 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 114) and/or non-volatile memory (e.g., read-only memory (ROM) 116). The memory device(s) 104 may also include rewritable ROM, such as flash memory.
The mass storage device 108 includes a variety of computer-readable media, such as magnetic tape, magnetic disk, optical disk, solid state memory (e.g., flash memory), and so forth. As shown in FIG. 13, the particular mass storage device is a hard disk drive 124. The various drives may also be included in the mass storage device 108 to enable reading from and/or writing to a variety of computer readable media. The mass storage device 108 includes removable media 126 and/or non-removable media.
The I/O devices 110 include a variety of devices that enable data and/or other information to be input to the computing device 100 or retrieved from the computing device 100. Exemplary I/O devices 110 include digital imaging devices, electromagnetic sensors and transmitters, pointer control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, Charge Coupled Devices (CCDs), or other image capture devices, and the like.
Display device 130 comprises any type of device capable of displaying information to one or more users of computing device 100. Examples of display device 130 include a monitor, a display terminal, a video projection device, and the like.
The one or more interfaces 106 include a variety of interfaces that enable the computing device 100 to interact with other systems, devices, or computing environments. Exemplary interfaces 106 may include any number of different network interfaces 120, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interfaces include a user interface 118 and a peripheral device interface 122. The interfaces 106 may also include one or more user interface elements 118, as well as one or more peripheral interfaces such as interfaces for printers, pointing devices (mouse, track pad, etc.), keyboards, and the like.
The bus 112 enables the processor 102, the memory device 104, the interface 106, the mass storage device 108, and the I/O devices to communicate with each other and with other devices or components coupled to the bus 112. Bus 112 represents one or more of several types of bus structures, such as a system bus, a PCI bus, an IEEE1394 bus, a USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown here in discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of the computing device 100, and are executed by the processor 102. Alternatively, the systems and processes described herein may be implemented in a combination of hardware, software, and/or firmware. For example, one or more Application Specific Integrated Circuits (ASICs) may be programmed to execute one or more of the systems and programs described herein.
Fig. 14A and 14B illustrate perspective and side views, respectively, of an embodiment of a monolithic sensor 2900, wherein the monolithic sensor 2900 has a plurality of pixel arrays for producing three-dimensional images in accordance with the teachings and principles of the present disclosure. Such an implementation may be desirable for three-dimensional image capture, where the two pixel arrays 2902 and 2904 may be offset during use. In another embodiment, first pixel array 2902 and second pixel array 2904 may be dedicated to receiving a predetermined range of wavelengths of electromagnetic radiation, where the first pixel array is dedicated to a different range of wavelengths of electromagnetic radiation than the second pixel array.
Fig. 15A and 15B illustrate perspective and side views, respectively, of an embodiment of an imaging sensor 3000 built on a plurality of substrates. As illustrated, a plurality of pixel columns 3004 forming the pixel array are located on the first substrate 3002, and a plurality of circuit columns 3008 are located on the second substrate 3006. Also illustrated are the electrical connections and communications between one column of pixels and its associated or corresponding circuit column. In one embodiment, an image sensor that might otherwise be manufactured with its pixel array and supporting circuitry on a single, monolithic substrate/chip may instead have the pixel array separated from all or a majority of the supporting circuitry. The present disclosure may use at least two substrates/chips that are stacked together using three-dimensional stacking technology. The first 3002 of the two substrates/chips may be processed using an image CMOS process. The first substrate/chip 3002 may consist either exclusively of a pixel array or of a pixel array surrounded by limited circuitry. The second or subsequent substrate/chip 3006 may be processed using any process, and does not have to be from an image CMOS process. The second substrate/chip 3006 may be, but is not limited to, a highly dense digital process in order to integrate a variety and number of functions in a very limited space or area on the substrate/chip, a mixed-mode or analog process in order to integrate, for example, precise analog functions, an RF process in order to implement wireless capability, or a MEMS (Micro-Electro-Mechanical Systems) process in order to integrate MEMS devices. The image CMOS substrate/chip 3002 may be stacked with the second or subsequent substrate/chip 3006 using three-dimensional techniques. The second substrate/chip 3006 may support most or a majority of the circuitry that would otherwise have been implemented as peripheral circuitry in the first image CMOS chip 3002 (if implemented on a monolithic substrate/chip) and that would therefore have increased the overall system area while keeping the pixel array size constant and optimized to the fullest extent possible. Electrical connection between the two substrates/chips may be achieved through interconnects 3003 and 3005, which may be wire bonds, bumps, and/or through-silicon vias (TSVs).
Fig. 16A and 16B illustrate perspective and side views, respectively, of an embodiment of an imaging sensor 3100 having a plurality of pixel arrays for producing a three-dimensional image. The three-dimensional image sensor may be built on a plurality of substrates and may comprise the plurality of pixel arrays and other associated circuitry, wherein a plurality of pixel columns 3104a forming the first pixel array and a plurality of pixel columns 3104b forming the second pixel array are located on respective substrates 3102a and 3102b, and a plurality of circuit columns 3108a and 3108b are located on a separate substrate 3106. Also illustrated are the electrical connections and communications between the columns of pixels and the associated or corresponding circuit columns.
It is to be understood that the teachings and principles of the present disclosure may be used in a re-usable device platform, a limited use device platform, a re-posable use device platform, or a single-use/disposable device platform, without departing from the scope of the disclosure. It should be appreciated that in a re-usable device platform the end user is responsible for cleaning and disinfecting the device. In a limited use device platform, the device may be used some specified number of times before becoming inoperable. A typical new device is delivered sterile, with additional uses requiring the end user to clean and disinfect prior to each additional use. In a re-posable use device platform, a third party may reprocess (e.g., clean, package, and sterilize) a single-use device for additional uses at a lower cost than a new unit. In a single-use/disposable device platform, the device is provided sterile to the operating room and is used only once before being disposed of.
In addition, the teachings and principles of the present disclosure may include any and all wavelengths of electromagnetic energy, including visible and non-visible light spectra such as Infrared (IR), Ultraviolet (UV), and X-rays.
In an embodiment, a method of digital imaging for use with an endoscope in an ambient light deficient environment may comprise: actuating an emitter to emit a plurality of pulses of electromagnetic radiation to cause illumination within the light deficient environment, wherein the pulses comprise a first pulse within a first wavelength range comprising a first portion of the electromagnetic spectrum, a second pulse within a second wavelength range comprising a second portion of the electromagnetic spectrum, and a third pulse within a third wavelength range comprising a third portion of the electromagnetic spectrum; pulsing said plurality of pulses at a predetermined interval; sensing reflected electromagnetic radiation from said pulses with a pixel array to create a plurality of image frames, wherein the pixel array is read at an interval that corresponds to the pulse interval of the laser emitter; and creating a stream of images by combining the plurality of image frames to form a video stream. In an embodiment, the first pulse comprises chrominance red. In an embodiment, the second pulse comprises chrominance blue. In an embodiment, the third pulse comprises a luminance pulse. In an embodiment, the luminance pulse is created by pulsing a red pulse, a blue pulse, and a green pulse. In such an embodiment, the red pulse is modulated relative to the blue and green pulses such that the red pulse has a positive chrominance value. In an embodiment, the blue pulse is modulated relative to the red and green pulses such that the blue pulse has a positive chrominance value. In an embodiment, the green pulse is modulated relative to the blue and red pulses such that the green pulse has a positive chrominance value. In an embodiment, the method further comprises modulating the plurality of pulses by a value such that the chrominance value of each pulse is positive. In an embodiment, the method further comprises removing the modulation values during construction of the image stream. In such an embodiment, the modulation process comprises adding a luminance value to the plurality of pulses. In an embodiment, the luminance value used for modulation is an integer multiple of (1/2)^8. In an embodiment, a modulation luminance value of 0.552 cancels out the red and green chrominance. In an embodiment, a modulation luminance value of 0.650 cancels out the blue and green chrominance. In an embodiment, the method further comprises reducing noise while creating the stream of image frames. In an embodiment, the method further comprises adjusting white balance while creating the stream of image frames. In an embodiment, the third pulse is a luminance pulse that is pulsed twice as often as the first and second pulses. In an embodiment, the luminance pulse is sensed by long-exposure and short-exposure pixels within the pixel array. In an embodiment, the method further comprises sensing data generated by a plurality of pixel arrays and combining said data into a three-dimensional image stream.
It will be appreciated that the various features disclosed herein provide significant advantages and advances in the art. The following claims are examples of some of those features.
In the foregoing detailed description of the present disclosure, various features of the present disclosure are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed disclosure requires more features than are expressly recited in each claim. Rather, the inventive aspects lie in less than all features of a single foregoing disclosed embodiment.
It is to be understood that the above-described arrangements are only illustrative of the application of the principles of the present disclosure. Numerous modifications and alternative arrangements may be devised by those skilled in the art without departing from the spirit and scope of the present disclosure, and the scope of the present disclosure and the appended claims are intended to cover such modifications and arrangements.
Thus, while the disclosure herein has been shown in the drawings and described above with particularity and detail, it will be apparent to those of ordinary skill in the art that numerous modifications (including, but not limited to, variations in size, materials, shape, form, function, and manner of operation, assembly, and use) may be made without departing from the principles and concepts set forth herein.
Further, where appropriate, the functions described herein may be performed in one or more of the following: hardware, software, firmware, digital components, or analog components. For example, one or more Application Specific Integrated Circuits (ASICs) may be programmed to execute one or more of the systems and programs described herein. Certain terms may be used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name but not function.