Background
Virtual reality, or VR, is generally defined as a realistic and immersive simulation of a three-dimensional environment, created using interactive software and hardware and experienced or controlled by movement of the body. A person using a virtual reality device is typically able to look around an artificially created three-dimensional environment, walk around in it, and interact with features or objects depicted on a screen or in goggles. Virtual reality artificially creates a sensory experience, which may include visual, tactile, auditory, and, less commonly, olfactory components.
Augmented reality (AR) is a technology that adds computer-generated augmentation on top of existing reality to make it more meaningful through the ability to interact with it. AR has been developed into applications used on mobile devices to blend digital components into the real world in such a way that they augment each other but can also be easily told apart. AR technology is rapidly becoming mainstream, for example, to display a score overlay on a televised sports game viewed on a mobile device, or to pop up a 3D email, photo, or text message. AR is also being used by the military, and the technology industry is doing exciting and revolutionary things with holograms and motion-activated commands.
The delivery methods of virtual reality and augmented reality are also different. As of 2016, virtual reality is mostly delivered on computer display screens, projector screens, or through virtual reality headsets (also known as head-mounted displays or HMDs). HMDs are typically in the form of head-mounted goggles with a screen in front of the eyes. Virtual reality actually brings the user into the digital world by cutting off external stimuli; in this way, the user focuses only on the digital content displayed in the HMD. Augmented reality is increasingly used on mobile devices, such as notebook computers, smartphones, and tablet computers, to change the way the real world and digital images and graphics intersect and interact.
In fact, VR and AR are not always opposites, because they do not always operate independently of each other and are often blended together to create a more immersive experience. For example, haptic feedback, the vibrations and sensations added to interaction with graphics, is considered an augmentation; however, it is commonly used within virtual reality scenes to make the experience more realistic through touch.
Virtual reality and augmented reality are prominent examples of experiences and interactions driven by the desire to become immersed in a simulated world for entertainment and play, or to add new dimensions of interaction between digital devices and the real world. Undoubtedly, they open up both real and virtual worlds, whether used alone or blended together.
Fig. 1A shows exemplary eyewear for delivering or displaying VR or AR applications that is common in the market today. Regardless of the design, the goggles appear bulky and cumbersome and inconvenience the user when worn. Furthermore, most goggles are not see-through; in other words, while wearing the goggles, the user is unable to see or do anything else. Thus, there is a need for an apparatus that can display VR and AR while still allowing the user to perform other tasks when needed.
Various wearable devices are being developed for VR/AR and holographic applications. Fig. 1B shows a simplified diagram of the HoloLens from Microsoft. It weighs 579 g (1.2 lbs), and at this weight the wearer will feel uncomfortable after wearing it for a period of time. In fact, the products available on the market are generally heavy and bulky compared to normal spectacles (25 g-100 g). Microsoft is reported to be supplying HoloLens-based wearable devices to the army. If actually issued to soldiers, the weight of the wearable device would likely greatly affect their movement, especially on a battlefield that demands rapid maneuvers. Thus, there is additionally a need for a wearable AR/VR viewing or display device that looks similar to a pair of ordinary eyeglasses but also allows for a smaller footprint, enhanced impact performance, low-cost packaging, and easier manufacturing processes.
Many eyeglass-type display devices use a common design that places an image-forming component (e.g., LCoS) in front of or near the lens frame, desirably reducing image transmission loss and using fewer components. However, such designs often unbalance the eyeglass-type display, with the front portion much heavier than the rear portion, thereby adding stress on the nose. Thus, there remains a need to disperse the weight of such display devices when worn by a user.
Regardless of the design of the wearable display device, there are still many components, wires, and even batteries that must be used to make the display device functional and operational. While much effort has been made to move as many parts as possible into an attachable device or housing that drives the display device from the user's waist or pocket, necessary parts such as copper wires must still be used to transmit the various control signals and image data. The wires, typically in the form of cables, have a weight that increases the stress on the wearer of such display devices. Thus, there remains a need for a transmission medium that is as light as possible without sacrificing the desired functionality.
There are many other needs that, although not individually listed, will be readily appreciated by those skilled in the art to be clearly met by one or more embodiments of the invention described in detail herein.
Disclosure of Invention
This section is intended to outline some aspects of the invention and to briefly introduce some preferred embodiments. Simplifications or omissions may be made in this section as well as in the abstract and title to avoid obscuring the purpose of this section, abstract and title. Such simplification or omission is not intended to limit the scope of the present invention.
The present invention relates generally to the architecture and design of wearable devices that can be used for virtual reality and augmented reality applications. According to one aspect of the invention, a display device is made in the form of a pair of eyeglasses and includes a minimum number of parts to reduce its complexity and weight. A separate shell or housing is provided that is portable or attachable to a user (e.g., in a pocket or on a belt). The housing contains all the parts and circuitry necessary to generate content for virtual reality and augmented reality applications, leaving a minimum number of parts on the glasses and thus giving the glasses a smaller footprint, enhanced impact performance, lower packaging cost, and easier manufacturing. The content is optically picked up and delivered to the glasses through the optical fibers in a fiber optic cable, where the content is projected onto respective special lenses for display in front of the wearer's eyes.
According to another aspect of the invention, the glasses contain no electronic components and are coupled to the housing by a transmission line comprising one or more optical fibers (the singular and plural may hereinafter be used interchangeably), wherein the optical fibers are responsible for transporting the content, or optical image, from one end of the optical fibers to the other end thereof by total internal reflection within the optical fibers. The optical image is picked up by a focusing lens from a micro-display in the housing.
According to yet another aspect of the invention, the optical image is transported in the fiber at a lower resolution but at twice the normal refresh rate (e.g., 120 Hz versus 60 Hz), wherein two successive lower-resolution image frames are combined at the exiting end of the fiber to produce a higher-resolution image, the combined image being refreshed at the generally normal refresh rate.
According to yet another aspect of the invention, each lens comprises a prism so formed that it propagates an optical image projected onto one edge of the prism along an optical path where a user can see an image formed from the optical image. The prism is also integrated with or stacked on an optical correction lens, complementary or reciprocal in shape to the prism, to form an integrated lens of the eyeglasses. The optical correction lens is provided to correct the optical path from the prism, allowing a user to view through the integrated lens without optical distortion.
According to yet another aspect of the invention, one exemplary prism is a waveguide. Each of the integrated lenses includes an optical waveguide that propagates an optical image projected onto one end of the waveguide to the other end through an optical path in which an image formed from the optical image is viewable by a user. The waveguide may also be integrated with or stacked on an optical correction lens to form an integrated lens of the eyeglass.
According to yet another aspect of the invention, the integrated lens may also be coated with a multilayer film having optical properties to enhance the optical image in front of the user's eyes.
According to yet another aspect of the invention, the glasses include several electronic devices (e.g., sensors or microphones) to enable various interactions between the wearer and the displayed content. Signals captured by such a device (e.g., a depth sensor) are transmitted to the housing by wireless means (e.g., RF or Bluetooth) to eliminate any wired connection between the glasses and the housing.
According to yet another aspect of the invention, an optical conduit is used to convey an optical image received from an image source (e.g., a micro-display). The optical conduit is enclosed in or integrated with a temple of a display device. Depending on the embodiment, the optical conduit, comprising a bundle or array of optical fibers, may be twisted, thinned, or otherwise deformed to fit the fashion design of the temple while delivering the optical image from one end of the temple to the other.
According to yet another aspect of the invention, the portable device may be implemented as a stand-alone device or as a docking unit to receive the smart phone. The portable device is primarily a control box connected to a network, such as the internet, and generates control and command signals when controlled by a user. When the smartphone is received in the docking unit, many of the functions provided in the smartphone, such as the network interface and touch screen, may be used to receive input from the user.
The present invention may be implemented as part of an apparatus, a method, or a system. Different embodiments may yield different benefits, objects, and advantages. In one embodiment, the present invention is a display device comprising: an eyeglass frame; at least one integrated lens comprising an optical waveguide lens, wherein the integrated lens is framed in the eyeglass frame; at least one temple attached to the eyeglass frame; and a set of optical fibers having a first end and a second end, wherein the first end receives a sequence of two-dimensional optical images that are conveyed in the optical fibers from the first end to the second end by total internal reflection, and wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical images conveyed to the integrated lens, the two-dimensional optical images being formed in the optical waveguide lens to be seen by a viewer looking at the integrated lens. In one embodiment, the data image producing the two-dimensional optical images is at a first refresh rate and a first resolution, and two successive two-dimensional optical images are displayed in the integrated lens, resulting in a combined composite optical image at a second refresh rate and a second resolution, wherein the first refresh rate = 2 × the second refresh rate, and the first resolution = 1/2 × the second resolution (e.g., first refresh rate = 120 Hz, first resolution = 640x480).
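Written compactly (using f for refresh rate and r for linear resolution as our own shorthand, not notation taken from the claims), the relation recited above is:

    f_{\text{first}} = 2\, f_{\text{second}}, \qquad r_{\text{first}} = \tfrac{1}{2}\, r_{\text{second}} \quad (\text{e.g., } f_{\text{first}} = 120\ \text{Hz},\ f_{\text{second}} = 60\ \text{Hz},\ r_{\text{first}} = 640 \times 480)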
In another embodiment, the present invention is a method for a display device, the method comprising: providing a set of optical fibers having a first end and a second end, wherein the first end receives a sequence of two-dimensional optical images projected thereon and the second end is coupled to an integrated lens, the integrated lens comprising an optical waveguide lens and being framed in an eyeglass frame, at least one temple being attached to the eyeglass frame; transporting the two-dimensional optical images sequentially by total internal reflection within the optical fibers from the first end to the second end; and projecting the two-dimensional optical images into the optical waveguide lens, the two-dimensional optical images being formed in the optical waveguide lens for viewing by a viewer looking at the integrated lens, wherein no other power-driven electronic components are required in the display device to receive the two-dimensional optical images conveyed to the integrated lens.
There are many other objects, together with the foregoing, that are attained in the exercise of the invention in the following description and result in the embodiments illustrated in the accompanying drawings.
Drawings
These and other features, aspects, and advantages of the present invention will become better understood with regard to the following description, appended claims, and accompanying drawings where:
FIG. 1A illustrates exemplary eyewear for delivering or displaying VR or AR applications that is common in the market today;
FIG. 1B shows a simplified diagram of HoloLens from Microsoft;
FIG. 2A illustrates a pair of exemplary glasses that may be used for VR applications in accordance with one embodiment of the present invention;
FIG. 2B shows the use of an optical fiber to transport light from one location to another along a curved path in a more efficient manner by total internal reflection within the optical fiber;
FIG. 2C illustrates two exemplary ways of encapsulating an optical fiber or optical fibers according to one embodiment of the invention;
FIG. 2D illustrates how an image is carried from a micro-display to an imaging medium via a fiber optic cable;
FIG. 2E illustrates a set of exemplary Variable Focus Elements (VFEs) to accommodate adjustment of the projection of an image onto an optical object (e.g., an imaging medium or prism);
FIG. 2F illustrates an exemplary lens that may be used with the eyeglass shown in FIG. 2A, wherein the lens comprises two portions, a prism and an optical correction lens or corrector;
FIG. 2G illustrates internal reflection from multiple sources (e.g., a sensor, an imaging medium, and multiple light sources) in an irregular prism;
FIG. 2H shows a comparison of such an integrated lens with a coin and ruler;
FIG. 2I shows a shirt with a cable enclosed within or attached to the shirt;
FIG. 3A shows how three single color images are visually combined and perceived by human vision as a full color image;
FIG. 3B shows three different color images produced under three light sources at wavelengths λ1, λ2, and λ3, respectively, wherein the imaging medium contains three films, each film coated with one type of phosphor;
FIG. 4 illustrates the use of a waveguide to transport an optical image from one end of the waveguide to the other end thereof;
FIG. 5A shows an exemplary functional block diagram that may be used with a separate shell or housing to generate content regarding virtual reality and augmented reality for display on the exemplary eyewear of FIG. 2A;
FIG. 5B illustrates an embodiment in which an exemplary circuit is used within a single housing device case (also referred to herein as an image engine);
FIG. 5C illustrates an exemplary embodiment of how a user may wear a pair of display glasses designed according to an embodiment of the present invention;
FIG. 5D illustrates an exemplary functional block circuit diagram for the image engine of FIG. 5B, employing the technique disclosed in U.S. Patent No. 10,147,350, according to one embodiment;
FIG. 6A shows an example of an array of multiple pixel cells, each of which is shown with four sub-pixel cells;
FIG. 6B illustrates a concept of generating two frames from an expanded image;
FIG. 6C illustrates an example of expanding an image into a double-sized image with sub-pixel elements by writing each pixel value to all (four) sub-pixel elements in a group, the expanded image then being separated into two frames via two passes of the process flow;
FIG. 6D shows what is meant by separating an image by its light intensity to produce two frames of equal image size;
FIG. 6E shows another embodiment in which an input image is expanded into two considerably reduced and interlaced images;
FIG. 7 illustrates how an optical image is generated using an optical cube in one embodiment; and
FIG. 8 shows display glasses that do not include any other power-driven electronic components to provide images or video to the integrated lens.
Detailed Description
The detailed description of the invention is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations of operations that directly or indirectly resemble the operations of data processing devices coupled to a network. These process descriptions and representations are typically used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art.
Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Moreover, the order of blocks in a process flow diagram or illustration representing one or more embodiments of the invention is not necessarily indicative of any particular order nor does it imply any limitation in the invention.
Embodiments of the present invention are discussed herein with reference to fig. 2A-7. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.
Referring now to the drawings, in which like numerals refer to like parts throughout the several views, Fig. 2A illustrates a pair of exemplary eyeglasses 200 for use in VR/AR applications in accordance with one embodiment of the present invention. The eyeglasses 200 show no significant difference in appearance from a pair of normal eyeglasses but include two flexible cables 202 and 204 extending from temples 206 and 208, respectively. According to one embodiment, each of the two flexible cables 202 and 204 is integrated with, or removably connected at one end thereof to, a corresponding one of the temples 206 and 208 and includes one or more optical fibers. A temple, as the term is used herein, may be understood as a side support member of the eyeglasses.
The flexible cables 202 and 204 are coupled at their other ends to a portable computing device 210, wherein the computing device 210 generates images on a micro-display that are picked up by the cables 202 and 204. An image is transmitted within the flexible cable 202 by total internal reflection in the optical fiber to the other end thereof, where the image is projected onto a lens in the glasses 200.
According to one embodiment, each of the two flexible cables 202 and 204 contains one or more optical fibers. An optical fiber is used to transmit light from one location to another along a curved path in a more efficient manner, as shown in fig. 2B. In one embodiment, the fiber is formed from thousands of strands of very fine quality glass or quartz with a refractive index of about 1.7. Each strand is very thin and is coated with a layer of a material of lower refractive index. The ends of the strands are polished and clamped firmly after careful alignment. When light is incident at one end at a small angle, it is refracted into the strand (or fiber) and strikes the interface between the fiber and the coating. At angles of incidence greater than the critical angle, the light undergoes total internal reflection and is essentially transported from one end to the other, even when the fiber is bent. Depending on the embodiment of the invention, a single optical fiber or a plurality of optical fibers arranged in parallel may be used to transport an optical image projected onto one end of the fiber to the other end thereof. Typically, a higher-resolution image requires more optical fibers to transmit. As will be described below, to keep the number of fibers small, two images of a first (lower) resolution (e.g., two consecutive images) are transmitted at double the refresh rate and combined after transmission to produce a viewable image of a second (higher) resolution.
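For reference, the critical angle follows from Snell's law. A minimal worked example, using the core index n_1 ≈ 1.7 given above and an illustrative cladding index n_2 = 1.5 (the cladding value is our assumption, not from the text):

    \theta_c = \arcsin\!\left(\frac{n_2}{n_1}\right) \approx \arcsin\!\left(\frac{1.5}{1.7}\right) \approx 61.9^{\circ}

Rays striking the core-cladding interface at angles of incidence greater than θc are totally internally reflected and thus follow the fiber even around bends.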
Fig. 2C illustrates two exemplary ways of encapsulating an optical fiber or optical fibers according to one embodiment of the invention. The encapsulated fiber can be used as the cable 202 or 204 in fig. 2A and extends through each of the inflexible temples 206 and 208 all the way to the ends thereof. According to one embodiment, the temples 206 and 208 are made of a material type (e.g., plastic or metal) common to a pair of ordinary eyeglasses, a portion of the cable 202 or 204 being embedded or integrated in the temples 206 or 208, thereby creating an inflexible portion, while another portion of the cable 202 or 204 is still flexible. According to another embodiment, the non-flexible portion and the flexible portion of the cable 202 or 204 may be removably connected by an interface or connector.
Referring now to FIG. 2D, it shows how an image is delivered from a micro-display 240 to an imaging medium 244 via a fiber optic cable 242. As will be described further below, the imaging medium 244 may be physical (e.g., a film) or non-physical (e.g., air). A micro-display is a display with a very small screen (e.g., less than one inch). Tiny electronic display systems of this type were introduced commercially in the late 1990s. The most common applications for micro-displays include rear-projection TVs and head-mounted displays. A micro-display may be reflective or transmissive, depending on the way light is allowed to pass through the display unit. An image (not shown) displayed on the micro-display 240 is picked up through a lens 246 by one end of the fiber optic cable 242, which transmits the image to the other end of the cable 242. Another lens 248 is provided to collect the image from the fiber optic cable 242 and project it onto the imaging medium 244. Depending on the implementation, there are different types of micro-displays and imaging media, some embodiments of which are described in detail below.
Fig. 2E illustrates a set of exemplary variable focus elements (VFEs) 250 to accommodate adjustment of the projection of an image onto an optical object (e.g., an imaging medium or prism). To facilitate the description of the various embodiments of the present invention, an imaging medium is assumed to be present. As shown in fig. 2E, an image 252 conveyed through the fiber optic cable reaches an end surface 254 of the cable. The image 252 is focused onto an imaging medium 258 through a set of lenses 256, referred to herein as the variable focus elements (VFEs). The VFE 256 is adjustable to ensure that the image 252 is accurately focused onto the imaging medium 258. Depending on the implementation, the adjustment of the VFE 256 may be done manually or automatically based on an input (e.g., measurements obtained from sensors). According to one embodiment, the adjustment of the VFE 256 is performed automatically in accordance with a feedback signal derived from a sensing signal from a sensor directed at the eye (pupil) of the wearer of the glasses 200 of fig. 2A.
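A hypothetical sketch of such an automatic adjustment loop is given below; the sensor and actuator callables, the gain kp, and the tolerance are our assumptions for illustration, not details from this disclosure:

    # Hypothetical feedback loop for automatic VFE adjustment (sketch only).
    def auto_focus(read_focus_error, move_vfe, kp=0.5, tol=0.01, max_iter=100):
        # read_focus_error: returns a signed error derived from the eye sensor.
        # move_vfe: moves the variable focus element by a given increment.
        for _ in range(max_iter):
            error = read_focus_error()
            if abs(error) < tol:
                break                 # image is in focus within tolerance
            move_vfe(-kp * error)     # move the element against the error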
Referring now to fig. 2F, an exemplary lens 260 that may be used with the eyeglasses shown in fig. 2A is shown. The lens 260 comprises two portions: a prism 262 and an optical correction lens or corrector 264. The prism 262 and the corrector 264 are stacked to form the lens 260. As the name suggests, the optical corrector 264 is provided to correct the optical path from the prism 262 such that light passing through the prism 262 goes straight through the corrector 264. In other words, light refracted by the prism 262 is corrected, or released from refraction, by the corrector 264. In optics, a prism is a transparent optical element with flat, polished surfaces that refract light, at least two of which must have an angle between them; the exact angle between the surfaces depends on the application. The traditional geometry is a triangular prism with a triangular base and rectangular sides, and in colloquial usage "prism" usually refers to this type. A prism may be made of any material transparent to the wavelengths for which it is designed; typical materials include glass, plastic, and fluorite. According to one embodiment, the prism 262 is not actually in the shape of a geometric prism and is therefore referred to herein as an arbitrarily shaped prism, which dictates that the corrector 264 take a shape complementary, reciprocal, or conjugate to that of the prism 262 to form the lens 260.
On one edge of the lens 260, namely the edge of the prism 262, there are at least three items utilizing the prism 262. Designated 267 is an imaging medium corresponding to the imaging medium 244 of fig. 2D or the imaging medium 258 of fig. 2E. Depending on the implementation, the image conveyed by the optical fiber 242 of fig. 2D may be projected directly onto the edge of the prism 262 or formed on the imaging medium 267 before being projected onto the edge of the prism 262. In either case, depending on the shape of the prism 262, the projected image is refracted within the prism 262 and then seen by the eye 265. In other words, a user wearing a pair of glasses using the lens 260 can see an image displayed through or in the prism 262.
A sensor 266 is provided to image the position or movement of the pupil of the eye 265. Based on the refraction provided by the prism 262, the sensor 266 can locate the position of the pupil. In operation, an image of the eye 265 is captured and analyzed to derive how the pupil views the image presented through or in the lens 260. In AR applications, the position of the pupil may be used to activate a certain action. Optionally, a light source 268 is provided to illuminate the eye 265 to facilitate image capture by the sensor 266. According to one embodiment, the light source 268 is a near-infrared source, so that neither the user nor his eye 265 is affected when the light source 268 is on.
Fig. 2G shows internal reflection from multiple sources (e.g., the sensor 266, the imaging medium 267, and the light source 268). Because the prism 262 is uniquely designed, particularly in its shape or specific edges, light from each source is reflected several times within the prism 262 before impinging on the eye 265. For completeness, fig. 2H shows a size comparison of such a lens with a coin and a ruler.
As described above, there are different types of micro-displays and therefore different imaging media. The following table summarizes some of the micro-displays that may be used to facilitate the generation of optical images that may be conveyed by one or more optical fibers from one end thereof to the other by total internal reflection within the optical fibers.

Case  Micro-display and light source                                Imaging medium
1     LCoS, full color image formed on silicon                      not physically required
2     LCoS, sequential colored light sources (visible or near-UV)   required for near-UV sources
3     SLM + laser (RGB sequential)                                  optional
4     SLM + laser (invisible)                                       required (e.g., phosphor film layers)

where:
LCoS = liquid crystal on silicon;
LCD = liquid crystal display;
OLED = organic light emitting diode;
RGB = red, green and blue; and
SLM = spatial light modulator.
In the first case shown in the table above, a full color image is actually displayed on a silicon base. As shown in fig. 2D, the full color image may be picked up by a focusing lens, or a set of lenses, that projects the full image onto one end of the optical fiber. The image is transported within the fiber and picked up again by another focusing lens at the other end of the fiber. Because the delivered image is visible and in full color, the imaging medium 244 of fig. 2D may not be physically needed, and the color image may be projected directly onto one edge of the prism 262 of fig. 2F.
In the second case shown in the table above, LCoS is used with different light sources. Specifically, there are at least three colored light sources (e.g., red, green, and blue) that are sequentially used. In other words, each light source produces a single color image. The image picked up by the optical fiber is only a single color image. A full color image can be reproduced when all three different single color images are combined. Imaging medium 244 of fig. 2D is provided to reproduce a full color image from three different single color images delivered by optical fibers, respectively.
Fig. 2I shows a shirt 270 with a cable 272 enclosed within or attached to the shirt 270. The shirt 270 is an example of a fabric material or multi-layer piece in which such a relatively thin cable may be embedded. When a user wears such a shirt manufactured or designed according to one embodiment, the cable adds little weight and the user is freer to move around.
Fig. 3A shows how three single color images 302 are visually combined and perceived by human vision as a full color image 304. According to one embodiment, three colored light sources are used, for example, red, green, and blue sources turned on sequentially. More specifically, when the red light source is on, only a red image is produced (e.g., from a micro-display). The red image is optically picked up, transported by the optical fiber, and then projected into the prism 262 of fig. 2F. As the green and blue light sources are turned on in turn, green and blue images are generated, delivered separately by the optical fibers, and then projected into the prism 262 of fig. 2F. It is well known that human vision possesses the ability to combine three single color images and perceive them as a full color image. With all three single color images projected sequentially into the prism and perfectly aligned, the eye sees a full color image.
Also in the second case shown above, the light sources may be nearly invisible. According to one embodiment, three light sources produce light near the UV band. Under such illumination, three different color images may still be produced and delivered, but they are not fully visible. Before the color images can be presented to the eye or projected into a prism, they must be converted into three primary color images, which can then be perceived as a full color image. According to one embodiment, the imaging medium 244 of fig. 2D is provided for this conversion. Fig. 3B shows three different color images 310 produced under three light sources at wavelengths λ1, λ2, and λ3, respectively; the imaging medium 312 comprises three film layers 314, each film layer 314 coated with one type of phosphor, i.e., a substance exhibiting luminescence. In one embodiment, three types of phosphors at wavelengths 405 nm, 435 nm, and 465 nm are used to convert the three different color images produced under the three light sources near the UV band. In other words, when one such color image is projected onto the film coated with the phosphor at wavelength 405 nm, that single color image is converted into a red image, which is then focused and projected into the prism. The process is the same for the other two single color images passing through the film layers coated with the phosphors at wavelengths 435 nm and 465 nm, producing green and blue images. When the red, green, and blue images are projected into the prism in sequence, human vision perceives them together as a full color image.
In the third or fourth case shown in the table above, a laser source is used as the light source instead of light in or near the visible spectrum of the human eye; lasers may be visible or invisible. The third and fourth cases use so-called spatial light modulation to form a full color image, which is not much different in operation from the first and second cases. Spatial light modulator (SLM) is a general term describing a device that modulates the amplitude, phase, or polarization of light waves in space and time. In other words, an SLM + laser (RGB sequential) can produce three separate color images; when the color images are combined, with or without an imaging medium, a full color image can be reproduced. In the case of an SLM + laser (invisible), an imaging medium is provided to convert the invisible images into a full color image, in which case appropriate film layers can be used as shown in fig. 3B.
Referring now to fig. 4, a waveguide 400 is shown for transporting an optical image 402 from one end 404 to the other end 406 of the waveguide 400, wherein the waveguide 400 may be stacked with or coated with one or more sheets of glass or lenses (not shown) to form a lens suitable for use in a pair of glasses displaying an image from a computing device. As is known to those skilled in the art, an optical waveguide is a spatially inhomogeneous structure for guiding light, i.e., for restricting the region of space in which light can propagate, wherein the waveguide contains a region of increased refractive index compared with the surrounding medium (commonly referred to as the cladding).
The waveguide 400 is transparent and is shaped at 404 in a suitable manner to allow the image 402 to propagate along the waveguide 400 to 406, where a user 408 can look through the waveguide 400 to see the propagated image 410. According to one embodiment, one or more film layers are disposed on the waveguide 400 to magnify the propagated image 410 such that the eye 408 sees a significantly magnified image 412. One example of such a film layer is known as a metalens, essentially an array of thin titanium dioxide nanostructures on a glass substrate.
Referring now to fig. 5A, an exemplary functional block diagram 500 that may be used with a separate shell or housing to produce content related to virtual reality and augmented reality for display on the exemplary eyewear of fig. 2A is illustrated. As shown in fig. 5A, two micro-displays 502 and 504 are provided to supply content to two lenses in the glasses of fig. 2A, basically a left image to a left lens and a right image to a right lens. Examples of such content are 2D or 3D images and video or holograms. Each of the micro-displays 502 and 504 is driven by a corresponding driver 506 or 508.
The entire circuit 500 is controlled and driven by a controller 510 programmed to generate the content. According to one embodiment, the circuit 500 is designed to communicate with the Internet (not shown) and receive content from other devices. Specifically, the circuit 500 includes an interface that receives a sensing signal from a remote sensor (e.g., the sensor 266 of fig. 2F) via wireless means (e.g., RF or Bluetooth). The controller 510 is programmed to analyze the sensing signals and provide feedback signals to control certain operations of the glasses, such as a projection mechanism that includes a focusing mechanism to auto-focus and project an optical image onto the edge of the prism 262 of fig. 2F. Furthermore, audio is provided in synchronization with the content and may be transmitted wirelessly to headphones.
Fig. 5A shows an exemplary circuit 500 that generates content for display in a pair of glasses contemplated in one embodiment of the invention. Circuit 500 shows that there are two micro-displays 502 and 504 for providing two respective images or video streams to two lenses of the glasses in fig. 2A. According to one embodiment, only one micro-display may be used to drive both lenses of the glasses in fig. 2A. Such a circuit is not provided herein, as one of ordinary skill in the art knows how the circuit can be designed or how to modify the circuit 500 of fig. 5A.
Fig. 5B shows an embodiment in which the exemplary circuit 500 is housed within a single device case 516 (also referred to herein as an image engine). The image engine 516 receives an image source or video from a smartphone 518 while also serving as a controller to provide the necessary interface enabling the wearer or user to choose what is to be received and shown on the display glasses and how to interact with the display. Fig. 5C illustrates an exemplary embodiment showing how a user wears such display glasses. The glasses 520 according to this embodiment contain no active (power-driven) electronic components; a pair of optical fiber cables 522 delivers the images or video. The accompanying sound can be provided by the smartphone 518 directly to a headset (earbuds or a Bluetooth headset). As will be described further below, the thickness or number of the fibers 522 running from the image engine 516 to the glasses 520 can be further reduced by transmitting lower-resolution images and video.
Fig. 5D illustrates an exemplary circuit 530 employing the technique disclosed in U.S. Patent No. 10,147,350, the contents of which are hereby incorporated by reference, according to one embodiment. As shown in fig. 5D, the circuit 530 essentially produces two low-resolution images (e.g., 640x480) that are diagonally displaced by half a pixel and have a refresh rate of 120 Hz (60 Hz being the "standard" refresh rate commonly used, e.g., in the United States). A refresh rate of 60 Hz is typical for most TVs, PC monitors, and smartphones; it means the display is refreshed 60 times per second, in other words, the displayed image is updated (or refreshed) every 16.67 milliseconds (ms). When two such images are alternated at twice the standard refresh rate, the resolution of the image perceived by the user on the integrated lenses is doubled, i.e., nominally up to 1280x960.
According to one embodiment, the native (first) resolution of a display image on the display glasses, e.g., 640x480, or a resolution preset for efficient transmission over the optical fibers, is transmitted at a first refresh rate when transmitting video. If the image is at a resolution higher than the first, it may be reduced to the lower resolution. According to US 10,147,350, a duplicate image, diagonally shifted by half a pixel, is produced, resulting in a second image also at the first resolution; the two images are projected onto the optical fibers 522 in turn at twice the refresh rate of the original image, that is to say, the second refresh rate is equal to twice (2×) the first refresh rate. When the images are sequentially output from the other end of the fibers, they are seen in the waveguide as one image of a second resolution, perceived as twice the first resolution.
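As a rough sanity check of the bandwidth saving, the pixel throughput of the two schemes can be compared; this short calculation is our own illustration and follows the example numbers above:

    # Compare pixels/second sent over the fiber with pixels/second perceived.
    low_res = 640 * 480                   # pixels per transmitted frame
    high_res = 1280 * 960                 # pixels per perceived frame
    refresh_sent, refresh_seen = 120, 60  # Hz

    sent = low_res * refresh_sent         # 36,864,000 pixels/s over the fiber
    perceived = high_res * refresh_seen   # 73,728,000 pixels/s as perceived

    print(perceived // sent)              # 2: the fiber carries half the data

so roughly half as many fibers are needed as for sending the full-resolution image directly.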
Figs. 6A-6E replicate figs. 16A-16E of U.S. Patent No. 10,147,350. As described above, the optical image output from the optical fiber in one embodiment of the invention will be seen at twice the spatial resolution of the input image. Referring to fig. 6A, an array 600 of pixel cells (forming an image or a data image) is shown, where each pixel cell 602 has four sub-pixel cells 604A, 604B, 604C, and 604D. When an input image (e.g., 500x500) having a first resolution is received and displayed at the first resolution, each pixel value is stored in one pixel cell; in other words, the sub-pixel cells 604A, 604B, 604C, and 604D are all written or stored with the same value and addressed at the same time. As shown in fig. 6A, a word line (e.g., WL0, WL1, or WL2) may simultaneously address the sub-pixels belonging to two rows of a pixel 602, and a bit line (e.g., BL0, BL1, or BL2) may simultaneously address the sub-pixels belonging to two columns of the pixel 602. At any instant, a pixel value is written to the pixel 602 with all of the sub-pixel cells 604A, 604B, 604C, and 604D selected. As a result, the input image is displayed at the first resolution (e.g., 500x500), that is, at its native resolution.
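A minimal numpy sketch of this sub-pixel replication (our illustration of the addressing scheme, not the patent's hardware):

    import numpy as np

    # Write each pixel value into every cell of its 2x2 sub-pixel group.
    img = np.random.rand(500, 500)             # input image, first resolution
    subpixels = np.kron(img, np.ones((2, 2)))  # 1000x1000 sub-pixel array
    assert subpixels.shape == (1000, 1000)     # each 2x2 group holds one value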
Now assume that an input (data) image of a first resolution (e.g., 500x500) is received and is to be displayed at a second resolution (e.g., 1000x1000), where the second resolution is twice the first resolution. According to one embodiment, the sub-pixel cells are used to achieve the perceived resolution. It is important to understand that this improved spatial resolution is what is viewable by the human eye, not an actual doubling of the resolution of the input image. To facilitate the description of the present invention, figs. 6B and 6C are used to illustrate how the input image is expanded to achieve the viewable resolution.
Let us now assume that an input image 610 is at a resolution of 500x500. Via data processing 612 (e.g., scaling), the input image 610 is expanded to an image 614 of size 1000x1000. Fig. 6C shows an example in which an image 616 is expanded to a double-sized image 618 with sub-pixel cells. In operation, each pixel of the image 616 is written to a group including all (four) sub-pixel cells (e.g., a 2x2 group in this example). Those skilled in the art will appreciate that the description herein applies equally to other sub-pixel structures (3x3, 4x4, 5x5, etc.), resulting in even more viewable resolution. According to one embodiment, a sharpening process (e.g., part of the data processing in fig. 6B) is applied to the expanded image 618 (e.g., filtering, thinning, or sharpening the image edges) for the purpose of generating two frames from the expanded image 618. In one embodiment, the value of each sub-pixel is algorithmically recalculated to arrive at a well-defined edge, generating an image 620; in another embodiment, the values of neighboring pixels are referenced to obtain a sharp edge.
The processed image 620 is then separated into two images 622 and 624 via a separation process 625. Both images 622 and 624 are at the same resolution as the input image (e.g., 500x500), and the sub-pixel cells within each pixel of the images 622 and 624 are written or stored with the same value. The pixel-cell boundaries of the image 622 are intentionally offset from those of the image 624. In one embodiment, the pixel-cell boundaries are offset by half a pixel in the vertical direction (corresponding to one sub-pixel in the 2x2 sub-pixel array) and by half a pixel in the horizontal direction (likewise corresponding to one sub-pixel). The separation process 625 proceeds such that, when the images 622 and 624 overlap, the combined image best conforms to the image 620, which has four times as many pixels as the input image 616. In the example of fig. 6C, to maintain the overall intensity of the input image 610, the separation process 625 also reduces the intensity of each of the two images 622 and 624 by 50%. Operationally, the intensity of the first image is reduced to N percent, where N is an integer ranging from 1 to 100 but practically set around 50; the intensity of the second image is accordingly reduced to (100−N) percent. Each of the two images 622 and 624 is then displayed at twice the refresh rate of the input image 610; in other words, if the input image is displayed at 50 Hz, the two images 622 and 624 are alternated at 100 Hz. Due to the offset pixel boundaries and the data processing, the combined image perceived by the viewer approximates the image 620. The offset pixel boundary between the two images 622 and 624 has the effect of a pixel boundary "shift". As shown by the two pixels 626 and 628, the example illustrated in fig. 6C is shifted by one (sub-)pixel in the southeast (diagonal) direction.
According to an embodiment, the separation process 625 may be performed as an image algorithm or as a pixel shift, where a pixel shift means a shift by one sub-pixel in the sub-pixel structure shown in fig. 6A. There are many ways to separate one NxM image by intensity into two images, each still NxM, which are then displayed at twice the refresh rate so that human vision perceives them as one image. For example, one exemplary approach is to take the original image, with or without modification, at reduced intensity as a first frame, while a second frame is generated from the remainder, its intensity likewise reduced. In another embodiment, the first frame (obtained from the original or a modified version) is shifted by half (1/2) a pixel (e.g., horizontally, vertically, or diagonally) to produce the second frame; further details are provided later. Fig. 6C shows the two images 622 and 624 resulting from processing the expanded image 620, with the two pixels 626 and 628 generated in accordance with the image algorithm, wherein the second frame is generated by diagonally shifting the pixels of the first frame. It should be noted that separation here means that two frames, each of the original image size, are generated by dividing the image by its intensity. Fig. 6D shows an image of two pixels, one at full intensity (shown black) and the other at half of full intensity (shown gray). When the two-pixel image is separated into two frames of the same size as the original, the first frame has two pixels at half of full intensity (shown gray), and the second frame has one pixel at half of full intensity (shown gray) and one at almost zero intensity (shown white). There are now twice as many pixels as in the original image, and they form a checkerboard pattern like a checkers board. Since each pixel is refreshed 60 times per second instead of 120, each pixel has only half the brightness; but because there are twice as many pixels, the brightness of the image as a whole remains the same.
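A minimal numpy sketch of this intensity-based separation is given below, under our own simplifying assumptions: the expanded 1000x1000 image is averaged back into two 500x500 frames whose sampling grids differ by one sub-pixel (half a display pixel) diagonally. It is an illustration, not the patent's exact algorithm:

    import numpy as np

    def separate_frames(expanded, n=0.5):
        # Frame 1: average each aligned 2x2 sub-pixel group, keep n of intensity.
        f1 = expanded.reshape(500, 2, 500, 2).mean(axis=(1, 3)) * n
        # Frame 2: same, but on a grid shifted one sub-pixel to the southeast,
        # keeping the remaining (1 - n) of the intensity.
        shifted = np.roll(expanded, shift=(-1, -1), axis=(0, 1))
        f2 = shifted.reshape(500, 2, 500, 2).mean(axis=(1, 3)) * (1.0 - n)
        return f1, f2

    expanded = np.kron(np.random.rand(500, 500), np.ones((2, 2)))  # 1000x1000
    f1, f2 = separate_frames(expanded)  # alternate f1/f2 at double refresh rate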
Referring now to fig. 6E, another embodiment of expanding the input image 610 is shown. The input image 610 is still assumed to be at a resolution of 500x500. Via the data processing 612, the input image 610 is expanded to a size of 1000x1000. In this embodiment, it should be appreciated that 1000x1000 is not the resolution of the expanded image; rather, the expanded image comprises two considerably reduced 500x500 images 630 and 632. The expanded view 634 of the reduced images 630 and 632 shows that the pixels of one image are reduced in size, allowing the pixels of the other image to be generated in between them. According to one embodiment of the invention, the first grid image is derived from the input image, and the second image is derived from the first image. As shown in the expanded view 634 of fig. 6E, an exemplary pixel 636 in the second image 632 is derived from three pixels 638A, 638B, and 638C. In the same way, a displacement of half (1/2) a pixel along a set direction can be applied to produce all the pixels of the second image. At the end of the data processing 612, there is one interlaced image that includes the two images 630 and 632, each 500x500. The separation process 625 is then applied to the interlaced image to generate or store the two images 630 and 632.
Referring now to one embodiment shown in fig. 7, an optical cube 702 is used to create an optical image. An image displayed on a micro-display (e.g., LCoS or OLED) 706 is projected as an optical image using a light source 704 and picked up by a lens 708. The optical image is then transported via optical fiber 710 to the other end thereof, where it is projected through another lens (e.g., a collimator) 714 into a waveguide or integrated lens 712. The optical image is ultimately viewed by a human eye 716 in the waveguide lens 712. Fig. 8 shows that the display glasses 720 do not include any other power-driven electronic components therein to provide images or video to the integrated lens.
The invention has been described with a certain degree of particularity. It will be understood by those skilled in the art that the present disclosure of the embodiments is by way of example only, and that various changes in the arrangement and combination of parts may be made without departing from the spirit and scope of the invention. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description of the embodiments.