WO1997020244A1 - Method and apparatus for displaying a virtual environment on a video display - Google Patents

Method and apparatus for displaying a virtual environment on a video display

Info

Publication number
WO1997020244A1
Authority
WO
WIPO (PCT)
Prior art keywords
display
image
signal
visual orientation
virtual environment
Prior art date
Application number
PCT/CA1996/000789
Other languages
French (fr)
Inventor
Brian L. Welch
Andrew Fernie
Ken Unger
Original Assignee
Cae Electronics Ltd.
Priority date
Filing date
Publication date
Priority claimed from US08/563,195, external-priority patent US5933125A
Priority claimed from US08/593,842, external-priority patent US5764202A
Application filed by Cae Electronics Ltd.
Priority to AU76164/96A, patent AU7616496A
Priority to CA002238693A, patent CA2238693C
Publication of WO1997020244A1

Abstract

The apparatus for displaying a virtual environment on a video display, such as a head mounted display (23), has a position processor (11) for generating a visual orientation signal indicating a visual orientation of the video display with respect to the virtual environment, and an image generator (20) for generating a series of component images of the virtual environment for the visual orientation. The image generator (20) receives the visual orientation signals. Any change in the visual orientation signal between the time when the orientation signal was used by the image generator (20) to generate each component image and the time of display of each component image on the video display (16) is detected to produce an offset shift signal. An image shifting device shifts the image on the display (16) in response to the offset shift signal, so as to improve the display of the virtual environment. Field sequential color video and temporal grey scale video can be improved using the apparatus, as can other video systems having a transport delay and rapidly varying motion which is difficult to predict.

Description

METHOD AND APPARATUS FOR DISPLAYING A VIRTUAL ENVIRONMENT ON A VIDEO DISPLAY
Field of the Invention
The present invention relates to a method and apparatus for displaying a virtual environment on a video display. More particularly, the present invention relates to a display method and apparatus which suppresses the image break-up or jerking which occurs when there is rapid motion of a color image being viewed, as is the case with head mounted displays.
Background of the Invention
Television display devices such as Digital Micromirror Devices (DMDs), Active Matrix Electroluminescent Displays (AMELs) and Ferro Electric Liquid Crystal Displays (FELCDs) achieve a grey scale varying from white to black by switching each pixel on for a specific amount of time during each field or frame. As the human eye has an integration time which is much longer than the time for each field (usually 1/60 sec in the U.S.), it perceives a constant brightness proportional to the amount of time the pixel is turned on during each field period. This is achieved by dividing each field time, nominally 16.67 milliseconds in the U.S., into bit planes representing each bit of the binary number which specifies the relative brightness of each pixel.
A typical system, for example, would have the most significant bit turned on for 4 milliseconds, the next most significant bit turned on for 2 milliseconds, and so on in a binary scale for the remainder of the bits. A high quality image may require eight or even nine bit planes, while other systems may use as few as five or six bit planes. The intervals between each bit plane are usually used for addressing each pixel in the display with the illumination source turned off. Some schemes, however, keep the illumination source turned on for the complete field, and addressing of each pixel for each bit plane takes place within the bit plane periods.
All schemes, however, have one thing in common in that the same image is used to refresh each bit plane during the course of a specific field. This can cause annoying artifacts with moving imagery. The effect is most noticeable on helmet mounted displays during moderate to rapid head motion, where discrete objects tend to break up into double or multiple images or may appear to jitter or be smeared.
It can also be desirable in head mounted displays to use field sequential color display devices to improve picture quality, reduce weight or reduce costs of manufacture. When used, however, with an image source such as a color television camera or a computer image generator operating in the conventional simultaneous color mode, color fringes are seen on objects during angular head motion. If the head motion is sufficiently rapid, three distinct red, blue and green images can be seen. The effect is also observed during rapid motion of an object within the color display when the head is stationary, and is often called field sequential color break-up. In order to understand the invention, it is first necessary to have a clear understanding of why image break-up occurs. As is well known in the art, television creates the illusion of smooth motion by drawing successive images at a sufficiently fast rate that the human visual system can no longer see the individual images (i.e. the image is flicker-free). If the entire image or the objects within the image are moved appropriately relative to the previous image, the visual system will interpret the sequence of images as smooth motion. Figure 1 shows the motion of an upstanding arrow on a display moving from right to left in five sequential positions. The arrow represents any fixed object within the scene being displayed. The movement from right to left, in the case of a head mounted display, is caused by a rotational head motion from left to right. As is known, the human eye never views an image, whether still or moving, by focusing on only one portion of the image. The human eye will tend to pick portions of an image to focus on and typically will wander among different portions of the image according to interest and the need to gather information. When the image moves across the display as illustrated in Figure 1, the eye typically fixates on a given object within the moving image, at least temporarily, before switching to another portion or object within the image to be observed. The eye therefore tracks each portion of the image that is to be observed as that portion of the image or object moves across the display.
In the example of the object represented by the upstanding arrow, the eye tracks the object as it moves from right to left. Even though the image appears at a finite number of discrete locations, the eye will move or rotate with a substantially constant velocity to track the object. The rotating eye is illustrated in Figure 2. It will be noted that all of the consecutive images are to be focused on the retina at the same position. This position is typically within a portion of the retina where good high resolution vision is to be had, as opposed to a surrounding area of poorer, lower resolution vision. When the color image displayed at each of the five discrete positions as illustrated in Figure 1 is produced using a simultaneous color video display, the red, green and blue component images appear simultaneously at each of the five discrete positions, and the resulting image on the retina is as illustrated in Figure 3a (for the sake of clarity, the inversion of the image on the retina is not illustrated). In the case that a field sequential color display device sequentially displays the color component images from a simultaneous color image source, to present the images as illustrated in Figure 1, the time lag between displaying the sequential color component images will give rise to a separation of the object into three color component images, as illustrated in Figure 3b, as a result of the constant velocity of the eye, as illustrated in Figure 2. The degree of spatial separation of the color component images is proportional to the rotational velocity of the eye, and thus proportional to the angular velocity of movement of the image with respect to the display.
In the case of temporal modulation for grey scale, all of the consecutive images are focused on the retina at or near the fovea, allowing the observer to see a single image as shown in Fig. 4a. The eye would normally track the images created in the most significant bit plane, and the images created in the remaining bit planes would be focused at different points on the retina as shown in Fig. 4b.
In conventional color video displays, each image is usually called a field and the field rate is 60 Hz in the U.S. The color component images are displayed synchronously on the display such that the observer sees a single correctly colored image. When a field sequential display is used to display video from a simultaneous color image source, the red, blue and green images are drawn sequentially at a field rate commonly three times as high as the normal rate, namely 180 Hz in the U.S. A typical field sequential color display device is a liquid crystal display device operating as a monochrome display which is provided with color illumination or filters which operate in an alternating sequence of red, blue and green, such that the alternating sequential monochrome images corresponding to the red, blue and green color component images can be seen with varying color intensities to give the illusion of color video.
In the case of a head mounted display using head tracking to control the image such that the wearer sees a stable virtual environment, rotation of the head causes an equal and opposite movement of the image. If the image has been created by a device operating in the simultaneous color mode and the display is operating in the field sequential mode, the problem described above will occur. The problem could obviously be circumvented by operating the device creating the image, i.e. either a Computer Image Generator (CIG) system or a television camera, in the field sequential mode. This would be, however, a very expensive proposition and would furthermore discourage the use of field sequential helmet mounted displays.
US Patent 5,369,450 to Haseltine et al. describes how color aberrations in a head mounted display operating in a field sequential mode can be corrected by electronic means. The color aberrations described by Haseltine, however, are caused by the different refractive indices of the optical components for red, blue and green and are not a function of head motion.
Computer image generators used in simulation and in virtual reality systems have an inherent transport delay due to the finite amount of time taken to perform the various computational algorithms necessary to assemble an image of the virtual environment with proper attributes. The effect of this transport delay on the performance of pilots in flight simulators has been well known for many years, and care is taken to minimize such delays in image generation systems specifically designed for flight simulation. A far more obvious effect is seen, however, when image generation systems are coupled to head mounted displays. In these systems, the head position is continually being measured and is used by the image generator to compute the correct scene for the observer's viewpoint (the visual orientation of the display with respect to the virtual environment). If the observer moves his or her head while looking at a stationary image, the image will move in the direction of the head motion for a period of time corresponding to the total transport delay of the system (including the head measuring device) and will only regain the correct position once the observer's head is stationary.
This effect detracts considerably from the utility of the head mounted display and can give rise to nausea. The problem and a reasonably effective solution are described by Uwe List in a U.S. Air Force report entitled "Non-Linear Prediction of Head Movements for Helmet Mounted Displays" (AFHRL Technical Paper 83-45, December 1983). In this report, List recommends the use of angular acceleration sensors mounted on the helmet to calculate a predicted head position. Welch and Kruk also suggest this solution in "HDTV Virtual Reality", published in Japan Display 1992. Figure 5 illustrates an exemplary acceleration curve as a user moves between two visual orientations. As can be seen, the acceleration in the example peaks at 100 milliseconds, with a deceleration or stopping of the head motion commencing near 200 milliseconds and ending near 400 milliseconds with the head and helmet in the new angular position. In Figure 5, it is presumed that the head position sensor and the computer image generator require 100 milliseconds to detect head position and generate an image for the new head position (i.e. the transport delay is 100 ms). The curve illustrating the displayed image orientation with no position prediction correction shows considerable unwanted image motion near 250 milliseconds, indicated by the reference letter E as a difference of some 12°. In the prior art improvement, prediction of the future position using acceleration measurements results in the dashed curve for the image orientation, with a small but noticeable divergence between the predicted curve and the actual head orientation curve.
These solutions relate to predicting the visual orientation of the display with respect to the virtual environment for a time in the future approximately equivalent to the present time plus the transport delay of the system. This prediction is accomplished by using measurements of angular head acceleration and/or angular head velocity. The image generator then uses this predicted position to compute the next image. While the prediction of visual orientation or head position can be used in the image generator to greatly reduce the error or discrepancy between the image of the virtual environment being displayed and the correct image of the virtual environment for the actual visual orientation, this technique cannot eliminate such errors completely.
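As a concrete illustration of this kind of prediction, the following is a minimal sketch assuming a simple constant-acceleration model applied per axis; the function name, argument names and numeric values are illustrative only and are not taken from the patent or from List's report.

def predict_orientation(position_deg, velocity_deg_s, accel_deg_s2, transport_delay_s):
    # Predict the head orientation one transport delay into the future from the
    # measured angular position, velocity and acceleration of a single axis.
    dt = transport_delay_s
    return position_deg + velocity_deg_s * dt + 0.5 * accel_deg_s2 * dt ** 2

# Example: head at 10 deg yaw, turning at 60 deg/s and decelerating at 100 deg/s^2,
# with a 100 ms transport delay: the predicted orientation is about 15.5 deg.
predicted_yaw = predict_orientation(10.0, 60.0, -100.0, 0.100)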
Summary of the Invention
It is an object of the invention to correct the problems described above using a simple and relatively inexpensive technique which allows conventional field sequential color display devices to be used with standard simultaneous color image sources without the observer seeing color fringes during certain types of motion. In the case of a flight simulator or other virtual reality systems, the simultaneous color image source is a computer image generator operating in real time. In a telepresence system, a live camera is gimbal mounted on a robot and is driven by servo control to view in the direction of the observer's head. In the case of a land vehicle simulator or a land vehicle telepresence robot, rapid motion may result from road bumps or the like and thus may not be the exclusive result of observer head motion. Therefore the angular velocity is understood to be a combination of the observer's head motion and any rapid robot or simulated vehicle angular movement.
It is furthermore an object of the present invention to provide a field sequential color display device and method in which color break-up is suppressed by moving each field of a color image by an amount equivalent to the angular motion of the observer's head in that field. Shifting of the color component images (fields) within each cycle can be done optically in the optical image relay systems (e.g. mirrors and lenses), using horizontal and vertical CRT controls, by electronically shifting image data in an electronic display, or by data processing in the video processor feeding the color component images to the display.
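For illustration only, the per-field shift just described can be expressed as a small calculation; in this sketch the field period, the degrees-per-pixel constants and all names are assumptions rather than values given in the patent.

FIELD_PERIOD_S = 1.0 / 180.0      # one red, green or blue field at a 180 Hz field rate (assumed)
DEG_PER_PIXEL_H = 0.05            # assumed horizontal angular subtense of one pixel, degrees
DEG_PER_PIXEL_V = 0.05            # assumed vertical angular subtense of one pixel, degrees

def color_field_offsets(yaw_rate_deg_s, pitch_rate_deg_s):
    # Return (horizontal, vertical) pixel offsets for the red, green and blue fields
    # of one cycle so that all three land where the first field was seen by the eye.
    offsets = []
    for field_index in range(3):               # 0 = red, 1 = green, 2 = blue
        dt = field_index * FIELD_PERIOD_S      # delay of this field within the cycle
        h = yaw_rate_deg_s * dt / DEG_PER_PIXEL_H
        v = pitch_rate_deg_s * dt / DEG_PER_PIXEL_V
        offsets.append((h, v))
    return offsets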
It is an object of the present invention to provide a method and apparatus for reducing image breakup in television display devices which create a grey scale by the use of temporally separated bit planes. The technique is often known as temporal modulation or pulse width modulation. As is known in the art, image breakup occurs in moving television imagery whenever the update rate of the image and the refresh rate of the display are not identical and synchronous. This invention largely reduces such image breakup in helmet mounted displays by using the angular velocity of the head to generate small vertical and horizontal offsets for each bit plane. The observer thereby sees each bit plane image as if it had been updated for the new head position, and image breakup, smear etc. are largely eliminated. A further object of the present invention is to provide a more stable and accurate representation of a virtual environment by calculating the discrepancy between the angular orientation of the image being displayed and the current visual orientation of the display and using this error to shift the image to the correct position. It is a further object of the invention to provide a relatively inexpensive and simple system to substantially reduce such errors, thereby providing a more stable and accurate representation of the virtual environment.
According to a broad aspect of the invention, there is provided an apparatus for displaying a virtual environment on a video display comprising position processor means for generating a visual orientation signal indicating a visual orientation of the display with respect to the virtual environment, image generator means for generating a series of component images of the virtual environment for the visual orientation, the image generating means receiving the visual orientation signals, means for detecting any change in the visual orientation signal between a time when the signal was used by the image generator means to generate each component image and a time of display of each component image on the display to produce an offset shift signal, and means for shifting the image on the display in response to the offset shift signal. In this way, the display of the virtual environment is improved.
According to another broad aspect of the invention, there is provided a method for displaying a virtual environment on a video display comprising the repeated steps of determining a visual orientation of the display with respect to the virtual environment, generating a series of component images of the virtual environment for the visual orientation, displaying the images on the display, detecting any change in the visual orientation which may have occurred between a time when the visual orientation was determined and the time when the image is to be displayed, and shifting the image on the display by an amount equivalent to the change, whereby the display of the virtual environment is improved.
Preferably, the video display may be a color field sequential display device, and the series of component images is a series of cycles of color component images. The apparatus may further comprise color filter means for making the color component images of the series appear to have a different color, such that a mixing of the color component images as seen with the color filter means provides an observer with a color image of the environment, and the detecting means may comprise means for determining an angular velocity of a visual orientation of the display with respect to the virtual environment and for generating a velocity signal, with the offset shift signal being a function of the velocity signal. Also preferably, the video display may be a temporal modulation grey scale display device, and the series of component images may be a series of grey scale component images to be displayed sequentially to provide an observer with an impression of grey scale images, and the detecting means may likewise comprise means for determining an angular velocity of a visual orientation of the display with respect to the virtual environment and for generating a head velocity signal, with the offset shift signal being a function of the velocity signal.
According to a further preferred aspect, the image generator means may have a finite transport delay time for generating and preparing an image for transmission on a video output signal. The apparatus may further comprise means for detecting at least one of an angular velocity and an angular acceleration of the visual orientation for producing a predictive signal, and means for calculating a predicted visual orientation of the display with respect to the virtual environment based on the visual orientation signal and the predictive signal to produce a predicted visual orientation signal. The predicted visual orientation signal may thus be connected to the image generator means in place of the visual orientation signal generated by the determining means, the predicted visual orientation signal being for a future point in time equal to a present time plus approximately the transport delay time.
Brief Description of the Drawings
The present invention will be better understood by way of the following detailed description of three preferred embodiments with reference to the appended drawings, in which:
Figure 1 illustrates a series of 5 objects within an image being displayed to create the illusion of object motion from right to left as is known in the art,
Figure 2 illustrates a cross-section of an observer's eyeball, illustrating schematically the image formed on the retina and the direction of rotation of the eye as an object is tracked during motion as illustrated in Figure 1,
Figures 3a and 3b illustrate respectively, in schematic format, the image appearing on the observer's retina for simultaneous color mode display and field sequential color mode display;
Figures 4a and 4b illustrate respectively in schematic format the image appearing on the observer's retina for temporally separated grey scale display;
Figure 5 illustrates a graph of head acceleration, head position, an example of displayed image position with no prediction correction, and displayed image position using position prediction correction, in the case of a computer image generator having a transport delay of 100 milliseconds, as is known in the prior art;
Figure 6 is a block schematic diagram of a horizontal and vertical offset deflection processor providing image shift means according to the preferred embodiment;
Figure 7 illustrates the waveform of the offset signal for a continuously varying function (A) for a complete cycle of RGB fields and as discrete values (B) for each RGB field, with the vertical sync pulses (C);
Figure 8 illustrates an alternative embodiment in which an opto-mechanical shifting of the viewed image is achieved by mounting a relay mirror on piezoelectric transducers which are energized by the appropriate offset signal to shift the image for each field by the appropriate offset for the speed of the object in motion being viewed on the screen;
Figure 9 illustrates a block diagram of a digital display screen including a digital image shifter;
Figure 10 shows a typical timing diagram for a single field divided into six bit planes;
Figure 11 is a block diagram of the apparatus according to the preferred embodiment;
Figure 12 illustrates a block diagram of the virtual environment display apparatus according to the preferred embodiment in which the difference between the actual head position and the predicted head position is used to control horizontal vertical offsets ofthe video display;
Figure 13 illustrates actual head position, delayed predicted head position and display offset signals on a common time scale for a simple example in which the actual head position moves with constant velocity for a time X between positions P1 and P2;
Figure 14 illustrates an optical schematic for a display system using a moveable mirror to perform the shifting function; and
Figure 15 illustrates an optical schematic for a display system where the shifting function is performed by a liquid filled prism.
Detailed Description of the Preferred Embodiments
In the first preferred embodiment, as will be described with reference to
Figures 1 to 9, a field sequential color display incorporates the invention. Figure 1 shows an upstanding arrow object at five different locations on a full color display. The color of the arrow is white. In the preferred embodiment, a head mounted display is used. A left to right head movement results in the right to left image movement shown.
As illustrated in Figure 2, the eye rotates smoothly to track the upstanding arrow and moves at the same speed as the arrows, such that the successive arrow images fall on the same place and result in the observer seeing a single white upstanding arrow.
Figure 3a illustrates the upstanding arrow image on the retina with all colors superimposed when a normal simultaneous display is employed. Figure 3b illustrates the image that would be seen if the motion illustrated in Figure 1 had been displayed as five frames of a field sequential display system, in which a succession of red, green and blue images is displayed at each of the five positions as the object moves from right to left on the screen. As shown, the rotation of the eye results in a break-up of the object image into its color components due to the lag in delivery of the color component images.
In the first preferred embodiment, as illustrated in Figure 6, the invention is applied to a head mounted display 16 as is known in the art, for example, as disclosed in US Patent 5,348,477 and in "HDTV Virtual Reality", Japan Display 1992, pp. 407 to 410. The image presented on the screen being viewed is that of a virtual environment. As the observer's head moves, the image being displayed must be shifted up and down and left to right, and rotated, so that the observer sees a stable representation of the environment corresponding to the orientation of his or her head. The processor 10 is an electronic processor receiving from processor 11 head pitch rate data, head roll rate data and head yaw rate data. The horizontal and vertical sync signals are fed to processor 10 from the field sequential converter 22. The head position processor 11 uses the actual position of helmet 24 from the position sensor 28 output for determining actual pitch, roll and yaw positions. Based on these actual positions and the pitch, roll and yaw acceleration or velocity measurements from sensor 26, processor 11 computes the predicted head position for image generator 20. The appropriate vertical and horizontal scan offsets are calculated in offset processor 10. In the general case, when the optical axis of the CRT 16 as seen by the eye through the head mounted display optics 23 is not orthogonal to either the vertical or horizontal axis of the head, both offsets will be a function of pitch, roll and yaw. The optical axis of the CRT is defined as the line which is normal to the face of the CRT and passes through the center of the image. The offsets are then added to the respective deflection signals in the amplifiers 12 and 14, which drive, respectively, the horizontal and vertical deflection mechanisms of the CRT display 16. The processor 10 can also take into account any distortion introduced in the deflection signals to compensate for distortion in the optical system. Typical offset waveforms are shown in Figure 7 along with the vertical sync signal pulse which occurs at the beginning of each field.
In most cases, the roll term can be omitted without causing a significant error, which simplifies the implementation of the processor 10. If roll is to be corrected when using a CRT display, horizontal and vertical offsets need to be varied over each horizontal line scan of the electron beam in a different manner for each subsequent scan from the top to the bottom of the CRT display.
The continuously varying offset (sawtooth waveform in Fig. 7) also adjusts for the vertical presentation delay, correcting the "tilted image phenomena" as discussed on page 54 of AGARD Advisory Report No. 164 entitled "Characteristics of Flight Simulator Visual Systems", published May 1981.
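As a rough illustration of such a continuously varying offset, the fragment below computes a horizontal offset that grows linearly with each scan line's presentation delay within the field; the field period, the number of active lines and the pixel subtense are assumed values, not figures given in the patent.

FIELD_PERIOD_S = 1.0 / 180.0    # duration of one field (assumed sequential-color rate)
ACTIVE_LINES = 480              # assumed number of visible scan lines per field
DEG_PER_PIXEL_H = 0.05          # assumed horizontal angular subtense of one pixel, degrees

def line_offset_px(line_index, yaw_rate_deg_s):
    # Horizontal offset for a given scan line: lines lower in the field are drawn
    # later, so their offset is proportionally larger (the sawtooth of Figure 7).
    presentation_delay = (line_index / ACTIVE_LINES) * FIELD_PERIOD_S
    return yaw_rate_deg_s * presentation_delay / DEG_PER_PIXEL_H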
In Figure 8, an alternative embodiment is illustrated in which the display 16 is a ferroelectric liquid crystal display (FELCD) which is illuminated by an LED light source 19 which includes red, blue and green LEDs for illuminating a diffuser screen 15 located behind LCD display 16. The observer at 21 views screen 16 through optics 40 and a mirror 18. The mirror 18 is mounted on four electromagnetic transducers 17, the transducers 17 being connected to a housing of the display (not shown). In order to shift the color component images within the cycle with respect to one another, the transducers are energized with a current proportional to the amount of displacement required. The pair of transducers 17h adjust the horizontal displacement of the image and the pair of transducers 17v adjust the vertical displacement of the image. An appropriate offset waveform as illustrated at B in Figure 7 may be used to move the image viewed on display 16. In the embodiment illustrated in Figure 8, transducers 17 would be fed an amplified signal coming from processor 10 similar to the first preferred embodiment, with the exception, of course, that the signal must be sloped to account for the inertia of the transducers and mirror.
The invention also contemplates that the image memory device or video display controller used for storing each color component image could be shifted by the appropriate number of pixels in hardware dedicated to such image shifting within a very short period of time. As shown in Fig. 9, the field sequential RGB video pixel data is shifted by a digital image shifter by amounts determined by the vertical and horizontal offset signals (received from processor 10) before being transferred to the video display memory. Alternatively, the digital image shifter could be integrated into the converter 22. Once the composite color video image is received, the first image to be displayed, e.g. the red color component image, which does not need to be shifted for simple whole image step shifting, can be immediately relayed to the screen. While displaying the first red color component image, the hardware could shift the subsequent green and blue color component images by the appropriate amount indicated by processor 10, and relay them to the screen for display when required. A video display controller which can shift a whole video image vertically and horizontally on a screen of a simultaneous color video display unit is disclosed in U.S. Patent 4,737,778 to Nishi et al. Digital displays in which each pixel is addressed digitally are known in the art, such as the ferroelectric liquid crystal display (FELCD), the deformable mirror display (DMD), the active matrix liquid crystal display (AMLCD), and the field emitter display (FED).
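Purely for illustration, whole-image step shifting of one color field in the digital domain might look like the following sketch; the use of NumPy, the black fill at the exposed edges, and all names are assumptions, not details given in the patent.

import numpy as np

def shift_field(field_pixels, h_offset_px, v_offset_px):
    # Shift one color field by whole-pixel offsets before it is written to display
    # memory, filling the edges uncovered by the shift with black.
    shifted = np.zeros_like(field_pixels)
    h, w = field_pixels.shape[:2]
    dy, dx = int(round(v_offset_px)), int(round(h_offset_px))
    src = field_pixels[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    shifted[max(0, dy):max(0, dy) + src.shape[0],
            max(0, dx):max(0, dx) + src.shape[1]] = src
    return shifted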
The invention works equally well whether the device is mounted directly on the head or optically coupled to the head via fiber optic cables. In addition to using opto-mechanical mirrors to shift the image, it would equally be possible to use an opto-electronic device to shift the image in a functionally similar manner.
In the second preferred embodiment, as will be described with reference to Figures 1, 2, 4, 10 and 11, a temporal grey scale display incorporates the invention. In order to understand the invention, it is first necessary to have a clear understanding of why image break-up occurs. As is well known in the art, television creates the illusion of smooth motion by drawing successive images at a sufficiently fast rate that the human visual system can no longer see the individual images (i.e. the image is flicker-free). If the entire image or the objects within the image are moved appropriately relative to the previous image, the visual system will interpret the sequence of images as smooth motion. Figure 1 shows the motion of an upstanding arrow on a display moving from right to left in five successive images. The arrow represents any fixed object within the scene being displayed. In order to fixate on this object, the eye makes what is known as a "smooth pursuit eye movement", in the same way as it would if looking at a real object moving in the real world. Even though the image appears at a finite number of discrete locations, the eye will move or rotate with a substantially constant velocity to track the object. The rotating eye is illustrated in Figure 2. It will be noted that all of the consecutive images are focused on the retina at or near the fovea, allowing the observer to see a single image as shown in Fig. 4a. If, however, the display uses temporal modulation as described earlier and also illustrated in Fig. 10, the eye would normally track the images created in the most significant bit plane and the images created in the remaining bit planes would be focused at different points on the retina as shown in Fig. 4b.
The separation of the images will be proportional to the rotational velocity of the eye and the time differences between the bit planes. If all the bit planes are on and the motion is sufficiently slow, separation will not be apparent but the image will appear to be smeared. If the bit planes are changing during the motion, especially if the most significant bits are changing, the observer will perceive the object to have jitter. In the case of a head mounted display, head rotation will cause an equal and opposite motion of the image across the display. The observer's eye is still able to track specific objects within the image being displayed and sees the effects described above.
The objective of this invention is to compute the amount of separation which would occur based on the rotational head velocity of the observer and shift the entire image on the display an appropriate amount for each bit plane so that all the bit plane images in a single field are coincident on the retina. The observer will thus see a normal image; the effects described above being either eliminated or much reduced.
Figure 11 is a schematic of the second preferred embodiment and shows how the corrected display data is obtained. The head position processor 1 receives the raw head position data from a head tracking device such as a Polhemus magnetic tracker, as well as head rotational velocity data from a device such as the Watson C341 rate sensor. Rotational acceleration data may also be included. The head position processor sends either the predicted head position, as suggested by Uwe List, or the current head position to the image source 2, which may be an image sensor such as a television camera mounted on a gimbal system or a computer image generator. The video signal from the image source is sent to the Bit Plane Generator 3, which stores a complete field in a digital format, generates the timing waveforms for the particular temporal modulation scheme being used (a typical one is shown in Fig. 10) and sends the bit plane data during the appropriate intervals to the Display Electronics module 5, which drives the head mounted display 6. The Bit Plane Offset Generator 4 receives timing signals (H&V) from the image source, a bit plane sync from the bit plane generator and angular head velocity data from the head position processor. It generates H&V offsets for each bit plane except the most significant bit plane according to the formulas below:
Ho = (x × t) / Kh
Vo = (y × t) / Kv

where:
Ho = horizontal offset in pixels
Vo = vertical offset in pixels
x = angular yaw velocity of the head in degrees/sec
y = angular pitch velocity of the head in degrees/sec
Kh = a constant for the display giving the angular subtense between centres of adjacent pixels in the horizontal direction, in degrees/pixel
Kv = a similar constant giving the angular subtense between centres of adjacent pixels in the vertical direction, in degrees/pixel
t = the interval in time between the centre of the most significant bit plane and the bit plane being processed, in seconds
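These formulas translate directly into a small helper; the following sketch is illustrative only, and the function name and the choice of passing the bit plane centre times as a list are assumptions rather than details from the patent.

def bit_plane_offsets(yaw_rate_deg_s, pitch_rate_deg_s, kh_deg_per_px, kv_deg_per_px,
                      bit_plane_centres_s):
    # bit_plane_centres_s: time of each bit plane's centre within the field, most
    # significant bit plane first, in seconds.  Offsets are relative to that plane.
    msb_centre = bit_plane_centres_s[0]
    offsets = []
    for centre in bit_plane_centres_s:
        t = centre - msb_centre                    # interval t in the formulas above
        ho = yaw_rate_deg_s * t / kh_deg_per_px    # Ho = (x * t) / Kh
        vo = pitch_rate_deg_s * t / kv_deg_per_px  # Vo = (y * t) / Kv
        offsets.append((ho, vo))
    return offsets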
In the third preferred embodiment, as will be described with reference to Figures 5 and 12 to 15, a video display having a finite transport delay, in which the display or head orientation is predicted, incorporates the invention. In the third preferred embodiment, the virtual environment video display is a head mounted or helmet mounted display (HMD) of the type known in the art. The helmet is provided with position and angular acceleration sensors as is also known in the art. Figure 5 illustrates an exemplary acceleration curve as a user moves between two visual orientations. As can be seen, the acceleration in the example peaks at 100 milliseconds, with a deceleration or stopping of the head motion commencing near 200 milliseconds and ending near 400 milliseconds with the head and helmet in the new angular position. In Figure 5, it is presumed that the head position sensor and the computer image generator 20 require 100 milliseconds to detect head position and generate an image for the new head position (i.e. the transport delay is 100 ms). The curve illustrating the displayed image orientation with no position prediction correction shows considerable unwanted image motion near 250 milliseconds, indicated by the reference letter E as a difference of some 12°. In the prior art improvement, prediction of the future position using acceleration measurements results in the dashed curve for the image orientation, with a small but noticeable divergence between the predicted curve and the actual head orientation curve. As will be seen below, use of the method and apparatus according to the present invention can result in the displayed image orientation following the actual head orientation more closely, resulting in an almost imperceptible amount of image instability.
As illustrated in Figure 12, the apparatus according to the third preferred embodiment comprises a head position processor which receives the output signals from the head position sensor 45a and the head angular acceleration sensors 45c. Optionally, angular head velocity sensors 45b may be provided as well as or in place of the acceleration sensor. The head position processor 10 reads the raw data and outputs an actual head position output signal 41 fed to a summation device 44. The head position processor 10 also predicts the head or helmet position based on the actual position and the measurement of the head acceleration and/or head velocity. If a head velocity sensor is not used, the velocity is calculated either by differentiating position or, preferably, by integrating acceleration. The head position is predicted for a point in time ahead in the future by an amount equivalent to the transport delay inherent in the system. The predicted head position signal 13 is fed into a delay circuit 48 which delays the signal by an amount of time equal to the transport delay before feeding it to the summation device 44, where it is subtracted from the actual head position signal on line 41. This difference signal is fed to offset processors 10v and 10h, where the vertical and horizontal offsets respectively are determined, resulting in the vertical and horizontal offset signals fed to display 16.
In the case that the display is a CRT (cathode ray tube) video display, the horizontal and vertical offset signals are fed to the horizontal and vertical scan circuits. In the case that the shifting of the image is to be done optically, transducers 17 may be used to change the angular orientation of a mirror as illustrated in Figure 14, or similar transducers may be used to change the refraction of the image passing through a liquid filled prism 36 having transparent cover plates 34 and 36 moveable in angular orientation with respect to one another, as shown in Figure 15. In both Figures 14 and 15, the shifted image is viewed through an eyepiece 40 by an eye 21. The vertical and horizontal offsets can alternatively be carried out by image position shifting within the video display controller; a video display controller as disclosed in U.S. Patent 4,737,778 (Nishi et al.) may be used to vertically and horizontally shift the whole video image displayed on the screen of the video display unit 16. In the example illustrated in Figure 13, the observer's helmet position moves from a position P1 to a position P2 under constant velocity. This example is simplified in that it does not take into consideration normal acceleration and deceleration. In the first time frame, labeled as the transport delay, the display offset in one or both of the horizontal and vertical directions is illustrated to ramp upwardly for the duration of the transport delay, at which time the display offset is set back to zero and the new image is displayed on the display 16. The resetting of the display offset and the update in the image of the virtual environment take place without the observer seeing a sharp change in the image. At the point in time X when the actual head position has reached P2 and stopped, the predicted head position based on the previous velocity is for a position which continues along the same path beyond the position P2. At the instant that the actual head position stops and the delayed predicted head position continues to increase, the display offset is ramped to decrease so that the observed image is stationary in keeping with the actual head position.
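A minimal sketch of this delay-and-subtract correction follows; the class name, the per-field update structure and the queue used to model the delay circuit are assumptions for illustration, not the patent's implementation.

from collections import deque

class OffsetCorrector:
    def __init__(self, transport_delay_fields, deg_per_pixel_h, deg_per_pixel_v):
        self.pending = deque()          # predictions in flight, standing in for delay circuit 48
        self.delay_fields = transport_delay_fields
        self.kh = deg_per_pixel_h
        self.kv = deg_per_pixel_v

    def update(self, actual_yaw_pitch, predicted_yaw_pitch):
        # Called once per displayed field: store the prediction sent to the image
        # generator, and compare the delayed prediction with the actual orientation
        # to obtain the residual error that drives the display offsets.
        self.pending.append(predicted_yaw_pitch)
        if len(self.pending) <= self.delay_fields:
            return 0.0, 0.0             # pipeline not yet full; nothing to correct
        pred_yaw, pred_pitch = self.pending.popleft()
        act_yaw, act_pitch = actual_yaw_pitch
        h_offset_px = (act_yaw - pred_yaw) / self.kh
        v_offset_px = (act_pitch - pred_pitch) / self.kv
        return h_offset_px, v_offset_px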
Although the invention has been described as applied to a virtual environment system using a computer image generator as the image source, it can, with suitable modifications taking into account certain operational differences that will be apparent to one skilled in the art, be applied to virtual presence or telepresence systems which use image sensors such as television cameras mounted on head slaved gimbal systems. Accordingly, such systems are within the contemplation of the invention, and the claims are intended to encompass all types of virtual environment systems where delays would normally cause image instability.

Claims

1. An apparatus for displaying a virtual environment on a video display comprising: position processor means for generating a visual orientation signal indicating a visual orientation of said display with respect to said virtual environment, image generator means for generating a series of component images of said virtual environment for said visual orientation, said image generating means receiving said visual orientation signals, means for detecting any change in said visual orientation signal from a time when said signal was used by said image generator means to generate each said component image and a time of display of each said component image on said display to produce an offset shift signal, and means for shifting said image on said display in response to said offset shift signal, whereby the display of the virtual environment is improved.
2. The apparatus as claimed in claim 1, wherein said video display is a color field sequential display device, and said series of component images is a series of cycles of color component images, further comprising color filter means for making said color component images of said series appear to have a different color, such that a mixing of said color component images as seen with said color filter means provides an observer with a color image of said environment, wherein said detecting means comprise means for determining an angular velocity of a visual orientation of said display with respect to said virtual environment and for generating a velocity signal, and said offset shift signal is a function of said velocity signal.
3. The apparatus as claimed in claim 1, wherein said video display is a temporal modulation grey scale display device, and said series of component images is a series of grey scale component images to be displayed sequentially to provide an observer with an impression of grey scale images, wherein said detecting means comprise means for determining an angular velocity of a visual orientation of said display with respect to said virtual environment and for generating a head velocity signal, and said offset shift signal is a function of said velocity signal.
4. The apparatus as claimed in claim 1, wherein said image generator means has a finite transport delay time for generating and preparing an image for transmission on a video output signal.
5. The apparatus as claimed in claim 4, further comprising means for detecting at least one of an angular velocity and an angular acceleration of said visual orientation for producing a predictive signal, means for calculating a predicted visual orientation of said display with respect to said virtual environment based on said visual orientation signal and said predictive signal to produce a predicted visual orientation signal, said predicted visual orientation signal being connected to said image generator means in place of said visual orientation signal generated by said determining means, said predicted visual orientation signal being for a future point in time equal to a present time plus approximately said transport delay time.
6. The apparatus as claimed in claim 4, wherein said visual orientation determining means comprise an angular head position sensor, said video display being a head mounted display.
7. The apparatus as claimed in claim 5, wherein said video display is a head mounted display, said means for determining visual orientation comprise an angular head position sensor, and said means for determining at least one of an angular velocity and an angular acceleration comprise at least one of an angular head velocity sensor and an angular head acceleration sensor.
8. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said video display is a cathode ray tube (CRT) display device, and said shifting means comprise means for adjusting a horizontal offset and a vertical offset of said CRT display device.
9. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said detecting means comprise a processor being fed a sync signal and said velocity signal for generating vertical and horizontal offset signals for said display when displaying said color component images.
10. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said display is a cathode ray tube (CRT) display device and said offset signal comprises vertical and horizontal offset signals for controlling vertical and horizontal deflections in said CRT display device respectively.
11. The apparatus as claimed in claim 10, wherein said offset signals vary continuously over each field to compensate for delay in image presentation due to vertical scan time.
12. The apparatus as claimed in claim 10, wherein said offset signals vary continuously over each horizontal line scan of each field to roll said images on said display.
13. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said image shift means comprise a relay mirror having an adjustable angular orientation with respect to said video display which is adjustable by transducer means.
14. The apparatus as claimed in claim 2 or 3, wherein said velocity signal comprises head pitch, roll and yaw rate data.
15. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said video display is a digital display, said image shift means comprising vertical and horizontal digital image shift circuits.
16. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said shifting means comprise image relay optics including controllable means for angularly displacing horizontally and vertically an image relayed by said optics.
17. The apparatus as claimed in claim 1, 2, 3 or 4, wherein said video display is a head mounted display.
18. A method for displaying a virtual environment on a video display comprising the repeated steps of: determining a visual orientation of said display with respect to said virtual environment; generating a series of component images of said virtual environment for said visual orientation; displaying said images on said display; detecting any change in said visual orientation which may have occurred between a time when said visual orientation was determined and said image is to be displayed; and shifting said image on said display an amount equivalent to said change, whereby the display of the virtual environment is improved.
PCT/CA1996/000789 · 1995-11-27 · 1996-11-27 · Method and apparatus for displaying a virtual environment on a video display · WO1997020244A1 (en)

Priority Applications (2)

Application Number · Priority Date · Filing Date · Title
AU76164/96A · AU7616496A (en) · 1995-11-27 · 1996-11-27 · Method and apparatus for displaying a virtual environment on a video display
CA002238693A · CA2238693C (en) · 1995-11-27 · 1996-11-27 · Method and apparatus for displaying a virtual environment on a video display

Applications Claiming Priority (4)

Application Number · Priority Date · Filing Date · Title
US08/563,195 · 1995-11-27
US08/563,195 · US5933125A (en) · 1995-11-27 · 1995-11-27 · Method and apparatus for reducing instability in the display of a virtual environment
US08/593,842 · US5764202A (en) · 1995-06-26 · 1996-01-30 · Suppressing image breakup in helmut mounted displays which use temporally separated bit planes to achieve grey scale
US08/593,842 · 1996-01-30

Publications (1)

Publication Number · Publication Date
WO1997020244A1 · 1997-06-05

Family

ID=27073199

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
PCT/CA1996/000789 · WO1997020244A1 (en) · 1995-11-27 · 1996-11-27 · Method and apparatus for displaying a virtual environment on a video display

Country Status (3)

Country · Link
AU (1) · AU7616496A (en)
CA (1) · CA2238693C (en)
WO (1) · WO1997020244A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
NL1018198C2 (en)*2001-06-012002-12-03Tno Head mounted display device.
GB2381725A (en)*2001-11-052003-05-07H2EyeGraphical user interface for a remote operated vehicle
WO2004088348A1 (en)*2003-03-312004-10-14Seeing Machines Pty LtdEye tracking system and method
US8015507B2 (en)2001-11-052011-09-06H2Eye (International) LimitedGraphical user interface for a remote operated vehicle
WO2014105646A1 (en)*2012-12-262014-07-03Microsoft CorporationLow-latency fusing of color image data in a color sequential display system
WO2014194135A1 (en)2013-05-302014-12-04Oculus VR, Inc.Perception based predictive tracking for head mounted displays
CN105593924A (en)*2013-12-252016-05-18索尼公司Image processing device, image processing method, computer program, and image display system
US9874932B2 (en)2015-04-092018-01-23Microsoft Technology Licensing, LlcAvoidance of color breakup in late-stage re-projection
WO2018086941A1 (en)2016-11-082018-05-17Arcelik Anonim SirketiSystem and method for providing virtual reality environments on a curved display
EP3596705A4 (en)*2017-03-172020-01-22Magic Leap, Inc. MIXED REALITY SYSTEM WITH COLORED VIRTUAL CONTENT AND METHOD FOR PRODUCING VIRTUAL CONTENT THEREFOR
US10649211B2 (en)2016-08-022020-05-12Magic Leap, Inc.Fixed-distance virtual and augmented reality systems and methods
US10678324B2 (en)2015-03-052020-06-09Magic Leap, Inc.Systems and methods for augmented reality
US10769752B2 (en)2017-03-172020-09-08Magic Leap, Inc.Mixed reality system with virtual content warping and method of generating virtual content using same
US10812936B2 (en)2017-01-232020-10-20Magic Leap, Inc.Localization determination for mixed reality systems
US10838207B2 (en)2015-03-052020-11-17Magic Leap, Inc.Systems and methods for augmented reality
US10861237B2 (en)2017-03-172020-12-08Magic Leap, Inc.Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
US10909711B2 (en)2015-12-042021-02-02Magic Leap, Inc.Relocalization systems and methods
US11112860B2 (en)2015-09-112021-09-07Bae Systems PlcHelmet tracker buffeting compensation
US11189189B2 (en)2018-08-022021-11-30Elbit Systems Ltd.In-flight training simulation displaying a virtual environment
US11379948B2 (en)2018-07-232022-07-05Magic Leap, Inc.Mixed reality system with virtual content warping and method of generating virtual content using same
US11429183B2 (en)2015-03-052022-08-30Magic Leap, Inc.Systems and methods for augmented reality
US11501680B2 (en)2018-07-232022-11-15Magic Leap, Inc.Intra-field sub code timing in field sequential displays
EP4145437A1 (en)*2021-09-032023-03-08Honeywell International Inc.Systems and methods for providing image motion artifact correction for a color sequential (cs) display


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number · Priority date · Publication date · Assignee · Title
EP0502643A2 (en) * · 1991-03-06 · 1992-09-09 · Fujitsu Limited · Image processing unit and method for executing image processing
US5446834A (en) * · 1992-04-28 · 1995-08-29 · Sun Microsystems, Inc. · Method and apparatus for high resolution virtual reality systems using head tracked display
WO1994009472A1 (en) * · 1992-10-22 · 1994-04-28 · Board Of Regents Of The University Of Washington · Virtual retinal display
US5422653A (en) * · 1993-01-07 · 1995-06-06 · Maguire, Jr.; Francis J. · Passive virtual reality
US5369450A (en) * · 1993-06-01 · 1994-11-29 · The Walt Disney Company · Electronic and computational correction of chromatic aberration associated with an optical system used to view a color video display
EP0709816A2 (en) * · 1994-10-28 · 1996-05-01 · Canon Kabushiki Kaisha · Display apparatus and its control method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
B. Welch et al.: "HDTV Virtual Reality", Japan Display 1992, pages 407-410, XP000618026 *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
NL1018198C2 (en)* | 2001-06-01 | 2002-12-03 | Tno | Head mounted display device.
WO2002097513A1 (en)* | 2001-06-01 | 2002-12-05 | Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno | Head mounted display device
GB2381725A (en)* | 2001-11-05 | 2003-05-07 | H2Eye | Graphical user interface for a remote operated vehicle
GB2381725B (en)* | 2001-11-05 | 2004-01-14 | H2Eye | Graphical user interface for a remote operated vehicle
US8015507B2 (en) | 2001-11-05 | 2011-09-06 | H2Eye (International) Limited | Graphical user interface for a remote operated vehicle
WO2004088348A1 (en)* | 2003-03-31 | 2004-10-14 | Seeing Machines Pty Ltd | Eye tracking system and method
US7653213B2 (en) | 2003-03-31 | 2010-01-26 | Seeing Machines Pty Ltd | Eye tracking system and method
WO2014105646A1 (en)* | 2012-12-26 | 2014-07-03 | Microsoft Corporation | Low-latency fusing of color image data in a color sequential display system
WO2014194135A1 (en) | 2013-05-30 | 2014-12-04 | Oculus VR, Inc. | Perception based predictive tracking for head mounted displays
US10732707B2 (en) | 2013-05-30 | 2020-08-04 | Facebook Technologies, Llc | Perception based predictive tracking for head mounted displays
EP3004966A4 (en)* | 2013-05-30 | 2017-01-04 | Oculus VR, LLC | Perception based predictive tracking for head mounted displays
US11181976B2 (en) | 2013-05-30 | 2021-11-23 | Facebook Technologies, Llc | Perception based predictive tracking for head mounted displays
US9897807B2 (en) | 2013-05-30 | 2018-02-20 | Oculus Vr, Llc | Perception based predictive tracking for head mounted displays
US10281978B2 (en) | 2013-05-30 | 2019-05-07 | Facebook Technologies, Llc | Perception based predictive tracking for head mounted displays
EP3486707A1 (en)* | 2013-05-30 | 2019-05-22 | Facebook Technologies, LLC | Perception based predictive tracking for head mounted displays
CN105593924A (en)* | 2013-12-25 | 2016-05-18 | 索尼公司 | Image processing device, image processing method, computer program, and image display system
US11619988B2 (en) | 2015-03-05 | 2023-04-04 | Magic Leap, Inc. | Systems and methods for augmented reality
US10678324B2 (en) | 2015-03-05 | 2020-06-09 | Magic Leap, Inc. | Systems and methods for augmented reality
US12386417B2 (en) | 2015-03-05 | 2025-08-12 | Magic Leap, Inc. | Systems and methods for augmented reality
US10838207B2 (en) | 2015-03-05 | 2020-11-17 | Magic Leap, Inc. | Systems and methods for augmented reality
US11256090B2 (en) | 2015-03-05 | 2022-02-22 | Magic Leap, Inc. | Systems and methods for augmented reality
US11429183B2 (en) | 2015-03-05 | 2022-08-30 | Magic Leap, Inc. | Systems and methods for augmented reality
US9874932B2 (en) | 2015-04-09 | 2018-01-23 | Microsoft Technology Licensing, Llc | Avoidance of color breakup in late-stage re-projection
US11112860B2 (en) | 2015-09-11 | 2021-09-07 | Bae Systems Plc | Helmet tracker buffeting compensation
US10909711B2 (en) | 2015-12-04 | 2021-02-02 | Magic Leap, Inc. | Relocalization systems and methods
US11288832B2 (en) | 2015-12-04 | 2022-03-29 | Magic Leap, Inc. | Relocalization systems and methods
US11536973B2 (en) | 2016-08-02 | 2022-12-27 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods
US10649211B2 (en) | 2016-08-02 | 2020-05-12 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods
US11073699B2 (en) | 2016-08-02 | 2021-07-27 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods
WO2018086941A1 (en) | 2016-11-08 | 2018-05-17 | Arcelik Anonim Sirketi | System and method for providing virtual reality environments on a curved display
US10812936B2 (en) | 2017-01-23 | 2020-10-20 | Magic Leap, Inc. | Localization determination for mixed reality systems
US11711668B2 (en) | 2017-01-23 | 2023-07-25 | Magic Leap, Inc. | Localization determination for mixed reality systems
US11206507B2 (en) | 2017-01-23 | 2021-12-21 | Magic Leap, Inc. | Localization determination for mixed reality systems
US10861237B2 (en) | 2017-03-17 | 2020-12-08 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
US10769752B2 (en) | 2017-03-17 | 2020-09-08 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US11315214B2 (en) | 2017-03-17 | 2022-04-26 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same
US11978175B2 (en) | 2017-03-17 | 2024-05-07 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same
US11410269B2 (en) | 2017-03-17 | 2022-08-09 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US11423626B2 (en) | 2017-03-17 | 2022-08-23 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
US10964119B2 (en) | 2017-03-17 | 2021-03-30 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same
EP3596705A4 (en) | 2017-03-17 | 2020-01-22 | Magic Leap, Inc. | MIXED REALITY SYSTEM WITH COLORED VIRTUAL CONTENT AND METHOD FOR PRODUCING VIRTUAL CONTENT THEREFOR
US10861130B2 (en) | 2017-03-17 | 2020-12-08 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US10762598B2 (en) | 2017-03-17 | 2020-09-01 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same
US12190468B2 (en) | 2018-07-23 | 2025-01-07 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US11790482B2 (en) | 2018-07-23 | 2023-10-17 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US11501680B2 (en) | 2018-07-23 | 2022-11-15 | Magic Leap, Inc. | Intra-field sub code timing in field sequential displays
US11379948B2 (en) | 2018-07-23 | 2022-07-05 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same
US11189189B2 (en) | 2018-08-02 | 2021-11-30 | Elbit Systems Ltd. | In-flight training simulation displaying a virtual environment
EP4145437A1 (en)* | 2021-09-03 | 2023-03-08 | Honeywell International Inc. | Systems and methods for providing image motion artifact correction for a color sequential (cs) display
US11790860B2 (en) | 2021-09-03 | 2023-10-17 | Honeywell International Inc. | Systems and methods for providing image motion artifact correction for a color sequential (CS) display

Also Published As

Publication number | Publication date
AU7616496A (en) | 1997-06-19
CA2238693A1 (en) | 1997-06-05
CA2238693C (en) | 2009-02-24

Similar Documents

Publication | Publication Date | Title
US5684498A (en) | Field sequential color head mounted display with suppressed color break-up
CA2238693C (en) | Method and apparatus for displaying a virtual environment on a video display
US5933125A (en) | Method and apparatus for reducing instability in the display of a virtual environment
KR100520699B1 (en) | Autostereoscopic projection system
KR101125978B1 (en) | Display apparatus and method
JP4826602B2 (en) | Display device and method
Ezra et al. | New autostereoscopic display system
US11281290B2 (en) | Display apparatus and method incorporating gaze-dependent display control
US6454411B1 (en) | Method and apparatus for direct projection of an image onto a human retina
EP1154655B2 (en) | Apparatus and method for displaying three-dimensional image
KR20180066211A (en) | Improvements in display and display
EP3514606A1 (en) | Eye tracking for head-worn display
JP2000047139A (en) | 3D image display device
WO2019154942A1 (en) | Projection array light field display
Riecke et al. | Selected technical and perceptual aspects of virtual reality displays
Regan et al. | The problem of persistence with rotating displays
US5764202A (en) | Suppressing image breakup in helmut mounted displays which use temporally separated bit planes to achieve grey scale
US20210088787A1 (en) | Image frame synchronization in a near eye display
US11830396B2 (en) | Display apparatus
JPH08334730A (en) | 3D image reproduction device
CN114174893A (en) | Display device with reduced power consumption
WO2020263476A1 (en) | Image painting with multi-emitter light source
GB2039468A (en) | Improvements in or relating to visual display apparatus
CN115469462A (en) | Head-up display and head-up display equipment
Fernie | Helmet‐mounted display with dual resolution

Legal Events

Date | Code | Title | Description
AK | Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL | Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG

DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 | Ep: the epo has been informed by wipo that ep was designated in this application
ENP | Entry into the national phase

Ref document number: 2238693

Country of ref document: CA

Kind code of ref document: A

Ref document number: 2238693

Country of ref document: CA

NENP | Non-entry into the national phase

Ref document number: 97520032

Country of ref document: JP

REG | Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 | Ep: pct application non-entry in european phase
