WO2006120489A1 - Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure - Google Patents

Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure

Info

Publication number
WO2006120489A1
Authority
WO
WIPO (PCT)
Prior art keywords: model, displayed, fixed, light, display
Application number
PCT/HU2006/000042
Other languages
French (fr)
Inventor
László HOLAKOVSZKY
Ádám SZENTGÁLI
Original Assignee
Mta Számitástechnikai És Automatizálási Kutató Intézet
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Mta Számitástechnikai És Automatizálási Kutató Intézet
Publication of WO2006120489A1

Abstract

The invention is a method of inserting a virtual image into a real environment and a device executing the said method. According to the procedure described in the present invention, the real environment to be displayed in the image is created by the blending of a 3D model and a 2D photo, and the virtual image to be displayed is created as a 3D model; the two are then merged, the blended 3D model is illuminated responsive to the signal output of a light meter 25 simulating the real light conditions of the environment, and the display is reduced to a panorama scroll from a designated point P. The display is ensured by a standing virtual display comprising an encoder 10, with its stator part 11 fixed to a mounting 1 and its rotor part 12 connected to a display supporting portion 2, a computer 24 and a light meter 25.

Description

Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure
The subject of the invention relates to a procedure for the insertion of a virtual image into a real environment and a device for the execution of the procedure.
The insertion of a virtual image into a real environment is needed in various fields of life. In archaeology, for example, it may help a visitor to visualize the original buildings, which otherwise may be difficult based on the remaining ruins. The need for it also occurs in the property market to present the buildings before the actual construction works start. In the tourism sector visitors may be shown not only the present, but the older state of districts, streets, buildings, etc.
Paintings, models and computer-displayed animation are only able to do this to a restricted extent, because these types of depiction remove the viewer from the location, the scale and the perspective are different, and the viewer is not given the feeling of standing among the reconstituted buildings. The technique of augmented reality makes it viable to present lifelike virtual buildings created by 3D modelling in their original place, size, distance and direction.
US 5,373,857 patent describes a virtual reality headset comprising a head tracker for determining the orientation of the headset relative to the earth's magnetic field. The image displayed by the microdisplays will change in sync with any movement of the user's head, controlling which part of the image is displayed, enabling the user to look around in the virtual space.
Similar head-mounted displays are widely known. Their common disadvantages are: 1. Resulting from their weight, they exert pressure on the user's nose and forehead; 2. The apparatus covers a considerable part of the field of vision, thus it makes walking disoriented and dangerous; 3. The user has to carry the power supply required for the head-mounted display, computer, head tracker, satellite navigation system, microwave transceiver and cables in a backpack, etc.; 4. Heavy, fragile, valuable equipment cannot be safely handed over to just any user, including children; 5. The unrestricted movement of the users would necessitate programming the space to be displayed from every single angle, thus the preparation work would become enormous and the motion-process display would require extremely high processor capacity; 6. The display of virtual buildings is unable to follow the changing light conditions in a real environment (the changing angle of the sun, the change of shadows in length and shape, and the changing weather conditions), thus the displayed picture will not be true to life.
The aim of the present invention is to eliminate the deficiencies and inconveniences mentioned above.
The set task was solved with the procedure according to the invention, in which the real environment visible on the projected image is created by the merged combination of a 3D model and a 2D photograph, and the virtual buildings to be presented are created as a 3D model; this combined 3D model is illuminated using the signals of a light meter simulating the real light conditions of the environment; the display is then reduced to a panorama display rotated about a vertical axis from a specified point, thus the computing task is radically simplified. Meanwhile, the display is ensured by a device not carried by the visitors: they simply look into the apparatus fixed at a designated point of the field to be displayed and rotated around an axis.
The basis of the procedure according to the invention relates to the recognition that not only the virtual building but also the real environment needs to be modelled in three dimensions, since in this case the merged model can be illuminated by artificially generated light sources simulating, and in proportion to, the real light conditions measured by the light meter; thus the current light conditions are displayed, ensuring life-like results. This is important because in sunshine the virtual buildings cast shadows onto the real environment and vice versa; through this, the light conditions present in the real environment at any particular moment appear in the image, achieving a three-dimensional, life-like appearance that fits in with the real environment. We also recognized that, in order to achieve an image accurately illustrating real light conditions, a light meter capable of sensing and measuring both the strength of the focused light of the sun and the diffuse light coming from the sky is needed, and this can be realized with a motor-driven light meter, which takes the path of the sun in the sky into account and comprises two photodetectors (light sensors) and shield stripes. Furthermore, we recognized that a real-life model can be created by the blending of the 3D model and the 2D photo by texturing the interface elements of the 3D model, viewed from a designated point P, with the corresponding parts of the abovementioned panorama picture.
Finally, we recognized that by using software to monitor the signals from an encoder (angle of rotation signal generator) we can "look around" in sync with the rotation of the device, in other words scroll in the virtual image.
The step that led to the implementation of the invention in connection with the apparatus was the recognition that it is most convenient for the users and safest for the service provider to have the virtual image display unit fixed onto a mounting, because in this case no elements are mounted onto the body or head of the users. The elements and segments of a block of buildings or a district, if viewed from a suitable angle and distance, are situated horizontally; thus, to have them displayed, it is enough to scroll horizontally from a designated point P, which considerably reduces the presentation task and consequently makes it more feasible. If the virtual reality display is connected to the rotor part of an encoder, while the stator part of the said signal generator is fixed to the real environment, preferably to the ground, we may synchronise, using the electric signal of the encoder, the overlap after rotation between the displayed merged 3D model, outside its virtual elements, and the real environment.
In accordance with the above recognitions, we have solved the set aim with a display device that consists of a mounting fixed to the real environment, preferably to the ground; a display supporting portion with a vertical axis of rotation fixed to the mounting; an encoder with its stator part fixed to the mounting and its rotor part connected to the display supporting portion; a virtual reality display fixed onto the display supporting portion; a computer; and a light meter. It is advisable that the light meter comprises two photodetectors, of which one is shaded by a shield placed into the actual path of the sunrays. It is also recommended that the virtual reality display comprises two microdisplays, corresponding magnifying optical elements, optional trans-illuminating light sources, a housing surrounding the above-mentioned elements and two viewing windows designed into the said housing. It is further recommended that the mounting comprises an outward pipe fixed, preferably to the ground, and an inward pipe mechanically locked against rotation, with an exterior diameter loosely fitting into the interior diameter of the outward pipe. In the said case the stator part of the encoder is fixed to the inward pipe, which can be lifted or lowered without rotation, thus it can be adjusted to the eye level of the user. Its lifted position is further assisted if the inward pipe is fastened to a lifting-positioning apparatus, preferably a fixed-sleeved one, for example to the piston of a gas telescope.
The invention is further described in conjunction with the enclosed drawings in which:
Fig. 1 is a side view of the assembly structure of the apparatus; the elements inside the housing are indicated by dashed lines.
Fig. 2 is a vertical cross-sectional view of the apparatus shown in Fig. 1, with the head part in view.
Fig. 3 is a cross-sectional view of Fig. 2 through line B-B.
Fig. 4 is a cross-sectional view of Fig. 2 through line A-A.
Fig. 5 is a perspective view of the light meter in accordance with the invention.
Fig. 6 is a floor plan of the site to be displayed.
Fig. 7 is a sketched panorama photo of the site to be displayed.
Fig. 8 is a 3D model of the landmarks in the site to be displayed.
Fig. 9 is a perspective view of the 3D model of Fig. 8, i.e. the site model from the desired viewing angle.
Fig. 10 is a selected landmark, where the sides of the ruin of a house 41 are textured with the corresponding parts of the panorama photo.
Fig. 11 is the site model textured with the elements of the panorama photo.
Fig. 12 is a 3D model of the reconstructed buildings.
Fig. 13 is the site model complemented by the buildings, i.e. the merged 3D site-building model.
Fig. 14 is the merged 3D site-building model applied to the background, i.e. the 3D site-building-background model.
Fig. 15 is the site-building-background model artificially illuminated by simulated light.
Fig. 16 is the complete site-building-background model and the selected part to be displayed.
Fig. 17 illustrates the various pictures displayed by the microdisplay when the rotor part of the apparatus is rotated.
Fig. 18 is the circuitry of the apparatus.
DESCRIPTION
Referring to Fig. 1 and in accordance with the present invention, the apparatus is anchored near the site to be presented as follows: a metal mounting 1 is vertically fixed, a display supporting portion 2 having a vertical axis of rotation is connected to the said mounting 1, and a virtual reality display 3 is fixed onto its top end. The housing 19 of the said virtual reality display 3, with two viewing windows 18 designed into it, comprises two microdisplays 20, two magnifying optical elements 21 and two trans-illuminating light sources 22.
Referring to Fig. 2 and in accordance with the present invention, the metal pipe mounting 1 is fixed to a manhole cover 5 by screws 4, and the manhole cover 5 is fixed to an underground concrete manhole 7 by screws 6. The mounting 1 comprises an outward pipe 8 and an inward pipe 9 having a smaller exterior wall diameter than the interior wall diameter of the outward pipe 8, with a telescopic sliding connection between the two. The stator part 11 of an electric encoder 10 is fixed to the top end of the inward pipe 9 by a fastening not indicated herein. The axis-like rotor part 12 of the encoder 10 is fixed by a bolt 13 to the bottom end of a display supporting portion 2, where the said display supporting portion 2 has a telescopic sliding connection with the outward pipe 8. A lifting-positioning apparatus 15, for example a piston 16 of a gas telescope or a hydraulic jack, is connected by a nut 14 to the central bore of the bottom end-wall of the inward pipe 9. The bottom end of the lifting-positioning apparatus 15 is fixed by screws 17 to the bottom of the manhole 7. A virtual reality display 3 is secured to the top end of the display supporting portion 2. The display supporting portion 2 comprises rotating handle bars 23 at opposite sides. Further, in accordance with the invention, the apparatus comprises a computer 24, which, for example, is placed into the manhole 7, hermetically sealed to avoid dampness (with a cable connection it can instead be placed in a nearby building), and a light meter 25 placed near the apparatus in open air under direct sunlight, avoiding shade, for example on top of a building. The microdisplays 20, trans-illuminating light sources 22, encoder 10, computer 24 and light meter 25 are connected into the same circuit and to a power supply, for example a battery or the grid.
Fig. 3 shows the cross-sectional view of Fig. 2 through line B-B, where a horizontal rib 27, fitting into the vertical slot 20 of the outward pipe 8 and preventing it from rotation, is assembled into the lower portion of the inward pipe 9.
Fig. 4 shows the cross-sectional view of Fig. 2 through line A-A, where a buffer nose 28 is secured onto the lower portion of the display supporting portion 2; after rotation it butts against the first end 30 and second end 31 surfaces delimiting the sectored wall thinning 29 on the internal surface of the outward pipe 8, thus determining the rotation angle of the display supporting portion 2.
Fig. 5 shows the light meter 25, which, in accordance with the invention, comprises a motor housing 32; a stepper motor 33 placed in it; a mainboard 35 fixed to the shaft 34 of the stepper motor 33 and rotated by it; a lower photodetector 36, for example a photodiode, placed in the center of the upper plate of the mainboard 35; a transparent shade 37 fixed to the edge of the mainboard 35; an upper photodetector 38, for example a photodiode, mounted on the top of the transparent shade 37; and a non-transparent shield stripe 39, wider than the horizontal size of the lower photodetector 36, painted on, or in the form of a plate glued to, the outer surface of the shade. The mainboard 35 and the transparent shade 37 are rotated at given intervals by the stepper motor 33, commanded by the computer 24 running a software program that calculates the actual position of the sun from astronomic coordinates, with the condition that the shield stripe 39 is placed between the lower photodetector 36 and the sun, thus casting a shadow (Fig. 5) onto the lower photodetector 36. The position of the far end of the shield stripe 39 is adjusted to the point where the sun reaches its greatest height above the horizon, and its width is reduced to the minimum capable of casting a shadow onto the lower photodetector 36; thus it is ensured that the lower photodetector 36 detects the diffuse light coming from every direction of the sky with minimum (practically negligible) occlusion. The upper photodetector 38 detects both the diffuse light coming from the sky and the focused light coming from the sun; thus the rate of the focused light coming from the sun derives from the difference of the signals detected by the lower 36 and upper 38 photodetectors. The said information is needed for the simulation of real light conditions in the virtual image.
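The two measurements described above can be sketched in a few lines. The following Python sketch is illustrative only and not part of the patent: the function names, units and the azimuth convention are assumptions. It splits the two detector signals into the diffuse component (measured by the shaded lower photodetector 36) and the direct sun component (the difference of the two signals), and approximates the solar azimuth used to rotate the shield stripe toward the sun.

```python
import math

def split_light_components(upper_signal, lower_signal):
    """Split the two photodetector readings into diffuse skylight and
    direct sunlight. The lower photodetector (36) sits in the shadow of
    the shield stripe (39), so it measures only diffuse light; the upper
    photodetector (38) measures diffuse plus direct light."""
    diffuse = lower_signal
    direct = max(0.0, upper_signal - lower_signal)  # clamp sensor noise
    return diffuse, direct

def shield_angle_deg(hour_angle_deg, declination_deg, latitude_deg):
    """Approximate solar azimuth (0 = due south, positive toward west),
    a simplification of the sun-position program the description mentions.
    Standard astronomical formula; angles in degrees."""
    h = math.radians(hour_angle_deg)
    d = math.radians(declination_deg)
    lat = math.radians(latitude_deg)
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(lat) - math.tan(d) * math.cos(lat))
    return math.degrees(az)
```

At solar noon (hour angle 0) the shield is rotated to azimuth 0, i.e. due south in this convention.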
Figures 6-17 illustrate the core of the method in accordance with the invention and, through a simplified example, present the computer insertion of a virtual image into a real environment.
Fig. 6 illustrates the floor plan of an archaeological site, parts of which are to be virtually reconstructed. Out of the listed L-shaped ruin 40, ruin of a house 41, ruins of a peristyle 42 and ruin of a house 43, we intend to virtually build up and present in their real environment the ruin of a house 41 and the ruins of a peristyle 42. The virtual reality display is placed at eye level at point P with a proper viewpoint onto the site. In our example, the horizontal angle α of the image of the virtual reality display 3 is 55 degrees, and the entire viewing angle β of the site to be illustrated, viewed from the designated point P, is 130 degrees. (We must note that the rotational viewing angle can be increased to 360 degrees if needed.) The aim is that, as the virtual reality display 3 is rotated through the entire viewing angle β, the horizontal angle α of the virtual image covers every single segment of the entire viewing angle β. Based on the horizontal coordinates of the floor plan and the supplementary height data, the surface and the landmarks of the site can be modelled.
Referring to Fig. 7, first a panorama photo of the field to be presented is taken using any proper technology. It is advisable that a photo is taken every 30 degrees by a digital camera with a viewing angle of 45 degrees, that is, the frames overlap by 15 degrees; the frames are then merged into a single panorama picture by suitable software, and the resulting file is stored digitally. Our example results in an image with a 140-degree horizontal and 30-degree vertical viewing angle. It is also recommended that the photo be taken in gloomy weather conditions in strong diffuse light, avoiding direct sunlight, thus resulting in a well-lit, detailed, yet shadowless image, which may be lightened or darkened, and where, in its 3D textured version, the simulation of light angle and deeper shadow depends on the artificial illumination created according to the needs of the following steps of the process.
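The frame count implied by these numbers can be checked with a small helper; this sketch is an illustration added here, not part of the patent, and the function name is an assumption.

```python
import math

def frames_needed(total_deg, camera_fov_deg, step_deg):
    """Number of shots needed to cover a panorama of total_deg when the
    camera's horizontal field of view is camera_fov_deg and the camera is
    advanced step_deg between shots (overlap = camera_fov_deg - step_deg)."""
    if total_deg <= camera_fov_deg:
        return 1
    return math.ceil((total_deg - camera_fov_deg) / step_deg) + 1

# 140-degree panorama, 45-degree camera, 30-degree steps (15-degree overlap):
print(frames_needed(140, 45, 30))  # 5 frames
```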
Referring to Fig. 8, during the next phase a computerized 3D object model is designed of the site to be illustrated. The approximation of the surface is achieved by a plate, or in the case of a more complex surface by a geometrical shape or solid following the line of the surface, and then the landmarks are modelled in the desired detail. In our simplified example, during object modelling the approximation of the L-shaped ruin 40, ruin of the house 41, ruins of the peristyle 42 and ruin of the house 43 is achieved by geometrical prisms and cylinders. Considering that the presentation is done at eye level from the perspective of a standing person, where the far horizon is also viewed, modelling is restricted to those landmarks that may cast shadow during a reasonable period (most reasonably, during the opening hours of a museum).
Referring to Fig. 9, the modelled surface and landmarks are rotated into the same perspective as seen in the panorama photo.
Referring to Fig. 10, in the next phase the corresponding surface elements of the site model are textured with the corresponding parts of the panorama photo. For example, the corresponding part 44a of the panorama photo described at Fig. 7 is placed onto the forefront of the model of the ruin of the house 41; part 44b onto the upper side; parts 44c and 44d onto the two sides of the entrance; part 44e onto the inner back front; and finally part 44f onto the inner left side, thus resulting in the textured version 41' of the ruin of the house 41. Naturally, as the model is only textured from the given viewing angle, if the model of the ruin of the house 41 is rotated in space, it becomes visible that parts and segments omitted from texturing stay white, or in the color applied when modelling. Provided that the model is not rotated, parts 44a-44f may be textured onto the ruin of the house 41 at once. Moreover, the complete site model can be textured with the full panorama photo, thus the said process is considerably shortened.
Fig 11 shows the site model textured with the panorama photo.
Referring to Fig. 12, in the next phase of the method the buildings and ruins to be virtually reconstructed are modelled according to the guidelines and drawings of architectural historians and archaeologists, and based on the exact position and size of the elements to be reconstructed. In our example, the reconstructed model of the house 45 based on the ruin of the house 41 and the reconstructed model of the peristyle 46 based on the ruins of the peristyle 42 are designed, which are thus the imaginary complementation of the ruins. The models can be elaborated in greater detail, enriched with fine details and rendered with the most appropriate building-material textures. As shown in Fig. 13, the building models (Fig. 12) are overlaid onto the selected ruins of the site model in the next phase of the method: the model of the house 45 onto the ruin of the house 41, and the model of the peristyle 46 onto the ruins of the peristyle 42. Thus the merged 3D site-building model is created, which shows, from the perspective of the virtual reality display 3, the imagined buildings by which the archaeological site is complemented.
Referring to Fig. 14, in the next phase of the method the panorama photo (Fig. 7), comprising the sky and the background of the site, is merged into the blended 3D model, thus resulting in the complete 3D site-building-background model. The display of the background can be achieved by texturing a cylinder-shaped plate, or a spherical-shaped plate approximated by triangular plates, with the corresponding parts of the panorama photo. It is advisable that the background plate be placed at a reasonable distance from the site model, so that the landmarks of the site and the modelled buildings do not cast shadow on it when the merged model is illuminated by simulated sunlight coming from different angles. The background is replaced under software control according to the lighting conditions and the change of seasons, updated from a directory, thus following the characteristics of lighting, weather and the changes of the seasons.
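Placing the panorama background on a cylinder around the viewpoint P amounts to a simple coordinate mapping. The following hypothetical helper (an illustration added here, not from the patent; the 140x30-degree angles come from the example above) maps normalized panorama coordinates onto a background cylinder of a chosen radius:

```python
import math

def panorama_to_cylinder(u, v, radius, pano_h_deg=140.0, pano_v_deg=30.0):
    """Map normalized panorama coordinates u, v in [0, 1] onto a point of a
    background cylinder of the given radius centred on the viewpoint P.
    u = 0.5, v = 0.5 is the centre of the panorama, straight ahead."""
    theta = math.radians((u - 0.5) * pano_h_deg)  # horizontal angle from centre
    phi = math.radians((v - 0.5) * pano_v_deg)    # vertical angle from centre
    x = radius * math.sin(theta)                  # sideways offset
    y = radius * math.tan(phi)                    # height on the cylinder wall
    z = radius * math.cos(theta)                  # forward distance
    return x, y, z
```

The larger the chosen radius, the less the modelled landmarks and buildings can cast shadows onto the background plate, which is why the description recommends a reasonable distance.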
Referring to Fig. 15, the blended 3D site-building model is illuminated according to the present light conditions, simulated by software taking into account the signals of the light meter 25 described at Fig. 5. The signals of the upper photodetector 38 and lower photodetector 36 of the light meter 25 are converted by analog-to-digital conversion and transmitted to the computer 24, where the acquired information is processed. In gloomy weather conditions the merged 3D site-building model is illuminated by simulated diffuse light of the sky, with its strength proportional to the value V measured by the lower photodetector 36. During sunshine, however, the values measured by the upper sensor 38 and lower sensor 36 differ; the value N derives from the difference of the two and indicates the rate of the light coming from the direction of the sun, meaning that the model is also illuminated by a directed light simulating sunlight with strength proportional to the said value N. Due to the directed light, well-lit and shaded parts become visible, the virtual elements casting shadow on the modelled real environment in the model and vice versa. The depth of shadow is proportional to the difference of the values N and V, and the length and depth of the shadows are identical to the ones in real life; as time passes, they change accordingly, following the travel of the sun. Fig. 16 shows the entire 3D site-building-background model and the part selected for display. The rendered view of the entire 3D site-building-background model is a panorama with the dimensions of 140×30 degrees, of which, in our example, the microdisplay is capable of simultaneously displaying only a part, namely 35×30 degrees. To view further parts of the image file, the handle bars 23 need to be rotated while, simultaneously, the encoder 10 turns along its axis of rotation as well.
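The derivation of the two simulated light sources from the meter signals can be sketched as follows. This is illustrative only; the sunny-weather threshold and the direct scaling are assumptions, not values from the patent.

```python
def lights_from_meter(upper_signal, lower_signal, sunny_threshold=0.05):
    """Derive the two artificial light sources from the light-meter signals:
    an ambient (sky) source proportional to the diffuse value V measured by
    the shaded lower photodetector, and a directional 'sun' source
    proportional to N, the difference of the two detector signals."""
    V = lower_signal
    N = max(0.0, upper_signal - lower_signal)
    # In gloomy weather N is near zero, so no shadows are cast (hypothetical test).
    sunny = V > 0 and (N / V) > sunny_threshold
    return {"ambient": V, "sun": N, "cast_shadows": sunny}
```

In sunshine the `sun` source drives the shadow rendering, so the shadows in the virtual image lengthen and move with the real sun.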
Based on the digitized signal of the encoder 10 forwarded to the computer 24, the software displays the anticipated part on the microdisplays 20. With a precise setup, the image illustrating the real environment and its elements displayed in the virtual image overlaps the part of the scenery viewed in the said direction, and operating a high-speed computer ensures that the change of the virtual image follows the rotation without any disturbing delay. Consequently, one may scroll in the virtual reality and view the virtual buildings, placed into their real environment, in lifelike conditions.
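Selecting which 35-degree slice of the 140-degree rendered panorama to show for a given encoder reading reduces to a linear mapping. The sketch below is a hypothetical helper added for illustration (pixel width and angles follow the example above; none of the names come from the patent):

```python
def select_viewport(encoder_deg, pano_width_px, pano_h_deg=140.0, view_h_deg=35.0):
    """Return the (left_px, width_px) slice of the rendered panorama shown
    on the microdisplays for a given encoder angle; 0 degrees is the
    leftmost position, pano_h_deg - view_h_deg (105 here) the rightmost."""
    travel = pano_h_deg - view_h_deg
    angle = min(max(encoder_deg, 0.0), travel)  # clamp to the mechanical stops
    px_per_deg = pano_width_px / pano_h_deg
    return int(angle * px_per_deg), int(view_h_deg * px_per_deg)

# With a 1400-pixel-wide panorama (10 px per degree):
print(select_viewport(0.0, 1400))    # (0, 350)
print(select_viewport(105.0, 1400))  # (1050, 350)
```

The clamping mirrors the buffer nose 28 of Fig. 4, which mechanically limits the rotation of the display supporting portion.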
Fig 17 illustrates image segments shown by the rotation of the virtual reality display.
Fig. 18 is the circuitry of the apparatus.
In accordance with Figures 1-4 and as described in the present invention, the device is used as follows:
The user lifts the display supporting portion 2, placed at the specified point P in the real environment, to eye level by the rotating handle bars 23, then looks with both eyes into the viewing windows 18 placed in the housing 19 of the virtual reality display 3. The virtual image displayed by the microdisplays 20 and magnified by the magnifying optical elements 21 appears, depending on the focal adjustment of the optical elements, at the distance of the elements to be displayed. When the user rotates the apparatus by the rotating handle bars 23, the software, through the signals of the built-in encoder 10 processed by the computer, displays the anticipated part of the 3D model on the microdisplays 20; thus one may scroll in the entire panorama and view the virtual buildings displayed in their real environment in a 130-degree viewing angle. As the housing 19 of the virtual reality display 3 covers only the centre of the user's field of view and allows a free view onto the real environment at the sides and below, it may create the illusion that the real environment continues in the virtual image. The said effect is further enhanced as the signals of the light meter 25, processed by the computer, produce light conditions in the virtual image similar to those of the real environment.
Naturally, modifications to the present invention may be made within the scope of the present invention, which is not intended to be limited except in accordance with the following claims.
For example, the lifting-positioning apparatus 15 can be a counterweight mechanism, not indicated herein, where a rope is passed over a worm-gear with a horizontal axis fixed to the outward pipe 8 and one end of the rope is fixed to the counterweight. Also, besides the horizontal scroll, a vertical scroll of the virtual image may be necessary, for example when high virtual buildings situated close by are displayed. In this case we may insert an encoder with a horizontal axis of rotation between the display supporting portion 2 and the virtual reality display 3, where the signals of the said encoder drive the vertical shift of the virtual image to the proper degree. The light meter 25 can be realized without the stepper motor 33 if, instead of the shield stripe 39 on the transparent shade 37, we create a patch of a size which casts a shadow onto the lower photodetector 36 in any position of the sun in the desired period, and the ratio of the surface of the shading patch to that of the transparent shade 37 is taken into consideration when measuring the quantity of diffuse light coming from any direction of the sky. Finally, we may introduce an optical zoom function by changing the size of the surface displayed in the virtual image, driven by a control appliance, not indicated herein, activated by the rotation of the handle bars 23.

Claims

1. A procedure for the insertion of a virtual image into a real-life environment, where both the environment and the elements to be displayed virtually are modelled by a 3D object modelling program and blended, then displayed in a virtual reality display, where it is ensured that the virtual image displayed changes according to the changes of the position of the virtual reality display, characterized in that
- a digital or digitized panorama or full-panorama picture is taken from the designated point P in the real environment,
- the part of the real environment to be displayed is modelled by a 3D object modelling program and is arranged into the perspective viewed from the designated point P,
- the interface of the above-mentioned 3D model of the real environment viewed from the point P is textured with the proper part of the said panorama,
- 3D models of the virtual elements to be displayed are made and arranged into the perspective viewed from the point P,
- the said 3D models of the virtual elements to be displayed are blended with the said 3D model of the modelled and textured real environment, thus creating the blended 3D model to be stored in a computer (24),
- a part selected from the said blended 3D model is displayed in a virtual reality display (3),
- the mechanical rotation along the vertical axis of the virtual reality display (3) or the display supporting portion (2), an electric signal proportional to the rotation and its conduction to a computer (24) are ensured, and
- with the electric signal proportional to the rotation, the selection of the part of the 3D model to be displayed is controlled.
2. A procedure as in claim 1, characterized in that the said blended 3D model is illuminated by a source of light simulating the current light conditions of the real environment.
3. A procedure as in claim 2, characterized in that there are two artificial light sources, one simulating the light coming from the sun, the other simulating skylight.
4. A procedure as in claims 2 and 3, characterized in that the position of the artificial source of light simulating the light coming from the direction of the sun is determined by a program calculating the actual position of the sun from astronomic coordinates.
5. An apparatus executing the said method as in claims 1-4, characterized in that the apparatus comprises
- a mounting (1) fixed, preferably to the ground,
- a display supporting portion (2) with a vertical axis of rotation rotatably fixed to the mounting (1),
- an encoder (10) with its stator part (11) fixed to the mounting (1) and its rotor part (12) fixed to the display supporting portion (2),
- a virtual reality display (3) fixed to the supporting portion (2),
- a computer (24), and
- a light meter (25).
6. An apparatus as in claim 5, characterized in that the light meter (25) comprises two photodetectors, one of which is shaded by a shield placed into the path of the sunrays.
7. An apparatus as in claims 5 and 6, characterized in that the virtual reality display (3) comprises two microdisplays (20) with magnifying optical elements (21), trans-illuminating light sources (22), as well as the housing (19) surrounding the said elements and comprising viewing windows (18).
8. An apparatus as in claim 7., characterized in that the mounting (1 ) comprises a fixed outward pipe (8), preferably to the ground and an inward pipe (9) mechanically locked against rotation with an exterior diameter loosely fitting into the interior diameter of the outward pipe (8).
9. An apparatus as in claim 8, characterized in that the stator part (11) of the encoder (10) is fixed to the inward pipe (9).
10. An apparatus as in claim 9, characterized in that a lifting-positioning apparatus (15) is fixed to the inward pipe (9).
PCT/HU2006/000042  |  priority 2005-05-12  |  filed 2006-05-11  |  Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure  |  WO2006120489A1 (en)

Applications Claiming Priority (2)

Application Number  |  Priority Date  |  Filing Date  |  Title
HUP0500483  |  2005-05-12  |  |
HU0500483A  |  2005-05-12  |  2005-05-12  |  Method and equipment for matching virtual images to the real environment (published as HUP0500483A2, en)

Publications (1)

Publication Number  |  Publication Date
WO2006120489A1 (en)  |  2006-11-16

Family

Family ID: 89986014

Family Applications (1)

Application Number  |  Publication  |  Priority Date  |  Filing Date  |  Title
PCT/HU2006/000042  |  WO2006120489A1 (en)  |  2005-05-12  |  2006-05-11  |  Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure

Country Status (2)

Country  |  Link
HU (1)  |  HUP0500483A2 (en)
WO (1)  |  WO2006120489A1 (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number  |  Priority date  |  Publication date  |  Assignee  |  Title
US5625765A *  |  1993-09-03  |  1997-04-29  |  Criticom Corp.  |  Vision systems including devices and methods for combining images for extended magnification schemes
US5815411A *  |  1993-09-10  |  1998-09-29  |  Criticom Corporation  |  Electro-optic vision system which exploits position and attitude
US6175343B1 *  |  1998-02-24  |  2001-01-16  |  Anivision, Inc.  |  Method and apparatus for operating the overlay of computer-generated effects onto a live image
US6414696B1 *  |  1996-06-12  |  2002-07-02  |  Geo Vector Corp.  |  Graphical user interfaces for computer vision systems


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number  |  Priority date  |  Publication date  |  Assignee  |  Title
US11227366B2  |  2018-06-22  |  2022-01-18  |  Volkswagen AG  |  Heads up display (HUD) content control system and methodologies
US10656425B1  |  2018-11-12  |  2020-05-19  |  Dataking. Inc  |  Virtual reality experience device
WO2020101090A1 *  |  2018-11-12  |  2020-05-22  |  데이터킹주식회사 (Dataking Co., Ltd.)  |  Virtual reality experience apparatus
CN114007017A *  |  2021-11-18  |  2022-02-01  |  浙江博采传媒有限公司  |  Video generation method and device and storage medium
CN116110080A *  |  2023-04-04  |  2023-05-12  |  成都新希望金融信息有限公司  |  Switching method of real facial mask and virtual facial mask
CN116110080B *  |  2023-04-04  |  2023-07-04  |  成都新希望金融信息有限公司  |  Switching method of real facial mask and virtual facial mask

Also Published As

Publication number  |  Publication date
HU0500483D0 (en)  |  2005-07-28
HUP0500483A2 (en)  |  2006-12-28

Similar Documents

Publication  |  Title
US11699243B2  |  Methods for collecting and processing image information to produce digital assets
CN112263837B  |  Weather rendering method, device, equipment and storage medium in virtual environment
US6323862B1  |  Apparatus for generating and interactively viewing spherical image data and memory thereof
US6930681B2  |  System and method for registering multiple images with three-dimensional objects
US7019748B2  |  Simulating motion of static objects in scenes
US4805895A  |  Image forming apparatus and method
El-Hakim et al.  |  Effective 3D modeling of heritage sites
US20030038822A1  |  Method for determining image intensities of projected images to change the appearance of three-dimensional objects
CN110248078A  |  A kind of exposure parameter acquisition methods of panoramic picture
CA2605962A1  |  System for the visualization of information superimposed upon real images
WO2006120489A1  |  Procedure for the insertion of a virtual image into real environment and device for the execution of the procedure
US9390558B2  |  Faux-transparency method and device
Thirion  |  Realistic 3D simulation of shapes and shadows for image processing
US6937210B1  |  Projecting images on a sphere
CN117372655A  |  Information processing device, information processing method, and program
WO2018128137A1  |  Globe
WO2004040521A1  |  Image processing device, image processing program, recording medium recording the program, image processing method, and shading information acquisition device
Graf et al.  |  Perspective terrain visualization: a fusion of remote sensing, GIS, and computer graphics
JP2007536964A  |  Method and apparatus for displaying a virtual landscape
CN1741620A  |  Augmented reality fixed-point observation system for on-site digital 3D reconstruction
JPH10340061A  |  Video appreciation facility
KR101110275B1  |  Image processing device and image processing methods using screen cover
KR101873681B1  |  System and method for virtual viewing based aerial photography information
KR101934345B1  |  Field analysis system for improving recognition rate of car number reading at night living crime prevention
KR101110276B1  |  Image processing device and image processing methods using multiple screens

Legal Events

Code  |  Title / Details
121  |  Ep: the EPO has been informed by WIPO that EP was designated in this application
NENP  |  Non-entry into the national phase (ref country code: DE)
WWW  |  WIPO information: withdrawn in national office (country of ref document: DE)
NENP  |  Non-entry into the national phase (ref country code: RU)
WWW  |  WIPO information: withdrawn in national office (country of ref document: RU)
122  |  Ep: PCT application non-entry in European phase (ref document number: 06727230; country of ref document: EP; kind code of ref document: A1)

