CN119672268A - Positioning method, head display device and storage medium for large space smart glasses - Google Patents


Info

Publication number
CN119672268A
CN119672268A (application CN202510180013.XA; granted as CN119672268B)
Authority
CN
China
Prior art keywords
camera
angle
line
world
axis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202510180013.XA
Other languages
Chinese (zh)
Other versions
CN119672268B (en)
Inventor
潘仲光
苏砚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lightspeed Future Xiamen Software Co ltd
Original Assignee
Dalian Situne Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Situne Technology Development Co ltd
Priority to CN202510180013.XA
Publication of CN119672268A
Application granted
Publication of CN119672268B
Legal status: Active

Abstract

The invention relates to a positioning method, a head display device, and a storage medium for large-space smart glasses, belonging to the technical field of metaverse computation. It is suited to a metaverse positioning function in which players in a multi-user large space can interact directly, without learning or inference by a central server. A number of mutually distinct texture pictures are distributed on the floor or ceiling of the large space; any camera on the smart glasses collects images, a texture picture is recognized in the image collected by a camera T, the picture information corresponding to that texture picture is obtained, and the world position of camera T can then be obtained by geometric calculation.

Description

Positioning method of large-space intelligent glasses, head display device and storage medium
Technical Field
The invention belongs to the technical field of metaverse computation, and particularly relates to a positioning method, a head display device, and a storage medium for large-space smart glasses.
Background
Extended Reality (XR) smart glasses (head-mounted devices) fall into two classes: AR (augmented reality) and VR (virtual reality) glasses. AR glasses are transparent, seeing the surrounding environment through Optical See-Through lenses (hereinafter optical transparency, OST). VR glasses are purely virtual and cannot see the outside environment; in recent years MR (mixed reality) glasses have emerged, i.e. VR glasses that view the surrounding environment through a camera, realized as Video See-Through or Visual Pass-Through (hereinafter camera perspective, VST). VST glasses can perform glasses-positioning calculation with the pass-through camera or with other cameras on the glasses. OST glasses historically carried no cameras, but advanced OST glasses now add camera configurations to compute ambient-space applications; many OST or AR glasses are likewise marketed as MR glasses because their cameras can compute the environment space.
In recent years the metaverse industry has developed large-space applications: applications in which multiple people walk through the same space wearing smart glasses, can view virtual renderings or real scenes, and can interact with real or virtual objects, virtual characters, and other real players. Large-space technology refers to a mixed virtual/real environment implemented with a spatial-localization method, plus rendering and interaction technology admitting multiple people and things. Because many players occupy the same space (its frame of reference hereafter the "world position", World Coordinates) and use smart glasses at the same time, the local position of each player's glasses (the "personal position", Personal Coordinates) and its position and orientation relative to the world position are the heart of large-space technology. Incorrect calculation of glasses positions in a large space causes: (1) the virtual scene rendered in the glasses drifts, or object positions deviate too far; (2) position deviation between players causes collisions or incorrect contact among multiple people; (3) interaction between players and virtual objects gives incorrect results; (4) when players catch or shoot, the throwing or receiving direction or position is wrong; and (5) players viewing each other at incorrect heights experience a height illusion; other application errors likewise follow from each player's glasses being mislocated.
The mainstream calculation of glasses position uses Point Cloud vision and SIFT methods. The rough logic of point cloud and SIFT is a neural network or machine-training pipeline that performs vision on unique textures, marks, or patterns (hereinafter "texture pictures") on the walls, ceiling, and floor; texture patterns must not repeat anywhere in the space, otherwise the trained neural network or machine-learning model returns incorrect or duplicated world positions during inference. Because point cloud and SIFT require training, the texture patterns shot by the glasses must unavoidably be transmitted to a server in real time for inference, with the result transmitted back to the head display device, so position information is delayed; in large-space applications, and especially fast athletic ones, that delay is fatal to the experience. This mode also makes producing applications time-consuming, with long training times and error correction.
In addition, in large-space applications many people use one space at the same time. If texture pictures are recognized from images acquired by the glasses' front camera, the texture picture on the opposite wall may not be captured correctly because other players block the line of sight, and the position error calculated over that distance grows further. When the glasses cannot correctly infer their specific position, they must fall back on the device's own gyroscope, Bluetooth positioning, TOF positioning, laser positioning, or other SLAM positioning techniques. These auxiliary techniques suffer from errors and drift, and the device needs redundant positioning sensors, which inevitably increases the cost, computing effort, weight, and energy consumption of the device or system.
At present most large-space applications have multiple head display devices share one server: each device uploads the texture pictures it sees to the server, which calculates and distributes the world-position relations of all head display devices, so that each device knows the positions of the others in the same world space and interaction becomes possible. The large space therefore requires a server, plus the latency of inferring positions and returning them to every device.
Disclosure of Invention
The invention aims to provide a positioning method, head display device, and storage medium for large-space smart glasses, suited to multi-user large-space metaverse application scenarios, and in particular to scenarios in which each player's head display device in a large space can interact directly, without training or inference by a central server. By acquiring the picture information of texture pictures in the large space, the device can rapidly calculate the world position of the player's glasses while broadcasting its own world position and game-state data to the other players in the same space, thereby supporting the fast response to each player's position information that large-space applications require.
In the positioning method of large-space smart glasses according to the invention, a number of mutually distinct texture pictures are distributed on the floor or ceiling of the large space. An image is acquired through any camera on the smart glasses; a texture picture is recognized in the image acquired by camera T, and the picture information corresponding to that texture picture is acquired. The picture information is either obtained directly by parsing the texture picture, or the ID of the texture picture is obtained and the picture information is matched by ID in a comparison table stored locally on the smart glasses;
The picture information comprises the world position (X, Y, Z) of the picture and the picture size (m, n). Three corner anchor points (A, m, n) are arranged on the texture picture and form an included angle nAm, where corner anchor A is the vertex of angle nAm. The world position of corner anchor A is defined as the world position (X, Y, Z) of the picture; the line An from A to corner anchor n is the positive direction of the Y axis, and the line Am from A to corner anchor m is the positive direction of the X axis. The length of line An in the real world is n, and the length of line Am in the real world is m. If the included angle nAm is not 90°, the picture information also includes the angle nAm, for converting between the X and Y axes;
From the picture information the world position A(X_A, Y_A, Z_A) of corner anchor A, the world position m(X_m, Y_m, Z_m) of corner anchor m, and the world position n(X_n, Y_n, Z_n) of corner anchor n are obtained: X_m = X_A + m, Y_m = Y_A, X_n = X_A, Y_n = Y_A + n, Z_m = Z_n = Z_A;
From the acquired image, the pixel positions P_A(X_PA, Y_PA), P_m(X_Pm, Y_Pm), and P_n(X_Pn, Y_Pn) corresponding to the corner anchors A, m, n are obtained;
Through the nine-axis geomagnetic chip of the smart glasses, the pixel position P_O(X_Po, Y_Po) of the point vertically below on the ground (or above on the ceiling) in the image acquired by the camera is obtained;
Let the included angle formed by the camera-T normal perpendicular to the floor or ceiling and the line HA from camera T to corner anchor A be φ_A; the included angle formed by that normal and the line Hm from camera T to corner anchor m be φ_m; and the included angle formed by that normal and the line Hn from camera T to corner anchor n be φ_n. The world position (X_T, Y_T, Z_T) of camera T can then be obtained by geometric calculation.
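The recovery of the three anchor world positions from the picture information can be sketched as follows; this is an illustrative sketch, and the `PictureInfo` field names and the function name are assumptions, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class PictureInfo:
    # World position (X_A, Y_A, Z_A) of corner anchor A and picture
    # size (m, n), as carried by the texture picture.
    XA: float
    YA: float
    ZA: float
    m: float   # real-world length of line Am (positive X direction)
    n: float   # real-world length of line An (positive Y direction)

def anchor_world_positions(info: PictureInfo):
    """Apply the relations Xm = XA + m, Ym = YA, Xn = XA, Yn = YA + n,
    Zm = Zn = ZA to recover the world positions of anchors A, m, n."""
    A = (info.XA, info.YA, info.ZA)
    m_anchor = (info.XA + info.m, info.YA, info.ZA)
    n_anchor = (info.XA, info.YA + info.n, info.ZA)
    return A, m_anchor, n_anchor
```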
The world position (X_T, Y_T, Z_T) of camera T is obtained by geometric calculation as follows. Assume the texture picture in the acquired image is undeformed, or has been corrected for deformation and distortion; the field of view FOV of the camera and the total pixel count X_D of the screen's X axis are known, giving the pixels per degree PPD = X_D / FOV. The specific calculation steps are:
Step 1: let the line Am from corner anchor A to corner anchor m be the X axis of world space, and calculate the slope S_X of line Am from the pixel positions P_A and P_m; the X axis, passing through the points P_X and P_A, is expressed by the linear formula:
Y = S_X·X + B_X (6);
Substitute pixel position P_A or P_m into formula (6) to obtain B_X;
Since the line P_OP_X is perpendicular to the X axis, the pixel position P_X is the intersection of line P_OP_X with the X axis; the slope of line P_OP_X, being perpendicular to line P_AP_m, is −1/S_X, so it is expressed as:
Y = −(1/S_X)·X + B_XV (7);
Substitute pixel position P_O(X_Po, Y_Po) into formula (7) to obtain B_XV; the intersection pixel position P_X(X_PX, Y_PX) of line P_OP_X with the X axis can then be found;
Let the line An from corner anchor A to corner anchor n be the Y axis of world space, and calculate the slope S_Y of line An from the pixel positions P_A and P_n; the straight line through the points P_Y and P_A is expressed as:
Y = S_Y·X + B_Y (8);
Substitute pixel position P_A or P_n into formula (8) to obtain B_Y;
Since the line P_OP_Y is perpendicular to the Y axis, the pixel position P_Y is the intersection of line P_OP_Y with the Y axis; the slope of line P_OP_Y, being perpendicular to line P_AP_n, is −1/S_Y, so it is expressed as:
Y = −(1/S_Y)·X + B_YV (9);
Substitute pixel position P_O(X_Po, Y_Po) into formula (9) to obtain B_YV; the intersection pixel position P_Y(X_PY, Y_PY) of line P_OP_Y with the Y axis can then be found;
Step 2: calculate the pixel distances |P_XP_A| and |P_XP_m| by the Pythagorean theorem, and from them the angles
φ_A = |P_XP_A| / PPD,
φ_m = |P_XP_m| / PPD (3);
X_T and Z_T are then calculated by the formula
Z_T = m / (tan φ_A + tan φ_m), X_T = X_A + Z_T·tan φ_A (1);
or alternatively
X_T = X_m − Z_T·tan φ_m (2);
The pixel distances |P_YP_A| and |P_YP_n| are likewise calculated by the Pythagorean theorem, giving
φ'_A = |P_YP_A| / PPD,
φ'_n = |P_YP_n| / PPD (3);
Y_T and a second value of Z_T are then calculated by the formula
Z_T = n / (tan φ'_A + tan φ'_n), Y_T = Y_A + Z_T·tan φ'_A (1);
or alternatively
Y_T = Y_n − Z_T·tan φ'_n (2);
Step 3: average the two values of Z_T obtained above to get the world position (X_T, Y_T, Z_T) of camera T.
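Steps 1 to 3 can be sketched per axis as below. This is a reconstruction under the assumption that the foot of the camera's vertical normal falls between the two anchors along the axis (other configurations change the signs); the function names and arguments are illustrative:

```python
import math

def pixels_per_degree(xd_pixels: float, fov_deg: float) -> float:
    """PPD = XD / FOV, from the camera specification."""
    return xd_pixels / fov_deg

def axis_position(anchor_coord: float, span: float,
                  d_anchor_px: float, d_far_px: float, ppd: float):
    """Triangulate camera T along one world axis.

    anchor_coord: world coordinate of anchor A on this axis (X_A or Y_A);
    span: real-world anchor separation (m for the X axis, n for the Y axis);
    d_anchor_px / d_far_px: pixel distances from the perpendicular foot
    (P_X or P_Y) to anchor A and to the far anchor (m or n).
    Returns (camera coordinate on this axis, camera height Z_T)."""
    phi_a = math.radians(d_anchor_px / ppd)  # angle normal-to-anchor A
    phi_f = math.radians(d_far_px / ppd)     # angle normal-to-far anchor
    z_t = span / (math.tan(phi_a) + math.tan(phi_f))
    return anchor_coord + z_t * math.tan(phi_a), z_t
```

Running it once with the X-axis quantities and once with the Y-axis quantities gives X_T and Y_T plus two estimates of Z_T to average, as in step 3.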
When the texture picture in the acquired image is deformed by the camera angle, the world position (X_T, Y_T, Z_T) of camera T can still be obtained by geometric calculation, by computing the included angles (φ_A, φ_m, φ_n) formed by the camera-T normal perpendicular to the floor or ceiling and the lines from camera T to the three corner anchors (A, m, n):
The three corner anchors are square positioning patterns; the angle of the upper half of each corner anchor's positioning pattern is measured, and the largest such angle is taken as ψ. The included angle formed by the camera-T normal perpendicular to the ground or ceiling and the line from camera T to the corner anchor is then:
(4);
Formula (4) gives the included angles (φ_A, φ_m, φ_n) formed by the camera-T normal perpendicular to the ground or ceiling and the lines from camera T to the three corner anchors (A, m, n);
The Pythagorean theorem then gives the following six formulas, where D_A, D_m, D_n denote the horizontal distances from the vertical projection point of camera T to the corner anchors A, m, n:
tan φ_A = D_A / Z_T, tan φ_m = D_m / Z_T, tan φ_n = D_n / Z_T,
D_A² = (X_A − X_T)² + (Y_A − Y_T)², D_m² = (X_m − X_T)² + (Y_m − Y_T)², D_n² = (X_n − X_T)² + (Y_n − Y_T)²;
Solving these six formulas simultaneously yields the function for the world position T(X_T, Y_T, Z_T) of the glasses:
(5).
The world positions obtained for the cameras T are averaged, or converted according to the known positional relations of the cameras, to obtain the world position of the head display device/smart glasses; the player's head display device then broadcasts the player's world position, together with current game-state data, to the other players in the large space in real time.
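The broadcast step can be sketched as follows. The text does not specify a transport or payload format, so the JSON message and UDP broadcast here are assumptions for illustration:

```python
import json
import socket

def build_state_message(player_id: str, world_pos, game_state: dict) -> bytes:
    """Serialize a player's world position and game-state data
    (field names are illustrative)."""
    return json.dumps({"id": player_id,
                       "pos": list(world_pos),
                       "state": game_state}).encode("utf-8")

def broadcast_state(payload: bytes, port: int = 9999) -> None:
    """Fire-and-forget UDP broadcast to other players on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", port))
    finally:
        sock.close()
```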
The bottom surface of the large space refers to the ground, steps, walls or a tabletop of the large space.
The three corner anchor points of the texture picture can be information-free positioning patterns or independent small texture pictures; the picture information corresponding to a small texture picture comprises the world position of that small texture picture and its size.
The picture information of the texture picture further includes a spatial region domain name for distinguishing a virtual space in the large space.
The picture information of the texture picture also comprises distance information p or q between the texture picture and the adjacent texture picture, namely, the distance p between the texture picture and the adjacent texture picture in the positive X-axis direction and the distance q between the texture picture and the adjacent texture picture in the positive Y-axis direction.
The head display device comprises a camera, a display, a memory, and a processor;
the camera is connected with the processor and is used for scanning the real environment where the user is located;
the display is connected with the processor and is used for displaying the output content of the processor;
the memory is connected with the processor and is used for storing a computer program;
the processor is used for executing the computer program to realize the positioning method of the large-space smart glasses.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of positioning large-space smart glasses as described above.
Through simple visual calculation on the image collected by a camera facing any direction on the smart glasses, the invention finds a texture picture (such as a mature two-dimensional-code mark) and recognizes its picture information, which comprises the world position (X, Y, Z) and picture size (m, n) of the picture. The position of the player's glasses is obtained from the recognized picture information, and the player's world position and game-state data are broadcast to the other players in the same space, so that the position and state information of the other players' head display devices is obtained in turn. The calculation runs rapidly on the local head display device, avoiding the errors introduced by uploading images to a server for training, by inference round-trips and delayed position feedback, and by auxiliary positioning sensors, as well as the error problems caused by distant or blocked texture pictures. The invention not only removes the server but also greatly reduces the investment cost of a large space; the computing power of the smart glasses alone suffices to realize the multi-user interaction scenario.
Drawings
FIG. 1 is a texture picture of example one of the present invention;
FIG. 2 is a texture picture of example two of the present invention;
FIG. 3 is a schematic diagram of distance information between adjacent two-dimensional-code pictures according to the present invention;
FIG. 4 is a schematic diagram of the triangle calculation of the position (X_T, Z_T) of camera T from a single-dimension two-dimensional code;
FIG. 5 is an image showing an undeformed two-dimensional code according to the present invention;
FIG. 6 is a schematic diagram of the variation of the included angle formed by the normal of the camera center and the perpendicular projection line of a two-dimensional code;
FIG. 7 is a schematic diagram of the change, seen through the camera from different angles, of the positioning pattern of a two-dimensional code's corner anchor A;
FIG. 8 shows a two-dimensional code seen at four pitch angles;
FIG. 9 is an image showing a deformed two-dimensional code in the present invention;
FIG. 10 is a schematic diagram of the relationship, seen from above, between camera T and a texture picture according to the present invention;
FIG. 11 is a schematic diagram of the relationship between camera T and corner anchor A seen from the side;
FIG. 12 is a functional block diagram of a head display device according to the present invention.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description, the claims, and the above figures are used to distinguish between similar objects and not necessarily to describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In embodiments of the application, the words "exemplary" or "such as" are used to mean that any embodiment or aspect of the application described as "exemplary" or "such as" is not to be interpreted as preferred or advantageous over other embodiments or aspects. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The principle of the invention is explained:
(1) Texture pictures, all mutually distinct, are arranged on the ground, steps, desktops, and/or ceiling in the large space; they are acquired with the cameras in the smart glasses and recognized through simple visual calculation:
Because players in large-space applications must walk through a large space wearing smart glasses, the ground of the space has the safe-walking property of being flat and not causing falls. Existing smart glasses generally carry cameras facing multiple directions on the frame. Texture pictures mounted on walls can be far away or blocked by other players, and requiring players to lower or raise their heads degrades the experience; the invention therefore proposes printing, pasting, or projecting texture pictures on the ground, steps, tabletops, and/or ceiling, collecting images with a camera facing any direction on the smart glasses, finding the texture picture in the image through simple visual calculation, and recognizing the picture information it contains, or obtaining the picture information from a comparison table stored locally on the smart glasses via the texture picture's ID. The picture information comprises the world position (X, Y, Z) and picture size (m, n) of the picture.
The picture information of the texture picture may further include a spatial-region domain name for distinguishing virtual spaces within the large space, so a player can interact with virtual positions in different virtual spaces at the same physical position. When players are in different virtual spaces, the same texture picture can serve interactions in different virtual scenes. The large space can also be composed of several sub-spaces; sub-spaces may overlap and interact, and the overlapping virtual areas let players transfer seamlessly between the spaces in use, avoiding the discontinuity that arises when sub-spaces have no intersection.
The picture information of the texture picture also includes the distance information p or q to adjacent texture pictures. As shown in fig. 3, let the distance between two adjacent texture pictures be p or q, i.e. the distance p to the adjacent picture in the positive X direction and the distance q to the adjacent picture in the positive Y direction. When the smart glasses recognize three texture pictures at the same time, they obtain each picture's world position (X, Y, Z) and picture size (m, n), plus the distances p or q to the two adjacent pictures, forming a small matrix containing the three pictures' world positions, picture sizes (m, n), and the inter-picture distances p along the X axis and q along the Y axis. The whole large space, or a sub-space of it, can be laid out with texture pictures at fixed distances for repeated, multi-dimensional calculation of the glasses position; if the positions calculated from two data sources deviate, their average is a more accurate glasses position, so calculation over the distances between adjacent texture pictures corrects the deviation and yields a more accurate glasses position.
According to the invention, the texture picture can also be attached, printed, or projected vertically on a wall surface or on the ground. For a vertical texture picture, the direction An is changed to the Z axis and the direction Am is set as the X axis or the Y axis; if the direction Am lies in a two-dimensional XY direction, the picture information additionally carries the direction of Am for converting between the X and Y axes.
(2) Obtaining location information through texture pictures is prior art in the positioning field; several examples of texture pictures are given below:
A. The texture picture of example one is a two-dimensional-code picture. As shown in fig. 1, three of the four corners (A, m, n) of the two-dimensional code carry information-free positioning patterns (Position Detection Patterns), called corner anchors. Assume the two-dimensional code is square and connect the adjacent corner anchors pairwise; the three corner anchors form a 90-degree included angle nAm, where corner anchor A is the vertex. The world position of corner anchor A is defined as the world position (X, Y, Z) corresponding to the texture picture; the line from corner anchor A to corner anchor n is the positive direction of the Y axis, and the line from corner anchor A to corner anchor m is the positive direction of the X axis; that is, the information plane of the two-dimensional code lies in the positive quadrant of the X and Y axes. The line An has length n in the real world and the line Am has length m in the real world; since the two-dimensional code is square, m = n;
B. The texture picture of example two is a two-dimensional-code picture. As shown in fig. 2, the three corner anchors of the two-dimensional code are themselves independent small two-dimensional-code pictures; the picture information parsed from each code differs, the position information (X, Y, Z) of the three small codes differs, and the position information of the large code's corner anchor A equals that of the small code located at corner anchor A. The picture sizes (m, n) also differ, the large code's (m, n) being larger than the small codes'. The position of the small code at the large code's n corner anchor equals the A corner anchor's position with Y_A plus the distance n, and the position of the small code at the m corner anchor equals X_A plus the distance m. If the player is too close to the texture picture, so that the large code is incomplete in the acquired image and its information cannot be parsed, any corner anchor found in the image can be parsed directly as a small code to obtain its picture information; when the complete large code is recognizable in the acquired image, the large code's picture information is obtained directly. Thus, whether the texture picture is far or near, picture information can be parsed through the large/small codes, keeping the calculated glasses position more accurate or faster.
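The large-code/small-code fallback logic of example two can be sketched as follows; `decode_large` and `decode_small` are hypothetical decoder hooks standing in for whatever two-dimensional-code library is used:

```python
def resolve_picture_info(image, decode_large, decode_small):
    """Prefer the full (large) code; fall back to any corner-anchor
    (small) code when the player is too close for the large code to
    fit in frame. The decoders are assumed to return picture info,
    or None when nothing is readable."""
    info = decode_large(image)
    if info is not None:
        return info                 # complete large code visible
    for corner in ("A", "m", "n"):  # any visible corner anchor suffices
        info = decode_small(image, corner)
        if info is not None:
            return info
    return None                     # no code resolvable in this frame
```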
C. A plurality of patterns are arranged on the texture picture; the patterns can be pre-bound to information content, so that the recorded picture information is obtained from the arrangement order of the patterns.
(3) Smart glasses typically have multiple cameras for calculating spatial distances. Besides obtaining the position of a single camera of the glasses, the position (X, Y, Z) of the glasses' center point (or of each camera) can be further converted by calculating the distance and angle from a single camera to a given texture picture and combining the known distances between the cameras.
Since the cameras calculate their positions independently, the calculated world positions of the glasses should in theory be identical even when different cameras see different two-dimensional codes; in practice the results differ because of various errors. Averaging the camera positions is therefore more accurate: a more accurate mean position is obtained after averaging the positions calculated by all cameras. If a pair of glasses has 2 cameras and each camera captures 2 two-dimensional codes, 4 position estimates are averaged; if 3 codes are seen, 6 estimates are averaged; the more codes seen, the more accurately the cameras' average computes the glasses' world position. Because the cameras occupy different world positions, each camera's calculated result should first be converted to a common point on the smart glasses (such as the middle of the glasses) before averaging.
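The convert-then-average fusion described above can be sketched as below; the offsets stand for the known rig-calibration displacements from the common reference point to each camera (names and shapes are illustrative):

```python
def fuse_camera_estimates(estimates, offsets):
    """Average per-camera world positions after shifting each estimate
    by that camera's known offset from a common reference point on the
    glasses (e.g. the frame centre). estimates and offsets are parallel
    lists of (x, y, z) tuples."""
    shifted = [(ex - ox, ey - oy, ez - oz)
               for (ex, ey, ez), (ox, oy, oz) in zip(estimates, offsets)]
    k = len(shifted)
    # Component-wise mean over all shifted estimates.
    return tuple(sum(pos[i] for pos in shifted) / k for i in range(3))
```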
(4) The world position of the intelligent glasses in the large space is obtained through calculation based on the picture information of the texture pictures:
A. The method for calculating the world position T(X_T, Z_T) of the smart glasses based on a single-dimension two-dimensional code:
Assume the single-dimension two-dimensional code has only an X axis and no Y axis; its two ends are corner anchors A and m, the line Am between the two anchors is set as the X axis, and the position (X_A, Z = 0) of corner anchor A and the length m of line Am are known. As shown in FIG. 4, the vertical projection point of camera T on the X axis is X_T; the line TA from camera T to corner anchor A forms an included angle φ_A with the vertical projection of camera T onto the X axis, and the line Tm from camera T to corner anchor m forms an included angle φ_m with that vertical projection. Triangle calculation yields the formulas for the world position (X_T, Z_T) of the single-dimension smart glasses:
XT = (Xm·tanθA − XA·tanθm)/(tanθA − tanθm)  (1)
and ZT = (XA − Xm)/(tanθA − tanθm)  (2)
The angles θA and θm are obtained as follows:
The number of pixels per degree, PPD (Pixels Per Degree), is obtained from the field of view FOV (Field of View) in the smart-glasses camera specification and the total pixel count XD of the screen's X axis: PPD = XD/FOV. Assuming FOV = 100° and XD = 2000 pixels, then 2000 pixels divided by 100 degrees gives a PPD of 20, i.e. 20 pixels per degree. Knowing that the pixel position of corner anchor A in the image is PA and that of corner anchor m is Pm, then:
θA = (PA − P0)/PPD, θm = (Pm − P0)/PPD  (3);
where P0 is the pixel that looks vertically at the ground (or, for a ceiling-mounted code, vertically at the ceiling); the value of P0 is obtained through a sensor of the smart glasses (such as a nine-axis geomagnetic chip);
If the one-dimensional two-dimensional code has only a Y axis and no X axis, X is replaced by Y in formulas (1), (2) and (3).
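The one-dimensional calculation of formulas (1)-(3) can be sketched in Python (a minimal sketch; the function and parameter names are assumptions, and signed pixel offsets from P0 are used so the tangents carry the correct signs):

```python
import math

def locate_1d(x_a, x_m, p_a, p_m, p_0, ppd):
    """Recover (X_T, Z_T) from a one-axis code via formulas (1)-(3).

    x_a, x_m : world X coordinates of corner anchors A and m
    p_a, p_m : pixel positions of A and m along the image X axis
    p_0      : pixel that looks straight down (from the glasses' sensor)
    ppd      : pixels per degree (= XD / FOV)
    """
    # Formula (3): signed pixel offsets from the vertical pixel -> angles.
    th_a = math.radians((p_a - p_0) / ppd)
    th_m = math.radians((p_m - p_0) / ppd)
    ta, tm = math.tan(th_a), math.tan(th_m)
    x_t = (x_m * ta - x_a * tm) / (ta - tm)    # formula (1)
    z_t = (x_a - x_m) / (ta - tm)              # formula (2)
    return x_t, z_t
```

Because tanθA = (XA − XT)/ZT and tanθm = (Xm − XT)/ZT, substituting the exact angles back through formulas (1) and (2) recovers the camera pose exactly.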
B. The method for calculating the world position T(XT, YT, ZT) of the smart glasses based on an undeformed two-dimensional code comprises the following steps:
As shown in FIG. 5, assume the image acquired by the camera is 20×20 pixels and the camera's field of view FOV is 20 degrees, so there is exactly 1 pixel per degree (PPD = 1). The center-point pixel position is PC. The camera is not pointing vertically downward but is tilted, and the pixel that looks vertically at the ground is PO. The picture information of the two-dimensional code includes the world position of corner anchor A in the large space and the picture size (m, n), i.e. the distance from anchor A to the X-axis anchor m is m, and the distance from anchor A to the Y-axis anchor n is n. Calculating the world position (XT, YT, ZT) of the glasses amounts to solving the world position, relative to anchor A, of the ground point imaged at PO:
Let the world position of corner anchor A be A(XA, YA), with pixel position PA(XPA, YPA) in the acquired image; let the world position of anchor m be m(Xm, Ym), with pixel position Pm(XPm, YPm); and let the world position of anchor n be n(Xn, Yn), with pixel position Pn(XPn, YPn). The pixel position PO(XPo, YPo) of the point vertically below (or above) the camera is obtained through the nine-axis geomagnetic chip of the smart glasses, and the center-point pixel position PC(XPc, YPc) of the acquired image is known from the camera specification;
Assume the picture information of the two-dimensional code gives the world position of anchor A as A = (1400, 1000) and m = n = 300 mm, so the world position of anchor m is m = (1700, 1000) and that of anchor n is n = (1400, 1300); m and n each affect only the X axis or only the Y axis, respectively;
As shown in FIG. 5, PC = (10, 10), PO = (2, 8), PA = (14, 10), Pm = (17, 13), Pn = (11, 13); by the Pythagorean theorem the imaged side lengths are m = n = 4.24 pixels. A two-dimensional code in a real captured image is deformed by perspective and appears as a trapezoid rather than a square; for simplicity of understanding it is assumed here that the camera T is positioned high enough that the code appears essentially undeformed.
In the first step, the pixel position PX is found, which is then used with formula (1) to calculate the world-space X-axis position XT:
Knowing that the line Am from anchor A to anchor m is the positive direction of the world-space X axis, the slope through PA = (14, 10) and Pm = (17, 13) is SX = 3/3 = 1. The straight line through PX and PA is Y = SX·X + BX; from the two points PA and Pm, BX = −4, so the world-space X axis is expressed on the image as YP = XP − 4. The line POPX is perpendicular to this X axis, their intersection is the pixel position PX, and PX(XPX, YPX) is calculated:
XPX = (SX·(YPO − BX) + XPO)/(SX² + 1), which for SX = 1 reduces to XPX = (YPO + XPO − BX)/2;
YPX = SX·XPX + BX;
XPX = (8 + 2 + 4)/2 = 7, YPX = 7 − 4 = 3, so PX = (7, 3), consistent with the position in FIG. 5.
In the second step, the two pixel distances PXPA and PXPm along the world-space X axis are converted to angles by formula (3); the Pythagorean theorem gives the distance PX(7,3)–PA(14,10) = 9.90 pixels and PX(7,3)–Pm(17,13) = 14.14 pixels, so:
θA = PX(7,3)–PA(14,10) distance/PPD = 9.90 degrees;
θm = PX(7,3)–Pm(17,13) distance/PPD = 14.14 degrees;
formula (1): XT = (Xm·tanθA − XA·tanθm)/(tanθA − tanθm) ≈ 723 mm;
formula (2): ZT = (XA − Xm)/(tanθA − tanθm) ≈ 3876 mm.
In the third step, the pixel position PY is found in the image, which is then used with formula (1) (with Y substituted for X) to calculate the world-space Y-axis position YT:
Knowing that the line An from anchor A to anchor n is the world-space Y axis, with PA = (14, 10) and Pn = (11, 13) the slope is SY = −1. The straight line through PY and PA is Y = SY·X + BY; from the two points PA and Pn, BY = 24, so the world-space Y axis is expressed on the image as YP = 24 − XP. The line POPY is perpendicular to this Y axis, their intersection is the pixel position PY, and PY(XPY, YPY) is calculated:
XPY = (SY·(YPO − BY) + XPO)/(SY² + 1) = 9; YPY = SY·XPY + BY = 15; so PY = (9, 15), consistent with the position in FIG. 5.
In the fourth step, the two pixel distances PYPA and PYPn along the world-space Y axis are converted to angles by formula (3); the Pythagorean theorem gives the distance PY(9,15)–PA(14,10) = 7.07 pixels and PY(9,15)–Pn(11,13) = 2.83 pixels, so:
θA = PY(9,15)–PA(14,10) distance/PPD = 7.07 degrees;
θn = PY(9,15)–Pn(11,13) distance/PPD = 2.83 degrees;
formula (1): YT = (Yn·tanθA − YA·tanθn)/(tanθA − tanθn) = 1201 mm;
formula (2): ZT = (YA − Yn)/(tanθA − tanθn) = 4021 mm.
Since each value of ZT calculated above contains error, the Z heights from the second and fourth steps are averaged, giving a final average ZT = 3949 mm;
In the fifth step, the world position of the glasses is obtained: T(XT, YT, ZT) = (723, 1201, 3949).
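The worked example above can be reproduced numerically (an illustrative sketch; the foot of the perpendicular is computed by vector projection rather than the slope-intercept route of the text, and the Y coordinate YT, which follows formula (1) with Y substituted, is omitted here):

```python
import math

# Worked example from the text: 20x20-pixel image, FOV = 20 deg -> PPD = 1.
PPD = 1.0
PO = (2.0, 8.0)                      # pixel looking straight down (from the IMU)
PA, Pm, Pn = (14.0, 10.0), (17.0, 13.0), (11.0, 13.0)
A = (1400.0, 1000.0)                 # world position of anchor A, in mm
M = 300.0                            # real-world side length A->m and A->n

def foot(p0, p1, p2):
    """Foot of the perpendicular from p0 onto the line through p1 and p2."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    dx, dy = x2 - x1, y2 - y1
    t = ((x0 - x1) * dx + (y0 - y1) * dy) / (dx * dx + dy * dy)
    return (x1 + t * dx, y1 + t * dy)

def ang(p, q):
    """Formula (3): pixel distance along an axis -> angle in radians."""
    return math.radians(math.hypot(p[0] - q[0], p[1] - q[1]) / PPD)

def triangulate(w_a, w_b, th_a, th_b):
    """Formulas (1)/(2): axis coordinate of T and the height Z_T."""
    ta, tb = math.tan(th_a), math.tan(th_b)
    return (w_b * ta - w_a * tb) / (ta - tb), (w_a - w_b) / (ta - tb)

PX = foot(PO, PA, Pm)                # step 1: PX = (7, 3)
XT, ZX = triangulate(A[0], A[0] + M, ang(PX, PA), ang(PX, Pm))   # step 2
PY = foot(PO, PA, Pn)                # step 3: PY = (9, 15)
# Step 4, height only; abs() because PY falls on the far side of the anchors.
ZY = abs((A[1] - (A[1] + M)) / (math.tan(ang(PY, PA)) - math.tan(ang(PY, Pn))))
Z_avg = (ZX + ZY) / 2.0              # step 5 height average
```

Running this reproduces PX = (7, 3), PY = (9, 15), XT ≈ 723 mm, the two heights ≈ 3876 mm and ≈ 4021 mm, and the average ≈ 3949 mm, matching the text to within a few millimetres (the text rounds the angles to two decimals first).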
C. Another method is provided for obtaining the angle between the camera's normal, perpendicular to the ground or ceiling, and the line from the camera to corner anchor A:
As shown in FIG. 6, let θ be the angle between the camera's normal, perpendicular to the floor or ceiling, and the line from the camera to corner anchor A. When the camera looks straight down at the two-dimensional code from directly above, θ is 0°. As the camera gradually moves to the side, the code seen through the camera flattens, and when viewed edge-on θ is 90°. The angle θ is therefore directly related to the direction from which the camera views anchor A, and it can be calculated from the shape of the two-dimensional code as seen through the camera.
FIG. 7 illustrates how the locating pattern at corner anchor A of the two-dimensional code changes when viewed from different angles through the camera. Assume the locating pattern is square. Viewed through the camera of the head-mounted display, for any of the three corner anchors m, A, n, the upper part of the pattern appears farther away than the lower part. Unless the pattern is viewed head-on (θ = 0°), at least one of the four corners of the imaged square will have an angle greater than 90° and at least one less than 90°; only in the middle, vertical view of FIG. 7 are both upper angles exactly 90°, while from the other pitch angles there are 2 angles greater than 90° and 2 angles less than 90°. We can take the upper half of the locating pattern, measure its corner angles, and select the largest, denoted ω. Subtracting 90° from ω gives the angle between the camera's normal and the line from the camera to the corner anchor:
θ = ω − 90°  (4).
In this way, all three corner anchors m, A, n of the two-dimensional code yield, through formula (4), the angles (θA, θm, θn) between the camera's normal, perpendicular to the ground or ceiling, and the lines from the camera to the respective anchors. FIG. 8 shows two-dimensional codes seen at four pitch angles; each code has three corner anchors m, A, n, the largest upper-half angle ω of each anchor is shown above it, and subtracting 90° gives the corresponding angles (θA, θm, θn). This method needs neither the image pixels nor the camera FOV to convert to angles, and is therefore faster and more direct.
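A sketch of formula (4), assuming a corner detector already returns the four imaged corners of the square locating pattern in top-left, top-right, bottom-right, bottom-left order (the corner ordering and function names are assumptions, not the patent's):

```python
import math

def interior_angle(prev_pt, corner, next_pt):
    """Interior angle at `corner` (degrees) between its two polygon edges."""
    a1 = math.atan2(prev_pt[1] - corner[1], prev_pt[0] - corner[0])
    a2 = math.atan2(next_pt[1] - corner[1], next_pt[0] - corner[0])
    d = abs(math.degrees(a1 - a2)) % 360.0
    return min(d, 360.0 - d)

def tilt_from_pattern(quad):
    """Formula (4): theta = (largest upper-half corner angle) - 90 degrees.

    quad: imaged corners of the square locating pattern, ordered
    [top-left, top-right, bottom-right, bottom-left].
    """
    tl, tr, br, bl = quad
    w_max = max(interior_angle(bl, tl, tr), interior_angle(tl, tr, br))
    return w_max - 90.0
```

A head-on view gives 90° corners and θ = 0°; a trapezoid whose top edge is shorter than its bottom edge gives upper angles above 90° and hence a positive tilt.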
As shown in FIG. 9, assume the pixel position PO of the point vertically below (or above) the camera is obtained through the nine-axis geomagnetic chip of the smart glasses, and the angles (θA, θm, θn) between the camera's normal, perpendicular to the ground or ceiling, and the lines from the camera to the three corner anchors m, A, n are obtained quickly by the method above;
As shown in FIG. 10, when the camera T(XT, YT, ZT) sees the texture picture, the lines from T(XT, YT, ZT) to the corner anchors A(XA, YA, ZA), m(Xm, Ym, Zm) and n(Xn, Yn, Zn) are HA, Hm and Hn, and the Pythagorean theorem gives the following three formulas:
HA² = (XT − XA)² + (YT − YA)² + (ZT − ZA)²;
Hm² = (XT − Xm)² + (YT − Ym)² + (ZT − Zm)²;
Hn² = (XT − Xn)² + (YT − Yn)² + (ZT − Zn)²;
As shown in FIG. 11, viewing the relationship between the camera T and anchor A from the side (height on the Z axis, HA along the transverse axis), the angles (θA, θm, θn) of FIG. 7, formed between the normal of camera T perpendicular to the ground or ceiling and the lines from T to the three corner anchors m, A, n, give the following three formulas:
ZT − ZA = HA·cosθA;
ZT − Zm = Hm·cosθm;
ZT − Zn = Hn·cosθn;
Solving the six formulas simultaneously yields a function formula for calculating the world position T(XT, YT, ZT) of the glasses:
T(XT, YT, ZT) = f(θA, θm, θn, A, m, n)  (5);
for example, substituting ZT − ZA = HA·cosθA into the first Pythagorean formula gives (XT − XA)² + (YT − YA)² = (ZT − ZA)²·tan²θA, and likewise for anchors m and n; subtracting these three equations pairwise eliminates the squared terms in XT and YT, leaving XT and YT linear in ZT², after which ZT follows from a quadratic equation. The six formulas can be combined in different ways, giving many different function formulas (5) for XT, YT and ZT.
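One way to carry out the simultaneous solution is the elimination just described: for coplanar anchors (ZA = Zm = Zn = 0) it leaves a quadratic in u = ZT². A numerical sketch (the function names are mine; both algebraic roots are returned, since the three angles alone can admit two mirror poses):

```python
import math

def solve_pose(A, m, n, th_a, th_m, th_n):
    """Solve the six formulas for the camera pose T = (XT, YT, ZT).

    A      : (XA, YA) world position of anchor A; anchors assumed at Z = 0
    m, n   : real-world lengths of A->m (along X) and A->n (along Y)
    th_*   : angles (radians) between the camera's vertical normal and
             the lines from the camera to anchors A, m, n

    Substituting ZT = H*cos(theta) into the Pythagorean formulas gives
    (XT-Xi)^2 + (YT-Yi)^2 = ZT^2 * tan(theta_i)^2; subtracting these
    pairwise makes XT and YT linear in u = ZT^2, and re-substituting
    leaves a quadratic a*u^2 + b*u + c = 0. Both roots satisfy all six
    formulas, so both candidate poses are returned.
    """
    xa, ya = A
    tA, tm, tn = (math.tan(t) ** 2 for t in (th_a, th_m, th_n))
    alpha = (tA - tm) / (2.0 * m)        # XT = XA + m/2 + alpha*u
    beta = (tA - tn) / (2.0 * n)         # YT = YA + n/2 + beta*u
    a = alpha ** 2 + beta ** 2           # assumes a != 0 (camera not on the
    b = m * alpha + n * beta - tA        # axis of symmetry of the anchors)
    c = (m * m + n * n) / 4.0
    disc = b * b - 4.0 * a * c           # < 0 only for inconsistent angles
    sols = []
    for u in ((-b + math.sqrt(disc)) / (2.0 * a),
              (-b - math.sqrt(disc)) / (2.0 * a)):
        if u > 0.0:
            sols.append((xa + m / 2.0 + alpha * u,
                         ya + n / 2.0 + beta * u,
                         math.sqrt(u)))
    return sols
```

Feeding it angles generated from a known pose recovers that pose as one of the candidates; in practice the wrong mirror candidate can be rejected with the IMU reading or a fourth anchor.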
Example 1
The first embodiment of the invention provides a positioning method for large-space smart glasses, suitable for metaverse application scenes in which the head-mounted display devices of players in a large multi-user interactive space interact directly, without training or inference by a central server. A number of mutually distinct texture pictures are distributed on the ground, steps, table tops, walls and/or ceiling of the large space. Any one of the cameras on the smart glasses acquires images; a texture picture is identified from the image acquired by camera T, the picture information corresponding to that texture picture is obtained, and the world position of camera T is calculated from it. It is assumed that the texture picture in the acquired image is undeformed, or that its distortion has been calibrated, as shown in FIG. 5. The method comprises the following steps:
Step 1, directly obtaining picture information or obtaining an ID of a texture picture by analyzing the texture picture, and matching the ID to the picture information in a comparison table locally stored in the intelligent glasses;
The picture information comprises the world position (X, Y, Z) of the picture and the picture size (m, n). Three corner anchors (A, m, n) are arranged on the texture picture and form an included angle nAm, with anchor A at its vertex; the world position of anchor A is defined as the world position (X, Y, Z) of the picture. The line An from A to anchor n is the positive direction of the Y axis, and the line Am from A to anchor m is the positive direction of the X axis; the real-world length of line An is n, and the real-world length of line Am is m. If the included angle nAm is not 90°, the picture information also includes the angle nAm, used to convert between the X and Y axes;
Step 2, obtaining from the picture information the world position A(XA, YA, ZA) of anchor A, the world position m(Xm, Ym, Zm) of anchor m and the world position n(Xn, Yn, Zn) of anchor n, where Xm = XA + m, Ym = YA, Xn = XA, Yn = YA + n, Zm = Zn = ZA;
Obtaining from the acquired image the pixel positions PA(XPA, YPA), Pm(XPm, YPm) and Pn(XPn, YPn) corresponding to the corner anchors m, A, n;
Obtaining, through the nine-axis geomagnetic chip of the smart glasses, the pixel position PO(XPo, YPo) of the point vertically below (or above) the camera in the acquired image;
Obtaining the number of pixels per degree, PPD = XD/FOV, from the field of view FOV and the total pixel count XD of the screen's X axis in the smart-glasses camera specification;
Step 3, let the angle between the normal of camera T, perpendicular to the ground or ceiling, and the line TA from camera T to anchor A be θA; the angle between the normal and the line Tm from camera T to anchor m be θm; and the angle between the normal and the line Tn from camera T to anchor n be θn;
Let the line Am from the angle anchor a to the angle anchor m be the X-axis of world space, calculate the slope SX of the line Am according to the pixel positions PA and Pm, then the X-axis passing through the point PX and the point PA is expressed as:
Y=SXX+BX(6);
Substituting pixel positions PA and Pm into formula (6) to obtain BX;
Since the line POPX is perpendicular to the X axis, the pixel position PX is the intersection of the line POPX and the X axis; the slope of the line POPX, perpendicular to the line PAPm, is −1/SX, expressed as:
Y = (−1/SX)·X + BXV  (7);
Substituting the pixel position PO(XPo, YPo) into formula (7) gives BXV, from which the intersection pixel position PX(XPX, YPX) of the line POPX and the X axis is found;
Let the line An from the angle anchor A to the angle anchor n be the Y-axis of world space, calculate the slope SY of the line An according to the pixel positions PA and Pn, then the straight line passing through the point PY and the point PA is expressed as:
Y=SYX+BY(8);
Substituting pixel position PA or Pn into equation (8) to obtain BY;
Since the line POPY is perpendicular to the Y axis, the pixel position PY is the intersection of the line POPY and the Y axis; the slope of the line POPY, perpendicular to the line PAPn, is −1/SY, expressed as:
Y = (−1/SY)·X + BYV  (9);
Substituting the pixel position PO(XPo, YPo) into formula (9) gives BYV, from which the intersection pixel position PY(XPY, YPY) of the line POPY and the Y axis is found;
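Equations (6)-(9) amount to dropping a perpendicular foot from PO onto each axis line; a sketch in Python (the function name is mine; note that the slope perpendicular to S is −1/S in general, which coincides with −S for the |S| = 1 axes of the worked example):

```python
def axis_foot(p_o, p_1, p_2):
    """Foot of the perpendicular from pixel p_o onto the axis line through
    anchor pixels p_1 and p_2 (equations (6)/(8) for the axis line and
    (7)/(9) for the perpendicular).  Assumes the axis is neither vertical
    nor horizontal in the image (slope defined and non-zero).
    """
    s = (p_2[1] - p_1[1]) / (p_2[0] - p_1[0])   # axis slope S
    b = p_1[1] - s * p_1[0]                     # intercept B through anchor A
    s_v = -1.0 / s                              # perpendicular slope = -1/S
    b_v = p_o[1] - s_v * p_o[0]                 # intercept of perpendicular
    x = (b_v - b) / (s - s_v)                   # intersection of the 2 lines
    return (x, s * x + b)
```

On the worked example's pixels this returns PX = (7, 3) for the X axis and PY = (9, 15) for the Y axis.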
Step 4, calculating the pixel distances PXPA and PXPm by the Pythagorean theorem, then:
θA = PXPA/PPD
θm = PXPm/PPD  (3);
XT is calculated by the following formula:
XT = (Xm·tanθA − XA·tanθm)/(tanθA − tanθm)  (1);
and ZT by:
ZT = (XA − Xm)/(tanθA − tanθm)  (2);
The pixel distances PYPA and PYPn are calculated by the Pythagorean theorem, then:
θA = PYPA/PPD
θn = PYPn/PPD  (3);
YT is calculated by the following formula:
YT = (Yn·tanθA − YA·tanθn)/(tanθA − tanθn)  (1);
and ZT by:
ZT = (YA − Yn)/(tanθA − tanθn)  (2);
After averaging the two values of ZT obtained above, the world position (XT, YT, ZT) of the camera T is obtained.
Step 5, the world positions of the several cameras T are averaged, or converted according to the known positional relationships of the cameras, to obtain the world position of the head-mounted display device; the player's head-mounted display device broadcasts the player's world position, together with the current game-state data, in real time to the other players in the large space.
Example 2
The second embodiment of the present invention provides another positioning method for large-space smart glasses, suitable for metaverse application scenes in which the head-mounted display devices of players in a large multi-user interactive space interact directly, without inference by a central server. A number of mutually distinct texture pictures are distributed on the ground, steps, table tops, walls and/or ceiling of the large space. Any one of the cameras on the smart glasses acquires images; the texture picture is identified from the image acquired by camera T, the picture information corresponding to it is obtained, and the world position of camera T is calculated from it. Because of the camera's shooting angle, the texture picture appears deformed at its corner anchors in the image, as shown in FIG. 9. The method comprises the following steps:
step 1, obtaining picture information by analyzing the texture picture, or obtaining an ID of the texture picture, and matching the ID to the picture information in a comparison table locally stored in the intelligent glasses;
The picture information comprises the corner-anchor world position (XA, YA, ZA) of the picture and the real-world dimensions (m, n) of the picture's X and Y axes. Three corner anchors (A, m, n) are arranged on the texture picture and form an included angle nAm; each corner anchor is a square locating pattern, and anchor A is the vertex of the included angle nAm, its world position being defined as the world position (XA, YA, ZA) of the picture. The line An from anchor A to anchor n is the positive direction of the Y axis, and the line Am from anchor A to anchor m is the positive direction of the X axis; the real-world length of line An is n, and the real-world length of line Am is m;
Step 2, obtaining from the picture information the world position A(XA, YA, ZA) of anchor A, the world position m(Xm, Ym, Zm) of anchor m and the world position n(Xn, Yn, Zn) of anchor n, where Xm = XA + m, Ym = YA, Xn = XA, Yn = YA + n, Zm = Zn = ZA;
Step 3, let the angle between the normal of camera T, perpendicular to the ground or ceiling, and the line from camera T to anchor A be θA; the angle between the normal and the line from camera T to anchor m be θm; and the angle between the normal and the line from camera T to anchor n be θn; the angles (θA, θm, θn) formed between the camera's normal and the lines from camera T to the three corner anchors (A, m, n) are calculated as follows:
For the upper half of the locating pattern of each corner anchor (FIG. 7), the corner angles are measured and the largest, ω, is selected; the angle between the normal of camera T and the line from camera T to that corner anchor is then:
θ = ω − 90°  (4);
Step 4, the angles (θA, θm, θn) between the camera's normal and the lines from camera T to the three corner anchors (A, m, n) having been obtained through formula (4), the world position (XT, YT, ZT) of camera T is calculated by formula (5):
T(XT, YT, ZT) = f(θA, θm, θn, A, m, n)  (5);
Formula (5) is derived as follows:
When the camera T(XT, YT, ZT) sees the texture picture, let the lines from T(XT, YT, ZT) to the picture's corner anchors A(XA, YA, ZA), m(Xm, Ym, Zm) and n(Xn, Yn, Zn) be HA, Hm and Hn, and let the angles between the camera's normal, perpendicular to the ground or ceiling, and the lines from T to the three corner anchors m, A, n be (θA, θm, θn); the Pythagorean theorem and the side-view geometry then give the following six formulas:
HA² = (XT − XA)² + (YT − YA)² + (ZT − ZA)²;  ZT − ZA = HA·cosθA;
Hm² = (XT − Xm)² + (YT − Ym)² + (ZT − Zm)²;  ZT − Zm = Hm·cosθm;
Hn² = (XT − Xn)² + (YT − Yn)² + (ZT − Zn)²;  ZT − Zn = Hn·cosθn;
Solving the six formulas simultaneously yields the function formula (5) for the world position T(XT, YT, ZT) of the glasses; the six formulas can be combined in different ways to obtain many different function formulas (5) for XT, YT and ZT.
Step 5, the world positions of the several cameras T are averaged, or converted according to the known positional relationships of the cameras, to obtain the world position of the head-mounted display device; the player's head-mounted display device broadcasts the player's world position, together with the current game-state data, in real time to the other players in the large space; the interaction modes include wireless communication, the Internet and other existing communication modes.
The three corner anchor points of the texture picture can be positioning patterns without information or independent small texture pictures, and the picture information corresponding to the small texture pictures comprises the world position of the small texture picture and the size of the small texture picture.
The picture information of the texture picture further includes a spatial region domain name for distinguishing a virtual space in the large space.
The picture information of the texture picture may also include the distances p and q to the adjacent texture pictures, i.e. the distance p to the adjacent picture in the positive X direction and the distance q to the adjacent picture in the positive Y direction.
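The picture information described in the last few paragraphs could be carried in a record like the following (an illustrative schema only; the field names are assumptions, not the patent's data layout):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PictureInfo:
    """Illustrative record of a texture picture's information."""
    anchor_a_world: Tuple[float, float, float]  # (XA, YA, ZA) of anchor A
    size_m: float                  # real-world length of A->m along X, mm
    size_n: float                  # real-world length of A->n along Y, mm
    angle_nam: float = 90.0        # included angle nAm in degrees
    region_domain: Optional[str] = None   # spatial region domain name
    neighbor_dx: Optional[float] = None   # distance p to next picture on +X
    neighbor_dy: Optional[float] = None   # distance q to next picture on +Y
```

A locally stored comparison table could then map each picture ID to such a record.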
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Specifically, each step of the method embodiment in the embodiment of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or an instruction in a software form, and the steps of the method disclosed in connection with the embodiment of the present application may be directly implemented as a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a well-established storage medium in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads information in the memory, and in combination with hardware, performs the steps in the above method embodiments.
Example 3
The third embodiment of the invention provides a head display device, as shown in fig. 12, the head display device 100 comprises a camera 101, a display 102, a memory 103 and a processor 104;
the camera 101 is connected with the processor 104 and is used for scanning the real environment where the user is located;
The display 102 is connected with the processor 104 and is used for displaying output content of the processor 104;
the memory 103 is connected to the processor 104 for storing a computer program and transmitting the program to the processor 104. In other words, the processor 104 may call and run a computer program from the memory 103 to implement the method according to the first embodiment of the present application.
In some embodiments of the application, the processor 104 may include, but is not limited to:
a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the application, the memory 103 includes, but is not limited to, volatile and/or non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which acts as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double-data-rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 103 and executed by the processor 104 to perform the method of embodiment one provided by the present application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program on the head-mounted display device 100.
It should be appreciated that the various components in the head-mounted device 100 are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
Example 4
The fourth embodiment of the present invention also provides a computer storage medium having a computer program stored thereon, which when executed by a computer enables the computer to perform the method of the first embodiment.
The foregoing description of the embodiments illustrates the general principles of the invention and is not meant to limit the invention to the particular embodiments disclosed; any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

The picture information comprises the world position (X, Y, Z) of the picture and the picture size (m, n); three corner anchors (A, m, n) are arranged on the texture picture and form an included angle nAm, with anchor A at its vertex; the world position of anchor A is defined as the world position (X, Y, Z) of the picture; the line An from A to anchor n is the positive direction of the Y axis, and the line Am from A to anchor m is the positive direction of the X axis; the real-world length of line An is n, and the real-world length of line Am is m; if the included angle nAm is not 90°, the picture information also includes the angle of the included angle nAm, used to convert between the X and Y axes;
CN202510180013.XA — filed 2025-02-19 — Positioning method, head display device and storage medium for large space smart glasses — granted as CN119672268B (Active)


Publications (2)

Publication Number — Publication Date
CN119672268A — 2025-03-21
CN119672268B — 2025-06-20


Citations (8)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
US20130307842A1 (en)*2012-05-152013-11-21Imagine Mobile Augmented Reality LtdSystem worn by a moving user for fully augmenting reality by anchoring virtual objects
CN105592103A (en)*2016-02-022016-05-18郭小虎Synchronous display method of virtual reality equipment and mobile equipment based on Unity3D
CN107507280A (en)*2017-07-202017-12-22广州励丰文化科技股份有限公司Show the switching method and system of the VR patterns and AR patterns of equipment based on MR heads
CN109086726A (en)*2018-08-102018-12-25陈涛A kind of topography's recognition methods and system based on AR intelligent glasses
CN109364480A (en)*2018-10-262019-02-22杭州电魂网络科技股份有限公司A kind of information synchronization method and device
US20200111256A1 (en)*2018-10-082020-04-09Microsoft Technology Licensing, LlcReal-world anchor in a virtual-reality environment
CN116168076A (en)*2021-11-242023-05-26腾讯科技(深圳)有限公司 Image processing method, device, equipment and storage medium
CN119011796A (en)*2024-10-212024-11-22大连三通科技发展有限公司Processing method of environment image data in camera perspective VST, head display device and storage medium



Similar Documents

Publication | Title
US10922844B2 (en) | Image positioning method and system thereof
CN110809786B (en) | Calibration device, calibration chart, chart pattern generation device, and calibration method
US9161027B2 (en) | Method and apparatus for providing camera calibration
US7809194B2 (en) | Coded visual markers for tracking and camera calibration in mobile computing systems
CN104322052B (en) | System for real-time mixing or compositing of computer-generated three-dimensional objects with a film camera video feed
JPWO2019049421A1 (en) | Calibration device, calibration system, and calibration method
CN116309854B (en) | Method, device, equipment, system and storage medium for calibrating augmented reality equipment
CN112912936B (en) | Mixed reality system, program, mobile terminal device and method
CN116168076B (en) | Image processing method, device, equipment and storage medium
CN111242987A (en) | Target tracking method and device, electronic equipment and storage medium
EP4614443A1 (en) | Data processing method and apparatus, device, and storage medium
CN113240656A (en) | Visual positioning method and related device and equipment
Cao et al. | Camera calibration and light source estimation from images with shadows
CN110766752B (en) | Virtual reality interactive glasses with reflective marker points and spatial positioning method
CN119672268B (en) | Positioning method, head display device and storage medium for large space smart glasses
Fan et al. | Light fields stitching for windowed-6DoF VR content
Faraji et al. | Simplified active calibration
CN110473257A (en) | Information calibration method, device, terminal device and storage medium
KR20090022486A (en) | Apparatus and method for estimating object information using a single camera
Song et al. | Rotated top-bottom dual-Kinect for improved field of view
CN114913245A (en) | Multi-calibration-block multi-camera calibration method and system based on undirected weighted graph
Schillebeeckx et al. | The geometry of colorful, lenticular fiducial markers
US20250322615A1 (en) | See-through display method and see-through display system
CN116886882B (en) | Projection control method and system based on omnidirectional keystone-correction technology
CN117036589B (en) | Three-dimensional reconstruction method, device, equipment and medium based on multi-view geometry

Legal Events

Date | Code | Title | Description
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
| TR01 | Transfer of patent right |

Effective date of registration: 2025-07-31

Address after: 361026 Fujian Province, Xiamen City, Haicang District, No. 5888 Maqing Road, Zhi Liu

Patentee after: Lightspeed Future (Xiamen) Software Co., Ltd.

Country or region after: China

Address before: 116023 Liaoning Province, Dalian City, High-tech Industrial Park, North Section of Digital Road, No. 21.23

Patentee before: Dalian Situne Technology Development Co., Ltd.

Country or region before: China

