Disclosure of Invention
The invention aims to provide a positioning method for large-space smart glasses, a head-mounted display device, and a storage medium, suitable for multi-user large-space metaverse application scenarios, and in particular for scenarios in which the head-mounted display devices of the players in a large space interact directly, without training and inference on a central server. By acquiring the picture information of texture pictures laid out in the large space, each player's glasses can rapidly calculate their own world position and simultaneously broadcast that position, together with game state data, to the other players in the same space, thereby supporting the rapid position updates that large-space applications require.
The invention relates to a positioning method for large-space smart glasses. A plurality of mutually different texture pictures are distributed on the bottom surface or the ceiling of a large space. An image is acquired through any camera T on the smart glasses, a texture picture is identified in the acquired image, and the picture information corresponding to that texture picture is obtained: the picture information is either parsed directly from the texture picture, or the ID of the texture picture is read and the picture information is matched in a look-up table stored locally on the smart glasses;
The picture information comprises the world position (X, Y, Z) of the picture and the picture size (m, n). Three corner anchor points (A, m, n) are arranged on the texture picture and form an included angle nAm, where corner anchor A is the vertex of the angle nAm. The world position of corner anchor A is defined as the world position (X, Y, Z) of the picture; the line An from A to corner anchor n points in the positive direction of the Y axis, and the line Am from A to corner anchor m points in the positive direction of the X axis; the real-world length of line An is n and the real-world length of line Am is m. If the included angle nAm is not 90°, the picture information also comprises the angle nAm, used to convert between the X axis and the Y axis;
The world position A(X_A, Y_A, Z_A) of corner anchor A, the world position m(X_m, Y_m, Z_m) of corner anchor m and the world position n(X_n, Y_n, Z_n) of corner anchor n are obtained from the picture information: X_m = X_A + m, Y_m = Y_A, X_n = X_A, Y_n = Y_A + n, Z_m = Z_n = Z_A;
The pixel positions P_A(X_PA, Y_PA), P_m(X_Pm, Y_Pm) and P_n(X_Pn, Y_Pn) corresponding to the corner anchors A, m and n are obtained from the acquired image;
The pixel position P_O(X_Po, Y_Po), the point of the acquired image that lies vertically below or above the camera on the floor or ceiling, is obtained through the nine-axis geomagnetic chip of the smart glasses;
Let α be the included angle formed by the normal from camera T perpendicular to the bottom surface or ceiling and the line H_A from camera T to corner anchor A, β the included angle formed by that normal and the line H_m from camera T to corner anchor m, and γ the included angle formed by that normal and the line H_n from camera T to corner anchor n. The world position (X_T, Y_T, Z_T) of camera T can then be obtained by geometric calculation.
The world position (X_T, Y_T, Z_T) of camera T is obtained through geometric calculation as follows, assuming that the texture picture in the acquired image is not deformed, or has been distortion-corrected. The field of view FOV of the camera and the total number of pixels X_D along the X axis of the image are known, giving the number of pixels per degree PPD = X_D/FOV. The specific calculation steps are:
Step 1: let the line Am from corner anchor A to corner anchor m be the X axis of world space, and calculate the slope S_X of line Am from the pixel positions P_A and P_m; the image of the X axis, which passes through P_A, P_m and the foot point P_X, is expressed by the linear equation:
Y = S_X·X + B_X (6);
Substituting pixel position P_A or P_m into formula (6) gives B_X;
The pixel position P_X is the intersection of the line P_OP_X with the X axis, where the line P_OP_X is perpendicular to the line P_AP_m and therefore has slope −1/S_X, so it is expressed as:
Y = (−1/S_X)·X + B_XV (7);
Substituting pixel position P_O(X_Po, Y_Po) into formula (7) gives B_XV, so the intersection pixel position P_X(X_PX, Y_PX) of the line P_OP_X with the X axis can be found;
Let the line An from corner anchor A to corner anchor n be the Y axis of world space, and calculate the slope S_Y of line An from the pixel positions P_A and P_n; the straight line through P_Y and P_A is expressed as:
Y = S_Y·X + B_Y (8);
Substituting pixel position P_A or P_n into formula (8) gives B_Y;
The pixel position P_Y is the intersection of the line P_OP_Y with the Y axis, where the line P_OP_Y is perpendicular to the line P_AP_n and therefore has slope −1/S_Y, so it is expressed as:
Y = (−1/S_Y)·X + B_YV (9);
Substituting pixel position P_O(X_Po, Y_Po) into formula (9) gives B_YV, so the intersection pixel position P_Y(X_PY, Y_PY) of the line P_OP_Y with the Y axis can be found;
Step 2: the pixel distances P_XP_A and P_XP_m are calculated by the Pythagorean theorem, and the angles follow from
α = P_XP_A/PPD, β = P_XP_m/PPD (3);
X_T is calculated by the formula:
X_T = (X_m·tan α − X_A·tan β)/(tan α − tan β) (1);
and Z_T by:
Z_T = (X_A − X_m)/(tan α − tan β) (2);
Similarly, the pixel distances P_YP_A and P_YP_n are calculated by the Pythagorean theorem, giving
α′ = P_YP_A/PPD, γ = P_YP_n/PPD (3);
Y_T is calculated by the formula:
Y_T = (Y_n·tan α′ − Y_A·tan γ)/(tan α′ − tan γ) (1);
and Z_T again by:
Z_T = (Y_A − Y_n)/(tan α′ − tan γ) (2);
Step 3: the two Z_T values obtained above are averaged, giving the world position (X_T, Y_T, Z_T) of camera T.
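For concreteness, a minimal Python sketch of steps 1 to 3 follows. The function and variable names are illustrative assumptions, not part of the invention; signed pixel offsets are used so that formulas (1) and (2) apply on either side of the plumb point P_O.

```python
import math

def foot_of_perpendicular(P1, P2, PO):
    """Foot of the perpendicular dropped from PO onto the line P1P2
    (formulas (6)-(9)); assumes the line is not vertical in pixel space."""
    S = (P2[1] - P1[1]) / (P2[0] - P1[0])
    B = P1[1] - S * P1[0]
    X = (S * PO[1] + PO[0] - S * B) / (S * S + 1.0)
    return (X, S * X + B)

def axis_position(P1, P2, PO, c1, c2, ppd):
    """Solve one world axis: anchors with world coordinates c1, c2 (mm) at
    pixel positions P1, P2; PO is the plumb pixel. Returns (coordinate, height)."""
    PX = foot_of_perpendicular(P1, P2, PO)
    ux, uy = P2[0] - P1[0], P2[1] - P1[1]
    L = math.hypot(ux, uy)
    ux, uy = ux / L, uy / L                   # unit vector toward increasing coordinate
    # Signed angles to each anchor in degrees, formula (3)
    t1 = ((P1[0] - PX[0]) * ux + (P1[1] - PX[1]) * uy) / ppd
    t2 = ((P2[0] - PX[0]) * ux + (P2[1] - PX[1]) * uy) / ppd
    ta, tb = math.tan(math.radians(t1)), math.tan(math.radians(t2))
    Z = (c2 - c1) / (tb - ta)                 # formula (2)
    return c1 - Z * ta, Z                     # formula (1)

def camera_world_position(PA, Pm, Pn, PO, A, m, n, ppd):
    """Steps 1-3 above: X pass with anchors A and m, Y pass with anchors A
    and n, then average the two heights."""
    XA, YA = A
    XT, ZT1 = axis_position(PA, Pm, PO, XA, XA + m, ppd)
    YT, ZT2 = axis_position(PA, Pn, PO, YA, YA + n, ppd)
    return XT, YT, (ZT1 + ZT2) / 2.0          # step 3: average the two ZT values
```

With the numbers of the worked example in the detailed description (P_A = (14, 10), P_m = (17, 13), P_n = (11, 13), P_O = (2, 8), A = (1400, 1000), m = n = 300 mm, PPD = 1), the X pass yields values within a few millimetres of the hand calculation there (X_T ≈ 723 mm, Z_T ≈ 3876 mm).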
Alternatively, the world position (X_T, Y_T, Z_T) of camera T can be obtained through geometric calculation when the texture picture in the acquired image is deformed by the camera's viewing angle, by calculating the included angles (α, β, γ) formed by the normal from camera T perpendicular to the bottom surface or ceiling and the lines from camera T to the three corner anchors (A, m, n):
The three corner anchors are square positioning patterns. The corner angles of the upper half of each anchor's positioning pattern are measured, and the largest angle is selected as θ; the included angle formed by the normal from camera T perpendicular to the ground or ceiling and the line from camera T to that corner anchor is then:
∠ = θ − 90° (4);
Formula (4) thus yields the included angles (α, β, γ) formed by the normal from camera T perpendicular to the ground or ceiling and the lines from camera T to the three corner anchors (A, m, n);
The Pythagorean theorem is used to obtain the following six formulas:
H_A² = (X_T − X_A)² + (Y_T − Y_A)² + Z_T²,
H_m² = (X_T − X_m)² + (Y_T − Y_m)² + Z_T²,
H_n² = (X_T − X_n)² + (Y_T − Y_n)² + Z_T²,
Z_T = H_A·cos α, Z_T = H_m·cos β, Z_T = H_n·cos γ;
Solving the six formulas simultaneously yields a functional formula for the world position T(X_T, Y_T, Z_T) of the glasses. Writing u = X_T − X_A and v = Y_T − Y_A, elimination of H_A, H_m and H_n gives
u² + v² = Z_T²·tan²α, (u − m)² + v² = Z_T²·tan²β, u² + (v − n)² = Z_T²·tan²γ,
whence
u = (m² + Z_T²·(tan²α − tan²β))/(2m), v = (n² + Z_T²·(tan²α − tan²γ))/(2n),
and substituting u and v back into the first equation yields a quadratic in Z_T², from which Z_T and then X_T = X_A + u and Y_T = Y_A + v follow (5).
The world positions of the individual cameras T are averaged, or converted according to the known positional relationship between the cameras, to obtain the world position of the head-mounted display device/smart glasses; the player's head-mounted display device/smart glasses then broadcast the player's world position, together with the current game state data, in real time to the other players in the large space.
The bottom surface of the large space refers to the ground, steps, walls or a tabletop of the large space.
The three corner anchor points of the texture picture can be positioning patterns without information or independent small texture pictures, and the picture information corresponding to the small texture pictures comprises the world position of the small texture picture and the size of the small texture picture.
The picture information of the texture picture further includes a spatial region domain name for distinguishing a virtual space in the large space.
The picture information of the texture picture also comprises distance information p or q between the texture picture and the adjacent texture picture, namely, the distance p between the texture picture and the adjacent texture picture in the positive X-axis direction and the distance q between the texture picture and the adjacent texture picture in the positive Y-axis direction.
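For illustration, the picture information described above can be represented as a small record stored on the glasses. A minimal Python sketch follows; the field names, class name and table contents are illustrative assumptions, not part of the invention:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PictureInfo:
    """Picture information carried by (or looked up for) one texture picture."""
    world_pos: tuple[float, float, float]   # world position (X, Y, Z) of corner anchor A, mm
    m: float                                # real-world length of line Am (X axis), mm
    n: float                                # real-world length of line An (Y axis), mm
    angle_nAm: float = 90.0                 # included angle nAm; stored when not 90 degrees
    domain: Optional[str] = None            # spatial region domain name (virtual space id)
    p: Optional[float] = None               # distance to the neighbour in the +X direction, mm
    q: Optional[float] = None               # distance to the neighbour in the +Y direction, mm

# Local look-up table on the glasses: picture ID -> picture information.
LOOKUP: dict[str, PictureInfo] = {
    "tex-0001": PictureInfo(world_pos=(1400.0, 1000.0, 0.0), m=300.0, n=300.0,
                            domain="hall-A", p=2000.0, q=2000.0),
}
```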
The head-mounted display device comprises a camera, a display, a memory and a processor;
The camera is connected with the processor and is used for scanning the real environment where the user is located;
The display is connected with the processor and is used for displaying the output content of the processor;
The memory is connected with the processor and is used for storing a computer program;
The processor is used for executing the computer program to implement the positioning method of the large-space smart glasses described above.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of positioning large-space smart glasses as described above.
The invention finds a texture picture (for example a mature two-dimensional code mark) in the image collected by a camera facing any direction on the smart glasses through simple visual calculation and reads its picture information, which comprises the world position (X, Y, Z) and the picture size (m, n) of the picture. The position of the player's glasses is obtained from the identified picture information, and the player's world position and game state data are broadcast to the other players in the same space, so that the position and state information of the other players' head-mounted display devices is likewise obtained. The whole calculation runs rapidly on the local head-mounted display device, avoiding the errors caused by uploading images to a server for training, by the transmission and feedback delays of server-side inference, and by auxiliary positioning sensors, as well as the errors caused by texture pictures that are far away or occluded. The invention not only dispenses with the server but also greatly reduces the investment cost of the large space: the computing power of the smart glasses alone suffices to realize the multi-user interaction scenario.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In embodiments of the application, the words "exemplary" or "such as" are used to mean that any embodiment or aspect of the application described as "exemplary" or "such as" is not to be interpreted as preferred or advantageous over other embodiments or aspects. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
The principles of the invention are explained below:
(1) Mutually different texture pictures are arranged on the ground, steps, desktops and/or ceilings of the large space, captured by the cameras of the smart glasses, and identified through simple visual calculation:
Because players wearing smart glasses need to walk around in large-space applications, the ground of the space is flat and safe to walk on without causing falls. Existing smart glasses are generally provided with cameras facing several directions on the frame. A texture picture placed on a wall may be far away or blocked by other players, and forcing players to look up or down degrades the experience. The invention therefore proposes to print, paste or project the texture pictures onto the ground, steps, desktops and/or ceiling, to collect images with a camera facing any direction on the smart glasses, to find a texture picture in the image through simple visual calculation, and to read the picture information contained in the texture picture, or to look the picture information up, via the texture picture's ID, in a table stored locally on the smart glasses. The picture information comprises the world position (X, Y, Z) and the picture size (m, n) of the picture.
The picture information of the texture picture may further include a spatial region domain name for distinguishing virtual spaces within the large space, so that players at the same physical position can interact at virtual positions in different virtual spaces. When players are in different virtual spaces, the same texture picture can thus serve the interaction of different virtual scenes. The large space can also be composed of several sub-spaces; sub-spaces may overlap and interact, and the overlapping virtual areas let players move seamlessly between the spaces in use, avoiding the discontinuity that arises when sub-spaces have no intersection.
The picture information of the texture picture also comprises the distance information p or q to the adjacent texture pictures. As shown in fig. 3, let the distance between two adjacent texture pictures be p in the positive X-axis direction and q in the positive Y-axis direction. When the smart glasses recognize three texture pictures at the same time, the world position (X, Y, Z) and picture size (m, n) of each of the three pictures and the distances p or q to their neighbours are obtained, forming a small matrix that contains the world positions and picture sizes of the three pictures together with their X-axis spacing p and Y-axis spacing q. The whole large space, or part of it, can thus be covered with texture pictures at fixed spacings, allowing the glasses position to be calculated several times in several dimensions; if the positions calculated from two sets of data deviate, their average is a more accurate glasses position, so the calculation of the spacing between adjacent texture pictures can be combined to correct the deviation and obtain a more accurate glasses position, as sketched below.
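A minimal sketch of this averaging, assuming each recognized texture picture yields an independent position estimate for the glasses:

```python
def fuse_positions(estimates):
    """Average several independently calculated glasses positions.

    estimates -- list of (X, Y, Z) world positions, one per recognized
    texture picture; their known spacings p/q make the estimates comparable.
    """
    k = len(estimates)
    return tuple(sum(e[i] for e in estimates) / k for i in range(3))
```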
According to the invention, the texture picture can also be attached, printed or projected vertically on a wall as well as on the ground. For a vertical texture picture, the direction An becomes the Z axis and the direction Am is set as the X axis or the Y axis; if Am points in an arbitrary direction within the XY plane, the orientation of Am is added to the picture information of the texture picture so that the X and Y axes can be converted.
(2) Encoding position information in texture pictures is prior art in the field of positioning; several examples of texture pictures are given below:
A. The texture picture of the first example is a two-dimensional code picture. As shown in fig. 1, three of the four corners (A, m, n) of the two-dimensional code carry information-free position detection patterns, here called corner anchors. Assuming the two-dimensional code is square, the three corner anchors form an included angle nAm of 90 degrees, where corner anchor A is the vertex. The world position of corner anchor A is defined as the world position (X, Y, Z) corresponding to the texture picture; the line from corner anchor A to corner anchor n points in the positive direction of the Y axis and the line from corner anchor A to corner anchor m points in the positive direction of the X axis, that is to say, the information plane of the two-dimensional code lies in the positive quadrant of the X and Y axes. The line An has real-world length n and the line Am has real-world length m; if the two-dimensional code is square, m = n;
B. The texture picture of the second example is a two-dimensional code picture in which, as shown in fig. 2, the three corner anchors are themselves independent small two-dimensional codes. The picture information parsed from each code differs: the position information (X, Y, Z) of the three small two-dimensional codes differs, and the position information of the A corner anchor of the large two-dimensional code is the same as that of the small two-dimensional code at the A corner anchor. The picture sizes (m, n) also differ, those of the large two-dimensional code being larger than those of the small two-dimensional codes. The position of the small code at the n corner anchor equals the position of the A corner anchor plus the distance n along Y, and the position of the small code at the m corner anchor equals the position of the A corner anchor plus the distance m along X. If the player is too close to the texture picture, so that the large two-dimensional code is incomplete in the acquired image and cannot be parsed, any corner anchor found in the image can be parsed directly as a small two-dimensional code to obtain its picture information; when the complete large two-dimensional code is recognizable in the acquired image, the picture information of the large two-dimensional code is obtained directly. Thus, whether the texture picture is far away or close by, the picture information can be parsed through the large or small two-dimensional codes, ensuring that the glasses position is calculated more accurately or more quickly.
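The fallback logic of this example can be sketched as follows. The three callables are hypothetical stand-ins for a real two-dimensional-code decoding library and are injected as parameters so the sketch stays self-contained:

```python
def read_picture_info(image, decode_large, find_corners, decode_corner):
    """Prefer the complete large code; otherwise decode any fully visible
    small corner code (example B above)."""
    info = decode_large(image)            # returns None if the large code is cut off
    if info is not None:
        return info
    for corner in find_corners(image):    # corner anchors located in the image
        info = decode_corner(image, corner)
        if info is not None:              # a small code carries its own position and size
            return info
    return None
```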
C. A plurality of patterns are arranged on the texture picture; the patterns can be pre-bound to information content, so that the recorded picture information is obtained from the arrangement order of the patterns.
(3) Smart glasses typically have multiple cameras for calculating spatial distances. Besides obtaining the position of a single camera of the glasses, the position (X, Y, Z) of, for example, the center point between the cameras can be derived by calculating the distance and angle between a single camera and a given texture picture and combining the known distance between each pair of cameras.
Since the cameras calculate their positions independently, the derived world positions of the glasses should in theory be identical even if different cameras see different two-dimensional codes; in practice the results differ because of various errors, and the average over the cameras is more accurate. Hence a more accurate average position is obtained by averaging the positions calculated by the cameras: if a pair of glasses carries 2 cameras and each camera captures 2 two-dimensional codes, 4 position estimates are averaged; if 3 two-dimensional codes are seen, 6 estimates are averaged, and the more two-dimensional codes are seen, the more accurate the averaged world position of the glasses becomes. Since the cameras themselves sit at different world positions, the position calculated by each camera should first be converted to a common reference point of the smart glasses (such as the middle of the glasses) before averaging.
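A minimal sketch of this convert-then-average step. The offsets are assumed here to be already expressed in world axes; in practice they depend on head orientation:

```python
def glasses_reference_position(camera_positions, camera_offsets):
    """Average several per-camera world positions after converting each to a
    common reference point on the glasses (e.g. the middle of the frame).

    camera_positions -- list of (X, Y, Z) world positions, one per camera/code pair
    camera_offsets   -- matching offsets from the reference point to each camera
    """
    centers = [
        tuple(p[i] - o[i] for i in range(3))
        for p, o in zip(camera_positions, camera_offsets)
    ]
    k = len(centers)
    return tuple(sum(c[i] for c in centers) / k for i in range(3))
```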
(4) The world position of the smart glasses in the large space is calculated from the picture information of the texture pictures:
A. The method for calculating the world position T(X_T, Z_T) of the smart glasses from a two-dimensional code in a single-dimensional space:
Assume the single-dimensional two-dimensional code has only an X axis and no Y axis, with corner anchors A and m at its two ends, the line Am of the two corner anchors set as the X axis, and the position (X_A, Z = 0) of corner anchor A and the length m of line Am known. As shown in fig. 4, the vertical projection of camera T onto the X axis is X_T; the line TA from camera T to corner anchor A forms an angle α with the vertical projection line of camera T onto the X axis, and the line Tm from camera T to corner anchor m forms an angle β with that vertical projection line. Triangular derivation gives the calculation formulas for the two-dimensional world position (X_T, Z_T) of the smart glasses:
X_T = (X_m·tan α − X_A·tan β)/(tan α − tan β) (1)
Z_T = (X_A − X_m)/(tan α − tan β) (2)
The angles α and β are obtained as follows:
The number of pixels per degree PPD (pixels per degree) is obtained from the field of view FOV in the smart-glasses camera specification and the total number of pixels X_D along the X axis of the image: PPD = X_D/FOV. Assuming FOV = 100° and X_D = 2000 pixels, 2000 pixels divided by 100 degrees gives 20 PPD, i.e. 20 pixels per degree. Knowing that the pixel position of corner anchor A on the image is P_A and the pixel position of corner anchor m is P_m, then:
α = P_0P_A/PPD, β = P_0P_m/PPD (3);
P_0 is the pixel position corresponding to the vertical direction: if the camera faces the ground, P_0 is the pixel vertically below the camera on the ground; if the camera faces the ceiling, P_0 is the pixel vertically above it on the ceiling. The value of P_0 is obtained through a sensor of the smart glasses (such as a nine-axis geomagnetic chip);
If the two-dimensional code in the single-dimensional space has only a Y axis and no X axis, X in formulas (1), (2) and (3) is replaced by Y.
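A minimal Python sketch of this single-axis case follows; the names are illustrative, and the pixel coordinate along the code's axis is assumed to increase in the same direction as world X, with signed offsets from P_0 keeping formulas (1) and (2) valid on either side of the plumb point:

```python
import math

def solve_1d(XA, m, P0, PA, Pm, ppd):
    """Single-axis case of fig. 4: anchors A and m on the X axis.
    P0, PA, Pm are pixel coordinates along the code's axis."""
    Xm = XA + m
    alpha = math.radians((PA - P0) / ppd)   # formula (3), signed angle to A
    beta = math.radians((Pm - P0) / ppd)    # formula (3), signed angle to m
    ta, tb = math.tan(alpha), math.tan(beta)
    ZT = (Xm - XA) / (tb - ta)              # formula (2)
    XT = XA - ZT * ta                       # formula (1)
    return XT, ZT
```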
B. The method for calculating the world position T(X_T, Y_T, Z_T) of the smart glasses from an undeformed two-dimensional code comprises the following steps:
As shown in fig. 5, assume the image acquired by the camera is 20×20 pixels and the field of view FOV of the camera is 20 degrees, so each degree covers exactly 1 pixel (PPD = 1). The center pixel position is P_C; the camera is not pointing vertically downward but is inclined, and the pixel position of the point vertically below it on the ground is P_O. The picture information of the two-dimensional code comprises the world position of corner anchor A in the large space and the picture size (m, n), i.e. the distance from corner anchor A to X-axis corner anchor m is m and the distance from corner anchor A to Y-axis corner anchor n is n. Calculating the world position (X_T, Y_T, Z_T) of the glasses amounts to solving the world position, relative to corner anchor A, of the point whose pixel position is P_O:
Let the world position of corner anchor A be A(X_A, Y_A) with corresponding pixel position P_A(X_PA, Y_PA) in the acquired image; the world position of corner anchor m be m(X_m, Y_m) with pixel position P_m(X_Pm, Y_Pm); and the world position of corner anchor n be n(X_n, Y_n) with pixel position P_n(X_Pn, Y_Pn). The pixel position P_O(X_Po, Y_Po) of the point vertically below (or above) the camera on the ground or ceiling is acquired through the nine-axis geomagnetic chip of the smart glasses, and the center pixel position P_C(X_Pc, Y_Pc) of the acquired image is known from the camera specification;
Assume the picture information of the two-dimensional code comprises the world position A = (1400, 1000) of corner anchor A and m = n = 300 mm, so that the world position of corner anchor m is m = (1700, 1000) and the world position of corner anchor n is n = (1400, 1300); the offsets m and n affect only the X axis or only the Y axis, respectively;
As shown in fig. 5, P_C = (10, 10), P_O = (2, 8), P_A = (14, 10), P_m = (17, 13), P_n = (11, 13); the Pythagorean theorem gives m = n = 4.24 pixels on the image. A two-dimensional code in an actually acquired image is deformed by perspective and is a trapezoid rather than a square; for simplicity of understanding it is assumed here that camera T is positioned high, so that the two-dimensional code appears essentially undeformed.
The first step is to find the pixel position P_X and then calculate the X_T position on the world-space X axis using formula (1):
Knowing that the line Am from corner anchor A to corner anchor m is the positive direction of the world-space X axis, the slope through P_A = (14, 10) and P_m = (17, 13) is S_X = 3/3 = 1, so the straight line through P_X and P_A is Y = S_X·X + B_X, and the two points P_A and P_m give B_X = −4; the world-space X axis on the image is thus Y_P = X_P − 4. The straight line P_OP_X is perpendicular to the world-space X axis and their intersection is the pixel position P_X; P_X(X_PX, Y_PX) is calculated from:
X_PX = (S_X·Y_Po + X_Po − S_X·B_X)/(S_X² + 1)
Y_PX = S_X·X_PX + B_X
X_PX = (8 + 2 + 4)/2 = 7, Y_PX = S_X·X_PX + B_X = 7 − 4 = 3, so P_X = (7, 3), consistent with the position in fig. 5.
In the second step, the two pixel distances P_XP_A and P_XP_m along the world-space X axis are calculated by the Pythagorean theorem: the distance P_X(7, 3)P_A(14, 10) = 9.90 pixels and the distance P_X(7, 3)P_m(17, 13) = 14.14 pixels, so by formula (3):
α = P_X(7, 3)P_A(14, 10) distance/PPD = 9.90 degrees;
β = P_X(7, 3)P_m(17, 13) distance/PPD = 14.14 degrees;
Formula (1): X_T = (X_m·tan α − X_A·tan β)/(tan α − tan β) = 723 mm;
Formula (2): Z_T = (X_A − X_m)/(tan α − tan β) = 3876 mm;
In the third step, the pixel position P_Y is found in the image, and the Y_T position on the world-space Y axis is calculated using formulas (1) and (2) with Y in place of X:
Knowing that the line An from corner anchor A to corner anchor n is the world-space Y axis, P_A = (14, 10) and P_n = (11, 13) give the slope S_Y = −1, so the straight line through P_Y and P_A is Y = S_Y·X + B_Y, and the two points P_A and P_n give B_Y = 24; the world-space Y axis on the image is thus Y_P = 24 − X_P. The straight line P_OP_Y is perpendicular to the world-space Y axis and their intersection is the pixel position P_Y; P_Y(X_PY, Y_PY) is calculated from:
X_PY = (S_Y·Y_Po + X_Po − S_Y·B_Y)/(S_Y² + 1) = 9; Y_PY = S_Y·X_PY + B_Y = 15, so P_Y = (9, 15), consistent with the position in fig. 5.
In the fourth step, the two pixel distances P_YP_A and P_YP_n along the world-space Y axis are calculated by the Pythagorean theorem: the distance P_Y(9, 15)P_A(14, 10) = 7.07 pixels and the distance P_Y(9, 15)P_n(11, 13) = 2.83 pixels, so that:
α′ = P_Y(9, 15)P_A(14, 10) distance/PPD = 7.07 degrees;
γ = P_Y(9, 15)P_n(11, 13) distance/PPD = 2.83 degrees;
Formula (1), with Y in place of X, gives Y_T = 1201 mm;
Formula (2) gives Z_T = 4021 mm.
Since the two Z_T values calculated above contain errors, the Z heights from the second and fourth steps are averaged; the final average is Z_T = 3949 mm;
In the fifth step, the world position of the glasses is obtained: T(X_T, Y_T, Z_T) = (723, 1201, 3949).
C. Another method for obtaining the included angle α formed by the normal of the camera perpendicular to the ground or ceiling and the line from the camera to corner anchor A:
As shown in fig. 6, assume the included angle formed by the normal of the camera perpendicular to the floor or ceiling and the line from the camera to corner anchor A is α. When the camera looks down from directly above the two-dimensional code, α is 0°; if the camera gradually moves to the side until the two-dimensional code is seen edge-on and appears flat, α is 90°. The angle α is thus directly related to the direction from which the camera views corner anchor A, and can therefore be calculated from the shape of the two-dimensional code as seen through the camera.
Fig. 7 is a schematic diagram of how the positioning pattern of the two-dimensional code's corner anchor A changes when viewed through the camera from different angles. Assume the positioning pattern is square; viewing any of the three corner anchors of the two-dimensional code from the camera of the head-mounted display device, the upper part of the pattern appears farther away than the lower part. Unless the pattern is viewed head-on (α = 0°), at least one of the four corners of the positioning-pattern square will appear greater than 90° and at least one less than 90°; except for the middle, vertical view in fig. 7, in which the angles are 90°, the other pitch angles show 2 angles greater than 90° and 2 angles less than 90°. We can therefore intercept the upper half of the positioning pattern, measure its angles, and select the largest, denoted θ; subtracting 90° from θ gives the included angle formed by the normal of the camera perpendicular to the ground or ceiling and the line from the camera to the corner anchor:
∠ = θ − 90° (4).
Thus all three corner anchors of the two-dimensional code yield, through formula (4), the included angles (α, β, γ) formed by the normal of the camera perpendicular to the ground or ceiling and the lines from the camera to the corner anchors. Fig. 8 shows two-dimensional codes seen at four pitch angles; each two-dimensional code has three corner anchors, the largest angle θ of the upper half of each corner anchor is indicated above it, and subtracting 90° gives the corresponding included angle (α, β, γ). This method needs neither the pixels of the acquired image nor the FOV to convert angles, and is faster and more direct.
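A minimal sketch of formula (4) in Python. The corner triples are assumed to come from a prior square-detection step (not shown); the names are illustrative:

```python
import math

def corner_angle_deg(apex, p1, p2):
    """Angle at `apex` (pixels) between the rays apex->p1 and apex->p2."""
    v1 = (p1[0] - apex[0], p1[1] - apex[1])
    v2 = (p2[0] - apex[0], p2[1] - apex[1])
    cosang = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

def tilt_from_positioning_pattern(upper_half_corners):
    """Formula (4): measure the corners of the upper half of the positioning
    pattern, take the largest angle theta, and return theta - 90 degrees.

    upper_half_corners -- list of (apex, neighbour1, neighbour2) pixel triples.
    """
    theta = max(corner_angle_deg(a, p1, p2) for a, p1, p2 in upper_half_corners)
    return theta - 90.0
```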
As shown in fig. 9, the pixel position P_O of the point vertically below or above the camera on the floor or ceiling is obtained through the nine-axis geomagnetic chip of the smart glasses, and the included angles (α, β, γ) formed by the normal of the camera perpendicular to the floor or ceiling and the lines from the camera to the three corner anchors are obtained rapidly by the above method;
As shown in fig. 10, when camera T(X_T, Y_T, Z_T) sees the texture picture, the lines from camera T to the corner anchors A(X_A, Y_A, Z_A), m(X_m, Y_m, Z_m) and n(X_n, Y_n, Z_n) are H_A, H_m and H_n, and the Pythagorean theorem gives the following three formulas:
H_A² = (X_T − X_A)² + (Y_T − Y_A)² + Z_T²,
H_m² = (X_T − X_m)² + (Y_T − Y_m)² + Z_T²,
H_n² = (X_T − X_n)² + (Y_T − Y_n)² + Z_T²;
As shown in fig. 11, which depicts the relationship between camera T and corner anchor A from the side (the height Z axis and H_A as the axes of the view), the included angles (α, β, γ) of fig. 7, formed by the normal of camera T perpendicular to the ground or ceiling and the lines from camera T to the three corner anchors, give the following three formulas:
Z_T = H_A·cos α, Z_T = H_m·cos β, Z_T = H_n·cos γ;
Solving the six formulas simultaneously yields a functional formula for the world position T(X_T, Y_T, Z_T) of the glasses. Writing u = X_T − X_A and v = Y_T − Y_A and eliminating H_A, H_m and H_n gives
u² + v² = Z_T²·tan²α, (u − m)² + v² = Z_T²·tan²β, u² + (v − n)² = Z_T²·tan²γ,
whence
u = (m² + Z_T²·(tan²α − tan²β))/(2m), v = (n² + Z_T²·(tan²α − tan²γ))/(2n),
and substituting u and v back into the first equation yields a quadratic in Z_T², from which Z_T and then X_T = X_A + u and Y_T = Y_A + v follow (5);
Combining the six formulas in different ways yields many different but equivalent forms of formula (5) for X_T, Y_T and Z_T.
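A minimal Python sketch of this solution follows. The derivation of the quadratic matches the elimination shown above; the function name is illustrative. Because the six formulas constrain only distances, the quadratic generally has two valid roots, so both candidate positions are returned and the caller picks the physically plausible one (e.g. a height within the room):

```python
import math

def solve_deformed(A, m, n, alpha, beta, gamma):
    """Solve the six equations above for T = (XT, YT, ZT).

    A -- world position (XA, YA, ZA) of corner anchor A (mm); m, n -- picture
    side lengths (mm); alpha, beta, gamma -- angles (degrees) between the
    camera's vertical normal and the lines to anchors A, m, n."""
    XA, YA, ZA = A
    a = math.tan(math.radians(alpha)) ** 2
    b = math.tan(math.radians(beta)) ** 2
    c = math.tan(math.radians(gamma)) ** 2

    # Quadratic in s = ZT^2 obtained by eliminating u and v (see text above);
    # assumes qa != 0, i.e. the camera is not equidistant in angle from all anchors.
    qa = (a - b) ** 2 / (4 * m * m) + (a - c) ** 2 / (4 * n * n)
    qb = -(b + c) / 2.0
    qc = (m * m + n * n) / 4.0
    disc = math.sqrt(max(0.0, qb * qb - 4 * qa * qc))  # clamp against measurement noise

    solutions = []
    for s in ((-qb + disc) / (2 * qa), (-qb - disc) / (2 * qa)):
        w = math.sqrt(s)                            # height ZT above the picture plane
        u = (m * m + s * (a - b)) / (2 * m)         # XT - XA
        v = (n * n + s * (a - c)) / (2 * n)         # YT - YA
        solutions.append((XA + u, YA + v, ZA + w))
    return solutions
```

As a check, with A = (0, 0, 0), m = n = 1 and the angles of a camera at (1, 2, 3) (α ≈ 36.70°, β ≈ 33.69°, γ ≈ 25.24°), one of the two returned solutions recovers (1, 2, 3).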
Embodiment 1
The first embodiment of the invention provides a positioning method for large-space smart glasses, suitable for metaverse application scenarios in which the head-mounted display devices of players in a multi-user large space interact directly, without training and inference on a central server. A plurality of mutually different texture pictures are distributed on the ground, steps, desktops, walls and/or ceiling of the large space; an image is collected by any one of the cameras on the smart glasses, a texture picture is identified in the image collected by camera T, the picture information corresponding to the texture picture is obtained, and the world position of camera T is calculated from the picture information, under the assumption that the texture picture in the collected image is not deformed or has been distortion-corrected, as shown in fig. 5. The method comprises the following steps:
Step 1: obtain the picture information directly by parsing the texture picture, or obtain the ID of the texture picture and match the ID to the picture information in a look-up table stored locally on the smart glasses;
The picture information comprises the world position (X, Y, Z) of the picture and the picture size (m, n). Three corner anchor points (A, m, n) are arranged on the texture picture and form an included angle nAm, where corner anchor A is the vertex of the angle nAm. The world position of corner anchor A is defined as the world position (X, Y, Z) of the picture; the line An from A to corner anchor n points in the positive direction of the Y axis, and the line Am from A to corner anchor m points in the positive direction of the X axis; the real-world length of line An is n and the real-world length of line Am is m. If the included angle nAm is not 90°, the picture information also comprises the angle nAm, used to convert between the X axis and the Y axis;
Step 2: obtain the world position A(X_A, Y_A, Z_A) of corner anchor A, the world position m(X_m, Y_m, Z_m) of corner anchor m and the world position n(X_n, Y_n, Z_n) of corner anchor n from the picture information: X_m = X_A + m, Y_m = Y_A, X_n = X_A, Y_n = Y_A + n, Z_m = Z_n = Z_A;
Obtain, from the acquired image, the pixel positions P_A(X_PA, Y_PA), P_m(X_Pm, Y_Pm) and P_n(X_Pn, Y_Pn) corresponding to the corner anchors A, m and n;
Obtain, through the nine-axis geomagnetic chip of the smart glasses, the pixel position P_O(X_Po, Y_Po) of the point of the acquired image vertically below or above the camera on the ground or ceiling;
Obtain the number of pixels per degree PPD = X_D/FOV from the field of view FOV in the smart-glasses camera specification and the total number of pixels X_D along the X axis of the image;
Step 3: let α be the included angle formed by the normal from camera T perpendicular to the ground or ceiling and the line TA from camera T to corner anchor A, β the included angle formed by that normal and the line Tm from camera T to corner anchor m, and γ the included angle formed by that normal and the line Tn from camera T to corner anchor n;
Let the line Am from corner anchor A to corner anchor m be the X axis of world space, and calculate the slope S_X of line Am from the pixel positions P_A and P_m; the image of the X axis, which passes through P_A, P_m and the foot point P_X, is expressed as:
Y = S_X·X + B_X (6);
Substituting pixel position P_A or P_m into formula (6) gives B_X;
The pixel position P_X is the intersection of the line P_OP_X with the X axis, where the line P_OP_X is perpendicular to the line P_AP_m and therefore has slope −1/S_X, so it is expressed as:
Y = (−1/S_X)·X + B_XV (7);
Substituting pixel position P_O(X_Po, Y_Po) into formula (7) gives B_XV, so the intersection pixel position P_X(X_PX, Y_PX) of the line P_OP_X with the X axis can be found;
Let the line An from corner anchor A to corner anchor n be the Y axis of world space, and calculate the slope S_Y of line An from the pixel positions P_A and P_n; the straight line through P_Y and P_A is expressed as:
Y = S_Y·X + B_Y (8);
Substituting pixel position P_A or P_n into formula (8) gives B_Y;
The pixel position P_Y is the intersection of the line P_OP_Y with the Y axis, where the line P_OP_Y is perpendicular to the line P_AP_n and therefore has slope −1/S_Y, so it is expressed as:
Y = (−1/S_Y)·X + B_YV (9);
Substituting pixel position P_O(X_Po, Y_Po) into formula (9) gives B_YV, so the intersection pixel position P_Y(X_PY, Y_PY) of the line P_OP_Y with the Y axis can be found;
Step 4: the pixel distances P_XP_A and P_XP_m are calculated by the Pythagorean theorem, and the angles follow from
α = P_XP_A/PPD, β = P_XP_m/PPD (3);
X_T is calculated by the formula:
X_T = (X_m·tan α − X_A·tan β)/(tan α − tan β) (1);
and Z_T by:
Z_T = (X_A − X_m)/(tan α − tan β) (2);
Similarly, the pixel distances P_YP_A and P_YP_n are calculated by the Pythagorean theorem, giving
α′ = P_YP_A/PPD, γ = P_YP_n/PPD (3);
Y_T is calculated by the formula:
Y_T = (Y_n·tan α′ − Y_A·tan γ)/(tan α′ − tan γ) (1);
and Z_T again by:
Z_T = (Y_A − Y_n)/(tan α′ − tan γ) (2);
After averaging the two Z_T values obtained above, the world position (X_T, Y_T, Z_T) of camera T is obtained.
Step 5: the world positions of the individual cameras T are averaged, or converted according to the known positional relationship between the cameras, to obtain the world position of the head-mounted display device; the player's head-mounted display device then broadcasts the player's world position, together with the current game state data, in real time to the other players in the large space.
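A minimal sketch of this serverless broadcast step in Python. The port number and packet fields are illustrative assumptions; a real system would add sequencing, authentication and the space/domain identifier described earlier:

```python
import json
import socket
import time

BROADCAST_ADDR = ("255.255.255.255", 47808)   # illustrative port, not specified by the invention

def broadcast_state(world_pos, game_state, player_id):
    """Peer-to-peer broadcast of the player's world position and game state
    to the other head-mounted displays in the same space (no central server)."""
    packet = json.dumps({
        "player": player_id,
        "pos": world_pos,                      # (XT, YT, ZT) in mm
        "state": game_state,
        "t": time.time(),
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, BROADCAST_ADDR)
```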
Embodiment 2
The second embodiment of the invention provides another positioning method for large-space smart glasses, suitable for metaverse application scenarios in which the head-mounted display devices of players in a multi-user large space interact directly, without training and inference on a central server. A plurality of mutually different texture pictures are distributed on the ground, steps, desktops, walls and/or ceiling of the large space; an image is collected by any one of the cameras on the smart glasses, a texture picture is identified in the image collected by camera T, the picture information corresponding to the texture picture is obtained, and the world position of camera T is calculated from the picture information, the texture picture in the image being deformed by the camera's viewing angle, as shown in fig. 9. The method comprises the following steps:
Step 1: obtain the picture information directly by parsing the texture picture, or obtain the ID of the texture picture and match the ID to the picture information in a look-up table stored locally on the smart glasses;
The picture information comprises the corner anchor world position (X_A, Y_A, Z_A) of the picture and the world sizes (m, n) of the picture along the X and Y axes. Three corner anchor points (A, m, n) are arranged on the texture picture and form an included angle nAm; the corner anchors are square positioning patterns, and corner anchor A is the vertex of the angle nAm. The world position of corner anchor A is defined as the world position (X_A, Y_A, Z_A) of the picture; the line An from corner anchor A to corner anchor n points in the positive direction of the Y axis, the line Am from corner anchor A to corner anchor m points in the positive direction of the X axis, the real-world length of line An is n, and the real-world length of line Am is m;
Step 2: obtain the world position A(X_A, Y_A, Z_A) of corner anchor A, the world position m(X_m, Y_m, Z_m) of corner anchor m and the world position n(X_n, Y_n, Z_n) of corner anchor n from the picture information: X_m = X_A + m, Y_m = Y_A, X_n = X_A, Y_n = Y_A + n, Z_m = Z_n = Z_A;
Step 3: let α be the included angle formed by the normal from camera T perpendicular to the ground or ceiling and the line from camera T to corner anchor A, β the included angle formed by that normal and the line from camera T to corner anchor m, and γ the included angle formed by that normal and the line from camera T to corner anchor n, and calculate the included angles (α, β, γ):
The angles of the upper half of each corner anchor's positioning pattern are measured (fig. 7), and the largest angle is selected as θ; the included angle formed by the normal from camera T perpendicular to the ground or ceiling and the line from camera T to that corner anchor is then:
∠ = θ − 90° (4);
Step 4: obtain through formula (4) the included angles (α, β, γ) formed by the normal from camera T perpendicular to the ground or ceiling and the lines from camera T to the three corner anchors (A, m, n), and calculate the world position (X_T, Y_T, Z_T) of camera T by formula (5): with u = X_T − X_A and v = Y_T − Y_A,
u = (m² + Z_T²·(tan²α − tan²β))/(2m), v = (n² + Z_T²·(tan²α − tan²γ))/(2n),
where Z_T² is the root of the quadratic obtained by substituting u and v into u² + v² = Z_T²·tan²α, and X_T = X_A + u, Y_T = Y_A + v (5);
Formula (5) is derived as follows:
When camera T(X_T, Y_T, Z_T) sees the texture picture, let the lines from camera T to the corner anchors A(X_A, Y_A, Z_A), m(X_m, Y_m, Z_m) and n(X_n, Y_n, Z_n) of the texture picture be H_A, H_m and H_n, and let the included angles formed by the normal from camera T perpendicular to the ground or ceiling and the lines from camera T to the three corner anchors be (α, β, γ); the Pythagorean theorem then gives the following six formulas:
H_A² = (X_T − X_A)² + (Y_T − Y_A)² + Z_T², H_m² = (X_T − X_m)² + (Y_T − Y_m)² + Z_T², H_n² = (X_T − X_n)² + (Y_T − Y_n)² + Z_T²,
Z_T = H_A·cos α, Z_T = H_m·cos β, Z_T = H_n·cos γ;
Solving the six formulas simultaneously yields the functional formula (5) for the world position T(X_T, Y_T, Z_T) of the glasses: eliminating H_A, H_m and H_n gives u² + v² = Z_T²·tan²α, (u − m)² + v² = Z_T²·tan²β and u² + (v − n)² = Z_T²·tan²γ, from which u, v and Z_T follow as stated in step 4 (5);
Combining the six formulas in different ways yields many different but equivalent functional formulas (5) for X_T, Y_T and Z_T.
Step 5: average the world positions of the individual cameras T, or convert them according to the known positional relationship between the cameras, to obtain the world position of the head-mounted display device; the player's head-mounted display device then broadcasts the player's world position, together with the current game state data, in real time to the other players in the large space. The interaction modes include wireless communication, the internet and other existing communication modes.
The three corner anchor points of the texture picture can be positioning patterns without information or independent small texture pictures, and the picture information corresponding to the small texture pictures comprises the world position of the small texture picture and the size of the small texture picture.
The picture information of the texture picture further includes a spatial region domain name for distinguishing a virtual space in the large space.
The picture information of the texture picture also comprises distance information p or q between the texture picture and the adjacent texture picture, namely, the distance p between the texture picture and the adjacent texture picture in the positive X-axis direction and the distance q between the texture picture and the adjacent texture picture in the positive Y-axis direction.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
Specifically, each step of the method embodiment in the embodiment of the present application may be implemented by an integrated logic circuit of hardware in a processor and/or an instruction in a software form, and the steps of the method disclosed in connection with the embodiment of the present application may be directly implemented as a hardware decoding processor or implemented by a combination of hardware and software modules in the decoding processor. Alternatively, the software modules may be located in a well-established storage medium in the art such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, registers, and the like. The storage medium is located in a memory, and the processor reads information in the memory, and in combination with hardware, performs the steps in the above method embodiments.
Embodiment 3
The third embodiment of the invention provides a head-mounted display device; as shown in fig. 12, the head-mounted display device 100 comprises a camera 101, a display 102, a memory 103 and a processor 104;
the camera 101 is connected with the processor 104 and is used for scanning the real environment where the user is located;
The display 102 is connected with the processor 104 and is used for displaying output content of the processor 104;
the memory 103 is connected to the processor 104 for storing a computer program and transmitting the program to the processor 104. In other words, the processor 104 may call and run a computer program from the memory 103 to implement the method according to the first embodiment of the present application.
In some embodiments of the application, the processor 104 may include, but is not limited to:
a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, or the like.
In some embodiments of the application, the memory 103 includes, but is not limited to, volatile memory and/or non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM).
In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 103 and executed by the processor 104 to perform the method of embodiment one provided by the present application. The one or more modules may be a series of computer program instruction segments capable of performing the specified functions, which are used to describe the execution of the computer program on the head-mounted display device 100.
It should be appreciated that the various components in the head-mounted device 100 are connected by a bus system that includes a power bus, a control bus, and a status signal bus in addition to a data bus.
Embodiment 4
The fourth embodiment of the present invention also provides a computer storage medium having a computer program stored thereon, which when executed by a computer enables the computer to perform the method of the first embodiment.
The foregoing description of the embodiments is intended to illustrate the general principles of the invention and is not meant to limit the scope of the invention to the particular embodiments described; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the invention are intended to be included within the scope of the invention.