CN114742933B - A remote annotation method, terminal and system based on mixed reality - Google Patents

A remote annotation method, terminal and system based on mixed reality

Info

Publication number
CN114742933B
CN114742933B (application CN202210258880.7A)
Authority
CN
China
Prior art keywords
dimensional, array, coordinates, annotation, coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210258880.7A
Other languages
Chinese (zh)
Other versions
CN114742933A (en)
Inventor
黄伟杰
张长乐
张梦华
程新功
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Jinan
Original Assignee
University of Jinan
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Jinan
Priority to CN202210258880.7A
Publication of CN114742933A
Application granted
Publication of CN114742933B
Legal status: Active
Anticipated expiration

Abstract

The invention provides a remote labeling method, a terminal and a system based on mixed reality. A two-dimensional labeling array is acquired and converted into a three-dimensional coordinate array; a barycentric coordinate is computed from the sums of all coordinates in the three-dimensional coordinate array along each axis; a ray is emitted at the barycentric coordinate, and the object hit by the ray is determined as the object to be labeled. Subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object yields a transformation displacement; adding the transformation displacement to each point in the three-dimensional coordinate array yields a new three-dimensional array, and rendering a line segment between every two points of the new three-dimensional array completes real-time three-dimensional labeling in space.

Description

Remote labeling method, terminal and system based on mixed reality
Technical Field
The invention belongs to the technical field of remote labeling, and particularly relates to a remote labeling method, a terminal and a system based on mixed reality.
Background
In work and life, when people encounter problems they may seek remote assistance, but the objects to be operated on are often in relatively complex real environments. A traditional video interaction mode offers only a single viewpoint: body language cannot be used, the environment and position of a real object are difficult to describe verbally, and errors easily arise when positions are judged by eye and described in words.
The inventors found that, because the traditional video interaction mode presents only a single viewpoint, cannot use body language, and makes it difficult to describe the environment and position of a real object verbally, accurate remote labeling between the devices of the assisting end (PC end) and the assisted end (HoloLens end) cannot be achieved, and accuracy is low.
Disclosure of Invention
In order to solve the above problems, the invention provides a remote labeling method, a terminal and a system based on mixed reality; the method can realize remote mixed-reality labeling between a HoloLens end and other devices such as a PC end, with high accuracy.
In order to achieve the above object, in a first aspect, the present invention provides a remote labeling method based on mixed reality, which adopts the following technical scheme:
A remote labeling method based on mixed reality comprises the following steps:
Acquiring a two-dimensional annotation array;
Converting the acquired two-dimensional annotation array into a three-dimensional coordinate array;
Computing a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emitting a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; adding each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and rendering a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
Further, when the two-dimensional annotation array is obtained, a line renderer and an inverse interpolation algorithm are used for two-dimensional annotation.
Further, the interpolation lerp between two numbers y1 and y2 is lerp = y1 + (y2 - y1) × weight;
wherein weight is a real number in the interval [0, 1], and the inverse interpolation algorithm uses the known interpolation lerp and the two numbers y1 and y2 to compute the weight value weight = (lerp - y1) / (y2 - y1).
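As a minimal sketch of these two formulas in Unity C# (the engine used in the embodiments below): Unity's built-in Mathf.Lerp and Mathf.InverseLerp compute the same quantities; the explicit forms are written out here for clarity, and the class name is illustrative.

```csharp
// Minimal sketch of the interpolation and inverse-interpolation formulas.
// Unity's Mathf.Lerp / Mathf.InverseLerp are equivalent built-ins.
public static class InterpolationSketch
{
    // lerp = y1 + (y2 - y1) * weight, with weight in [0, 1]
    public static float Lerp(float y1, float y2, float weight)
    {
        return y1 + (y2 - y1) * weight;
    }

    // Inverse interpolation: recover weight from a known lerp value.
    public static float InverseLerp(float y1, float y2, float lerp)
    {
        return (lerp - y1) / (y2 - y1); // assumes y1 != y2
    }
}
```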
Further, the screen-space coordinates of the mouse are automatically captured every frame and saved into a stored coordinate array, and the point count is incremented; meanwhile, each frame the last two coordinates in the coordinate array, (point1, point2), form a new coordinate array. The procedure loops several times: each iteration uses 0 as y1, 3 as y2 and the loop counter as the interpolation lerp to obtain the weight value, computes a supplementary coordinate = (1 - weight) × point1 + weight × point2, adds the supplementary coordinate to the new coordinate array, and increments lerp by 1 for the next iteration. When the loop ends, the two supplementary coordinates between the two points plus the original two coordinate points are obtained; connecting each of the resulting coordinates yields a smooth curve, until the assisting end finishes the labeling.
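A short sketch of this loop under the stated parameters (y1 = 0, y2 = 3, lerp running from 0 to 3), assuming Unity's Vector2 for the two-dimensional points; the class and method names are illustrative.

```csharp
// Sketch of the supplementary-coordinate loop: weight = lerp / 3 yields
// weights 0, 1/3, 2/3, 1, i.e. the two original points plus the two
// supplementary coordinates between them.
using System.Collections.Generic;
using UnityEngine;

public static class SupplementSketch
{
    public static List<Vector2> Supplement(Vector2 point1, Vector2 point2)
    {
        var newPositions = new List<Vector2>();
        for (float lerp = 0f; lerp <= 3f; lerp += 1f)
        {
            float weight = (lerp - 0f) / (3f - 0f);           // inverse interpolation
            newPositions.Add((1f - weight) * point1 + weight * point2);
        }
        return newPositions;
    }
}
```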
Further, when converting the acquired two-dimensional annotation array into a three-dimensional coordinate array, the two-dimensional screen coordinate array is converted into a three-dimensional coordinate array at the same position in the object picture used by the assisted terminal. The barycentric coordinate is then computed as (x, y, z) = (xm/m, ym/m, zm/m),
where xm is the sum of the x-axis components of all coordinates, ym is the sum of the y-axis components, zm is the sum of the z-axis components, and m is the number of coordinates.
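A sketch of the conversion and centroid computation, assuming Unity's Camera.ScreenToWorldPoint for the screen-to-world step; the fixed depth value passed to it is an assumption, since the text does not specify how depth is chosen.

```csharp
// Sketch: project the 2D screen coordinates into the 3D scene, then compute
// the centroid (x, y, z) = (xm/m, ym/m, zm/m).
using UnityEngine;

public static class BarycenterSketch
{
    public static Vector3[] ToWorld(Vector2[] screenPoints, Camera cam, float depth)
    {
        var world = new Vector3[screenPoints.Length];
        for (int i = 0; i < screenPoints.Length; i++)
            world[i] = cam.ScreenToWorldPoint(
                new Vector3(screenPoints[i].x, screenPoints[i].y, depth));
        return world;
    }

    public static Vector3 Barycenter(Vector3[] points)
    {
        Vector3 sum = Vector3.zero;
        foreach (var p in points) sum += p;   // accumulates xm, ym, zm
        return sum / points.Length;           // divide each sum by m
    }
}
```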
In order to achieve the above purpose, in a second aspect, the present invention further provides a remote labeling terminal based on mixed reality, which adopts the following technical scheme:
a mixed reality based remote annotation terminal comprising at least a processor configured to:
Acquiring a two-dimensional annotation array;
Converting the acquired two-dimensional annotation array into a three-dimensional coordinate array;
Computing a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emitting a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; adding each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and rendering a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
When a video voice call invitation is initiated, the remote server pulls the token of the labeling terminal and the token of the assisted terminal into the same room, and sends each party the other's internet protocol address and port, so that the assisting end and the assisted end establish a user datagram protocol (UDP) connection.
In order to achieve the above purpose, in a third aspect, the present invention further provides a remote labeling system based on mixed reality, which adopts the following technical scheme:
a mixed reality based remote labeling system, comprising:
a remote server, and at least one assisting terminal and at least one assisted terminal connected thereto;
When a video voice call invitation is initiated, the remote server pulls the token of the labeling terminal and the token of the assisted terminal into the same room, and sends each party the other's internet protocol address and port, so that the assisting end and the assisted end establish a user datagram protocol (UDP) connection;
The labeling module is configured to: compute a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
In order to achieve the above object, in a fourth aspect, the present invention further provides a remote labeling system based on mixed reality, which adopts the following technical scheme:
A remote labeling system based on mixed reality comprises a data acquisition module, a conversion module and a labeling module;
the data acquisition module is configured to acquire a two-dimensional annotation array;
the conversion module is configured to convert the acquired two-dimensional annotation array into a three-dimensional coordinate array;
The labeling module is configured to: compute a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
In order to achieve the above object, the present invention also provides, in a fifth aspect, a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the mixed reality based remote labeling method of the first aspect.
In order to achieve the above object, the present invention further provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the mixed reality based remote labeling method according to the first aspect when executing the program.
Compared with the prior art, the invention has the beneficial effects that:
The received two-dimensional coordinate array is converted into a three-dimensional coordinate array at the same position in the object picture used by the assisted end; a barycentric coordinate is computed and a ray is emitted from it, the object hit by the ray being the object to be labeled; subtracting the barycentric coordinate from the coordinates of the impact point yields a transformation displacement; adding the transformation displacement to each point in the three-dimensional coordinate array yields a new three-dimensional array, and rendering a line segment between every two points of the new three-dimensional array completes real-time three-dimensional labeling in space. The experimental results show that the method and system provided by the invention can realize remote mixed-reality labeling between a HoloLens end and other devices such as a PC end, with high accuracy.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments and are incorporated in and constitute a part of this specification, illustrate and explain the embodiments and together with the description serve to explain the embodiments.
FIG. 1 is a schematic diagram of an encapsulation of application layer communication according to embodiment 1 of the present invention;
fig. 2 is a schematic diagram of a specific structure of a voice-video call according to embodiment 1 of the present invention;
FIG. 3 is a two-dimensional labeling flow chart of embodiment 1 of the present invention;
FIG. 4 is a three-dimensional labeling flow chart of embodiment 1 of the present invention;
fig. 5 is a schematic diagram of labeling effect performed by the assisting terminal in embodiment 1 of the present invention;
fig. 6 is a schematic diagram showing the labeling effect of the assisted terminal in real time according to embodiment 1 of the present invention.
Detailed Description
The invention will be further described with reference to the drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the application. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
Example 1:
As shown in fig. 1, the present embodiment provides a remote labeling method based on mixed reality, including:
Acquiring a two-dimensional annotation array;
Converting the acquired two-dimensional annotation array into a three-dimensional coordinate array;
Computing a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emitting a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; adding each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and rendering a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
The method in this embodiment can be implemented with a remote server, an assisting terminal (the assisting end) and a labeling terminal (the assisted end), the server being connected to the assisting terminal and the assisted terminal. In this embodiment, the assisting terminal is configured to perform two-dimensional labeling and send it to the assisted terminal. The assisted terminal is configured to convert the two-dimensional labeling into a three-dimensional labeling and display it at a specific position in real space in real time: it converts the received two-dimensional coordinate array into a three-dimensional coordinate array at the same position in the object picture, computes a barycentric coordinate and emits a ray from it, the object hit by the ray being the object to be labeled; subtracting the barycentric coordinate from the coordinates of the impact point yields a transformation displacement; adding the transformation displacement to each point in the three-dimensional coordinate array yields a new three-dimensional array, and rendering a line segment between every two points of the new three-dimensional array completes the real-time three-dimensional labeling in space.
Specifically, as shown in fig. 2, the working process or principle of the remote labeling system based on mixed reality in this embodiment is as follows:
s1, constructing a remote server, and connecting an assisted end with an assisted end;
The server is built by combining the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). The TCP protocol offers very good reliability: before data is transmitted over TCP, a three-way handshake ensures that the sending and receiving ends are synchronized so that each transmission can be tracked and negotiated; during transmission, mechanisms such as acknowledgement, timeout retransmission, sliding windows, congestion control, flow control and delayed acknowledgement ensure data integrity; and after transmission completes, the connection is automatically closed, saving system resources.
As shown in fig. 1, a user pool (UserTokenPool) is used, and communication efficiency is improved by pulling users into a Room. The server uniformly handles connection and disconnection requests for both protocols through UnityNetWorkManager; received messages are handed to MessageHandlerCenter, which identifies their type and content and executes the corresponding operations, while MessageSendManager sends requests. The application layer then only interacts with the MessageHandlerCenter and MessageSendManager classes, which achieves the encapsulation of application-layer communication.
When the server starts, the port and user pool are initialized, client-connection events are registered, and receive and send events are registered for all users in all user pools. Each time a user connects to the server, a user (UserToken) is popped (Pop) from the user pool and assigned the socket of the currently connected user; after the server receives the user's request, it makes the corresponding judgment for that user and returns a result. The server also checks heartbeat times: each user sends a heartbeat packet to the server every 30 seconds so that the server can judge whether the user is online; if a user times out, or disconnects actively, the connection is closed.
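A minimal sketch of the heartbeat bookkeeping described above; the class, the token type and the exact timeout policy are illustrative assumptions rather than the patent's actual server code.

```csharp
// Heartbeat sketch: clients send a packet every 30 s; the server records the
// last time each token was seen and disconnects tokens that go quiet.
// Allowing two missed intervals (60 s) before disconnecting is an assumption.
using System;
using System.Collections.Generic;

public class HeartbeatMonitor
{
    private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(60);
    private readonly Dictionary<string, DateTime> lastSeen = new Dictionary<string, DateTime>();

    public void OnHeartbeat(string userToken) => lastSeen[userToken] = DateTime.UtcNow;

    // Called periodically by the server loop; returns the tokens to disconnect.
    public List<string> CollectTimedOut()
    {
        var expired = new List<string>();
        foreach (var entry in lastSeen)
            if (DateTime.UtcNow - entry.Value > Timeout) expired.Add(entry.Key);
        foreach (var token in expired) lastSeen.Remove(token);
        return expired;
    }
}
```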
S2, establishing video connection;
After the assisting terminal and the assisted terminal have each established a connection with the server, when a video voice call invitation is initiated, the remote server pulls the Token of the assisting terminal and the Token of the assisted terminal into the same Room and sends each party the other's internet protocol (IP) address and Port, whereby the assisting terminal and the assisted terminal establish a User Datagram Protocol (UDP) connection.
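A sketch of the resulting peer-to-peer channel using System.Net.Sockets.UdpClient, once each side has received the other's IP address and port from the server; the wrapper class is an illustrative assumption.

```csharp
// UDP peer sketch: bind a local port and exchange datagrams with the peer
// whose endpoint the server announced.
using System.Net;
using System.Net.Sockets;

public class UdpPeer
{
    private readonly UdpClient client;
    private readonly IPEndPoint remote;

    public UdpPeer(int localPort, string remoteIp, int remotePort)
    {
        client = new UdpClient(localPort);
        remote = new IPEndPoint(IPAddress.Parse(remoteIp), remotePort);
    }

    public void Send(byte[] payload) => client.Send(payload, payload.Length, remote);

    public byte[] Receive()
    {
        IPEndPoint from = null;
        return client.Receive(ref from);   // blocking receive of one datagram
    }
}
```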
After the connection is established, a designated webcam is opened through the Unity application programming interface (API) to acquire video data, and a real-time camera texture (WebCamTexture) is created. The texture must be converted to the Texture2D format before the video data can be used; with a RenderTexture as an intermediary, a Texture2D is created whose image data points directly at the memory of the real-time camera texture, which saves memory while completing the conversion and acquiring the video data.
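A sketch of this conversion with Unity's WebCamTexture, RenderTexture and Texture2D APIs; the texture format and the use of the requested camera size are assumptions.

```csharp
// Copy the live camera image into a Texture2D through a RenderTexture
// intermediary so the frame can be encoded and sent.
using UnityEngine;

public class WebcamCapture : MonoBehaviour
{
    private WebCamTexture webcam;
    private RenderTexture rt;
    private Texture2D frame;

    void Start()
    {
        webcam = new WebCamTexture();       // default (designated) camera
        webcam.Play();
        rt = new RenderTexture(webcam.requestedWidth, webcam.requestedHeight, 0);
        frame = new Texture2D(rt.width, rt.height, TextureFormat.RGB24, false);
    }

    public Texture2D CaptureFrame()
    {
        Graphics.Blit(webcam, rt);          // GPU copy of the current frame
        var previous = RenderTexture.active;
        RenderTexture.active = rt;
        frame.ReadPixels(new Rect(0, 0, rt.width, rt.height), 0, 0);
        frame.Apply();
        RenderTexture.active = previous;
        return frame;
    }
}
```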
S3, the assisting end marks and sends the two-dimensional mark to the assisted end;
as shown in fig. 3, a line renderer and an inverse interpolation algorithm may be used in the labeling process, specifically:
The interpolation lerp between two numbers y1 and y2 is lerp = y1 + (y2 - y1) × weight, where weight is a real number in the interval [0, 1]; the inverse interpolation algorithm uses the known interpolation lerp and the two numbers y1 and y2 to find the weight value weight = (lerp - y1) / (y2 - y1).
After the assisting end clicks to start labeling, whenever the left mouse button is held down, the screen-space coordinates of the mouse in each frame are automatically captured and saved into the stored coordinate array (Positions), and the point count is incremented. Meanwhile, the last two coordinates in the Positions array, (point1, point2), form a new coordinate array (NewPositions). The procedure loops several times, 4 times in this embodiment: each iteration uses 0 as y1, 3 as y2 and the loop counter (starting at 0) as lerp to obtain the weight value, computes the supplementary coordinate = (1 - weight) × point1 + weight × point2, and adds the computed supplementary coordinate to the new coordinate array; lerp is then incremented by 1 for the next iteration. After the 4 iterations, the two supplementary coordinates between the two points and the original two coordinate points are obtained, as shown in table 1. Connecting each of the resulting coordinates yields a smooth curve; this repeats until the assisting end finishes labeling and the Positions array is sent to the assisted end.
Table 1: Supplementary coordinates and the original two coordinate points
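The capture side of step S3 can be sketched as follows, assuming a Unity LineRenderer for the on-screen stroke; smoothing of the last two points would reuse the supplementary-coordinate loop sketched earlier, and the send step is left as a comment. Names are illustrative.

```csharp
// Capture the mouse position each frame while the left button is held,
// append it to Positions, and redraw the stroke with a LineRenderer.
// NOTE: a LineRenderer draws in world space by default; rendering the stroke
// as a screen overlay (or mapping the points) is an assumed detail.
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(LineRenderer))]
public class AnnotationCapture : MonoBehaviour
{
    private readonly List<Vector3> positions = new List<Vector3>();
    private LineRenderer line;

    void Start() => line = GetComponent<LineRenderer>();

    void Update()
    {
        if (Input.GetMouseButton(0))              // left button held down
        {
            positions.Add(Input.mousePosition);   // screen-space coordinate
            line.positionCount = positions.Count;
            line.SetPositions(positions.ToArray());
        }
        if (Input.GetMouseButtonUp(0))
        {
            // Labeling finished: send the Positions array to the assisted end.
        }
    }
}
```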
S4, the assisted end converts the two-dimensional annotation into a three-dimensional annotation, and displays the three-dimensional annotation at a specific position in a display space in real time to finish the annotation;
As shown in fig. 4, after the assisted terminal receives the coordinate array from the assisting terminal, the received coordinates are converted, using the Unity application programming interface (API), from the assisted terminal's user (camera) screen into a three-dimensional coordinate array at the same position. The algorithm (x, y, z) = (xm/m, ym/m, zm/m) is used to obtain the barycentric coordinate (x, y, z), where xm is the sum of the x-axis components of all coordinates, ym is the sum of the y-axis components, zm is the sum of the z-axis components, and m is the number of coordinates. A ray is emitted forwards from the barycentric coordinate; the object hit by the ray is the object to be labeled, and the coordinates (n, o, a) of the collision point are stored. Subtracting the barycentric coordinate from the collision-point coordinates gives the transformation displacement (a, b, c); adding the transformation displacement to each point in the three-dimensional coordinate array yields a new three-dimensional array, and rendering a line segment between every two points of the new three-dimensional array completes the real-time three-dimensional labeling in space.
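A sketch of step S4, assuming Unity's Physics.Raycast; since the text only says the ray is emitted "forwards", the camera's forward direction is used here as an assumption.

```csharp
// Move the converted 3D stroke onto the object hit by a ray from its
// barycenter, then render a segment between every two consecutive points.
using UnityEngine;

public class SpatialAnnotator : MonoBehaviour
{
    public LineRenderer line;

    public void Annotate(Vector3[] points)
    {
        Vector3 barycenter = Vector3.zero;
        foreach (var p in points) barycenter += p;
        barycenter /= points.Length;                        // (xm/m, ym/m, zm/m)

        if (Physics.Raycast(barycenter, Camera.main.transform.forward, out RaycastHit hit))
        {
            Vector3 displacement = hit.point - barycenter;  // (a, b, c)
            var moved = new Vector3[points.Length];
            for (int i = 0; i < points.Length; i++)
                moved[i] = points[i] + displacement;

            line.positionCount = moved.Length;
            line.SetPositions(moved);                       // real-time 3D labeling
        }
    }
}
```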
S5, as shown in fig. 5 and fig. 6: the experimental results show that the method provided by this patent can realize remote labeling between the devices of the assisting terminal and the assisted terminal, with high accuracy.
Example 2:
The present embodiment provides a remote labeling terminal based on mixed reality, comprising at least a processor; the other components of the terminal are conventional. The processor is configured to:
Acquiring a two-dimensional annotation array;
Converting the acquired two-dimensional annotation array into a three-dimensional coordinate array;
Computing a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emitting a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; adding each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and rendering a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
When a video voice call invitation is initiated, the remote server pulls the token of the labeling terminal and the token of the assisted terminal into the same room, and sends each party the other's internet protocol address and port, so that the assisting end and the assisted end establish a user datagram protocol (UDP) connection.
The working method of the terminal is the same as that of the mixed reality-based remote labeling method in embodiment 1, and will not be described here again.
Example 3:
a mixed reality based remote labeling system, comprising:
a remote server, and at least one assisting terminal and at least one assisted terminal connected thereto;
When a video voice call invitation is initiated, the remote server pulls the token of the labeling terminal and the token of the assisted terminal into the same room, and sends each party the other's internet protocol address and port, so that the assisting end and the assisted end establish a user datagram protocol (UDP) connection;
The labeling module is configured to: compute a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
The working method of the system is the same as that of the mixed reality-based remote labeling method in embodiment 1, and will not be described here again.
Example 4:
the embodiment provides a remote labeling system based on mixed reality, which comprises a data acquisition module, a conversion module and a labeling module;
the data acquisition module is configured to acquire a two-dimensional annotation array;
the conversion module is configured to convert the acquired two-dimensional annotation array into a three-dimensional coordinate array;
The labeling module is configured to: compute a barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be labeled; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional labeling in space.
The working method of the system is the same as that of the mixed reality-based remote labeling method in embodiment 1, and will not be described here again.
Example 5:
the present embodiment provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the mixed reality based remote labeling method described in embodiment 1.
Example 6:
The present embodiment provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the steps of the mixed reality-based remote labeling method described in embodiment 1 are implemented when the processor executes the program.
The above description is only a preferred embodiment of the present embodiment, and is not intended to limit the present embodiment, and various modifications and variations can be made to the present embodiment by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present embodiment should be included in the protection scope of the present embodiment.

Claims (9)

(Translated from Chinese)

1. A remote annotation method based on mixed reality, characterized by comprising: acquiring a two-dimensional annotation array; converting the acquired two-dimensional annotation array into a three-dimensional coordinate array; using the algorithm (x, y, z) = (xm/m, ym/m, zm/m) to find the barycentric coordinate, where xm is the sum of the x-axis components of all coordinates, ym is the sum of the y-axis components, zm is the sum of the z-axis components, and m is the number of coordinates; emitting a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be annotated; subtracting the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; adding each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and rendering a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional annotation in space; wherein the acquisition of the two-dimensional annotation array and its conversion into a three-dimensional coordinate array can be implemented by a remote server, an assisting end and an assisted end, the server connecting the assisting end and the assisted end; the assisting end is configured to perform two-dimensional annotation and send it to the assisted end; the assisted end is configured to convert the two-dimensional annotation array into a three-dimensional array and display it at a specific position in real space in real time.

2. The remote annotation method based on mixed reality according to claim 1, characterized in that, when the two-dimensional annotation array is acquired, a line renderer and an inverse interpolation algorithm are used for the two-dimensional annotation.

3. The remote annotation method based on mixed reality according to claim 2, characterized in that the interpolation between two numbers y1 and y2 is lerp = y1 + (y2 - y1) × weight, where weight is a real number in the interval [0, 1]; the inverse interpolation algorithm uses the known interpolation lerp and the two numbers y1 and y2 to find the value of weight.

4. The remote annotation method based on mixed reality according to claim 3, characterized in that the screen-space coordinates of the mouse are automatically captured and saved into a stored coordinate array every frame, and a point count is incremented; at the same time, every frame the last two coordinates in the coordinate array, (point1, point2), form a new coordinate array; the procedure loops several times, each loop using 0 as y1, 3 as y2 and the loop counter as the interpolation lerp to find the value of weight, computing the supplementary coordinate = (1 - weight) × point1 + weight × point2, adding the supplementary coordinate to the new coordinate array, and incrementing lerp by 1 to enter the next loop; when the loops end, the two supplementary coordinates between the two points and the original two coordinate points are obtained, and connecting each of the resulting coordinates yields a smooth curve, until the assisting end finishes the annotation.

5. A remote annotation terminal based on mixed reality adopting the remote annotation method based on mixed reality according to claim 1, characterized by comprising at least a processor configured to: acquire a two-dimensional annotation array; convert the acquired two-dimensional annotation array into a three-dimensional coordinate array; use the algorithm (x, y, z) = (xm/m, ym/m, zm/m) to find the barycentric coordinate, where xm is the sum of the x-axis components of all coordinates, ym is the sum of the y-axis components, zm is the sum of the z-axis components, and m is the number of coordinates; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be annotated; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional annotation in space.

6. A remote annotation system based on mixed reality adopting the remote annotation method based on mixed reality according to claim 1, characterized by comprising: a remote server, and at least one assisting terminal and at least one assisted terminal connected thereto; the assisted terminal is the annotation terminal and is connected with the assisting terminals through the remote server; when a video voice call invitation is initiated, the remote server pulls the token of the annotation terminal and the token of the assisted terminal into the same room and sends each party the other's internet protocol address and port, so that the assisting end and the assisted end establish a User Datagram Protocol (UDP) connection; and an annotation module configured to: find the barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be annotated; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional annotation in space.

7. A remote annotation system based on mixed reality adopting the remote annotation method based on mixed reality according to claim 1, characterized by comprising a data acquisition module, a conversion module and an annotation module; the data acquisition module is configured to acquire a two-dimensional annotation array; the conversion module is configured to convert the acquired two-dimensional annotation array into a three-dimensional coordinate array; the annotation module is configured to: find the barycentric coordinate from the sums of all coordinates in the three-dimensional coordinate array along each direction; emit a ray at the barycentric coordinate, the object hit by the ray being determined as the object to be annotated; subtract the barycentric coordinate from the coordinates of the point where the ray collides with the object to obtain a transformation displacement; add each point in the three-dimensional coordinate array to the transformation displacement to obtain a new three-dimensional array; and render a line segment between every two points of the new three-dimensional array to complete real-time three-dimensional annotation in space.

8. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the steps of the remote annotation method based on mixed reality according to any one of claims 1-4.

9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor, when executing the program, implements the steps of the remote annotation method based on mixed reality according to any one of claims 1-4.
CN202210258880.7A | Priority date 2022-03-16 | Filing date 2022-03-16 | A remote annotation method, terminal and system based on mixed reality | Active | CN114742933B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210258880.7A | 2022-03-16 | 2022-03-16 | A remote annotation method, terminal and system based on mixed reality

Publications (2)

Publication Number | Publication Date
CN114742933A (en) | 2022-07-12
CN114742933B (en) | 2025-09-26

Family

ID=82276922

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202210258880.7A | A remote annotation method, terminal and system based on mixed reality (Active) | 2022-03-16 | 2022-03-16

Country Status (1)

Country | Link
CN | CN114742933B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
GB2617218B (en)* | 2022-09-27 | 2024-04-10 | Imagination Tech Ltd | Ray tracing
CN117218320B (en)* | 2023-11-08 | 2024-02-27 | University of Jinan | Space labeling method based on mixed reality

Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
CN113242398A (en)* | 2021-04-16 | 2021-08-10 | Hangzhou Yixian Advanced Technology Co., Ltd. | Three-dimensional labeled audio and video call method and system
CN113496507A (en)* | 2020-03-20 | 2021-10-12 | Huawei Technologies Co., Ltd. | Human body three-dimensional model reconstruction method

Family Cites Families (1)

Publication number | Priority date | Publication date | Assignee | Title
CN108830894B (en)* | 2018-06-19 | 2020-01-17 | 亮风台(上海)信息科技有限公司 | Augmented reality-based remote guidance method, device, terminal and storage medium

Patent Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
CN113496507A (en)* | 2020-03-20 | 2021-10-12 | Huawei Technologies Co., Ltd. | Human body three-dimensional model reconstruction method
CN113242398A (en)* | 2021-04-16 | 2021-08-10 | Hangzhou Yixian Advanced Technology Co., Ltd. | Three-dimensional labeled audio and video call method and system

Also Published As

Publication number | Publication date
CN114742933A (en) | 2022-07-12

Similar Documents

Publication | Title
CN114742933B (en) | A remote annotation method, terminal and system based on mixed reality
EP2833242A1 (en) | Method for transferring playing of digital media contents and playing device and system
CN110866977B (en) | Augmented reality processing method and device, system, storage medium and electronic equipment
CN107113396B (en) | Method implemented at user terminal during video call, user terminal and computer-readable storage medium
EP2478709B1 (en) | Three dimensional (3D) video for two-dimensional (2D) video messenger applications
CN102833321B (en) | Embedded device remote assistance control method and system
CN112241201B (en) | Remote labeling method and system for augmented/mixed reality
CN113242398A (en) | Three-dimensional labeled audio and video call method and system
EP3261341A2 (en) | Video image generation system and video image generating method thereof
CN107360364B (en) | Image capturing method, terminal and computer-readable storage medium
JP3610423B2 (en) | Video display system and method for improving its presence
CN114979564B (en) | Video shooting method, electronic equipment, device, system and medium
CN110168630A (en) | Enhance video reality
CN111524240A (en) | Scene switching method, device and augmented reality device
CN110990109A (en) | Spliced screen redisplay method, terminal, system and storage medium
JP6830112B2 (en) | Projection suitability detection system, projection suitability detection method and projection suitability detection program
WO2019085623A1 (en) | Interaction method and device
WO2011043334A1 (en) | Communication terminal, communication method, computer-readable recording medium having communication program recorded thereon, and network system
CN110418127B (en) | An operation method of a virtual-real fusion device based on a pixel template in a Web environment
CN115686206A (en) | Camera module AI system
CN115761190B (en) | Multi-user augmented reality photo browsing method and system based on scene mapping
WO2023088383A1 (en) | Method and apparatus for repositioning target object, storage medium and electronic apparatus
KR102391898B1 (en) | Remote access system for transmitting video data and method performing there of
CN106791888A (en) | The transmission method and device of the panoramic pictures based on user perspective
CN112148122A (en) | Third-party visual angle implementation method for wearable augmented/mixed reality equipment

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
