
Rendering position prediction method and device, electronic device and storage medium

Info

Publication number
CN113538701B
Authority
CN
China
Prior art keywords
information
moment
motion state
prediction
prediction information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110721791.7A
Other languages
Chinese (zh)
Other versions
CN113538701A (en)
Inventor
杭蒙
刘浩敏
章国锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Shangtang Technology Development Co Ltd
Original Assignee
Zhejiang Shangtang Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Shangtang Technology Development Co Ltd
Priority to CN202110721791.7A
Publication of CN113538701A
Application granted
Publication of CN113538701B
Status: Active
Anticipated expiration

Abstract


The present disclosure relates to a rendering position prediction method and device, an electronic device, and a storage medium. The method comprises: obtaining motion state information and inertial sensing information of a camera device at a first moment; predicting the motion state of the camera device at a second moment according to the motion state information and the inertial sensing information of the camera device at the first moment to obtain first prediction information; filtering the first prediction information to obtain second prediction information; and determining, based on the second prediction information, the rendering position of an augmented reality (AR) object at a target moment in an image captured by the camera device. The embodiments of the present disclosure can improve the accuracy of rendering position prediction.

Description

Rendering position prediction method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a rendering position prediction method and device, electronic equipment and a storage medium.
Background
Augmented reality (AR) technology is a technology that combines real information with virtual information. Virtual information (visual, sound, or tactile information) that does not originally exist in the real world can be superimposed onto the real information through AR technology and perceived by a person, thereby providing a sensory experience beyond reality.
When virtual information is combined with real information, the virtual information may be superimposed on an image of the real scene through image rendering. However, when virtual information is rendered in an image in real time, the determined rendering position has a certain deviation.
Disclosure of Invention
The present disclosure proposes a rendering position prediction technique.
According to an aspect of the present disclosure, there is provided a rendering position prediction method including:
The method comprises the steps of obtaining motion state information and inertial sensing information of an image pickup device at a first moment, predicting the motion state of the image pickup device at a second moment according to the motion state information and the inertial sensing information of the image pickup device at the first moment to obtain first prediction information, filtering the first prediction information to obtain second prediction information, and determining the rendering position of an Augmented Reality (AR) object at a target moment in an image acquired by the image pickup device based on the second prediction information.
In some possible implementations, the predicting the motion state of the image capturing device at the second moment according to the motion state information of the image capturing device at the first moment and the inertial sensing information to obtain first prediction information includes obtaining first estimation information for predicting the motion state of the image capturing device at the second moment according to the motion state information of the image capturing device at the first moment, and updating the first estimation information based on the inertial sensing information to obtain the first prediction information.
In some possible implementations, the motion state information includes parameter values of a first motion parameter and a second motion parameter at the first time, the first motion parameter and the second motion parameter satisfy a preset motion relation, the first estimation information includes parameter values of the second motion parameter at the second time, and the first estimation information for predicting the motion state of the image capturing device at the second time according to the motion state information of the image capturing device at the first time includes determining the parameter values of the second motion parameter at the second time based on the parameter values of the first motion parameter at the first time and the parameter values of the second motion parameter at the first time.
In some possible implementations, the predicting the motion state of the image capturing device at the second moment according to the motion state information of the image capturing device at the first moment and the inertial sensing information to obtain first prediction information includes determining measurement information of the image capturing device at the second moment based on the inertial sensing information, and inputting the motion state information of the image capturing device at the first moment and the measurement information into a first filter to obtain the first prediction information.
In some possible implementations, the filtering the first prediction information to obtain second prediction information includes inputting the first prediction information into a second filter to obtain the second prediction information.
In some possible implementations, the inputting the first prediction information into a second filter to obtain the second prediction information includes predicting, by using the second filter, a motion state of the image capturing device at a second moment according to the motion state information of the image capturing device at the first moment to obtain second estimation information, and updating the second estimation information based on the first prediction information to obtain the second prediction information.
In some possible implementations, determining the rendering position of the AR object at the target time in the image acquired by the image capturing device based on the second prediction information includes acquiring pose information of the image capturing device at the target time based on the second prediction information, and determining the rendering position of the AR object at the target time in the image acquired by the image capturing device according to the pose information.
In some possible implementations, the determining the rendering position of the AR object in the image acquired by the image capturing device at the target time according to the pose information includes determining the position of a spatial point captured by the image capturing device according to the pose information of the image capturing device at the target time, determining the projection position of the spatial point in the image according to the position of the spatial point, and determining the rendering position of the AR object in the image acquired by the image capturing device at the target time according to the projection position.
According to an aspect of the present disclosure, there is provided a rendering position prediction apparatus including:
The acquisition module is used for acquiring the motion state information and the inertial sensing information of the image pickup device at the first moment;
The prediction module is used for predicting the motion state of the image pickup device at the second moment according to the motion state information of the image pickup device at the first moment and the inertial sensing information to obtain first prediction information;
The filtering module is used for carrying out filtering processing on the first prediction information to obtain second prediction information;
And the determining module is used for determining the rendering position of the Augmented Reality (AR) object at the target moment in the image acquired by the image pickup device based on the second prediction information.
In some possible implementations, the prediction module is configured to obtain first estimation information for predicting a motion state of the image capturing device at a second moment according to motion state information of the image capturing device at the first moment, and update the first estimation information based on the inertial sensing information to obtain the first prediction information.
In some possible implementations, the motion state information includes parameter values of a first motion parameter and a second motion parameter at the first time, the first motion parameter and the second motion parameter satisfy a preset motion relationship, the first estimation information includes parameter values of the second motion parameter at the second time, and the prediction module is configured to determine parameter values of the second motion parameter at the second time based on the parameter values of the first motion parameter at the first time and the parameter values of the second motion parameter at the first time.
In some possible implementations, the prediction module is configured to determine measurement information of the image capturing device at a second moment based on the inertial sensing information, and input motion state information of the image capturing device at a first moment and the measurement information into a first filter to obtain the first prediction information.
In some possible implementations, the filtering module is configured to input the first prediction information into a second filter to obtain the second prediction information.
In some possible implementations, the filtering module is configured to predict, according to the motion state information of the image capturing device at the first moment, the motion state of the image capturing device at the second moment by using the second filter to obtain second estimation information, and update the second estimation information based on the first prediction information to obtain the second prediction information.
In some possible implementations, the determining module is configured to obtain pose information of the image capturing device at a target moment based on the second prediction information, and determine a rendering position of the AR object in an image captured by the image capturing device at the target moment according to the pose information.
In some possible implementations, the determining module is configured to determine a position of a spatial point captured by the image capturing device according to pose information of the image capturing device at a target moment, determine a projection position of the spatial point in the image according to the position of the spatial point, and determine a rendering position of the AR object in the image captured by the image capturing device at the target moment according to the projection position.
According to an aspect of the disclosure, there is provided an electronic device comprising a processor and a memory for storing processor-executable instructions, wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiment of the disclosure, the motion state information and the inertial sensing information of the image pickup device at the first moment can be obtained, the motion state of the image pickup device at the second moment can then be predicted according to the motion state information and the inertial sensing information of the image pickup device at the first moment to obtain the first prediction information, and the first prediction information is then filtered to obtain the second prediction information, so that jitter in the prediction result is reduced. The rendering position of the AR object at the target moment in the image acquired by the image pickup device is further determined based on the second prediction information, so that the rendering position of the AR object at the target moment can be predicted and the deviation of the rendering position is reduced.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 illustrates a flowchart of a rendering position prediction method according to an embodiment of the present disclosure.
Fig. 2 illustrates a flowchart of a rendering position prediction method according to an embodiment of the present disclosure.
Fig. 3 illustrates a block diagram of a rendering position prediction apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Fig. 5 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean that a exists alone, while a and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In the related art, the rendering frame rate of most AR glasses is greater than 60 Hz, while the image acquisition frame rate is only 30 Hz. Because of this difference between the rendering frame rate and the acquisition frame rate, and because image rendering itself takes a certain amount of time, the rendering position determined based on the latest image has a certain deviation, so that the picture the AR glasses present to the user looks poor.
The rendering position prediction scheme provided by the embodiment of the disclosure can be applied to scenes such as AR wearable equipment, image rendering, pose prediction and the like. For example, in the process that the user wears the AR glasses, the rendering position at a future time can be predicted in real time, the AR object can be rendered in the image according to the rendering position, and the rendering delay existing in the image rendering can be considered, so that the possible deviation of the rendering position of the AR object on the image is reduced, the reality of the picture presented by the AR glasses to the user is improved, and the user experience is improved.
The rendering position prediction method provided in the embodiments of the present disclosure may be performed by a terminal device, a server, or other types of electronic devices, where the terminal device may be user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. In some possible implementations, the method may be implemented by a processor invoking computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server. For convenience of description, the execution subject of the rendering position prediction method will be collectively referred to as a terminal hereinafter.
Fig. 1 illustrates a flowchart of a rendering position prediction method according to an embodiment of the present disclosure, as illustrated in fig. 1, including:
step S11, motion state information and inertial sensing information of the image pickup device at a first moment are acquired.
In the embodiment of the disclosure, the terminal can acquire the motion state information and the inertial sensing information of the image pickup device in real time. The motion state information may be used to represent a motion state of an image capturing device, which may be a device for capturing a scene, for example, a camera, or the like. The motion state of the camera device can be changed in real time, and the motion state information of the camera device can be different at different moments. The inertial sensing information may be information measured by an inertial sensor, and the inertial sensing information may include acceleration of three axes and angular velocity of the three axes, wherein the three axes refer to an x-axis, a y-axis, and a z-axis of a coordinate system established with a center of the image pickup device as an origin. The inertial sensor can measure inertial sensing information of the image pickup device in real time. In some implementations, the terminal may be configured with an image capturing device and an inertial sensor, and the terminal may acquire motion state information and inertial sensing information of the image capturing device in real time. In some implementations, the camera and the inertial sensor may be configured in other devices, and the terminal may obtain motion state information and inertial sensing information of the camera from the other devices. The first time may be any known time, for example, the first time may be a current time, and the terminal may acquire motion state information and inertial sensing information of the image capturing device at the first time.
In some implementations, the motion state information may include at least one of the following motion parameters: acceleration of the acceleration, acceleration, speed, position, acceleration of the angular acceleration, angular acceleration, angular velocity, and azimuth angle. The camera device moves in real time and may both translate and rotate, and its motion state can be described by one or more of these motion parameters. The motion parameters acceleration of the acceleration, acceleration, speed, and position can describe the translation of the camera device, and the motion parameters acceleration of the angular acceleration, angular acceleration, angular velocity, and azimuth angle can describe the rotation of the camera device.
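For illustration only, the motion state information and inertial sensing information described above could be organized as in the following Python sketch; the class and field names are assumptions made for this example and are not part of the disclosure.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class MotionState:
    """Motion state information of the image pickup device at one moment."""
    jerk: np.ndarray                  # acceleration of the acceleration
    acceleration: np.ndarray
    velocity: np.ndarray
    position: np.ndarray
    angular_jerk: np.ndarray          # acceleration of the angular acceleration
    angular_acceleration: np.ndarray
    angular_velocity: np.ndarray
    azimuth: np.ndarray               # azimuth angle / orientation


@dataclass
class ImuReading:
    """Inertial sensing information measured by the inertial sensor."""
    acceleration: np.ndarray          # accelerations along the x, y and z axes
    angular_velocity: np.ndarray      # angular velocities about the x, y and z axes
    timestamp: float
```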
In step S12, the motion state of the image capturing device at the second moment is predicted according to the motion state information of the image capturing device at the first moment and the inertial sensing information, so as to obtain first prediction information.
In the embodiment of the present disclosure, the second moment may be the moment following the first moment, where the first moment is earlier than the second moment; the second moment may be equal to the first moment plus a preset duration, and the preset duration may be very small. The terminal may predict the motion state of the image capturing device at the second moment according to the motion state information and the inertial sensing information of the image capturing device at the first moment to obtain the first prediction information. For example, the motion state information and the inertial sensing information at the first moment may be integrated: the acceleration of the acceleration at the first moment may be obtained from the motion state information and integrated to obtain the acceleration at the second moment, and the acceleration at the first moment may be obtained from the inertial sensing information and integrated to obtain the speed at the second moment. Alternatively, some optimization algorithms, such as Newton's method or the Gauss-Newton method, may be used to perform batch optimization on the motion state information and the inertial sensing information of the image pickup device at the first moment, so as to obtain the first prediction information of the image pickup device. The first prediction information may be used to represent the motion state of the image capturing apparatus at the second moment.
In some implementations, first estimation information for predicting a motion state of the image capturing device at the second moment may be obtained according to motion state information of the image capturing device at the first moment. For example, the motion state information at the first time may be integrated, or the motion state information at the first time may be input into a first filter, to obtain first estimation information for predicting the motion state at the second time. And updating the first estimation information based on the inertial sensing information at the first moment to obtain first prediction information. For example, measurement information corresponding to a motion state of the imaging device at a second time may be obtained based on inertial sensing information at the first time, and the first estimation information may be updated according to the measurement information at the second time to obtain the first prediction information. For example, the inertial sensing information acquired at the first moment may be integrated to obtain measurement information corresponding to the motion state of the image capturing device at the second moment, and then the parameter values of the same motion parameter in the measurement information and the first estimation information may be weighted to obtain the first prediction information. Taking the motion parameter as an example of the speed, the acceleration can be obtained from the inertial sensing information at the first moment, then the acceleration can be integrated from the first moment to the second moment to obtain a measured value of the speed of the image pickup device at the second moment, then the predicted value of the speed of the image pickup device at the second moment can be obtained from the first estimated information, and the measured value and the predicted value of the speed are further weighted and summed to obtain an updated value of the speed in the first predicted information. In this way, the predicted value can be updated by the measured value of the motion state of the image capturing apparatus, and the obtained first predicted information can more accurately describe the motion state of the image capturing apparatus at the second moment.
In some implementations, the measurement information of the image capturing device at the second time may be determined based on the inertial sensing information at the first time, for example, the inertial sensing information at the first time may be integrated to obtain the measurement information of the image capturing device at the second time. The motion state information of the image pickup device at the first moment and the measurement information of the image pickup device at the second moment can be further input into the first filter, and the motion state of the image pickup device at the second moment can be predicted by the first filter according to the motion state information of the image pickup device at the first moment and the measurement information of the image pickup device at the second moment, so that first prediction information can be obtained. The first filter may be a kalman filter, and the first filter may consider the influence of noise, and may filter noise and interference in the motion state information and the inertial sensing information through the first filter, so that relatively accurate first prediction information may be obtained.
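A minimal scalar sketch of this predict-then-update idea is given below, assuming a one-dimensional Kalman filter over the velocity only; the noise values, gain formula, and function names are illustrative assumptions rather than the implementation of the disclosure.

```python
def integrate_imu_velocity(v_first, imu_accel_first, dt):
    # Measurement of the velocity at the second moment obtained by integrating
    # the acceleration contained in the inertial sensing information.
    return v_first + imu_accel_first * dt


def kalman_predict_update(v_first, accel_first, dt, v_measured,
                          p_prev, q_process=1e-3, r_measure=1e-2):
    # Prediction stage: propagate the motion state from the first moment to the
    # second moment through the motion relation v' = v + a * dt.
    v_pred = v_first + accel_first * dt
    p_pred = p_prev + q_process
    # Update stage: weight the predicted value and the measured value of the same
    # motion parameter (the "weighted sum" described above).
    k_gain = p_pred / (p_pred + r_measure)
    v_updated = v_pred + k_gain * (v_measured - v_pred)
    p_updated = (1.0 - k_gain) * p_pred
    return v_updated, p_updated
```

In practice the state of the first filter would contain the full set of motion parameters rather than a single scalar, but the predict/update structure is the same.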
And step S13, filtering the first prediction information to obtain second prediction information.
In the embodiment of the disclosure, since the first prediction information is predicted based on the motion state information and the inertial sensing information at the first moment, some jitter may exist. Therefore, after the first prediction information is obtained, it may be further filtered, for example by filtering algorithms such as Kalman filtering, clipping filtering, median filtering, or recursive average filtering, so that jitter in the prediction result of the motion state is reduced and the second prediction information obtained after the filtering transitions smoothly.
In some implementations, the first prediction information may be input into a second filter to obtain second prediction information that predicts the motion state at the second moment. Here, the second filter may be a Kalman filter. The prediction process of the second filter may be the same as that of the first filter, and the first prediction information may be corrected again by the second filter to obtain second prediction information that is more accurate and smoother than the first prediction information. Meanwhile, correcting the first prediction information with the second filter involves little computation and is fast, and it can improve the accuracy of predicting the motion state of the image pickup device at the second moment.
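Building on the scalar sketch above, the cascade of the two filters might look as follows, with the first prediction information playing the role of the measurement for the second filter; this reuses the illustrative kalman_predict_update helper and is an assumption, not the disclosed implementation.

```python
def cascaded_prediction(v_first, accel_first, dt, imu_accel_first, p1, p2):
    # First filter: the measurement comes from integrating the inertial sensing
    # information at the first moment.
    v_measured = v_first + imu_accel_first * dt
    v_first_pred, p1 = kalman_predict_update(v_first, accel_first, dt, v_measured, p1)
    # Second filter: the first prediction information is used as its measurement,
    # which smooths the prediction and reduces jitter.
    v_second_pred, p2 = kalman_predict_update(v_first, accel_first, dt, v_first_pred, p2)
    return v_second_pred, p1, p2
```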
Step S14, determining a rendering position of the augmented reality AR object in the image acquired by the image capturing device at the target moment based on the second prediction information.
In the embodiment of the present disclosure, after the second prediction information is obtained, the rendering position of the AR object at the target moment in the image acquired by the image capturing apparatus may be determined based on the second prediction information. The target moment may be an upcoming rendering moment, which may be determined according to the refresh frequency of the display. For example, the distance moved and the angle rotated by the image capturing device from the first moment to the target moment may be determined according to the second prediction information; then the pixel distance by which the AR object should move in the image at the target moment may be determined according to the correspondence between distances and rotation angles in the real three-dimensional space and pixel distances in the image; and the rendering position of the AR object corresponding to the target moment may further be determined according to the pixel distance by which the AR object moves in the image. Here, the AR object may be virtual information in a picture presented by the terminal; for example, the AR object may be a virtual logo, a virtual character, virtual text, or the like.
In some implementations, after the rendering position of the AR object corresponding to the target moment is determined, the AR object may be rendered in the image of the real scene acquired by the image capturing device according to the rendering position corresponding to the target moment, so as to superimpose the virtual information on the image of the real scene.
In some implementations, the terminal may also select the rendered AR object according to a user operation; for example, a target AR object to be presented in the picture may be selected from an AR object library according to the user operation. In some implementations, the effect of the AR object may also be changed according to a user operation; for example, the color, transparency, and the like of the AR object may be changed according to the user operation.
According to the rendering position prediction method provided by the embodiment of the disclosure, the rendering position of the future moment after the first moment can be predicted, the motion state of the second moment is predicted through the motion state information and the inertia sensing information of the first moment, then the predicted first prediction information is subjected to filtering processing, the second prediction information which more accurately describes the motion state of the second moment can be obtained, and the rendering position of the AR object at the target moment can be accurately determined through the second prediction information, so that deviation and jitter of the rendering position can be reduced.
In some implementations, when determining the rendering position of the target moment AR object in the image acquired by the image capturing device based on the second prediction information, pose information of the image capturing device at the target moment may also be acquired based on the second prediction information. And then determining the rendering position of the AR object at the target moment in the image acquired by the camera according to the pose information of the camera at the target moment. For example, in the case where the target time is the second time, pose information of the image capturing device at the target time may be obtained directly from the second prediction information. In the case that the target time is a future time after the second time, the second prediction information may be integrated from the second time to the target time to obtain motion state information corresponding to the target time, and pose information of the image capturing device at the target time may be further obtained from the motion state information corresponding to the target time. Here, the pose information of the image capturing device at the target time may include a position and an azimuth angle of the image capturing device, and according to the pose information of the image capturing device at the target time, a real scene that can be captured by the image capturing device at the target time may be determined, and further according to a position of a real object to be superimposed on the real scene by the AR object, a rendering position of the AR object at the target time in an image of the real scene may be determined. In this way, the rendering position of the AR object at the target time can be quickly and accurately determined based on the pose information of the image pickup device at the target time acquired by the second prediction information.
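A rough sketch of extrapolating the second prediction information from the second moment to a later target moment is shown below, under a constant-acceleration assumption and reusing the illustrative MotionState sketch above; it is not the disclosed implementation.

```python
def pose_at_target_time(state_second, dt_target):
    """Integrate the predicted motion state forward to the target moment.

    state_second: MotionState predicted for the second moment (second prediction information).
    dt_target:    time from the second moment to the target moment.
    Returns an assumed (position, azimuth) pose pair.
    """
    position = (state_second.position
                + state_second.velocity * dt_target
                + 0.5 * state_second.acceleration * dt_target ** 2)
    azimuth = (state_second.azimuth
               + state_second.angular_velocity * dt_target
               + 0.5 * state_second.angular_acceleration * dt_target ** 2)
    return position, azimuth
```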
In some implementations, when determining the rendering position of the AR object in the image acquired by the image capturing device according to the pose information of the image capturing device at the target moment, the position of the spatial point captured by the image capturing device may be determined according to the pose information of the image capturing device at the target moment, for example, the visual field range that may be captured by the image capturing device may be determined according to the pose information of the image capturing device at the target moment and camera parameters (such as focal length, shooting angle, etc.) of the image capturing device, and the positions of the plurality of spatial points captured by the image capturing device may be determined according to the spatial position where the visual field range is located. The projection position of the spatial point in the image may be further determined according to the position of the spatial point, for example, the projection position of the spatial point in the image may be determined according to a preset correspondence between the three-dimensional spatial coordinate and the image coordinate. And then according to the projection position of the space point in the image, the rendering position of the AR object in the image acquired by the image pickup device at the target moment can be determined, for example, the target space point attached to the AR object can be determined in a plurality of space points, and then the projection position of the target space point can be determined as the rendering position of the AR object in the image. In this way, the rendering position of the AR object at the target time can be quickly determined.
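As one possible illustration, with a standard pinhole camera model (the disclosure does not fix a particular projection model), the projection position of a spatial point could be computed as follows; the function and parameter names are assumptions.

```python
import numpy as np


def project_space_point(point_world, R_cam_to_world, t_cam_in_world, K):
    """Project a 3-D spatial point into the image captured at the target moment.

    R_cam_to_world and t_cam_in_world come from the pose information of the image
    capturing device at the target moment; K is the camera intrinsic matrix
    (focal lengths and principal point).
    """
    # Transform the spatial point from world coordinates to camera coordinates.
    point_cam = R_cam_to_world.T @ (point_world - t_cam_in_world)
    if point_cam[2] <= 0:
        return None  # the point is behind the camera and is not captured
    # Apply the correspondence between 3-D coordinates and image coordinates.
    uv_h = K @ (point_cam / point_cam[2])
    return uv_h[:2]  # projection position (u, v) that can serve as a rendering position
```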
In the embodiment of the disclosure, the motion state of the image pickup device at the second moment can be predicted according to the motion state information and the inertial sensing information of the image pickup device at the first moment to obtain the first prediction information, and then the first prediction information is filtered to obtain the second prediction information, so that the jitter of a prediction result is reduced. And further determining the rendering position of the target time AR object in the image acquired by the image pickup device based on the second prediction information, so that the rendering position of the target time AR object can be predicted, and the deviation of the rendering position is reduced. For example, the motion state information at the first time and the measurement information at the second time obtained based on the inertial sensor information at the first time may be input to a first filter, and first prediction information for predicting the motion state of the imaging device at the second time may be obtained by the first filter. And then inputting the motion state information at the first moment and the first prediction information into a second filter, and smoothing the first prediction information by using the second filter to obtain second prediction information. The second prediction information can accurately represent the motion state of the camera device, and the rendering position can be further determined based on the second prediction information, so that deviation and jitter of the rendering position can be reduced, and user experience is improved.
Here, in the case where the first filter and the second filter are both kalman filters, the process of estimating the motion state by each kalman filter may include a prediction stage and an update stage, where the prediction stage may predict the motion state corresponding to the second time based on the motion state information at the first time, and the update stage may update the prediction information obtained in the prediction stage using the measurement information at the second time. The first filter may obtain measurement information at the second time based on the inertial sensor information at the first time, and the measurement information may be used as measurement information at the update stage. For the second filter, the first prediction information obtained by the first filter may be used as measurement information for the second filter update stage.
In some implementations, inputting the first prediction information into the second filter to obtain the second prediction information may include predicting, with the second filter, a motion state of the image capturing device at a second time according to motion state information of the image capturing device at the first time to obtain second estimation information. And updating the second estimation information based on the first prediction information to obtain second prediction information.
Here, when the second prediction information is obtained by using the second filter, the second estimation information may be obtained by predicting the motion state of the imaging device at the second time by using the motion state information at the first time based on the preset motion relation by using the second filter, and for example, the motion state information at the first time may be integrated from the first time to the second time to obtain the second estimation information corresponding to the second time. The second estimation information may be further updated by the second filter based on the first prediction information, for example, parameter values of the same motion parameters in the first prediction information and the second estimation information may be weighted to obtain the second prediction information. The motion state of the image pickup device at the second moment can be predicted based on the motion state information at the first moment through the second filter, and further the second estimated information obtained through prediction can be corrected through the first predicted information, so that more accurate second predicted information is obtained. In this way, the rendering position corresponding to the second time obtained based on the second prediction information can be more accurate and smooth.
In some implementations, the prediction model of the first filter may be the same as that of the second filter, where the prediction model may be understood as the motion relation used to predict the motion state of the image capturing device at the second moment based on the motion state information at the first moment. The prediction model is described below by taking the process of obtaining the first estimation information based on the motion state information at the first moment as an example.
In some implementations, parameter values of the first motion parameter and the second motion parameter at the first time may be determined in motion state information at the first time, where the first motion parameter and the second motion parameter satisfy a preset motion relationship. Then, based on the parameter value of the first motion parameter at the first time and the parameter value of the second motion parameter at the first time, the parameter value of the second motion parameter at the second time can be determined, for example, the parameter value of the first motion parameter at the first time and the parameter value of the second motion parameter at the first time can be brought into a preset motion relation, and the parameter value of the second motion parameter at the second time can be obtained. The first estimation information comprises parameter values of the second motion parameter at the second instant.
For example, if the first motion parameter is the acceleration of the acceleration and the second motion parameter is the acceleration, the acceleration at the first moment may be taken as the initial value and the acceleration of the acceleration at the first moment may be taken as the rate of change of the acceleration; substituting these into the linear motion relation satisfied by the acceleration and the acceleration of the acceleration gives the parameter value of the acceleration at the second moment.
In this way, the parameter value of the second motion parameter at the second moment can be accurately predicted from the parameter value of the first motion parameter at the first moment and the parameter value of the second motion parameter at the first moment, which provides a basis for determining the rendering position corresponding to the second moment.
In some implementations, when determining the parameter value of the second motion parameter at the second time based on the parameter value of the first motion parameter at the first time and the parameter value of the second motion parameter at the first time, the parameter value of the first motion parameter at the second time may also be determined first, for example, the parameter value of the first motion parameter at the first time may be added to a preset noise to obtain the parameter value of the first motion parameter at the second time. And obtaining the parameter value of the second motion parameter at the second moment according to the parameter value of the first motion parameter at the first moment, the parameter value of the first motion parameter at the second moment and the parameter value of the second motion parameter at the first moment, for example, determining the average value of the first motion parameter from the first moment to the second moment according to the parameter value of the first motion parameter at the first moment and the parameter value of the first motion parameter at the second moment, and obtaining the parameter value of the second motion parameter at the second moment according to the parameter value of the second motion parameter at the first moment and the average value of the first motion parameter.
For example, if the first motion parameter is the acceleration and the second motion parameter is the velocity, the average of the acceleration at the first moment and the acceleration at the second moment may be calculated; taking the velocity at the first moment as the initial velocity and the average value as the rate of change of the velocity, the parameter value of the velocity at the second moment can be obtained by substituting them into the linear motion relation satisfied by the velocity and the acceleration.
In some implementations, in the prediction phase of the first filter and the second filter, the motion relation satisfied by each motion parameter may be represented by the following formulas:
b_{k+1} = b_k + δ_w
a_{k+1} = a_k + b_k·Δt + δ_a
v_{k+1} = v_k + 0.5·(a_{k+1} + a_k)·Δt + δ_v
p_{k+1} = p_k + v_k·Δt + 0.5·(a_{k+1} + a_k)·Δt·Δt + δ_p
wb_{k+1} = wb_k + δ_wb
wa_{k+1} = wa_k + wb_k·Δt + δ_wa
w_{k+1} = w_k + 0.5·(wa_{k+1} + wa_k)·Δt + δ_w
where b_k is the acceleration of the acceleration at the first moment, b_{k+1} is the acceleration of the acceleration at the second moment, a_k is the acceleration at the first moment, a_{k+1} is the acceleration at the second moment, v_k is the velocity at the first moment, v_{k+1} is the velocity at the second moment, p_k is the position at the first moment, p_{k+1} is the position at the second moment, wb_k is the acceleration of the angular acceleration at the first moment, wb_{k+1} is the acceleration of the angular acceleration at the second moment, wa_k is the angular acceleration at the first moment, wa_{k+1} is the angular acceleration at the second moment, w_k is the angular velocity at the first moment, w_{k+1} is the angular velocity at the second moment, q_k is the azimuth angle at the first moment, and q_{k+1} is the azimuth angle at the second moment. Δt is the time difference between the second moment and the first moment, δ_w is the noise corresponding to the angular velocity, δ_a is the noise corresponding to the acceleration, δ_v is the noise corresponding to the velocity, δ_p is the noise corresponding to the position, δ_wb is the noise corresponding to the acceleration of the angular acceleration, δ_wa is the noise corresponding to the angular acceleration, and δ_q is the noise corresponding to the azimuth angle. These noises can be set according to the actual application scenario or requirements. For motion parameters whose initial values cannot be directly determined, the initial values can be set according to the actual application scenario or requirements.
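The propagation relations listed above can be transcribed almost literally into code. The sketch below does so for a dictionary of 3-vector motion parameters with the noise terms passed in explicitly; the representation is an assumption, and the azimuth update is an assumed analogue since no formula for it is listed above.

```python
def propagate_state(s, dt, noise):
    """One prediction step from the first moment (k) to the second moment (k+1),
    following the motion relations listed above. `s` and `noise` are dicts of
    numpy 3-vectors keyed by parameter name (an illustrative representation)."""
    n = dict(s)
    n["b"] = s["b"] + noise["b"]                                          # acceleration of the acceleration
    n["a"] = s["a"] + s["b"] * dt + noise["a"]                            # acceleration
    n["v"] = s["v"] + 0.5 * (n["a"] + s["a"]) * dt + noise["v"]           # velocity
    n["p"] = s["p"] + s["v"] * dt + 0.5 * (n["a"] + s["a"]) * dt * dt + noise["p"]  # position
    n["wb"] = s["wb"] + noise["wb"]                                       # acceleration of the angular acceleration
    n["wa"] = s["wa"] + s["wb"] * dt + noise["wa"]                        # angular acceleration
    n["w"] = s["w"] + 0.5 * (n["wa"] + s["wa"]) * dt + noise["w"]         # angular velocity
    n["q"] = s["q"] + 0.5 * (n["w"] + s["w"]) * dt + noise["q"]           # azimuth (assumed analogous form)
    return n
```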
The rendering position prediction method provided by the embodiments of the present disclosure is exemplarily described below by way of an example. Fig. 2 illustrates a flowchart of a rendering position prediction method according to an embodiment of the present disclosure. In this example, the method may be performed by AR glasses in which an image capturing device and an inertial sensor are configured; while being worn, the AR glasses can present to the user an AR picture in which a real scene and virtual information are combined. The method comprises the following steps:
In step S201, the AR glasses acquire the motion state information and the inertial sensing information at the first moment.
Here, the image pickup device may be disposed in the AR glasses such that the movement state information and the inertial sensing information of the image pickup device can be regarded as movement state information and inertial sensing information of the AR glasses. The first time may be the current time.
Step S202, integrating the inertial sensing information at the first moment to obtain the measurement information of the motion state of the AR glasses at the second moment.
Here, the second time may be a time next to the first time.
Step S203, the motion state information and the measurement information at the first moment are input into the first filter, so as to obtain the first prediction information.
Here, the first prediction information may be information that performs preliminary estimation of a motion state of the AR glasses at the second time, wherein the measurement information may be taken as measurement information of the first filter at the update stage.
Step S204, the motion state information and the first prediction information at the first moment are input into a second filter to obtain second prediction information.
Here, the first prediction information may be used as the measurement information of the second filter in the update stage. That is, the measurement information corresponding to the first filter and that corresponding to the second filter are different.
Step S205, the pose information of the AR glasses at the target moment is obtained according to the second prediction information.
Here, the second prediction information may be integrated from the second time to the target time, so as to obtain motion state information corresponding to the target time, and further obtain pose information of the imaging device at the target time from the motion state information corresponding to the target time.
Step S206, determining the rendering position of the AR object in the picture displayed by the AR glasses according to the pose information of the AR glasses at the target moment.
In this example, the process of predicting the rendering position is described by taking the current moment as the first moment. In practical applications, the rendering position at a future moment can be continuously predicted as time passes: when the current moment advances to the second moment, the second prediction information obtained for the second moment can be used as the motion state information at the current moment, and the motion state at the next moment is predicted based on this motion state information and the inertial sensing information measured at the current moment, so that the rendering position at the future target moment is predicted in real time.
In the example provided by the embodiment of the disclosure, the filters can be combined with the inertial sensor to predict the rendering position of the AR object on the AR glasses picture at a future moment. The predicted rendering position is determined through multiple filtering passes, so it is smoother and more accurate, jitter of the AR glasses picture is reduced, the rendering effect of the AR glasses picture is improved, and the user experience is improved. In addition, because the obtained rendering position corresponds to the future target moment, the dependence of the rendering position on the acquired image can be reduced, and rendering position prediction at a high frame rate can be achieved.
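Putting steps S201 to S206 together, the following toy one-dimensional run illustrates the whole loop end to end; every constant, gain, and helper name here is a made-up illustration rather than part of the disclosure.

```python
import numpy as np


def demo_predict_render_u(dt=1 / 60.0, dt_render=2 / 60.0, fx=500.0, depth=2.0):
    """Toy end-to-end run of steps S201-S206 along a single horizontal axis.

    A 1-D velocity state is predicted with two cascaded scalar filters, and the
    resulting camera position at the target moment is turned into the horizontal
    pixel coordinate of an AR object assumed to sit `depth` metres in front of
    the camera.  All values and gains are made-up illustrations.
    """
    rng = np.random.default_rng(0)
    pos, vel, acc = 0.0, 0.1, 0.05        # motion state information at the first moment (S201)
    p1 = p2 = 1.0                         # scalar filter covariances
    q_proc, r_meas = 1e-3, 1e-2
    u = 0.0
    for _ in range(10):
        imu_acc = acc + rng.normal(0.0, 0.01)         # inertial sensing information (S201)
        vel_meas = vel + imu_acc * dt                 # S202: integrate IMU -> measurement at the second moment
        vel_pred = vel + acc * dt                     # S203: first filter, prediction stage
        p1 += q_proc
        k1 = p1 / (p1 + r_meas)
        vel_first = vel_pred + k1 * (vel_meas - vel_pred)    # first prediction information
        p1 *= 1.0 - k1
        p2 += q_proc                                  # S204: second filter, with the first prediction
        k2 = p2 / (p2 + r_meas)                       #       information used as its measurement
        vel_second = vel_pred + k2 * (vel_first - vel_pred)  # second prediction information
        p2 *= 1.0 - k2
        pos_target = pos + vel_second * dt_render     # S205: pose of the AR glasses at the target moment
        u = fx * (0.0 - pos_target) / depth           # S206: rendering position (pixel offset) of the AR object
        pos, vel = pos + vel_second * dt, vel_second  # state used for the next iteration
    return u
```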
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principles and logic, and these combinations are not described in detail here for brevity. It will be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their function and possible internal logic.
In addition, the present disclosure further provides a rendering position prediction apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any rendering position prediction method provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which are not repeated here.
Fig. 3 shows a block diagram of a rendering position prediction apparatus according to an embodiment of the present disclosure, as shown in fig. 3, the apparatus including:
An acquiring module 31, configured to acquire motion state information and inertial sensing information of the image capturing device at a first moment;
A prediction module 32, configured to predict a motion state of the image capturing device at a second moment according to the motion state information of the image capturing device at the first moment and the inertial sensing information, so as to obtain first prediction information;
A filtering module 33, configured to perform filtering processing on the first prediction information to obtain second prediction information;
The determining module 34 is configured to determine, based on the second prediction information, a rendering position of the augmented reality AR object at the target moment in the image acquired by the image capturing device.
In some possible implementations, the prediction module 32 is configured to obtain first estimation information for predicting a motion state of the image capturing device at a second moment according to motion state information of the image capturing device at the first moment, and update the first estimation information based on the inertial sensing information to obtain the first prediction information.
In some possible implementations, the motion state information includes a parameter value of a first motion parameter and a parameter value of a second motion parameter at the first time, the first motion parameter and the second motion parameter satisfy a preset motion relationship, the first estimation information includes a parameter value of the second motion parameter at the second time, and the prediction module 32 is configured to determine the parameter value of the second motion parameter at the second time based on the parameter value of the first motion parameter at the first time and the parameter value of the second motion parameter at the first time.
In some possible implementations, the prediction module 32 is configured to determine measurement information of the image capturing device at the second moment based on the inertial sensing information, and input the motion state information of the image capturing device at the first moment and the measurement information into a first filter to obtain the first prediction information.
In some possible implementations, the filtering module 33 is configured to input the first prediction information into a second filter, to obtain the second prediction information.
In some possible implementations, the filtering module 33 is configured to predict, according to the motion state information of the image capturing device at the first moment, the motion state of the image capturing device at the second moment by using the second filter to obtain second estimation information, and update the second estimation information based on the first prediction information to obtain the second prediction information.
In some possible implementations, the determining module 34 is configured to obtain pose information of the image capturing device at a target time based on the second prediction information, and determine a rendering position of the AR object in an image captured by the image capturing device at the target time according to the pose information.
In some possible implementations, the determining module 34 is configured to determine a position of a spatial point captured by the image capturing device according to pose information of the image capturing device at a target moment, determine a projection position of the spatial point in the image according to the position of the spatial point, and determine a rendering position of the AR object in the image captured by the image capturing device according to the projection position at the target moment.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides electronic equipment, which comprises a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to call the instructions stored by the memory so as to execute the method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the rendering position prediction method provided in any of the embodiments above.
The disclosed embodiments also provide another computer program product for storing computer readable instructions that, when executed, cause a computer to perform the operations of the rendering position prediction method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 4 illustrates a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to FIG. 4, the electronic device 800 can include one or more of a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen providing an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide action. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, an activate button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as a display and keypad of the electronic device 800. The sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of a user's contact with the electronic device 800, an orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a photosensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 5 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 5, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical user interface-based operating system from Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open-source Unix-like operating system (Linux™), the open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical encoding device, punch cards or intra-groove protrusion structures such as those having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (9)

Application: CN202110721791.7A | Priority date: 2021-06-28 | Filing date: 2021-06-28 | Title: Rendering position prediction method and device, electronic device and storage medium | Status: Active | Publication: CN113538701B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110721791.7A (published as CN113538701B (en)) | 2021-06-28 | 2021-06-28 | Rendering position prediction method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110721791.7A (published as CN113538701B (en)) | 2021-06-28 | 2021-06-28 | Rendering position prediction method and device, electronic device and storage medium

Publications (2)

Publication Number | Publication Date
CN113538701A (en) | 2021-10-22
CN113538701B (en) | 2025-05-27

Family

ID=78126123

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110721791.7A (Active, published as CN113538701B (en)) | Rendering position prediction method and device, electronic device and storage medium | 2021-06-28 | 2021-06-28

Country Status (1)

Country | Link
CN (1) | CN113538701B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118608720B (en) * | 2024-02-28 | 2025-03-14 | 武昌首义学院 | High-precision three-dimensional simulation enhancement method and system based on satellite data
CN118264700B (en) * | 2024-04-17 | 2024-10-08 | 北京尺素科技有限公司 | AR rendering method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112380989A (en) * | 2020-11-13 | 2021-02-19 | 歌尔光学科技有限公司 | Head-mounted display equipment, data acquisition method and device thereof, and host

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
KR20130017258A (en) * | 2011-08-10 | 2013-02-20 | 중앙대학교 산학협력단 | Apparatus and method about implementation of augmented reality based on model with multiple-planes
US20130218461A1 (en) * | 2012-02-22 | 2013-08-22 | Leonid Naimark | Reduced Drift Dead Reckoning System
KR101732890B1 (en) * | 2015-08-19 | 2017-05-08 | 한국전자통신연구원 | Method of rendering augmented reality on mirror display based on motion of target of augmented reality and apparatus using the same
US10962780B2 (en) * | 2015-10-26 | 2021-03-30 | Microsoft Technology Licensing, Llc | Remote rendering for virtual images
EP3236211A1 (en) * | 2016-04-21 | 2017-10-25 | Thomson Licensing | Method and apparatus for estimating a pose of a rendering device
US10782668B2 (en) * | 2017-03-16 | 2020-09-22 | Siemens Aktiengesellschaft | Development of control applications in augmented reality environment
CN107274472A (en) * | 2017-06-16 | 2017-10-20 | 福州瑞芯微电子股份有限公司 | A kind of method and apparatus of raising VR play frame rate
CN109085915B (en) * | 2017-12-29 | 2021-05-14 | 成都通甲优博科技有限责任公司 | Augmented reality method, system, equipment and mobile terminal
WO2020031659A1 (en) * | 2018-08-06 | 2020-02-13 | 国立研究開発法人産業技術総合研究所 | Position and attitude estimation system, position and attitude estimation apparatus, and position and attitude estimation method
US10748344B2 (en) * | 2018-09-12 | 2020-08-18 | Seiko Epson Corporation | Methods and devices for user interaction in augmented reality
CN109712224B (en) * | 2018-12-29 | 2023-05-16 | 海信视像科技股份有限公司 | Virtual scene rendering method and device and intelligent device
CN111161408B (en) * | 2019-12-27 | 2021-12-21 | 华南理工大学 | Method for realizing augmented reality, application thereof and computing equipment
CN112230242B (en) * | 2020-09-30 | 2023-04-25 | 深兰人工智能(深圳)有限公司 | Pose estimation system and method

Also Published As

Publication number | Publication date
CN113538701A (en) | 2021-10-22

Similar Documents

Publication | Title
CN110647834B (en) | Face and hand correlation detection method and device, electronic device and storage medium
CN109697734B (en) | Pose estimation method and device, electronic equipment and storage medium
CN111551191B (en) | Sensor external parameter calibration method and device, electronic equipment and storage medium
CN110928627B (en) | Interface display method and device, electronic equipment and storage medium
CN109584362B (en) | Three-dimensional model construction method and device, electronic equipment and storage medium
CN111323007B (en) | Positioning method and device, electronic equipment and storage medium
CN110134532A (en) | A kind of information interacting method and device, electronic equipment and storage medium
CN111401230B (en) | Gesture estimation method and device, electronic equipment and storage medium
EP3276478A1 (en) | Mobile terminal and method for determining scrolling speed
CN111563138B (en) | Positioning method and device, electronic equipment and storage medium
CN108900903B (en) | Video processing method and device, electronic equipment and storage medium
CN110989884A (en) | Image positioning operation display method and device, electronic equipment and storage medium
CN112950712B (en) | Positioning method and device, electronic equipment and storage medium
CN113074726A (en) | Pose determination method and device, electronic equipment and storage medium
CN113538701B (en) | Rendering position prediction method and device, electronic device and storage medium
CN112541971A (en) | Point cloud map construction method and device, electronic equipment and storage medium
CN113608616A (en) | Display method and device of virtual content, electronic device and storage medium
CN111784773B (en) | Image processing method and device, neural network training method and device
CN112767541B (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN106228077B (en) | It handles image and shows the method, apparatus and terminal of image
CN110121115B (en) | Method and device for determining wonderful video clip
CN109813295B (en) | Orientation determination method and device, and electronic equipment
CN110896492B (en) | Image processing method, device and storage medium
CN112461245A (en) | Data processing method and device, electronic equipment and storage medium
CN112651880A (en) | Video data processing method and device, electronic equipment and storage medium

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
