CN105953796A - Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone - Google Patents


Info

Publication number
CN105953796A
Authority
CN
China
Prior art keywords
frame
pose
map
imu
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610346493.3A
Other languages
Chinese (zh)
Inventor
邓欢军
方维
李�根
乔羽
古鉴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Storm Mirror Technology Co Ltd
Original Assignee
Beijing Storm Mirror Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Storm Mirror Technology Co Ltd
Priority to CN201610346493.3A
Publication of CN105953796A
Legal status: Pending

Abstract

The invention discloses a stable motion tracking method and device based on the fusion of a smartphone's monocular camera and IMU (inertial measurement unit), belonging to the technical field of AR (augmented reality)/VR (virtual reality) motion tracking. The method includes: processing an acquired image with the ORB (Oriented FAST and Rotated BRIEF) algorithm and performing 3D (three-dimensional) reconstruction to obtain initial map points, completing map initialization; performing visual tracking through ORB real-time matching and parallel local key frame mapping to obtain a visual pose; acquiring the acceleration and angular velocity generated by the IMU in three-dimensional space and integrating them to obtain an IMU pose prediction result; and performing Kalman fusion on the visual pose and the IMU pose prediction result, then performing motion tracking according to the fused pose information. Compared with the prior art, the method and device achieve more stable motion tracking and real-time online scale estimation.

Description

Stable motion tracking method and device fusing a smartphone monocular camera and an IMU
Technical Field
The invention relates to the field of mobile communication, and in particular to a stable motion tracking method and device fusing a smartphone monocular camera and an IMU.
Background
With the development of VR technology, advanced motion tracking is one of the prerequisites for its application; on that basis, better interaction and immersion can be achieved. Current mobile VR mainly uses a handle for interaction and relies only on the phone's gyroscope for rotation tracking. Because of the gyroscope's inherent bias and noise, rotation estimation is inaccurate and repeatability is poor. If the user sits down, stands up, or moves forward without using the handle, the virtual scene stays fixed as if nothing had happened, and the interaction experience suffers; when a seated user immersed in the virtual environment subconsciously stands up and tries to move, the virtual scene does not change and the sense of immersion disappears.
Motion tracking measures, tracks, and records the motion trajectory of an object in three-dimensional space. It mainly obtains information about the moving scene through sensors, computes the pose of the tracked object in space in real time, and is widely applied in robot navigation, unmanned aerial vehicle navigation, autonomous driving, and similar fields. Since the concept of visual odometry (VO) was first proposed by Nister in 2004, methods based on visual odometry have become the mainstream of real-time pose estimation and motion tracking. VO determines the camera's trajectory over time by estimating its incremental motion in space. Visual-inertial odometry (VIO) fuses information from a camera and inertial sensors, mainly a gyroscope and an accelerometer, and offers complementary advantages. For example, a single camera can estimate relative position, but it cannot provide absolute scale: it cannot recover the size of an object or the actual distance between two objects. Moreover, the camera's frame rate is generally low and the image sensor is relatively noisy, which limits adaptability during motion tracking. Inertial sensors can provide absolute scale and measure at a higher sampling frequency, which increases robustness when the device moves rapidly. However, the low-cost inertial sensors built into phones drift more than camera-based position estimation and cannot achieve stable motion tracking on their own.
Disclosure of Invention
The invention aims to provide a stable motion tracking method and device fusing a smartphone monocular camera and an IMU, which achieve more stable motion tracking and real-time online scale estimation.
In order to solve the technical problems, the invention provides the following technical scheme:
a stable motion tracking method fusing the monocular camera and IMU of a smartphone comprises the following steps:
processing the acquired image by using an ORB algorithm, and then performing 3D reconstruction to obtain an initial map point and complete map initialization;
performing visual tracking by using an ORB algorithm in a real-time matching and parallel local key frame mapping mode to obtain a visual pose;
acquiring acceleration and angular velocity values generated by the IMU in a three-dimensional space, and performing integral operation on the acceleration and angular velocity values to obtain an IMU pose prediction result;
and performing Kalman fusion on the visual pose and the IMU pose prediction result, and performing motion tracking according to the pose information obtained after fusion.
Further, processing the acquired image with the ORB algorithm and then performing 3D reconstruction to obtain initial map points and complete map initialization includes:
extracting feature points of the acquired first frame image by using an ORB algorithm, calculating a descriptor, recording the first frame as a key frame, and marking the absolute pose of the camera;
after the camera translates a certain distance, extracting feature points from the newly acquired image with the ORB algorithm and computing descriptors, matching them with the feature points of the first frame image, recording the second frame as a key frame, and calculating the relative pose of the camera at the second frame with respect to the first frame;
and 3D reconstruction is carried out on the successfully matched feature point set to obtain an initial map point.
Further, calculating the relative pose of the camera at the second frame with respect to the first frame includes:
calculating the fundamental matrix between the two frames of images from the corresponding matched feature point sets on the first and second frame images;
calculating the essential matrix from the fundamental matrix and the intrinsic parameters of the camera;
and performing singular value decomposition on the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
Further, performing visual tracking through ORB-algorithm real-time matching and parallel local key frame mapping to obtain the visual pose includes:
rasterizing the current image frame with the ORB algorithm to extract image feature points and compute descriptors;
estimating the corresponding camera pose of the current frame by adopting a constant speed motion model, projecting all map points of the previous frame of image onto the current image frame, matching the feature points, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the pose and the map point of the current frame by adopting an LM algorithm and Huber estimation;
and projecting all map points of the local key frame onto the current image frame according to the updated pose, matching the feature points, assigning all the map points successfully matched to the corresponding feature points of the current frame after the matching is successful, and updating the pose of the current frame and the map points of the current frame again by using an LM algorithm and Huber estimation.
Further, performing visual tracking through ORB-algorithm real-time matching and parallel local key frame mapping to obtain the visual pose further includes:
judging whether a key frame needs to be added according to a time-interval condition and/or the number of map points of the current frame, and adding a new key frame if more than a set time has elapsed since the last key frame was added or the number of map points of the current frame is below a threshold;
judging whether the current frame is a new key frame, and if so, adding new map points: matching all feature points of the new key frame that have no associated map points against all feature points in the local key frames, and performing 3D reconstruction after successful matching to obtain new map points;
performing local bundle adjustment optimization to correct accumulated errors and obtain optimized poses and map points.
A smart phone monocular and IMU fused stable motion tracking device comprising:
the map initialization module is used for processing the acquired image by utilizing an ORB algorithm and then performing 3D reconstruction to obtain an initial map point and finish map initialization;
the visual tracking module is used for carrying out visual tracking in a mode of matching in real time and parallel local key frame mapping by using an ORB algorithm to obtain a visual pose;
the IMU pose calculation module is used for acquiring acceleration and angular velocity values generated by the IMU in three-dimensional space and performing integral operation on them to obtain an IMU pose prediction result;
and the fusion module is used for performing Kalman fusion on the visual pose and the IMU pose prediction result and performing motion tracking according to the pose information obtained after fusion.
Further, the map initialization module is further configured to:
extracting feature points of the acquired first frame image by using an ORB algorithm, calculating a descriptor, recording the first frame as a key frame, and marking the absolute pose of the camera;
after the camera translates a certain distance, extracting feature points from the newly acquired image with the ORB algorithm and computing descriptors, matching them with the feature points of the first frame image, recording the second frame as a key frame, and calculating the relative pose of the camera at the second frame with respect to the first frame;
and 3D reconstruction is carried out on the successfully matched feature point set to obtain an initial map point.
Further, calculating the relative pose of the camera at the second frame with respect to the first frame includes:
calculating the fundamental matrix between the two frames of images from the corresponding matched feature point sets on the first and second frame images;
calculating the essential matrix from the fundamental matrix and the intrinsic parameters of the camera;
and performing singular value decomposition on the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
Further, the visual tracking module is further configured to:
rasterizing the current image frame with the ORB algorithm to extract image feature points and compute descriptors;
estimating the corresponding camera pose of the current frame by adopting a constant speed motion model, projecting all map points of the previous frame of image onto the current image frame, matching the feature points, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the pose and the map point of the current frame by adopting an LM algorithm and Huber estimation;
and projecting all map points of the local key frame onto the current image frame according to the updated pose, matching the feature points, assigning all the map points successfully matched to the corresponding feature points of the current frame after the matching is successful, and updating the pose of the current frame and the map points of the current frame again by using an LM algorithm and Huber estimation.
Further, the visual tracking module is further configured to:
judging whether a key frame needs to be added according to a time-interval condition and/or the number of map points of the current frame, and adding a new key frame if more than a set time has elapsed since the last key frame was added or the number of map points of the current frame is below a threshold;
judging whether the current frame is a new key frame, and if so, adding new map points: matching all feature points of the new key frame that have no associated map points against all feature points in the local key frames, and performing 3D reconstruction after successful matching to obtain new map points;
performing local bundle adjustment optimization to correct accumulated errors and obtain optimized poses and map points.
The invention has the following beneficial effects:
In the invention, the map is initialized; after successful initialization, images are acquired for continuous tracking and pose estimation, while IMU data are acquired for integral pose prediction, and the two are fused under an extended Kalman filter (EKF) framework to obtain a stable pose estimate. Aimed at the motion tracking problem of current mobile VR, the invention uses VIO with the camera and IMU already built into the mobile terminal, combining visual and inertial measurements under the EKF framework to accurately estimate pose and absolute scale. A fast and stable motion tracking method for mobile VR is thus achieved. Compared with the prior art, the method obtains more stable motion tracking and realizes real-time online scale estimation.
Drawings
FIG. 1 is a schematic flow chart of a smartphone monocular and IMU fused stable motion tracking method of the present invention;
FIG. 2 is a schematic view of a visual pose estimation flow of a smartphone monocular and IMU fused stable motion tracking method of the present invention;
FIG. 3 is a schematic diagram of a visual pose and IMU pose Kalman fusion principle of the smartphone monocular and IMU fusion stabilization motion tracking method of the present invention;
FIG. 4 is a schematic diagram of a coordinate system of a smartphone monocular and IMU fused stable motion tracking method of the present invention;
FIG. 5 is a schematic diagram of a monocular vision and IMU system of the smartphone monocular and IMU fused stable motion tracking method of the present invention;
fig. 6 is a general flowchart of a technical solution of the smartphone monocular and IMU fused stable motion tracking method of the present invention;
fig. 7 is a schematic structural diagram of a smartphone monocular and IMU integrated stable motion tracking device according to the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention more apparent, the following detailed description is given with reference to the accompanying drawings and specific embodiments.
In one aspect, the present invention provides a method for tracking stable motion by fusing a smartphone monocular and an IMU, as shown in fig. 1, including:
step S101: processing the acquired image by using an ORB algorithm, and then performing 3D reconstruction to obtain an initial map point and complete map initialization;
in this step, the purpose of map initialization is to construct an initial three-dimensional point cloud. Since depth information cannot be obtained from only a single frame, it is necessary to select two or more frames of images from an image sequence, estimate a camera pose, and reconstruct an initial three-dimensional point cloud. In this step, two key frames are used, one is an initial key frame (initial frame) and the other is a key frame (end frame) which moves for a certain angle, matching of key points is performed between the initial frame and the end frame, then 3D reconstruction is performed on a feature point set which is successfully matched, and finally map initialization is completed.
Step S102: performing visual tracking by using an ORB algorithm in a real-time matching and parallel local key frame mapping mode to obtain a visual pose;
in this step, after the map is initialized successfully, the movement tracking is performed based on the vision. And (4) considering the weak computing power of the mobile terminal, performing visual tracking by using real-time matching and pose estimation of an ORB algorithm and a parallel local key frame maintenance and mapping mode to obtain a visual pose. The ORB algorithm is used for real-time matching and pose estimation as a tracking thread, and the maintenance and mapping of local key frames are local joint frame threads.
Step S103: acquiring acceleration and angular velocity values generated by the IMU in a three-dimensional space, and performing integral operation on the acceleration and angular velocity values to obtain an IMU pose prediction result;
in this step, the involved IMU (Inertial measurement unit, IMU for short) is a device for measuring the three-axis attitude angular velocity (or angular velocity) and acceleration of an object. Generally, an IMU includes three single-axis accelerometers and three single-axis gyroscopes, the accelerometers are used for detecting acceleration signals of an object in three independent axes of a carrier coordinate system, and the gyroscopes are used for detecting angular velocity signals of a carrier relative to a navigation coordinate system. IMU data is acquired between the front and the back adjacent frames for pose prediction, and the vision pose estimation of the next frame is used as a measurement value for updating.
Step S104: performing Kalman fusion on the visual pose and the IMU pose prediction result, and performing motion tracking according to the pose information obtained after fusion;
in the step, in order to acquire a stable tracking pose and fully utilize information obtained by sensors of the vision and IMU, the invention fuses the vision pose obtained by a vision image and a pose prediction result obtained by IMU integration by using a Kalman fusion method so as to realize information complementation and target state estimation of two heterogeneous sensors, thereby acquiring a more accurate and reliable pose after fusion. And then, carrying out motion tracking according to the fused pose information.
In the invention, the map is initialized; after successful initialization, images are acquired for continuous tracking and pose estimation, while IMU data are acquired for integral pose prediction, and the two are fused under an extended Kalman filter (EKF) framework to obtain a stable pose estimate. Aimed at the motion tracking problem of current mobile VR, the invention uses VIO with the camera and IMU already built into the mobile terminal, combining visual and inertial measurements under the EKF framework to accurately estimate pose and absolute scale. A fast and stable motion tracking method for mobile VR is thus achieved. Compared with the prior art, the method obtains more stable motion tracking and realizes real-time online scale estimation.
As an improvement of the present invention, processing the acquired image by utilizing the ORB algorithm, and then performing 3D reconstruction to obtain an initial map point, and completing map initialization includes:
extracting feature points of the acquired first frame image by using an ORB algorithm, calculating a descriptor, recording the first frame as a key frame, and marking the absolute pose of the camera;
after the camera translates one end distance, extracting feature points and calculating a descriptor by adopting an ORB algorithm on the acquired image, matching the feature points with the feature points of the first frame image, recording a second frame as a key frame, and calculating the relative pose of the camera under the second frame relative to the first frame;
and 3D reconstruction is carried out on the successfully matched feature point set to obtain an initial map point.
In view of this improvement, the present invention provides a complete and specific embodiment as follows:
1. A first frame image is acquired; feature points are extracted using the locally invariant ORB (Oriented FAST and Rotated BRIEF) algorithm and descriptors are computed. The first frame is a key frame, and the absolute pose of the camera is marked as $[R_{(0,k)}|t_{(0,k)}]$, where the subscript $(0,k)$ denotes the absolute pose of the k-th frame, so that $[R_{(0,0)}|t_{(0,0)}]=[I|0]$;
2. After the camera translates a certain distance, an image is collected again, feature points are extracted with the ORB algorithm, and descriptors are computed. After successful matching with the feature points of the first frame image, this frame is marked as a key frame, and the relative pose of the camera at the second frame with respect to the first frame is calculated as $[R_{(0,1)}|t_{(0,1)}]=[R|t]$;
3. 3D reconstruction is performed on the successfully matched feature point set to obtain the initial map points.
In this embodiment, the ORB algorithm is used to extract features and directly match and estimate the pose. ORB is an algorithmic improvement combining FAST corner detection with BRIEF feature description, balancing efficiency and precision in the monocular visual tracking process.
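For concreteness, a two-frame initialization along these lines can be sketched with OpenCV's ORB implementation, which combines FAST detection with rotated BRIEF descriptors; the file names and parameter values below are placeholders, not values taken from the patent.

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)  # assumed feature budget

img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # first key frame
img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # after translating

kp1, des1 = orb.detectAndCompute(img1, None)  # feature points + descriptors
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check, best matches first
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
```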
As a further improvement of the present invention, calculating the relative pose of the camera at the second frame with respect to the first frame includes:
calculating the fundamental matrix between the two frames of images from the corresponding matched feature point sets on the first and second frame images;
calculating the essential matrix from the fundamental matrix and the intrinsic parameters of the camera;
and performing singular value decomposition on the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
For further improvement of the invention, the invention provides the following complete specific examples:
① After the camera translates a certain distance, feature points are extracted and descriptors computed for the second frame image using the ORB algorithm; after successful matching with the feature points of the first frame image, the corresponding matched feature point sets $(X_L, X_R)$ on the two key frames are obtained;
② the fundamental matrix $F$ is calculated from the epipolar constraint $X_L^T F X_R = 0$;
③ from the relation between the fundamental matrix $F$ and the essential matrix $E$, $E = K_L^T F K_R$, where $(K_L, K_R)$ are the intrinsic parameters of the respective cameras, which can be calibrated in advance and satisfy $K_L = K_R$ for a single camera, the essential matrix $E$ is obtained; the essential matrix is related only to the extrinsic parameters of the camera and not to its intrinsic parameters;
④ according to $E = [t]_\times R$, where $[t]_\times$ is the skew-symmetric matrix of the translation $t = (t_x, t_y, t_z)^T$ and $R$ is the rotation matrix, $R$ and $t$ can be computed by applying singular value decomposition (SVD) to $E$, and the relative pose of the camera at the second frame with respect to the first frame is $[R_{(0,1)}|t_{(0,1)}] = [R|t]$.
In this embodiment, as the camera moves, a series of relative poses corresponding to each frame can be acquired in sequence.
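A minimal sketch of steps ② to ④, assuming OpenCV, matched point sets given as Nx2 float pixel arrays, and a pre-calibrated intrinsic matrix K (with K_L = K_R); note that cv2.recoverPose performs the SVD-based decomposition of E and the cheirality check internally.

```python
import cv2

def relative_pose(X_L, X_R, K):
    """Recover [R|t] of the second key frame w.r.t. the first (illustrative)."""
    # Step 2: fundamental matrix F from X_L^T F X_R = 0, with RANSAC outlier rejection
    F, mask = cv2.findFundamentalMat(X_L, X_R, cv2.FM_RANSAC)
    # Step 3: essential matrix from the intrinsics, E = K^T F K
    E = K.T @ F @ K
    # Step 4: decompose E into R and t via SVD (done inside recoverPose)
    _, R, t, _ = cv2.recoverPose(E, X_L, X_R, K)
    return R, t
```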
As a further improvement of the invention, the method for performing visual tracking by using an ORB algorithm to match in real time and parallel local key frame mapping comprises the following steps:
rasterizing the current image frame with the ORB algorithm to extract image feature points and compute descriptors;
estimating the corresponding camera pose of the current frame by adopting a constant speed motion model, projecting all map points of the previous frame of image onto the current image frame, matching the feature points, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the pose and the map point of the current frame by adopting an LM algorithm and Huber estimation;
and projecting all map points of the local key frame onto the current image frame according to the updated pose, matching the feature points, assigning all the map points successfully matched to the corresponding feature points of the current frame after the matching is successful, and updating the pose of the current frame and the map points of the current frame again by using an LM algorithm and Huber estimation.
In accordance with the further development of the invention described above, as shown in fig. 2, for the current frame of the image (the k-th frame image $I_k$), a specific example of the tracking steps is as follows:
(1) The ORB algorithm is used to extract image feature points and compute descriptors over rasterized regions (the image is divided evenly into a series of grids of equal size). Rasterized extraction ensures that the feature points are extracted uniformly across the image, improving the stability and precision of subsequent tracking;
(2) The camera pose corresponding to the current frame is estimated with a constant-velocity motion model. All map points of the previous frame image $I_{k-1}$ are projected onto the current image frame; the feature points are matched, and the successfully matched map points of the previous frame are assigned to the corresponding feature points of the current frame;
(3) updating the pose and the map point of the current frame by using an LM (Levenberg-Marquardt) algorithm and a Huber estimation;
(4) According to the updated pose, all map points of the local key frames (excluding the map points already matched in (2)) are projected onto the current image frame and feature point matching is performed. After successful matching, all matched map points are assigned to the corresponding feature points of the current frame, and the current frame pose $[R_{(0,k)}|t_{(0,k)}]$ and the current frame map points are updated again using the LM algorithm and Huber estimation.
In this embodiment, visual tracking is performed through ORB-algorithm real-time matching and parallel local key frame mapping to obtain the visual pose. Real-time matching and pose estimation with the ORB algorithm form the tracking thread, while maintenance and mapping of local key frames form the local key frame thread. Processing the tracking thread and the local key frame thread in parallel realizes efficient real-time tracking.
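Steps (1) and (2) of the tracking thread can be illustrated as below; the 4x4 world-to-camera pose convention and the helper names are assumptions made for the sketch, not definitions from the patent.

```python
import numpy as np

def predict_pose(T_prev, T_prev2):
    """Constant-velocity motion model: reapply the last inter-frame motion.

    T_prev, T_prev2 -- 4x4 world-to-camera poses of frames k-1 and k-2.
    """
    delta = T_prev @ np.linalg.inv(T_prev2)   # motion from frame k-2 to k-1
    return delta @ T_prev                     # predicted pose of frame k

def project_map_points(points_w, T_wc, K):
    """Project Nx3 world-frame map points into the predicted camera frame."""
    pts_h = np.hstack([points_w, np.ones((len(points_w), 1))])
    pts_c = (T_wc @ pts_h.T).T[:, :3]         # camera-frame coordinates
    uv = (K @ pts_c.T).T
    return uv[:, :2] / uv[:, 2:3]             # pixel coordinates for matching
```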
As an improvement of the present invention, the method of performing the visual tracking by using the ORB algorithm to perform real-time matching and parallel local key frame mapping, and obtaining the visual pose may further include:
judging whether a key frame needs to be added according to a time-interval condition and/or the number of map points of the current frame, and adding a new key frame if more than a set time has elapsed since the last key frame was added or the number of map points of the current frame is below a threshold;
judging whether the current frame is a new key frame, and if so, adding new map points: matching all feature points of the new key frame that have no associated map points against all feature points in the local key frames, and performing 3D reconstruction after successful matching to obtain new map points;
performing local bundle adjustment optimization to correct accumulated errors and obtain optimized poses and map points.
For such improvement, the present invention provides the following complete specific embodiments:
1) Adding a new key frame: whether a key frame needs to be added is judged from the time dimension and the number of map points of the current frame. When more than a set time has elapsed since the last key frame was added, or the number of map points of the current frame is below a threshold, a new key frame is added;
2) If the current frame is a new key frame, new map points are added: all feature points of the new key frame that have no associated map points are matched against all feature points in the local key frames, and new map points are obtained by 3D reconstruction after successful matching;
3) in order to ensure the tracking efficiency and the tracking continuity, the number of local key frames is controlled, and when the number of the key frames is greater than a threshold value, the key frame which is added into the local key frames at the earliest is deleted;
4) Local bundle adjustment (Bundle Adjustment) optimization is performed to correct accumulated errors, yielding optimized poses and map points.
In this embodiment, steps 1) to 4) may be placed in the local key frame thread (the thread corresponding to step (4) in the embodiment above) for parallel processing to improve efficiency. Repeating (1) to (4) and 1) to 4) of the above embodiments enables continuous tracking.
In the embodiment, the tracking continuity can be ensured, the number of key frames needing to be processed can be reduced, the processing time is reduced, and the motion tracking efficiency is improved.
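The key frame policy of steps 1) and 3) amounts to a few lines of bookkeeping; the thresholds and the num_map_points attribute below are hypothetical, since the patent leaves the concrete values open.

```python
MAX_KEYFRAMES = 10      # assumed local-window size
MIN_MAP_POINTS = 50     # assumed map-point threshold
MAX_FRAME_GAP = 20      # assumed frames allowed since the last key frame

def maybe_insert_keyframe(local_keyframes, frame, frames_since_kf):
    """Insert a key frame when tracking is stale or weak; cap the window."""
    need_kf = (frames_since_kf > MAX_FRAME_GAP
               or frame.num_map_points < MIN_MAP_POINTS)
    if need_kf:
        local_keyframes.append(frame)
        if len(local_keyframes) > MAX_KEYFRAMES:
            local_keyframes.pop(0)  # delete the earliest-added key frame
    return need_kf
```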
In the present invention, the Kalman fusion of the visual pose and the IMU pose may be implemented by various methods known to those skilled in the art; preferably, it may be performed with reference to the following embodiment:
For convenience of description, the subscripts w, i, v, c are defined to represent the world coordinate system, the IMU coordinate system, the visual coordinate system, and the camera coordinate system respectively, as shown in fig. 3; the coordinate system definitions are shown in fig. 4.
step 1: assuming that the inertial measurement includes a specific bias b and white gaussian noise n, the actual angular velocity ω and the actual acceleration a are as follows:
$\omega = \omega_m - b_\omega - n_\omega, \quad a = a_m - b_a - n_a$
where the subscript $m$ denotes the measured value. The dynamic bias $b$ can be expressed as a random process:

$\dot{b}_\omega = n_{b_\omega}, \quad \dot{b}_a = n_{b_a}$
the state of the filter includes the position of the IMU in the world coordinate systemAnd the speed of the world coordinate system relative to the IMU coordinate systemAnd attitude four-elementAt the same time, there is also the gyro and accelerometer bias bω,baAnd a visual scale factor λ. And calibrating the rotational relationship between the obtained IMU and the cameraTranslation relationA state vector X comprising 24 elements can thus be obtained, as shown in the prediction module of fig. 5.
X={pwiTvwiTqwiTbωTbaTλpicqic}
Step 2: in the state description above, the attitude is represented by a quaternion, so a quaternion error is used to represent the attitude error and its covariance; this improves numerical stability and gives a minimal representation. An error state vector of 22 elements is therefore defined:

$\tilde{x} = \{\Delta p_{wi}^T, \Delta v_{wi}^T, \delta\theta_{wi}^T, \Delta b_\omega^T, \Delta b_a^T, \Delta\lambda, \Delta p_{ic}^T, \delta\theta_{ic}^T\}$
For an estimated value $\hat{x}$ and its true value $x$, the error is defined as $\tilde{x} = x - \hat{x}$. This definition is used for all state variables except the quaternions, whose error is defined multiplicatively:

$\delta q_{wi} = q_{wi} \otimes \hat{q}_{wi}^{-1} \approx [\tfrac{1}{2}\delta\theta_{wi}^T, 1]^T, \quad \delta q_{ic} = q_{ic} \otimes \hat{q}_{ic}^{-1} \approx [\tfrac{1}{2}\delta\theta_{ic}^T, 1]^T$
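For concreteness, the multiplicative quaternion error can be computed as below, assuming [x, y, z, w] storage and the Hamilton product; this is an illustrative helper rather than code from the patent.

```python
import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions stored as [x, y, z, w]."""
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
                     w1 * y2 + y1 * w2 + z1 * x2 - x1 * z2,
                     w1 * z2 + z1 * w2 + x1 * y2 - y1 * x2,
                     w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2])

def error_quaternion(q, q_hat):
    """delta_q = q ⊗ q_hat^{-1}; for small errors approx [0.5*dtheta, 1]."""
    q_hat_inv = np.array([-q_hat[0], -q_hat[1], -q_hat[2], q_hat[3]])  # unit quat
    dq = quat_mul(q, q_hat_inv)
    dtheta = 2.0 * dq[:3]          # small-angle attitude error vector
    return dq, dtheta
```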
from this, a linearized equation for the continuous error state can be obtained:
$\dot{\tilde{x}} = F_c \tilde{x} + G_c n$
where $n$ is the noise vector. Since the speed of the algorithm is of particular concern in the present solution, $F_c$ and $G_c$ are assumed constant over the integration time between two adjacent states. Discretizing this representation:

$F_d = \exp(F_c \Delta t) = I_d + F_c \Delta t + \tfrac{1}{2} F_c^2 \Delta t^2 + \dots$

Meanwhile, the discrete-time covariance matrix $Q_d$ can be obtained through integration:

$Q_d = \int_{\Delta t} F_d(\tau) \, G_c Q_c G_c^T \, F_d(\tau)^T \, d\tau$
With the computed $F_d$ and $Q_d$, the state covariance matrix is obtained according to the Kalman filter prediction:

$P_{k+1|k} = F_d P_{k|k} F_d^T + Q_d$
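The prediction step might be implemented as in the sketch below, assuming $F_c$ and $G_c$ constant over the interval as stated above; truncating the exponential series at second order and approximating the $Q_d$ integral with a single evaluation are simplifications chosen for brevity.

```python
import numpy as np

def ekf_predict_covariance(F_c, G_c, Q_c, P, dt, order=2):
    """Discretize the error-state model and propagate the covariance."""
    n = F_c.shape[0]
    # F_d = exp(F_c*dt) ~= I + F_c*dt + 0.5*(F_c*dt)^2 (truncated series)
    F_d = np.eye(n)
    term = np.eye(n)
    for k in range(1, order + 1):
        term = term @ (F_c * dt) / k
        F_d = F_d + term
    # Crude one-point approximation of Q_d = integral of F_d G_c Q_c G_c^T F_d^T
    Q_d = F_d @ G_c @ Q_c @ G_c.T @ F_d.T * dt
    # P_{k+1|k} = F_d P_{k|k} F_d^T + Q_d
    return F_d @ P @ F_d.T + Q_d, F_d
```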
Step 3: for the camera position measurement, the pose $[R_{(0,k)}|t_{(0,k)}]$ obtained from the visual tracking of the single camera provides the position vector $p_{vc}$ and rotation quaternion $q_{vc}$ describing the camera pose, from which the corresponding measured position is obtained. This yields the measurement model:

$z_p = p_{vc} = C(q_{vw})^T (p_{wi} + C(q_{wi})^T p_{ic})\,\lambda + n_p$

where $p_{wi}$ is the position of the IMU in the world coordinate system and $q_{vw}$ is the rotation of the visual coordinate system relative to the world coordinate system.
Step 4: the position measurement error model is defined as:

$\tilde{z}_p = z_p - \hat{z}_p = C(q_{vw})^T (p_{wi} + C(q_{wi})^T p_{ic})\,\lambda + n_p - C(q_{vw})^T (\hat{p}_{wi} + C(\hat{q}_{wi})^T \hat{p}_{ic})\,\hat{\lambda}$
The rotation measurement error model is defined as:

$\tilde{z}_q = z_q \otimes \hat{z}_q^{-1} = (q_{ic} \otimes q_{wi} \otimes q_{vw}) \otimes (\hat{q}_{ic} \otimes \hat{q}_{wi} \otimes q_{vw})^{-1}$

whose linearization yields the terms $\tilde{H}_{q_{wi}}\,\delta\theta_{wi}$ and $\tilde{H}_{q_{ic}}\,\delta\theta_{ic}$, where $\tilde{H}_{q_{wi}}$ and $\tilde{H}_{q_{ic}}$ are the error measurement matrices of the error states $\delta\theta_{wi}$ and $\delta\theta_{ic}$ respectively. Finally, the measurements can be stacked into one measurement matrix:

$\begin{bmatrix} \tilde{z}_p \\ \tilde{z}_q \end{bmatrix} = \begin{bmatrix} H_p \\ 0_{3\times 6} \;\; \tilde{H}_{q_{wi}} \;\; 0_{3\times 10} \;\; \tilde{H}_{q_{ic}} \end{bmatrix} \tilde{x} = H \tilde{x}$
Step 5: once the measurement matrix $H$ is acquired, the update can be carried out according to the steps of the Kalman filter, as shown by the update block in fig. 5.
Calculating the residual vector: $\tilde{z} = z - \hat{z}$;

calculating the innovation covariance: $S = H P H^T + R$;

calculating the Kalman gain: $K = P H^T S^{-1}$;

calculating the correction: $\tilde{x} = K \tilde{z}$. From the correction $\tilde{x}$, the update of the state $X$ can be computed, and the error-state quaternions are updated accordingly. The covariance is updated as:

$P_{k+1|k+1} = (I_d - KH) P_{k+1|k} (I_d - KH)^T + K R K^T$
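Collecting step 5 into one routine gives the generic error-state EKF update below; the Joseph-form covariance update matches the last equation above. This is a sketch of the standard filter equations, not the patent's exact implementation.

```python
import numpy as np

def ekf_update(P, H, R, z, z_hat):
    """One EKF measurement update: returns the correction and new covariance."""
    residual = z - z_hat                     # residual vector
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    dx = K @ residual                        # error-state correction
    I_KH = np.eye(P.shape[0]) - K @ H
    P_new = I_KH @ P @ I_KH.T + K @ R @ K.T  # Joseph form, numerically safer
    return dx, P_new
```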
through the monocular tracking and the IMU fusion, stable attitude output of a mobile terminal is obtained, and stable motion tracking is further realized.
The above embodiment is only an example of kalman fusion performed by the visual pose and the IMU pose of the present invention, and other methods known to those skilled in the art may be adopted in addition to this embodiment to achieve the technical effects of the present invention.
In the method embodiments of the present invention, the step numbers are not intended to limit the order of the steps; for those skilled in the art, changes to the order of the steps made without creative effort are also within the scope of the invention.
On the other hand, corresponding to the above method, the present invention further provides a stable motion tracking apparatus with a smart phone monocular and IMU integrated, as shown in fig. 7, including:
the map initialization module 11 is configured to process the acquired image by using an ORB algorithm, and then perform 3D reconstruction to obtain an initial map point and complete map initialization;
the visual tracking module 12 is used for performing visual tracking in a mode of matching in real time and parallel local key frame mapping by using an ORB algorithm to obtain a visual pose;
the IMU pose calculation module 13 is used for acquiring acceleration and angular velocity values generated by the IMU in three-dimensional space and performing integral operation on them to obtain an IMU pose prediction result;
and the fusion module 14 is used for performing Kalman fusion on the visual pose and the IMU pose prediction result and performing motion tracking according to the pose information obtained after fusion.
Compared with the prior art, the method has the characteristics of acquiring a more stable motion tracking mode and realizing real-time online estimation of the scale.
As an improvement of the present invention, the map initialization module 11 is further configured to:
extracting feature points of the acquired first frame image by using an ORB algorithm, calculating a descriptor, recording the first frame as a key frame, and marking the absolute pose of the camera;
after the camera translates a certain distance, extracting feature points from the newly acquired image with the ORB algorithm and computing descriptors, matching them with the feature points of the first frame image, recording the second frame as a key frame, and calculating the relative pose of the camera at the second frame with respect to the first frame;
and 3D reconstruction is carried out on the successfully matched feature point set to obtain an initial map point.
In the invention, the ORB algorithm is used to extract features and directly match and estimate the pose. ORB is an algorithmic improvement combining FAST corner detection with BRIEF feature description, balancing efficiency and precision in the monocular visual tracking process.
As a further improvement of the present invention, calculating the relative pose of the camera at the second frame with respect to the first frame includes:
calculating the fundamental matrix between the two frames of images from the corresponding matched feature point sets on the first and second frame images;
calculating the essential matrix from the fundamental matrix and the intrinsic parameters of the camera;
and performing singular value decomposition on the essential matrix to obtain the relative pose of the camera at the second frame with respect to the first frame.
According to the invention, a series of relative poses corresponding to each frame of picture can be sequentially obtained in the moving process of the camera.
As a further improvement of the present invention, the visual tracking module 12 is further configured to:
rasterizing the current image frame with the ORB algorithm to extract image feature points and compute descriptors;
estimating the corresponding camera pose of the current frame by adopting a constant speed motion model, projecting all map points of the previous frame of image onto the current image frame, matching the feature points, and assigning the successfully matched map points of the previous frame to the corresponding feature points of the current frame;
updating the pose and the map point of the current frame by adopting an LM algorithm and Huber estimation;
and projecting all map points of the local key frame onto the current image frame according to the updated pose, matching the feature points, assigning all the map points successfully matched to the corresponding feature points of the current frame after the matching is successful, and updating the pose of the current frame and the map points of the current frame again by using an LM algorithm and Huber estimation.
In the invention, visual tracking is performed through ORB-algorithm real-time matching and parallel local key frame mapping to obtain the visual pose. Real-time matching and pose estimation with the ORB algorithm form the tracking thread, while maintenance and mapping of local key frames form the local key frame thread. Processing these two threads in parallel realizes efficient real-time tracking.
As an improvement of the present invention, the visual tracking module 12 is further configured to:
judging whether a key frame needs to be added according to a time-interval condition and/or the number of map points of the current frame, and adding a new key frame if more than a set time has elapsed since the last key frame was added or the number of map points of the current frame is below a threshold;
judging whether the current frame is a new key frame, and if so, adding new map points: matching all feature points of the new key frame that have no associated map points against all feature points in the local key frames, and performing 3D reconstruction after successful matching to obtain new map points;
performing local bundle adjustment optimization to correct accumulated errors and obtain optimized poses and map points.
The invention can ensure the tracking continuity, reduce the number of key frames to be processed, reduce the processing time and improve the motion tracking efficiency.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

CN201610346493.3A (priority date 2016-05-23, filing date 2016-05-23): Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone, Pending, published as CN105953796A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610346493.3A | 2016-05-23 | 2016-05-23 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610346493.3A | 2016-05-23 | 2016-05-23 | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone

Publications (1)

Publication Number | Publication Date
CN105953796A | 2016-09-21

Family

ID=56909351

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201610346493.3A (Pending, CN105953796A (en)) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone | 2016-05-23 | 2016-05-23

Country Status (1)

Country | Link
CN (1) | CN105953796A (en)



Cited By (125)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106570820A (en)* | 2016-10-18 | 2017-04-19 | 浙江工业大学 | Monocular visual 3D feature extraction method based on four-rotor unmanned aerial vehicle (UAV)
CN106570820B (en)* | 2016-10-18 | 2019-12-03 | 浙江工业大学 | A monocular vision 3D feature extraction method based on quadrotor UAV
WO2018077176A1 (en)* | 2016-10-26 | 2018-05-03 | 北京小鸟看看科技有限公司 | Wearable device and method for determining user displacement in wearable device
CN106546238A (en)* | 2016-10-26 | 2017-03-29 | 北京小鸟看看科技有限公司 | Wearable device and the method that user's displacement is determined in wearable device
CN106548486A (en)* | 2016-11-01 | 2017-03-29 | 浙江大学 | A kind of unmanned vehicle location tracking method based on sparse visual signature map
CN106548486B (en)* | 2016-11-01 | 2024-02-27 | 浙江大学 | Unmanned vehicle position tracking method based on sparse visual feature map
CN106556391A (en)* | 2016-11-25 | 2017-04-05 | 上海航天控制技术研究所 | A kind of fast vision measuring method based on multi-core DSP
CN106595570A (en)* | 2016-12-16 | 2017-04-26 | 杭州奥腾电子股份有限公司 | Vehicle single camera and six-axis sensor combination range finding system and range finding method thereof
CN108225345A (en)* | 2016-12-22 | 2018-06-29 | 乐视汽车(北京)有限公司 | The pose of movable equipment determines method, environmental modeling method and device
CN106767785A (en)* | 2016-12-23 | 2017-05-31 | 成都通甲优博科技有限责任公司 | The air navigation aid and device of a kind of double loop unmanned plane
CN106767785B (en)* | 2016-12-23 | 2020-04-07 | 成都通甲优博科技有限责任公司 | Navigation method and device of double-loop unmanned aerial vehicle
CN110140100B (en)* | 2017-01-02 | 2020-02-28 | 摩致实验室有限公司 | Three-dimensional augmented reality object user interface functionality
CN110140100A (en)* | 2017-01-02 | 2019-08-16 | 摩致实验室有限公司 | Three-dimensional enhanced reality object user's interface function
CN106885574B (en)* | 2017-02-15 | 2020-02-07 | 北京大学深圳研究生院 | Monocular vision robot synchronous positioning and map construction method based on re-tracking strategy
CN106885574A (en)* | 2017-02-15 | 2017-06-23 | 北京大学深圳研究生院 | A kind of monocular vision robot synchronous superposition method based on weight tracking strategy
CN106920279B (en)* | 2017-03-07 | 2018-06-19 | 百度在线网络技术(北京)有限公司 | Three-dimensional map construction method and device
CN107194968A (en)* | 2017-05-18 | 2017-09-22 | 腾讯科技(上海)有限公司 | Recognition and tracking method, device, intelligent terminal and the readable storage medium storing program for executing of image
CN107194968B (en)* | 2017-05-18 | 2024-01-16 | 腾讯科技(上海)有限公司 | Image recognition and tracking method, device, smart terminal and readable storage medium
CN110517319A (en)* | 2017-07-07 | 2019-11-29 | 腾讯科技(深圳)有限公司 | A kind of method and relevant apparatus that camera posture information is determining
CN107687850A (en)* | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit
CN107246868B (en)* | 2017-07-26 | 2021-11-02 | 上海舵敏智能科技有限公司 | Collaborative navigation positioning system and navigation positioning method
CN107246868A (en)* | 2017-07-26 | 2017-10-13 | 上海舵敏智能科技有限公司 | A kind of collaborative navigation alignment system and navigation locating method
CN107687850B (en)* | 2017-07-26 | 2021-04-23 | 哈尔滨工业大学深圳研究生院 | A Pose and Attitude Estimation Method for Unmanned Aircraft Based on Vision and Inertial Measurement Unit
CN111052183B (en)* | 2017-09-04 | 2024-05-03 | 苏黎世大学 | Visual Inertial Odometry with Event Camera
CN107748569B (en)* | 2017-09-04 | 2021-02-19 | 中国兵器工业计算机应用技术研究所 | Motion control method and device for unmanned aerial vehicle and unmanned aerial vehicle system
CN107748569A (en)* | 2017-09-04 | 2018-03-02 | 中国兵器工业计算机应用技术研究所 | Motion control method, device and UAS for unmanned plane
CN111052183A (en)* | 2017-09-04 | 2020-04-21 | 苏黎世大学 | Visual-Inertial Odometry Using Event Cameras
CN110520694A (en)* | 2017-10-31 | 2019-11-29 | 深圳市大疆创新科技有限公司 | A kind of visual odometry and its implementation
CN109752717B (en)* | 2017-11-07 | 2023-10-17 | 现代自动车株式会社 | Apparatus and method for correlating sensor data in a vehicle
CN109752717A (en)* | 2017-11-07 | 2019-05-14 | 现代自动车株式会社 | Apparatus and method for correlating sensor data in a vehicle
CN107909614B (en)* | 2017-11-13 | 2021-02-26 | 中国矿业大学 | A positioning method of inspection robot under GPS failure environment
CN107909614A (en)* | 2017-11-13 | 2018-04-13 | 中国矿业大学 | Crusing robot localization method under a kind of GPS failures environment
CN108022302A (en)* | 2017-12-01 | 2018-05-11 | 深圳市天界幻境科技有限公司 | A kind of sterically defined AR 3 d display devices of Inside-Out
CN108022302B (en)* | 2017-12-01 | 2021-06-29 | 深圳市天界幻境科技有限公司 | Stereo display device of Inside-Out space orientation's AR
CN112074705A (en)* | 2017-12-18 | 2020-12-11 | Alt有限责任公司 | Method and system for optical inertial tracking of moving object
CN109978943A (en)* | 2017-12-28 | 2019-07-05 | 深圳市优必选科技有限公司 | Working method and system of moving lens and device with storage function
CN109978943B (en)* | 2017-12-28 | 2021-06-04 | 深圳市优必选科技有限公司 | Working method and system of moving lens and device with storage function
CN113139456A (en)* | 2018-02-05 | 2021-07-20 | 浙江商汤科技开发有限公司 | Electronic equipment state tracking method and device, electronic equipment and control system
CN108364319B (en)* | 2018-02-12 | 2022-02-01 | 腾讯科技(深圳)有限公司 | Dimension determination method and device, storage medium and equipment
CN108364319A (en)* | 2018-02-12 | 2018-08-03 | 腾讯科技(深圳)有限公司 | Scale determines method, apparatus, storage medium and equipment
WO2019157925A1 (en)* | 2018-02-13 | 2019-08-22 | 视辰信息科技(上海)有限公司 | Visual-inertial odometry implementation method and system
CN108759826A (en)* | 2018-04-12 | 2018-11-06 | 浙江工业大学 | A kind of unmanned plane motion tracking method based on mobile phone and the more parameter sensing fusions of unmanned plane
CN108759826B (en)* | 2018-04-12 | 2020-10-27 | 浙江工业大学 | A UAV motion tracking method based on the fusion of multi-sensing parameters of mobile phones and UAVs
CN108648235A (en)* | 2018-04-27 | 2018-10-12 | 腾讯科技(深圳)有限公司 | Relocation method, device and storage medium for camera pose tracking process
CN108648235B (en)* | 2018-04-27 | 2022-05-17 | 腾讯科技(深圳)有限公司 | Relocation method, device and storage medium for camera attitude tracking process
US11205282B2 (en) | 2018-04-27 | 2021-12-21 | Tencent Technology (Shenzhen) Company Limited | Relocalization method and apparatus in camera pose tracking process and storage medium
CN108871311A (en)* | 2018-05-31 | 2018-11-23 | 北京字节跳动网络技术有限公司 | Pose determines method and apparatus
CN108871311B (en)* | 2018-05-31 | 2021-01-19 | 北京字节跳动网络技术有限公司 | Pose determination method and device
WO2020024182A1 (en)* | 2018-08-01 | 2020-02-06 | 深圳市大疆创新科技有限公司 | Parameter processing method and apparatus, camera device and aircraft
CN109089100B (en)* | 2018-08-13 | 2020-10-23 | 西安理工大学 | Method for synthesizing binocular stereo video
CN109089100A (en)* | 2018-08-13 | 2018-12-25 | 西安理工大学 | A kind of synthetic method of binocular tri-dimensional video
CN110874100A (en)* | 2018-08-13 | 2020-03-10 | 北京京东尚科信息技术有限公司 | System and method for autonomous navigation using visual sparse maps
CN110874100B (en)* | 2018-08-13 | 2024-07-19 | 北京京东乾石科技有限公司 | System and method for autonomous navigation using visual sparse maps
CN109307508B (en)* | 2018-08-29 | 2022-04-08 | 中国科学院合肥物质科学研究院 | Panoramic inertial navigation SLAM method based on multiple key frames
CN109307508A (en)* | 2018-08-29 | 2019-02-05 | 中国科学院合肥物质科学研究院 | A Panoramic Inertial Navigation SLAM Method Based on Multiple Keyframes
CN108829116A (en)* | 2018-10-09 | 2018-11-16 | 上海岚豹智能科技有限公司 | Barrier-avoiding method and equipment based on monocular cam
CN109079799A (en)* | 2018-10-23 | 2018-12-25 | 哈尔滨工业大学(深圳) | It is a kind of based on bionical robot perception control system and control method
CN109079799B (en)* | 2018-10-23 | 2021-11-12 | 哈尔滨工业大学(深圳) | Robot perception control system and control method based on bionics
CN109376785B (en)* | 2018-10-31 | 2021-09-24 | 东南大学 | A Navigation Method Based on Iterative Extended Kalman Filter Fusion Inertial and Monocular Vision
CN109376785A (en)* | 2018-10-31 | 2019-02-22 | 东南大学 | A Navigation Method Based on Iterative Extended Kalman Filter Fusion Inertial and Monocular Vision
CN109671120A (en)* | 2018-11-08 | 2019-04-23 | 南京华捷艾米软件科技有限公司 | A kind of monocular SLAM initial method and system based on wheel type encoder
CN109631894A (en)* | 2018-12-11 | 2019-04-16 | 智灵飞(北京)科技有限公司 | A kind of monocular vision inertia close coupling method based on sliding window
CN109739079B (en)* | 2018-12-25 | 2022-05-10 | 九天创新(广东)智能科技有限公司 | Method for improving VSLAM system precision
CN109739079A (en)* | 2018-12-25 | 2019-05-10 | 广东工业大学 | A Method to Improve the Accuracy of VSLAM System
CN109712170B (en)* | 2018-12-27 | 2021-09-07 | 广东省智能制造研究所 | Environmental object tracking method and device based on visual inertial odometry
CN109727287B (en)* | 2018-12-27 | 2023-08-08 | 江南大学 | An improved registration method and system suitable for augmented reality
CN109727287A (en)* | 2018-12-27 | 2019-05-07 | 江南大学 | An improved registration method and system for augmented reality
CN109712170A (en)* | 2018-12-27 | 2019-05-03 | 广东省智能制造研究所 | Environmental objects method for tracing, device, computer equipment and storage medium
CN109887029A (en)* | 2019-01-17 | 2019-06-14 | 江苏大学 | A monocular visual odometry method based on image color features
CN110009739A (en)* | 2019-01-29 | 2019-07-12 | 浙江省北大信息技术高等研究院 | Extraction and Coding Method of Motion Features of Digital Retina of Mobile Cameras
CN110006423B (en)* | 2019-04-04 | 2020-11-06 | 北京理工大学 | Self-adaptive inertial navigation and visual combined navigation method
CN110006423A (en)* | 2019-04-04 | 2019-07-12 | 北京理工大学 | An Adaptive Inertial Navigation and Vision Combined Navigation Method
CN113632135A (en)* | 2019-04-30 | 2021-11-09 | 三星电子株式会社 | System and method for low latency, high performance pose fusion
CN110095752A (en)* | 2019-05-07 | 2019-08-06 | 百度在线网络技术(北京)有限公司 | Localization method, device, equipment and medium
CN109900294A (en)* | 2019-05-13 | 2019-06-18 | 奥特酷智能科技(南京)有限公司 | Vision inertia odometer based on hardware accelerator
CN110147164A (en)* | 2019-05-22 | 2019-08-20 | 京东方科技集团股份有限公司 | Head motion tracking method, device, system and storage medium
CN110211239B (en)* | 2019-05-30 | 2022-11-08 | 杭州远传新业科技股份有限公司 | Augmented reality method, apparatus, device and medium based on label-free recognition
CN110211239A (en)* | 2019-05-30 | 2019-09-06 | 杭州远传新业科技有限公司 | Augmented reality method, apparatus, equipment and medium based on unmarked identification
CN110196047A (en)* | 2019-06-20 | 2019-09-03 | 东北大学 | Robot autonomous localization method of closing a position based on TOF depth camera and IMU
CN110361005A (en)* | 2019-06-26 | 2019-10-22 | 深圳前海达闼云端智能科技有限公司 | Positioning method, positioning device, readable storage medium and electronic equipment
CN110243381B (en)* | 2019-07-11 | 2020-10-30 | 北京理工大学 | A land-air robot collaborative sensing monitoring method
CN110243381A (en)* | 2019-07-11 | 2019-09-17 | 北京理工大学 | A collaborative sensing and monitoring method for ground-air robots
CN110490900B (en)* | 2019-07-12 | 2022-04-19 | 中国科学技术大学 | Binocular vision localization method and system in dynamic environment
CN110319772A (en)* | 2019-07-12 | 2019-10-11 | 上海电力大学 | Visual large-span distance measurement method based on unmanned aerial vehicle
CN110490900A (en)* | 2019-07-12 | 2019-11-22 | 中国科学技术大学 | Binocular visual positioning method and system under dynamic environment
CN112288803A (en)* | 2019-07-25 | 2021-01-29 | 阿里巴巴集团控股有限公司 | Positioning method and device for computing equipment
CN112581514A (en)* | 2019-09-30 | 2021-03-30 | 浙江商汤科技开发有限公司 | Map construction method and device and storage medium
CN111076733A (en)* | 2019-12-10 | 2020-04-28 | 亿嘉和科技股份有限公司 | Robot indoor map building method and system based on vision and laser slam
CN111307165A (en)* | 2020-03-06 | 2020-06-19 | 新石器慧通(北京)科技有限公司 | A vehicle positioning method, positioning system and unmanned vehicle
CN113409368B (en)* | 2020-03-16 | 2023-11-03 | 北京京东乾石科技有限公司 | Mapping method and device, computer-readable storage medium, electronic equipment
CN113409368A (en)* | 2020-03-16 | 2021-09-17 | 北京京东乾石科技有限公司 | Drawing method and device, computer readable storage medium and electronic equipment
CN112639883B (en)* | 2020-03-17 | 2021-11-19 | 华为技术有限公司 | Relative attitude calibration method and related device
CN112639883A (en)* | 2020-03-17 | 2021-04-09 | 华为技术有限公司 | Relative attitude calibration method and related device
CN111462179A (en)* | 2020-03-26 | 2020-07-28 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method, device and electronic device
CN111462179B (en)* | 2020-03-26 | 2023-06-27 | 北京百度网讯科技有限公司 | Three-dimensional object tracking method and device and electronic equipment
CN111652933A (en)* | 2020-05-06 | 2020-09-11 | Oppo广东移动通信有限公司 | Relocation method, device, storage medium and electronic device based on monocular camera
CN111652933B (en)* | 2020-05-06 | 2023-08-04 | Oppo广东移动通信有限公司 | Repositioning method and device based on monocular camera, storage medium and electronic equipment
CN111879306A (en)* | 2020-06-17 | 2020-11-03 | 杭州易现先进科技有限公司 | Visual inertial positioning method, device, system and computer equipment
CN111780764B (en)* | 2020-06-30 | 2022-09-02 | 杭州海康机器人技术有限公司 | Visual positioning method and device based on visual map
CN111780764A (en)* | 2020-06-30 | 2020-10-16 | 杭州海康机器人技术有限公司 | Visual positioning method and device based on visual map
CN112393721B (en)* | 2020-09-30 | 2024-04-09 | 苏州大学应用技术学院 | Camera pose estimation method
CN112393721A (en)* | 2020-09-30 | 2021-02-23 | 苏州大学应用技术学院 | Camera pose estimation method
CN114460617A (en)* | 2020-11-03 | 2022-05-10 | 阿里巴巴集团控股有限公司 | Positioning method, apparatus, electronic device, and computer-readable storage medium
CN112489176B (en)* | 2020-11-26 | 2021-09-21 | 香港理工大学深圳研究院 | Tightly-coupled graph building method fusing ESKF, g2o and point cloud matching
CN112489176A (en)* | 2020-11-26 | 2021-03-12 | 香港理工大学深圳研究院 | Tightly-coupled graph building method fusing ESKF, g2o and point cloud matching
CN112936269A (en)* | 2021-02-04 | 2021-06-11 | 珠海市一微半导体有限公司 | Robot control method based on intelligent terminal
CN112907742A (en)* | 2021-02-18 | 2021-06-04 | 湖南国科微电子股份有限公司 | Visual synchronous positioning and mapping method, device, equipment and medium
CN113052897A (en)* | 2021-03-25 | 2021-06-29 | 浙江商汤科技开发有限公司 | Positioning initialization method and related device, equipment and storage medium
CN113091738A (en)* | 2021-04-09 | 2021-07-09 | 安徽工程大学 | Mobile robot map construction method based on visual inertial navigation fusion and related equipment
CN113298692A (en)* | 2021-05-21 | 2021-08-24 | 北京索为云网科技有限公司 | Terminal pose tracking method, AR rendering method, terminal pose tracking device and storage medium
CN113298692B (en)* | 2021-05-21 | 2024-04-16 | 北京索为云网科技有限公司 | Augmented reality method for realizing real-time equipment pose calculation based on mobile terminal browser
CN113447014A (en)* | 2021-08-30 | 2021-09-28 | 深圳市大道智创科技有限公司 | Indoor mobile robot, mapping method, positioning method, and mapping positioning device
CN113936120B (en)* | 2021-10-12 | 2024-07-12 | 北京邮电大学 | Label-free lightweight Web AR method and system
CN113936120A (en)* | 2021-10-12 | 2022-01-14 | 北京邮电大学 | Mark-free lightweight Web AR method and system
CN114494825A (en)* | 2021-12-31 | 2022-05-13 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device
CN114494825B (en)* | 2021-12-31 | 2024-04-19 | 重庆特斯联智慧科技股份有限公司 | Robot positioning method and device
CN114663822A (en)* | 2022-05-18 | 2022-06-24 | 广州市影擎电子科技有限公司 | Servo motion trajectory generation method and device
CN115115707B (en)* | 2022-06-30 | 2023-10-10 | 小米汽车科技有限公司 | Vehicle falling water detection method, vehicle, computer readable storage medium and chip
CN115115707A (en)* | 2022-06-30 | 2022-09-27 | 小米汽车科技有限公司 | Vehicle drowning detection method, vehicle, computer readable storage medium and chip
CN116645400A (en)* | 2023-07-21 | 2023-08-25 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium
CN116645400B (en)* | 2023-07-21 | 2023-12-08 | 江西红声技术有限公司 | Vision and inertia mixed pose tracking method, system, helmet and storage medium
CN117392518B (en)* | 2023-12-13 | 2024-04-09 | 南京耀宇视芯科技有限公司 | Low-power-consumption visual positioning and mapping chip and method thereof
CN117392518A (en)* | 2023-12-13 | 2024-01-12 | 南京耀宇视芯科技有限公司 | Low-power-consumption visual positioning and mapping chip and method thereof
CN119006769A (en)* | 2024-10-24 | 2024-11-22 | 自然资源部第二海洋研究所 | Marine vision SLAM pose estimation method, device, computer equipment and storage medium
CN119006769B (en)* | 2024-10-24 | 2025-03-25 | 自然资源部第二海洋研究所 | Marine visual SLAM pose estimation method, device, computer equipment and storage medium

Similar Documents

Publication | Title
CN105953796A (en) | Stable motion tracking method and stable motion tracking device based on integration of simple camera and IMU (inertial measurement unit) of smart cellphone
CN109991636B (en) | Map construction method and system based on GPS, IMU and binocular vision
CN112815939B (en) | Pose estimation method of mobile robot and computer readable storage medium
CN105931275A (en) | Monocular and IMU fused stable motion tracking method and device based on mobile terminal
Xiong et al. | G-VIDO: A vehicle dynamics and intermittent GNSS-aided visual-inertial state estimator for autonomous driving
CN110084832B (en) | Correction method, device, system, device and storage medium for camera pose
Panahandeh et al. | Vision-aided inertial navigation based on ground plane feature detection
CN106708066A (en) | Autonomous landing method of unmanned aerial vehicle based on vision/inertial navigation
CN111156998A (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion
CN108627153A (en) | A kind of rigid motion tracing system and its working method based on inertial sensor
CN110956665B (en) | Bidirectional calculation method, system and device for turning track of vehicle
CN107289933A (en) | Double card Kalman Filtering guider and method based on MEMS sensor and VLC positioning fusions
CN106017463A (en) | Aircraft positioning method based on positioning and sensing device
CN110824453A (en) | Unmanned aerial vehicle target motion estimation method based on image tracking and laser ranging
CN112731503B (en) | Pose estimation method and system based on front end tight coupling
Bloesch et al. | Fusion of optical flow and inertial measurements for robust egomotion estimation
CN112284381B (en) | Visual-inertial real-time initialization alignment method and system
Jin et al. | Fast and accurate initialization for monocular vision/INS/GNSS integrated system on land vehicle
CN117029809A (en) | Underwater SLAM system and method integrating visual inertial pressure sensor
CN117073720A (en) | Method and equipment for quick visual inertia calibration and initialization under weak environment and weak action control
CN119845247A (en) | Multi-sensor fusion SLAM method under dynamic scene
CN112179373A (en) | A kind of measurement method of visual odometer and visual odometer
CN120008584A (en) | A binocular visual inertial odometry method based on direct method in dynamic environment
Ling et al. | RGB-D inertial odometry for indoor robot via keyframe-based nonlinear optimization
CN117928527B (en) | Visual inertial positioning method based on pedestrian motion feature optimization

Legal Events

Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
RJ01 | Rejection of invention patent application after publication | Application publication date: 2016-09-21
RJ01 | Rejection of invention patent application after publication
