CN115790585B - A vision-assisted pedestrian navigation method constrained by gait features - Google Patents

A vision-assisted pedestrian navigation method constrained by gait features

Info

Publication number
CN115790585B
Authority
CN
China
Prior art keywords
navigation
pedestrian
speed
state
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211568439.5A
Other languages
Chinese (zh)
Other versions
CN115790585A (en)
Inventor
朱建良
王立雅
田超亚
赵高鹏
王超尘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN202211568439.5A
Publication of CN115790585A
Application granted
Publication of CN115790585B
Status: Active

Abstract

The invention discloses a vision-assisted, gait-feature-constrained pedestrian navigation method. First, data acquired by an inertial measurement unit are used to analyse the pedestrian's gait features and compute the zero-velocity interval, and the inertial measurement unit and the camera are calibrated to obtain the corresponding internal and external parameters. The six-degree-of-freedom position and attitude of a visual-inertial odometer are then solved to obtain the relative pose of the pedestrian's motion, and a strapdown inertial navigation algorithm performs initial alignment and inertial navigation solution to obtain the pedestrian's instantaneous attitude, velocity and position. Finally, a Kalman filter model is established: the velocity and angular-velocity differences in the zero-velocity interval, together with the attitude and position errors when the visual signal is valid, are taken as observations for the measurement update, and the currently corrected state is used as the initial value of the next inertial navigation solution, repeating the navigation correction process. The invention can restrain the divergence of navigation errors in environments where GNSS signals fail and improve the accuracy of pedestrian navigation and positioning.

Description

Pedestrian navigation method based on vision-assisted gait feature constraint
Technical Field
The invention relates to the technical field of pedestrian positioning and combined navigation, in particular to a vision-assisted gait feature constrained pedestrian navigation method.
Background
With rapid urban modernisation, the number of large buildings keeps growing, and so does the demand for indoor positioning services. Personnel and asset management, firefighter navigation at fire scenes, intelligent transportation, airport passenger guidance, health-assistance systems for the elderly, virtual-reality games and many other applications all require indoor positioning. Providing users with accurate, reliable, real-time indoor positioning services therefore has significant scientific meaning and social application value.
Inertial/satellite integrated navigation and positioning technology is widely used in fields such as public services and daily travel, but satellite signals are blocked, missing or easily interfered with in indoor, underground, jungle and similar environments. Positioning obtained by inertial navigation alone is accurate over short periods, but its integration errors accumulate and diverge with time, degrading positioning accuracy until positioning fails altogether, which is unsuitable for long-term operation. A visual odometer collects image data with a camera and estimates pose from image features without accumulating error, but it makes high demands on the environment, and when the sampling rate of the visual sensor is lower than the pedestrian's motion rate, the number of feature points drops and the accuracy falls. To meet the demand for indoor positioning, technologies such as Wi-Fi, ultra-wideband (UWB), Bluetooth and RFID have developed continuously in recent years and fill the indoor-positioning gap, but they require base stations to be deployed in advance and are costly.
Aiming at indoor navigation and positioning problems such as the loss of satellite signals, researchers have done a great deal of work to improve pedestrian navigation systems. The most representative technique is zero-velocity update, which corrects the inertial navigation estimate by detecting the periodic zero-velocity phases of gait; however, zero-velocity correction can only restrain the velocity divergence to a certain extent and lacks long-duration, wide-range autonomous estimation capability. Nistér et al. designed a real-time visual odometer system and presented implementations of both monocular and stereoscopic visual odometry, laying a new foundation for subsequent visual-odometer research, but their system suffered from a large sample and computation load and poor robustness.
Disclosure of Invention
The invention aims to provide a pedestrian indoor navigation method capable of restraining integral accumulated errors from diverging along with time, reducing positioning errors and improving positioning accuracy.
The technical scheme achieving the aim of the invention is a vision-assisted, gait-feature-constrained pedestrian navigation method comprising the following steps:
Step 1, analyse the pedestrian's gait features from data acquired by an inertial measurement unit fixed on the foot, and obtain the zero-velocity interval using the three-condition method, sliding filtering and median filtering;
Step 2, calibrate the rigidly connected inertial measurement unit and camera to obtain the corresponding internal and external parameters;
Step 3, run the VINS-Mono/VINS-Fusion algorithm to solve the six-degree-of-freedom position and attitude of the visual-inertial odometer and obtain the relative pose of the pedestrian's motion;
Step 4, use the strapdown inertial navigation algorithm with the foot inertial-measurement-unit data to perform initial alignment and inertial navigation solution, obtaining the pedestrian's instantaneous attitude, velocity and position;
Step 5, establish a Kalman filter model, take the velocity and angular-velocity differences in the zero-velocity interval and the attitude and position errors when the visual signal is valid as observations, trigger the measurement update of the Kalman filter, and realise optimal estimation of the navigation state;
Step 6, continue the inertial navigation solution with the currently corrected state as the initial value, and repeat the navigation correction process.
Compared with the prior art, the method has the following notable advantages: (1) the short-term accuracy of the inertial measurement unit is exploited to drive the process model, improving positioning and navigation accuracy; (2) observations are constructed from the pedestrian's zero-velocity gait features as constraints, correcting the accumulated inertial navigation error with no added hardware and ensuring high navigation accuracy over short periods; (3) observations are constructed from the pose estimated by the visual-inertial odometer as constraints, further correcting the accumulated error of the inertial measurement unit and ensuring high navigation accuracy over long periods as well; (4) the dual assistance of the visual-inertial odometer and zero-velocity correction improves the accuracy of indoor pedestrian navigation and positioning.
Drawings
Fig. 1 is a flow chart of a vision-aided gait feature-constrained pedestrian navigation method of the present invention.
FIG. 2 is a characteristic diagram of the linear and angular motion of the foot in an embodiment of the present invention.
FIG. 3 is a block diagram of a three condition gait detection in an embodiment of the invention.
Detailed Description
The invention will now be described in further detail with reference to the drawings and examples.
With reference to fig. 1, the vision-assisted gait feature-constrained pedestrian navigation method of the invention comprises the following steps:
Step 1, analyse the pedestrian's gait features from data acquired by an inertial measurement unit fixed on the foot, and obtain the zero-velocity interval using the three-condition method, sliding filtering and median filtering;
Step 2, calibrate the rigidly connected inertial measurement unit and camera to obtain the corresponding internal and external parameters;
Step 3, run the VINS-Mono/VINS-Fusion algorithm to solve the six-degree-of-freedom position and attitude of the visual-inertial odometer and obtain the relative pose of the pedestrian's motion;
Step 4, use the strapdown inertial navigation algorithm with the foot inertial-measurement-unit data to perform initial alignment and inertial navigation solution, obtaining the pedestrian's instantaneous attitude, velocity and position;
Step 5, establish a Kalman filter model, take the velocity and angular-velocity differences in the zero-velocity interval and the attitude and position errors when the visual signal is valid as observations, trigger the measurement update of the Kalman filter, and realise optimal estimation of the navigation state;
Step 6, continue the inertial navigation solution with the currently corrected state as the initial value, and repeat the navigation correction process.
As a specific example, in step 1, gait characteristics of a pedestrian are analyzed based on data acquired by an inertial measurement unit fixed to a foot, and a zero-speed interval is acquired by using a tri-condition method, a sliding filtering technique and a median filtering technique, which is specifically as follows:
Step 1.1, acquiring acceleration and angular velocity data obtained by sampling a sensor;
Step 1.2, compute the three judgment conditions of the three-condition method from the acceleration and angular-velocity data; when the judgment results of all three conditions are 1, the pedestrian is in a static state;
Step 1.3, select a sampling centre and a window of fixed width with the median-filtering algorithm, and take the median of the data in the window as the value at that time;
Step 1.4, remove outliers to obtain the final gait detection data.
As a specific example, the acceleration and angular velocity data sampled by the sensor in step 1.1 is specifically as follows:
fixing an inertial sensor on the foot surface of a walker, sampling the motion characteristics of the foot in the walking process, and extracting acceleration and angular velocity data obtained by sampling the sensor;
As a specific example, in step 1.2, three judgment conditions of the three-condition method are calculated according to the acceleration and angular velocity data, and when the judgment results of the three conditions are all "1", the pedestrian is in a stationary state, specifically as follows:
The inertial measurement unit consists of a triaxial gyroscope and a triaxial accelerometer and directly outputs triaxial angular-velocity and specific-force measurements in the inertial coordinate system. The inertial sensor is fixed on the instep of the walker, and the motion characteristics of the foot during walking are sampled. Extracting the sampled acceleration and angular-velocity data shows that both outputs are periodic during pedestrian motion and exhibit transient zero-velocity phases, as shown in fig. 2, where the zero-velocity transients are outlined by black rectangles.
The three-condition method recognises the zero-velocity gait interval from the acceleration- and angular-velocity-related physical information measured by the inertial sensor. The identified static gait serves as the trigger condition for zero-velocity update, correcting the long-term accumulated error of IMU positioning and thus improving positioning accuracy. Adding the local-variance sliding-filtering and median-filtering techniques to the three-condition method effectively suppresses system noise and removes outliers, making the zero-velocity detection result more reliable. The specific flow for obtaining the zero-velocity interval is shown in fig. 3.
According to the acceleration and angular velocity data, three judging conditions of a three-condition method are calculated, and when judging results of the three conditions are all 1, the pedestrian is in a static state, and the method specifically comprises the following steps:
the three-condition method adopts three conditions to judge the human body movement gait, wherein the state '0' represents movement and the state '1' represents rest;
judging the condition 1, namely judging that the pedestrian is stationary when the synthesized amplitude output by the accelerometer is between a given upper threshold value and a given lower threshold value by an acceleration vector sum threshold value method;
The resultant amplitude of the accelerometer output is defined as |a_k|, as shown in equation (1):

|a_k| = \sqrt{(a_{x,k})^2 + (a_{y,k})^2 + (a_{z,k})^2}  (1)

where |a_k| is the modulus of the acceleration at time k, and a_{x,k}, a_{y,k}, a_{z,k} are the x-, y- and z-axis acceleration values of the carrier acquired by the accelerometer at time k;
The judgment formula of condition 1 is defined as shown in equation (2):

C_1 = \begin{cases} 1, & th_{a,min} < |a_k| < th_{a,max} \\ 0, & \text{otherwise} \end{cases}  (2)

where C_1 is the judgment result of condition 1 (C_1 = 1 indicates that the pedestrian is in a static state, C_1 = 0 a moving state), and th_{a,min} = 9 m/s² and th_{a,max} = 11 m/s² are the lower and upper thresholds on the acceleration magnitude;
Judging the condition 2, namely judging that the pedestrian is stationary when the synthesized amplitude output by the gyroscope is between a given upper threshold value and a given lower threshold value by an angular velocity vector sum threshold value method;
The resultant amplitude of the gyroscope output is defined as |ω_k|, as shown in equation (3):

|\omega_k| = \sqrt{(\omega_{x,k})^2 + (\omega_{y,k})^2 + (\omega_{z,k})^2}  (3)

where |ω_k| is the modulus of the angular velocity at time k, and ω_{x,k}, ω_{y,k}, ω_{z,k} are the x-, y- and z-axis angular rates of the carrier acquired by the gyroscope at time k;
The judgment formula of condition 2 is defined as shown in equation (4):

C_2 = \begin{cases} 1, & |\omega_k| < th_{\omega,max} \\ 0, & \text{otherwise} \end{cases}  (4)

where C_2 is the judgment result of condition 2 (C_2 = 1 indicates that the pedestrian is in a static state, C_2 = 0 a moving state), and th_{ω,max} = 2 °/s is the angular-velocity magnitude threshold;
judging the condition 3, namely judging that the pedestrian is stationary when the local variance output by the accelerometer is lower than a given threshold value;
The local-variance sliding-filtering technique judges the steadiness of the 2s+1 sampling points in a window by computing the local variance inside that window. When the local variance in the window is smaller than the threshold, the window centre point is taken as a gait stationary point and the corresponding rectangular-wave value is set to 1; when the local variance is larger than the threshold, the centre point cannot be regarded as absolutely stationary and the rectangular-wave value is set to 0. The window centre then slides over every sampling point in turn with a step of 1; since the first s and last s sampling points cannot serve as centre points, they are assigned the stationary state of the nearest computed point before or after them;
The local variance of the accelerometer output is defined as shown in equation (5):

\sigma_{a,k}^2 = \frac{1}{2s+1} \sum_{q=k-s}^{k+s} \left( |a_q| - \bar{a}_k \right)^2  (5)

where σ²_{a,k} is the local variance of the accelerometer resultant amplitude in the window, |a_q| is the accelerometer resultant amplitude at sampling point q, q is the index of each sampling point in the window, k is the centre of the sliding window, and s is the half-window sample count, set to 15; the mean accelerometer resultant amplitude over the window interval is given by equation (6):

\bar{a}_k = \frac{1}{2s+1} \sum_{q=k-s}^{k+s} |a_q|  (6)

The judgment formula of condition 3 is defined as shown in equation (7):

C_3 = \begin{cases} 1, & \sigma_{a,k} < th_{\sigma} \\ 0, & \text{otherwise} \end{cases}  (7)

where C_3 is the judgment result of condition 3 (C_3 = 1 indicates that the pedestrian is in a static state, C_3 = 0 a moving state), σ_{a,k} is the local standard deviation of the accelerometer resultant amplitude in the window, and th_σ is the acceleration local-standard-deviation threshold.
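The three conditions above can be combined into a compact detector. The following is a minimal sketch, not the patent's implementation: the thresholds follow the values stated in the text (9 m/s², 11 m/s², 2 °/s, s = 15), while TH_SIGMA, the edge-sample handling and the function name are assumptions.

```python
import numpy as np

TH_AMIN, TH_AMAX = 9.0, 11.0       # acceleration magnitude bounds, m/s^2
TH_WMAX = 2.0 * np.pi / 180.0      # angular-rate bound, rad/s (2 deg/s)
TH_SIGMA = 0.5                     # local std-dev threshold (assumed value)
S = 15                             # half-window length from the text

def zero_velocity_flags(acc, gyr):
    """acc, gyr: (N,3) arrays with N > 2*S. Returns a boolean array,
    True where all three conditions flag the sample as stationary."""
    a_norm = np.linalg.norm(acc, axis=1)
    w_norm = np.linalg.norm(gyr, axis=1)
    c1 = (a_norm > TH_AMIN) & (a_norm < TH_AMAX)      # condition 1
    c2 = w_norm < TH_WMAX                             # condition 2
    c3 = np.zeros_like(c1)                            # condition 3
    for k in range(S, len(a_norm) - S):
        c3[k] = a_norm[k - S:k + S + 1].std() < TH_SIGMA
    # first/last S samples inherit the nearest computed value
    c3[:S] = c3[S]
    c3[len(a_norm) - S:] = c3[len(a_norm) - S - 1]
    return c1 & c2 & c3
```

A stationary foot (acceleration magnitude near g, near-zero rates, low local variance) satisfies all three conditions, while swing-phase samples fail condition 1 or 2.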
As a specific example, in step 2, calibrating the inertial measurement unit and the camera that are fixedly connected together to obtain corresponding internal parameters and external parameters, which are specifically as follows:
And calibrating the internal parameters and the external parameters of the sensor, including IMU internal parameter calibration, camera internal parameter calibration, IMU and camera external parameter calibration.
Further, IMU internal-parameter calibration uses the open-source imu_utils tool, and IMU-camera external-parameter calibration uses the kalibr toolbox.
As a specific example, step 3 runs the VINS-Mono/VINS-Fusion algorithm, and obtains the position and posture data of six degrees of freedom of the visual inertial odometer by calculating, and obtains the relative posture information of the pedestrian motion, which is specifically as follows:
The camera provides images at the corresponding instants, supplying raw data for subsequent pose estimation. Before image data are acquired, the internal and external parameters of the sensors must be calibrated, comprising three parts: IMU internal-parameter calibration, camera internal-parameter calibration, and IMU-camera external-parameter calibration; the quality of the calibration result is critical to the pose-solution accuracy. The open-source imu_utils tool is recommended for calibrating the IMU internal parameters, and the kalibr toolbox for calibrating the camera internal parameters and the camera-IMU external parameters.
And a Visual Inertial Odometer (VIO) is used for fusing the data of the camera and the inertial measurement unit to realize instant positioning and map construction. After calibration, the relevant configuration file is modified, and then the position and posture data of six degrees of freedom of the visual inertial odometer are obtained by means of the VINS-Mono/VINS-Fusion visual inertial Fusion positioning algorithm and Fusion of the camera and the inertial measurement unit data, so that the relative posture information of pedestrian movement is obtained.
As a specific example, the method for calculating strapdown inertial navigation in step 4 uses the data of the foot inertial measurement unit to perform initial alignment and inertial navigation calculation to obtain the instantaneous posture, speed and position information of the pedestrian, and specifically includes the following steps:
In step 4.1, when describing navigation information such as the position, velocity and attitude of the carrier, a reference coordinate frame must first be fixed so that the current motion state and position of the carrier are meaningful. The method mainly uses a carrier coordinate frame (b-frame) and a navigation coordinate frame (n-frame), defined as follows:
The carrier coordinate frame is denoted o_b x_b y_b z_b; its origin is the centre of gravity or geometric centre of the carrier, the o_b x_b axis points forwards along the longitudinal axis of the carrier, the o_b y_b axis points right along the transverse axis, and the o_b z_b axis points down along the vertical axis; the frame is rigidly attached to the carrier;
The navigation coordinate frame is denoted o_n x_n y_n z_n; its origin is the initial centre of gravity or centre of the carrier, the o_n x_n axis points geographic north, the o_n y_n axis points geographic east, and the o_n z_n axis is perpendicular to the local reference ellipsoid and points down; that is, the north-east-down (NED) frame is chosen as the navigation reference frame;
step 4.2, determining initial values of the attitude, the speed and the position of the carrier, and performing initial alignment by utilizing foot inertia measurement unit data, wherein the initial alignment comprises horizontal alignment and azimuth alignment and is specifically as follows:
The first stage is horizontal alignment, and pitch angle and roll angle are calculated by using an accelerometer, specifically as follows:
Let the accelerometer measurements along the three axes of the carrier frame be f_x^b, f_y^b, f_z^b, and their projections in the navigation frame be f_x^n, f_y^n, f_z^n; the conversion between them satisfies equation (8):

f^b = C_n^b f^n  (8)

where C_n^b is the coordinate transformation matrix from the n-frame to the b-frame, also called the direction cosine matrix;
With the north-east-down navigation frame and f^n = [0\ 0\ g]^T under the static-base condition, equation (8) can be further expressed as equation (9):

[f_x^b,\ f_y^b,\ f_z^b]^T = C_n^b\,[0,\ 0,\ g]^T = g\,[-\sin\theta,\ \cos\theta\sin\gamma,\ \cos\theta\cos\gamma]^T  (9)

where g is the gravitational acceleration, a known quantity, and f_x^b, f_y^b, f_z^b are the accelerometer measurements;
The pitch angle θ and roll angle γ are calculated from equation (9) to complete horizontal alignment, as shown in equation (10):

\theta = \arctan\frac{-f_x^b}{\sqrt{(f_y^b)^2 + (f_z^b)^2}}, \qquad \gamma = \arctan\frac{f_y^b}{f_z^b}  (10)
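The leveling relations of formula (10) can be checked numerically. A minimal sketch, assuming the f^n = [0 0 g]^T static-base convention stated in the text; `level_align` is a hypothetical helper name, and atan2 is used so the roll quadrant is resolved.

```python
import math

def level_align(fx, fy, fz):
    """Pitch and roll (rad) from one static accelerometer sample (m/s^2),
    assuming f^b = g * [-sin(pitch), cos(pitch)sin(roll), cos(pitch)cos(roll)]."""
    pitch = math.atan2(-fx, math.hypot(fy, fz))  # rotation about the transverse axis
    roll = math.atan2(fy, fz)                    # rotation about the longitudinal axis
    return pitch, roll
```

For a level, static sensor the sample (0, 0, g) yields zero pitch and roll, and a pure 30° pitch tilt is recovered exactly.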
The second stage is azimuth alignment, which calculates a course angle by using geomagnetic components output by the magnetic sensor and combining roll angle and pitch angle information obtained by horizontal alignment, and the method comprises the following steps:
Let the geomagnetic sensor measurements along the three axes of the carrier frame be m_x^b, m_y^b, m_z^b, and their projections onto the horizontal plane of the navigation frame be m_x^h, m_y^h; using the roll and pitch obtained by horizontal alignment, the conversion satisfies equation (11):

m_x^h = m_x^b\cos\theta + m_y^b\sin\theta\sin\gamma + m_z^b\sin\theta\cos\gamma,\qquad m_y^h = m_y^b\cos\gamma - m_z^b\sin\gamma,\qquad \psi = \operatorname{atan2}(-m_y^h,\ m_x^h)  (11)

where ψ is the yaw angle of the carrier relative to magnetic north;
ψ is calculated from equation (11), the local geomagnetic declination D is obtained by table lookup, and the true heading angle \hat{\psi} is finally calculated to complete azimuth alignment, as shown in equation (12):

\hat{\psi} = \psi + D  (12)
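The tilt-compensated heading computation can be sketched as follows. This is an illustrative helper, not the patent's code: the projection formulas are the standard tilt-compensation relations consistent with the horizontal-alignment step, and the function name and sign conventions are assumptions.

```python
import math

def magnetic_heading(mx, my, mz, pitch, roll, declination=0.0):
    """Heading (rad) from body-frame magnetometer output (mx, my, mz),
    tilt-compensated with pitch/roll, plus the tabulated declination D."""
    # project the magnetic field onto the local horizontal plane
    mxh = (mx * math.cos(pitch)
           + my * math.sin(pitch) * math.sin(roll)
           + mz * math.sin(pitch) * math.cos(roll))
    myh = my * math.cos(roll) - mz * math.sin(roll)
    psi = math.atan2(-myh, mxh)      # yaw relative to magnetic north
    return psi + declination         # true heading after declination correction
```

A level carrier whose x axis points magnetic north reads heading 0; rotating the carrier 90° so the field appears along -y yields heading π/2.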
and 4.3, calculating positioning parameters by using a strapdown inertial navigation algorithm, wherein the positioning parameters are specifically as follows:
Taking the current position as the zero point, the walking attitude of the pedestrian is updated with the quaternion and the gyroscope angle increments, as shown in equation (13):

\begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix}_{k+1} = \left\{ \cos\frac{\Delta}{2}\, I_4 + \frac{\sin(\Delta/2)}{\Delta} \begin{bmatrix} 0 & -\Delta\theta_x & -\Delta\theta_y & -\Delta\theta_z \\ \Delta\theta_x & 0 & \Delta\theta_z & -\Delta\theta_y \\ \Delta\theta_y & -\Delta\theta_z & 0 & \Delta\theta_x \\ \Delta\theta_z & \Delta\theta_y & -\Delta\theta_x & 0 \end{bmatrix} \right\} \begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix}_k  (13)

where q_{1|k+1}, q_{2|k+1}, q_{3|k+1}, q_{4|k+1} are the quaternion components at time k+1, q_{1|k}, q_{2|k}, q_{3|k}, q_{4|k} are those at time k, ω_x, ω_y, ω_z are the x-, y- and z-axis angular rates output by the gyroscope, T_m is the sampling time, Δθ_i = ω_i T_m is the angle increment about axis i over T_m, and Δ is the total carrier rotation-angle increment over the sampling time T_m, calculated as:

\Delta = \sqrt{\Delta\theta_x^2 + \Delta\theta_y^2 + \Delta\theta_z^2}
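One step of the quaternion update of formula (13) can be sketched directly. A minimal, hedged example: the rotation-vector form below matches the cos/sin structure described in the text, and the small-angle guard and function name are assumptions.

```python
import math

def quat_update(q, wx, wy, wz, tm):
    """One attitude-update step: quaternion q = (q1, q2, q3, q4) with scalar
    first, gyro rates (rad/s), sampling time tm (s)."""
    dx, dy, dz = wx * tm, wy * tm, wz * tm          # angle increments
    delta = math.sqrt(dx * dx + dy * dy + dz * dz)  # total increment
    if delta < 1e-12:
        return q                                    # no measurable rotation
    c = math.cos(delta / 2.0)
    s = math.sin(delta / 2.0) / delta
    q1, q2, q3, q4 = q
    return (c * q1 + s * (-dx * q2 - dy * q3 - dz * q4),
            c * q2 + s * ( dx * q1 + dz * q3 - dy * q4),
            c * q3 + s * ( dy * q1 - dz * q2 + dx * q4),
            c * q4 + s * ( dz * q1 + dy * q2 - dx * q3))
```

Rotating the identity quaternion about z at π rad/s for one second gives a 180° yaw rotation, and the update preserves the unit norm.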
and 4.4, calculating pedestrian speed and position information, wherein the method specifically comprises the following steps of:
Step 4.4.1, normalise the quaternion, as shown in equation (14):

q_i'\big|_{k+1} = \frac{q_i|_{k+1}}{\sqrt{q_1^2 + q_2^2 + q_3^2 + q_4^2}\,\Big|_{k+1}},\qquad i = 1,\dots,4  (14)

where q'_{1|k+1}, q'_{2|k+1}, q'_{3|k+1}, q'_{4|k+1} are the normalised values of q_{1|k+1}, q_{2|k+1}, q_{3|k+1}, q_{4|k+1}; normalisation makes the quaternion a unit quaternion, the sum of the squares of whose four components is 1;
Step 4.4.2, obtain the direction cosine matrix C_b^n from the unit quaternion, as shown in equation (15):

C_b^n = \begin{bmatrix} q_1^2+q_2^2-q_3^2-q_4^2 & 2(q_2 q_3 - q_1 q_4) & 2(q_2 q_4 + q_1 q_3) \\ 2(q_2 q_3 + q_1 q_4) & q_1^2-q_2^2+q_3^2-q_4^2 & 2(q_3 q_4 - q_1 q_2) \\ 2(q_2 q_4 - q_1 q_3) & 2(q_3 q_4 + q_1 q_2) & q_1^2-q_2^2-q_3^2+q_4^2 \end{bmatrix}  (15)

Step 4.4.3, calculate the Euler angles from the direction cosine matrix C_b^n = (T_{ij}) using trigonometric functions, as shown in equation (16):

\theta = \arcsin(-T_{31}),\qquad \gamma = \operatorname{atan2}(T_{32},\ T_{33}),\qquad \psi = \operatorname{atan2}(T_{21},\ T_{11})  (16)
Step 4.4.4, transform the specific force measured by the accelerometer into the navigation frame, compensate for the gravitational acceleration to obtain the kinematic acceleration in the navigation frame, and integrate the acceleration by Newtonian mechanics to obtain velocity and position, as shown in equation (17):

a_k^n = C_b^n(t_k)\, f^b(t_k) - g^n,\qquad v_{k|k-1} = v_{k-1|k-1} + a_k^n \Delta t,\qquad r_{k|k-1} = r_{k-1|k-1} + v_{k-1|k-1}\Delta t + \tfrac{1}{2} a_k^n \Delta t^2  (17)

where a_k^n is the motion acceleration after compensating the gravity component g^n = [0,\ 0,\ g]^T, C_b^n(t_k) is the direction cosine matrix from the b-frame to the n-frame at time t_k, f^b(t_k) is the specific-force value output by the accelerometer at time t_k, Δt = t_k − t_{k−1} is one sampling period, v_{k−1|k−1} and v_{k|k−1} are the velocities at times t_{k−1} and t_k respectively, and r_{k−1|k−1} and r_{k|k−1} are the positions at times t_{k−1} and t_k respectively.
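One mechanisation step of formula (17) can be sketched as follows. The gravity term follows the document's f^n = [0 0 g]^T static-base convention (so a static, level sample integrates to zero motion); the function name and NumPy representation are assumptions.

```python
import numpy as np

def integrate_step(v_prev, r_prev, Cbn, f_b, dt, g=9.8):
    """One strapdown velocity/position update: rotate the specific force into
    the navigation frame, remove gravity, then integrate twice over dt."""
    a_n = Cbn @ f_b - np.array([0.0, 0.0, g])      # gravity-compensated acceleration
    v = v_prev + a_n * dt                          # velocity update
    r = r_prev + v_prev * dt + 0.5 * a_n * dt * dt # position update
    return v, r
```

With an identity attitude and a purely gravitational specific force the carrier stays put; an extra 1 m/s² along x for one second produces v = (1, 0, 0) and r = (0.5, 0, 0).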
As a specific example, in the step 5, a kalman filter model is established, and the speed difference and the angular speed difference in the zero speed interval, and the attitude error and the position error when the visual signal is valid are taken as observables, so as to trigger the measurement update of the kalman filter, thereby realizing the optimal estimation of the navigation state, which is specifically as follows:
Step 5.1: inertial navigation achieves good positioning accuracy over short periods, but over long periods its integration errors accumulate and keep growing, degrading positioning accuracy. Kalman filtering is a widely used method for eliminating accumulated integration errors; it realises recursive linear minimum-variance estimation through a discrete recursive formulation.
The state-space model of the discrete Kalman-filter system is:

X_k = \Phi_{k,k-1} X_{k-1} + \Gamma_{k-1} W_{k-1},\qquad Z_k = H_k X_k + V_k  (18)

where X_k and X_{k−1} are the state vectors of the system at times k and k−1, i.e. the state quantities to be estimated; Φ_{k,k−1} is the state transition matrix from time k−1 to time k; Γ_{k−1} is the system noise distribution matrix, representing the degree to which each system noise from time k−1 to time k affects each state at time k; W_{k−1} is the system noise vector at time k−1; Z_k is the measurement vector of the system at time k; H_k is the system measurement matrix at time k; and V_k is the measurement noise vector at time k;
According to the Kalman-filtering requirements, W_k and V_k are mutually independent zero-mean Gaussian white-noise vector sequences satisfying:

E[W_k] = 0,\quad E[W_k W_j^T] = Q_k \delta_{kj};\qquad E[V_k] = 0,\quad E[V_k V_j^T] = R_k \delta_{kj};\qquad E[W_k V_j^T] = 0

These are the conditions the noise must satisfy in the Kalman-filtering process, where δ_{kj} equals 1 only when k = j and 0 otherwise. The system noise variance matrix Q_k is non-negative definite, because some state quantities of the system may carry no noise; the measurement noise variance matrix R_k is positive definite, because every measurement contains noise;
Step 5.2, the Kalman filtering process is divided into two parts of time updating and measurement updating, which are represented by the following equations:
(1) State one-step prediction

\hat{X}_{k|k-1} = \Phi_{k,k-1} \hat{X}_{k-1}  (19)

where \hat{X}_{k|k-1} is the one-step state prediction, Φ_{k,k−1} is the one-step transition matrix from time k−1 to time k, and \hat{X}_{k-1} is the optimal state estimate at time k−1, i.e. the previous time;
(2) State one-step-prediction mean-square-error matrix

P_{k|k-1} = \Phi_{k,k-1} P_{k-1} \Phi_{k,k-1}^T + \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T  (20)

where P_{k|k−1} is the one-step-prediction mean-square-error matrix and P_{k−1} is the mean-square-error matrix of the optimal state estimate at time k−1;
(3) Filter gain

K_k = P_{k|k-1} H_k^T \left( H_k P_{k|k-1} H_k^T + R_k \right)^{-1}  (21)

where the coefficient matrix K_k is the filter gain and P_k is the mean-square-error matrix of the optimal state estimate at time k;
(4) State estimation

\hat{X}_k = \hat{X}_{k|k-1} + K_k \left( Z_k - H_k \hat{X}_{k|k-1} \right)  (23)

where \hat{X}_k is the optimal state estimate at time k, Z_k is the measurement at time k, and H_k is the system measurement matrix at time k;
(5) State-estimation mean-square-error matrix

P_k = (I - K_k H_k) P_{k|k-1}  (24)
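The five update equations of the time-update and measurement-update cycle can be condensed into one function. A minimal sketch with the noise-distribution matrix folded into Q for brevity; the function name and argument layout are assumptions.

```python
import numpy as np

def kf_step(x, P, Phi, Q, z, H, R):
    """One Kalman predict/update cycle for state x with covariance P."""
    x_pred = Phi @ x                            # state one-step prediction
    P_pred = Phi @ P @ Phi.T + Q                # prediction mean-square error
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # filter gain
    x_new = x_pred + K @ (z - H @ x_pred)       # state estimation
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # estimation mean-square error
    return x_new, P_new
```

For a scalar state with unit prior and unit measurement noise, a measurement of 1 is weighted exactly half, and the posterior variance halves accordingly.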
Step 5.3: the choice of observations directly affects the filtering performance. The dual assistance of the visual-inertial odometer and zero-velocity correction is used to correct the accumulated inertial navigation error, and four observations are selected: velocity, angular velocity, attitude and position;
First, a 21-dimensional state-space equation, i.e. the filtering model, is established, comprising 3 position errors, 3 velocity errors, 3 attitude errors, 3-axis gyroscope zero-bias errors, 3-axis accelerometer zero-bias errors, 3-axis gyroscope scale factors and 3-axis accelerometer scale factors; the 3 attitude errors are the pitch angle θ, the roll angle γ and the yaw angle ψ.
The state-space equation of the system is shown in equation (25):

\dot{X}(t) = F(t) X(t) + G(t) W(t),\qquad Z(t) = H(t) X(t) + V(t),\qquad X = [\delta r,\ \delta v,\ \delta\psi,\ \varepsilon_g,\ \nabla_a,\ s_a,\ s_g]^T  (25)

where δr_k is the position error, δv_k the velocity error, δψ_k the attitude-angle error, ε_g the gyroscope zero-bias error, ∇_a the accelerometer zero-bias error, s_a the accelerometer scale factor and s_g the gyroscope scale factor; W(t) is the system process-noise matrix, G(t) the noise-coefficient matrix matched to the system, F(t) the system state matrix, Z(t) the observation, H(t) the observation matrix and V(t) the observation-noise matrix;
Step 5.4, speed and angular velocity correction, specifically as follows:
In the walking process of pedestrians, the moment that the sole contacts with the ground is regarded as a theoretical static state, the output speed is 0 at the moment, but measurement errors and algorithm errors occur due to the influence of accelerometer noise and gyroscope noise, so that the calculated speed value at the moment is not zero, and the angular speed is not zero in the same way;
The auxiliary speed and angular speed correction is as follows: if zero-speed stillness of the pedestrian is detected at the moment t_k, the speed v_k calculated from the accelerometer output at the moment t_k is differenced with the theoretical speed 0 to obtain the speed error Δv_k, and the angular speed w_k output by the gyroscope at the moment t_k is differenced with the theoretical angular speed 0 to obtain the angular speed error Δw_k; the speed error and the angular speed error are adopted as observed quantities for filtering, and the corresponding observed quantity and observation matrix are shown in formula (26):
Z_k = [Δv_k; Δw_k], H_k = [O3×3 I3×3 O3×3 O3×3 O3×3 O3×3 O3×3; O3×3 O3×3 O3×3 I3×3 O3×3 O3×3 O3×3] (26)
Wherein Z_k is the observed quantity, H_k is the observation matrix, O3×3 represents the 3×3 zero matrix, and I3×3 represents the 3-order identity matrix;
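Under a 21-state layout ordered as position, velocity, attitude, gyro bias, accelerometer bias, gyro scale factor, accelerometer scale factor, the zero-speed observation can be sketched as follows; mapping the angular-rate residual onto the gyro-bias block is an assumption consistent with a stationary foot, and the function name is illustrative:

```python
import numpy as np

def zupt_observation(v_k, w_k):
    """Build Z_k and H_k for a zero-velocity update.

    v_k : (3,) velocity from accelerometer mechanization (theoretical value 0)
    w_k : (3,) angular rate output by the gyroscope (theoretical value 0)
    """
    Z = np.concatenate([v_k - 0.0, w_k - 0.0])   # [delta v_k; delta w_k]
    H = np.zeros((6, 21))
    H[0:3, 3:6] = np.eye(3)    # velocity residual observes the velocity-error block
    H[3:6, 9:12] = np.eye(3)   # angular-rate residual observes the gyro-bias block (assumed)
    return Z, H
```

The returned pair feeds directly into a standard Kalman measurement update during each detected zero-speed interval.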
Step 5.5, position and attitude correction, specifically as follows:
When the visual signal is valid, the Euler angles ψ'_k output by the VIO are differenced with a group of Euler angles ψ_k output by the inertial navigation system through attitude updating to obtain the attitude angle error Δψ_k, and the position information r'_k output by the VIO is differenced with a group of position information r_k output by the inertial navigation system to obtain the position error Δr_k; the position error and the attitude error are adopted as observed quantities for filtering, and the corresponding observed quantity and observation matrix are shown in formula (27):
Z_k = [Δψ_k; Δr_k], H_k = [O3×3 O3×3 I3×3 O3×3 O3×3 O3×3 O3×3; I3×3 O3×3 O3×3 O3×3 O3×3 O3×3 O3×3] (27)
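When the visual signal is valid, the attitude/position observation can be sketched in the same way (assuming the same 21-state ordering of position, velocity, attitude, gyro bias, accelerometer bias and scale factors; variable names are illustrative):

```python
import numpy as np

def vio_observation(psi_vio, psi_ins, r_vio, r_ins):
    """Build Z_k and H_k for the VIO pose update.

    psi_vio, psi_ins : (3,) Euler angles from the VIO and from the inertial attitude update
    r_vio,  r_ins    : (3,) positions from the VIO and from the inertial navigation solution
    """
    Z = np.concatenate([psi_ins - psi_vio,   # attitude-angle error delta psi_k
                        r_ins - r_vio])      # position error delta r_k
    H = np.zeros((6, 21))
    H[0:3, 6:9] = np.eye(3)   # attitude residual observes the attitude-error block
    H[3:6, 0:3] = np.eye(3)   # position residual observes the position-error block
    return Z, H
```

Together with the zero-speed observation, this gives the filter two complementary measurement sources: one available at every footfall, the other whenever the camera tracking is valid.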
Step 6, continuing the inertial navigation calculation by taking the current corrected state quantity as the initial quantity, and repeating the navigation correction process, specifically as follows:
the current corrected state quantity is returned to step 3 as the initial quantity, the next inertial navigation calculation is performed, and the navigation correction process is repeated until the navigation is finished.
In summary, the invention drives the process model by exploiting the higher short-term accuracy of an inertial measurement unit, improving positioning and navigation accuracy. The gait zero-speed characteristic of the pedestrian is used as a constraint condition to construct an observed quantity, correcting the inertial navigation accumulated error without adding hardware and ensuring higher navigation accuracy over shorter periods. The pose information estimated by the visual inertial odometer is used as a further constraint condition to construct an observed quantity, additionally correcting the accumulated error of the inertial measurement unit and ensuring higher navigation accuracy over longer periods. Through the dual auxiliary technology of the visual inertial odometer and zero-speed correction, the accuracy of indoor pedestrian navigation positioning is improved.

Claims (10)

The method comprises adopting a local variance sliding filtering technology, wherein the stability of 2s+1 sampling points in a region is judged by calculating the local variance in a window: when the local variance in the window is smaller than a threshold value, the central point of the window is set as a gait stationary point and the corresponding rectangular wave value is set to 1; when the local variance in the window is larger than the threshold value, the central point of the window cannot be regarded as an absolute stationary point, so the corresponding rectangular wave value is set to 0; the window centre slides through each sampling point in sequence with a step of 1; the first s sampling points and the last s sampling points cannot serve as window central points, and their stationary state is set according to the nearest computed central point before or after them;
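As an illustrative sketch (not part of the claims), the local-variance sliding-window detection recited above might be implemented as follows; the choice of a one-dimensional test statistic (e.g. accelerometer magnitude), the threshold value, and the edge handling for the first and last s samples are assumptions:

```python
import numpy as np

def zero_velocity_flags(signal, s, threshold):
    """Mark gait stationary points with a sliding window of 2s+1 samples.

    A window centre is flagged 1 (rectangular wave = 1, stationary) when the
    local variance over its 2s+1 samples is below `threshold`, else 0.
    The first s and last s samples cannot be window centres; here they
    inherit the flag of the nearest computed centre (assumed edge rule).
    """
    n = len(signal)
    flags = np.zeros(n, dtype=int)
    for c in range(s, n - s):                  # slide the centre with step 1
        window = signal[c - s : c + s + 1]
        flags[c] = 1 if np.var(window) < threshold else 0
    flags[:s] = flags[s]                       # leading edge samples
    flags[n - s:] = flags[n - s - 1]           # trailing edge samples
    return flags
```

Runs of consecutive 1s in the returned rectangular wave delimit the zero-speed intervals used by the speed and angular-speed correction.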
CN202211568439.5A (filed 2022-12-08, priority 2022-12-08), Active, granted as CN115790585B (en): A vision-assisted pedestrian navigation method constrained by gait features

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211568439.5A | 2022-12-08 | 2022-12-08 | A vision-assisted pedestrian navigation method constrained by gait features


Publications (2)

Publication Number | Publication Date
CN115790585A (en) | 2023-03-14
CN115790585B (en) | 2025-10-03

Family

ID=85417787

Family Applications (1)

Application Number | Status | Priority Date | Filing Date
CN202211568439.5A, granted as CN115790585B (en) | Active | 2022-12-08 | 2022-12-08

Country Status (1)

Country | Link
CN (1) | CN115790585B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN116972833B (en)* | 2023-05-19 | 2024-09-24 | Harbin Engineering University | Horizontal attitude measurement system and method based on self-adaptive incremental Kalman filtering
CN116989778B (en)* | 2023-07-24 | 2025-09-12 | Beihang University | Pedestrian navigation method using consumer-grade IMU with badge type
CN118960723A (en) | 2024-07-12 | 2024-11-15 | Beijing Institute of Technology | A VIO zero-speed detection and correction method based on the fusion of vision and inertial information
CN119268682A (en) | 2024-08-29 | 2025-01-07 | Nanjing University of Science and Technology | A strapdown inertial navigation method based on north-east coordinate system
CN119700088A (en) | 2024-11-25 | 2025-03-28 | Wang Hailiang | Gait and balance self-management system based on multimodal sensing technology
CN119197458B (en)* | 2024-11-26 | 2025-02-25 | Nanjing University of Aeronautics and Astronautics | UAV altitude measurement method and system based on data fusion
CN119321768B (en)* | 2024-12-17 | 2025-03-28 | Hebei Meitai Electronic Technology Co., Ltd. | Combined navigation solution method and device for unmanned aerial vehicle, unmanned aerial vehicle and medium
CN119555067B (en)* | 2025-01-26 | 2025-05-23 | Aerospace Information Research Institute, Chinese Academy of Sciences | Human body trunk inertial navigation autonomous positioning method and device based on inverted pendulum model

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN103968827B (en)* | 2014-04-09 | 2017-11-28 | Beijing Information Science and Technology University | A kind of autonomic positioning method of wearable body gait detection
CN108106613B (en)* | 2017-11-06 | 2022-03-01 | Shanghai Jiao Tong University | Positioning method and system based on visual assistance
CN112985392B (en)* | 2021-04-19 | 2021-07-30 | National University of Defense Technology | Pedestrian inertial navigation method and device based on graph optimization framework
CN113639743B (en)* | 2021-06-29 | 2023-10-17 | Beihang University | A visual-inertial SLAM positioning method assisted by pedestrian step length information
CN114136315B (en)* | 2021-11-30 | 2024-04-16 | Shandong Tianxing Beidou Information Technology Co., Ltd. | Monocular vision-based auxiliary inertial integrated navigation method and system
CN115235454B (en)* | 2022-09-15 | 2022-12-30 | National University of Defense Technology | Pedestrian motion constraint visual inertial fusion positioning and mapping method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pedestrian navigation system based on gait feature constraints; Zhou Wei; China Master's Theses Full-text Database, Information Science and Technology; 2024-02-15 (No. 02); pp. I136-1037 *



Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
