CN108426582B - Pedestrian indoor 3D map matching method - Google Patents

Pedestrian indoor 3D map matching method

Info

Publication number
CN108426582B
CN108426582B
Authority
CN
China
Prior art keywords
indoor
height
pedestrian
state
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810176554.5A
Other languages
Chinese (zh)
Other versions
CN108426582A (en)
Inventor
任明荣
郭红雨
王普
韩红桂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201810176554.5A
Publication of CN108426582A
Application granted
Publication of CN108426582B
Legal status: Active
Anticipated expiration

Abstract

Translated from Chinese


The invention discloses a pedestrian indoor three-dimensional map matching method, belonging to the technical field of indoor pedestrian positioning. A MEMS-INS sensor bound to the pedestrian's instep collects the pedestrian's indoor motion information, from which the pedestrian's velocity, position and heading are computed; the indoor structure is analyzed to create state points, and a conditional random field (CRF) model is established from the navigation output positions and the state point positions. Horizontal two-dimensional position information is extracted at fixed-length walking intervals, and pedestrian height information is extracted at zero-velocity instants, to obtain the observation points of the CRF model; the sampling times of the two-dimensional positions and of the heights are recorded separately. An indoor electronic map is built, state points are created from the indoor structure information, and the state point coordinates are stored. The invention achieves three-dimensional pedestrian positioning with high algorithmic accuracy; matching the two-dimensional position and the height information separately reduces algorithmic complexity; fusing the two by nearest sampling time improves the accuracy of map matching.


Description

Indoor three-dimensional map matching method for pedestrians
Technical Field
The invention belongs to the technical field of indoor pedestrian positioning, and relates to a three-dimensional map matching method based on Micro-Electro-Mechanical System (MEMS) Inertial Navigation System (INS) positioning, covering matching algorithm structure design and position information fusion in a known indoor environment.
Background
In recent years, with the emergence and development of logistics, intelligent wards and new-concept supermarkets, indoor navigation has attracted wide attention from both academia and industry. The integration and miniaturization of MEMS-INS devices have made MEMS-INS a leading technology in the navigation field.
However, inertial device errors accumulate over time and, if not effectively corrected, eventually cause the pedestrian's position trajectory to diverge. To address inertial navigation position errors, different map matching algorithms have been proposed, such as particle filter algorithms, map matching based on the main heading, and map matching based on hidden Markov models. However, these algorithms focus on the pedestrian's two-dimensional information, whereas indoor walking involves not only movement in the two-dimensional plane but also height changes via stairs. An indoor three-dimensional map matching method for pedestrians therefore has important theoretical significance and application value.
Y. Méneroux et al. propose two common measures, the average Hausdorff distance and the area difference, to characterize the relationship between matching accuracy and network quality indices, and also give an upper bound on the influence of the reference network. Xiao Z. et al. instead address the divergence of inertial navigation position error by mathematically modelling the navigation output positions and the indoor state points with a conditional random field (CRF) algorithm. However, the former is an outdoor map matching method for vehicle path matching: pedestrians walk indoors more randomly and the concept of a path is comparatively weak there, so the method is unsuited to indoor environments. The latter considers the special indoor environment and assists map matching with state points, but ignores indoor height information and experiments only on a single floor, so it presents only the pedestrian's two-dimensional position.
To effectively realize three-dimensional map matching of pedestrians under indoor conditions, the state points and the navigation output positions acquired by the sensor are jointly modelled with a conditional random field, thereby achieving indoor three-dimensional positioning of the pedestrian. On this basis, improving algorithmic accuracy and map matching correctness while simplifying algorithmic complexity is essential.
Disclosure of Invention
Aiming at the divergence of the navigation position of a pedestrian wearing a MEMS-INS sensor under indoor environmental conditions, the invention provides a three-dimensional map matching method for indoor pedestrian navigation. The method collects the pedestrian's indoor motion information with a MEMS-INS sensor bound to the pedestrian's instep, computes the pedestrian's velocity, position and heading, analyzes the indoor structure and creates state points, and establishes a conditional random field model from the navigation output positions and the state point positions, thereby achieving indoor three-dimensional positioning of the pedestrian.
In order to achieve the technical purpose, the invention adopts the technical scheme that the indoor three-dimensional map matching method for the pedestrian comprises the following steps:
step 1: and acquiring data, and preliminarily calculating the three-dimensional position and the course of the indoor pedestrian.
Step 1.1, a pedestrian collects pedestrian movement data by wearing an MEMS-INS sensor, wherein the pedestrian movement data comprises: three-axis acceleration data and three-axis gyro data.
And step 1.2, solving the three-dimensional position and course information of the collected pedestrian motion data by using a strapdown calculation algorithm.
Step 2: and extracting the observation points of the conditional random field model.
And 2.1, extracting horizontal two-dimensional position information of the indoor pedestrian according to the fixed-length walking distance, extracting pedestrian height information at the zero-speed moment, acquiring an observation point of the CRF model, and respectively recording the two-dimensional position of the indoor pedestrian and the sampling moment of the height information.
And step 3: and establishing an indoor electronic map, creating a state point according to the indoor structure information, and storing the coordinate of the state point.
And 3.1, creating an indoor electronic map by using the known indoor map, and storing the indoor electronic map in the navigation computer.
And 3.2, solving coordinate points with the minimum and maximum numerical values in the electronic map as the value range of the state points, covering the whole indoor range with the equally spaced state points, and storing the state point information.
And 3.3, adding a state point on the height information by taking the step height as a standard.
And 4, performing a two-dimensional position map matching algorithm based on the conditional random field algorithm.
Step 4.1, establishing a characteristic equation according to the relationship between the two-dimensional position observation point coordinates and each state point coordinate;
and 4.2, establishing a characteristic equation according to the azimuth angle between the azimuth information of the observation point and the state point at the corresponding moment.
And 4.3, establishing a two-dimensional map matching mathematical model based on the conditional random field, and obtaining the maximum probability of the state sequence under the condition that the two-dimensional position is taken as the observation sequence, wherein the maximum probability sequence is the optimal state matching of the position.
And 5, a height information map matching algorithm based on the conditional random field algorithm.
And 5.1, dividing the walking height of each step of the pedestrian into different states according to the height of the step and the limit value of the step of the pedestrian.
And 5.2, establishing a characteristic equation which takes the height as an observation point and a state point corresponding to the observation point.
And 5.3, solving the mean square error between the height information of all the previous adjacent observation points and the height of the matched state point.
And 5.4, establishing a characteristic equation by taking the mean square error as another characteristic according to the relation between the heights of the adjacent state points and the height difference of each state point.
And 5.5, establishing a height map matching mathematical model based on the conditional random field, and obtaining the maximum probability of the state sequence under the condition that the height is taken as the observation sequence, wherein the maximum probability sequence is the optimal state matching of the height.
Step 6: two-dimensional position and height information fusion
And 6.1, inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best two-dimensional position matching for storage, and inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best height matching for storage.
And 6.2, combining the two-dimensional optimal matching point and the height optimal matching point by using a method of adjacent time.
And 6.3, correcting the three-dimensional position information output by the inertial navigation system according to the matched three-dimensional position.
Compared with the prior art, the invention has the following beneficial effects:
firstly, positioning of the three-dimensional position of the pedestrian is realized, and the algorithm precision is high; secondly, a method of separately matching two-dimensional position and height information is adopted, so that the complexity of an algorithm is simplified; thirdly, the two-dimensional position and the height information are fused by adopting the approaching moment, so that the accuracy of map matching is improved.
Drawings
FIG. 1 is a block diagram of a framework of a method according to the invention;
FIG. 2 is a manner in which a pedestrian wears a sensor;
FIG. 3 is a system flow diagram of an inertial navigation solution;
FIG. 4 is a diagram comparing the indoor structure before and after processing;
FIG. 5 is the processed electronic map and status points;
FIG. 6 is a flow chart of an indoor three-dimensional positioning system;
FIG. 7 is three-dimensional position information of inertial navigation output;
fig. 8 is a three-dimensional indoor map matching output pedestrian matching trajectory.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The frame structure of the method of the invention is shown in figure 1 and comprises the following steps:
step 1: and acquiring data, and preliminarily calculating the three-dimensional position and the course of the indoor pedestrian.
Step 1.1, the pedestrian collects pedestrian movement data by wearing the MEMS-INS sensor, and the mode that the pedestrian wears the sensor is shown in figure 2. The pedestrian motion data includes: three-axis acceleration data and three-axis gyro data.
Step 1.2, solving the three-dimensional position and heading information of the collected pedestrian motion data by adopting a strapdown calculation algorithm, wherein a system flow chart of inertial navigation calculation is shown in fig. 3.
Based on physics, when the sampling interval is very short, the velocity-position relationship satisfies:

p^n(t) = p^n(t-1) + v^n(t)·T
d^n(t) = d^n(t-1) + ||v^n(t)||·T
v^n(t) = v^n(t-1) + a^n(t)·T,  with  a^n(t) = C_b^n·a^b(t) - g^n

where p^n denotes the three-dimensional position coordinates of the pedestrian; t denotes the sampling time and d^n the distance travelled by the pedestrian; T is the sampling interval; v^n denotes the pedestrian's velocity and a^n the pedestrian's acceleration; C_b^n is the coordinate transformation matrix from the carrier (body) coordinate system b to the navigation coordinate system n; g^n is the acceleration of gravity.
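The discrete integration step above can be illustrated with a short Python sketch (an illustration only, not the patent's implementation; the function name, the rotation matrix `C_bn`, the body-frame acceleration `a_b` and the gravity vector `g_n` are assumed inputs):

```python
import numpy as np

def strapdown_step(p, v, a_b, C_bn, g_n, T):
    """One discrete strapdown update: rotate the body-frame acceleration
    into the navigation frame, remove gravity, then integrate velocity
    and position over one sampling interval T."""
    a_n = C_bn @ a_b - g_n          # navigation-frame acceleration a^n
    v_new = v + a_n * T             # v^n(t) = v^n(t-1) + a^n(t)*T
    p_new = p + v_new * T           # p^n(t) = p^n(t-1) + v^n(t)*T
    return p_new, v_new

# toy example: stationary sensor whose accelerometer measures the
# reaction to gravity, so the integrated velocity stays zero
g_n = np.array([0.0, 0.0, 9.81])
p, v = np.zeros(3), np.zeros(3)
p, v = strapdown_step(p, v, np.array([0.0, 0.0, 9.81]), np.eye(3), g_n, T=0.01)
```

In practice the attitude matrix `C_bn` is itself propagated from the gyro data (fig. 3); the sketch treats it as given.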
Step 2: and extracting the observation points of the conditional random field model.
And 2.1, extracting horizontal two-dimensional position information according to a fixed-length walking distance, extracting pedestrian height information at a zero-speed moment, acquiring observation points of the CRF model, and respectively recording the horizontal two-dimensional position of the pedestrian and the sampling moment of the height information.
(1) The horizontal two-dimensional position observation point extraction model is as follows. When the condition

Distance(p^n(t), P_ob1(t_ob1 - 1)) ≥ Threshold

is met, the position coordinates at this moment are recorded as a two-dimensional position observation point, and the corresponding time is recorded:

P_ob1(t_ob1) = (p_x^n(t), p_y^n(t))
time1(t_ob1) = t

where Distance denotes the Euclidean distance between the coordinates at the current time and those of the previous sampling point; P_ob1(t_ob1) denotes the two-dimensional coordinates of the t_ob1-th observation point; p^n(t) denotes the position coordinates of the navigation output at time t; Threshold is the preset walking-distance threshold; time1 records the times of all observation points.
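The fixed-length extraction rule can be sketched in Python (a minimal illustration under assumed names; the toy track and threshold value are not from the patent):

```python
import numpy as np

def extract_2d_observations(positions, times, threshold):
    """Keep a navigation-output (x, y) sample as a CRF observation point
    whenever its Euclidean distance from the previously kept point
    reaches the walking-distance threshold."""
    obs = [np.asarray(positions[0][:2], dtype=float)]
    obs_times = [times[0]]
    for p, t in zip(positions[1:], times[1:]):
        if np.linalg.norm(np.asarray(p[:2]) - obs[-1]) >= threshold:
            obs.append(np.asarray(p[:2], dtype=float))
            obs_times.append(t)
    return np.array(obs), obs_times

# toy navigation track sampled at times 0..3 (x, y, z)
track = [(0.0, 0.0, 0.0), (0.4, 0.0, 0.0), (1.1, 0.0, 0.0), (1.9, 0.0, 0.0)]
obs, ts = extract_2d_observations(track, [0, 1, 2, 3], threshold=1.0)
# keeps (0, 0) and (1.1, 0): the first sample and each sample >= 1 m further on
```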
(2) The height observation point extraction model is as follows. When the condition

v(t) == 0

is satisfied, the height information at this moment is recorded as a height observation point, and the corresponding time is recorded:

H_ob2(t_ob2) = h^n(t)
time2(t_ob2) = t

where H_ob2(t_ob2) denotes the t_ob2-th height observation point; time2 records the time corresponding to each height observation point; h^n(t) denotes the height information of the navigation output at time t.
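The zero-velocity extraction can likewise be sketched (an illustration under assumed names; a small tolerance `eps` replaces the exact zero test, which is a common practical choice rather than something the patent specifies):

```python
def extract_height_observations(heights, speeds, times, eps=1e-3):
    """Record the navigation-output height as a CRF observation point at
    zero-velocity instants (foot flat on the ground), with its timestamp."""
    h_obs, h_times = [], []
    for h, v, t in zip(heights, speeds, times):
        if abs(v) < eps:            # zero-velocity detection
            h_obs.append(h)
            h_times.append(t)
    return h_obs, h_times

h_obs, h_times = extract_height_observations(
    heights=[0.0, 0.05, 0.16, 0.15],
    speeds=[0.0, 0.8, 0.9, 0.0],
    times=[0, 1, 2, 3])
# keeps heights [0.0, 0.15] sampled at times [0, 3]
```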
And step 3: and establishing an indoor electronic map, creating a state point according to the indoor structure information, and storing the coordinate of the state point.
And 3.1, creating an indoor electronic map by using the known indoor map, wherein the electronic map is as shown in figure 4, and storing the electronic map in the navigation computer.
And 3.2, solving coordinate points with the minimum and maximum numerical values in the electronic map as the value range of the state points, covering the whole value range with the equally spaced state points, and storing coordinate information (X, Y) of all the state points.
(1) Solving the coordinate points with the minimum and maximum values in the electronic map:

p_min = (x_min, y_min) = min(X, Y)
p_max = (x_max, y_max) = max(X, Y)

where p_min and p_max denote the minimum and maximum position points of the coverage range of the state points.
(2) The distribution range of the state points, obtained from the maximum and minimum position points, is:

r1 = p_min = (x_min, y_min)
r2 = (x_min, y_max)
r3 = (x_max, y_min)
r4 = p_max = (x_max, y_max)

where r1, r2, r3, r4 are the four vertex coordinates of the state point range matrix.
(3) Solving the state point coordinates: the minimum coordinate point is selected as the first state point, and the Threshold length is taken as the spacing between state points. The state point model is:

state1(0, 0) = r1
state1(i_s, j_s) = r1 + (i_s × Threshold, j_s × Threshold) < r4

where state1 is the collective term for all state points and (i_s, j_s) is the index under which each state point is stored.
Step 3.3: because the height change of each step when a pedestrian climbs a staircase is an integral multiple of the step height, the heights of the state points in the staircase area are based on the step height stair_high, combined with the state point distribution rule of the two-dimensional position. The step-height model is:

State2(N) = 0 + N × stair_high

where State2 is the set of all step-height state points, N denotes the index, and State2(N) is determined by the number of steps. Combining State1 and State2 gives the three-dimensional coordinates of the state points, and the coordinate information of all state points is stored. The three-dimensional state points and the digital map are shown in fig. 5.
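Steps 3.2-3.3 can be sketched as building a grid of equally spaced 2D state points plus discrete height states (an illustrative sketch; the function name, the toy map extent, and the step count are assumptions):

```python
import numpy as np

def build_state_points(X, Y, threshold, stair_high, n_steps):
    """Grid of equally spaced 2D state points covering the [min, max]
    range of the map coordinates (state1), plus discrete height states
    at integer multiples of the step height (State2)."""
    x_min, y_min = min(X), min(Y)
    x_max, y_max = max(X), max(Y)
    xs = np.arange(x_min, x_max + threshold, threshold)
    ys = np.arange(y_min, y_max + threshold, threshold)
    state1 = [(x, y) for x in xs for y in ys]              # 2D state points
    state2 = [n * stair_high for n in range(n_steps + 1)]  # height states
    return state1, state2

state1, state2 = build_state_points([0, 4], [0, 2], threshold=1.0,
                                    stair_high=0.15, n_steps=3)
# state1 is a 5 x 3 grid of points; state2 holds 0, 0.15, 0.30, 0.45
```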
Step 4: a two-dimensional position map matching algorithm based on the conditional random field algorithm; the flow chart of the matching and fusion method is shown in fig. 6.
Step 4.1: establish a characteristic equation according to the relationship between the two-dimensional position observation point coordinates and each state point coordinate. The state point corresponding to the two-dimensional position is denoted S_p:

f_c(S_p(t_ob1), P_ob1(t_ob1)) = -||P_ob1(t_ob1) - S_p(t_ob1)||^2 / (2σ_c^2)

where f_c represents the relationship between the observation point coordinates and the state point coordinates; S_p(t_ob1) denotes the (x, y) coordinates of the state point at time t_ob1; P_ob1(t_ob1) denotes the (x, y) coordinates of the observation point at time t_ob1; σ_c denotes the covariance of the distance error between the state point and the observation point.
Step 4.2: compute the azimuth information of the observation points, and establish a characteristic equation from the azimuth angle between consecutive observation points and that between the state points at the corresponding times:

f_θ(S_p(t_ob1-1), S_p(t_ob1)) = -(θ(P_ob1(t_ob1-1), P_ob1(t_ob1)) - θ(S_p(t_ob1-1), S_p(t_ob1)))^2 / (2σ_θ^2)

where σ_θ denotes the covariance of the observed azimuth error; B(S_p(t_ob1-1), S_p(t_ob1)) denotes the transition function between the state points at times t_ob1-1 and t_ob1; θ(S_p(t_ob1-1), S_p(t_ob1)) denotes the azimuth angle function between the state points at times t_ob1-1 and t_ob1, taking the positive X-axis direction of the map coordinate system as reference.
Step 4.3: establish the two-dimensional map matching mathematical model based on the conditional random field, and obtain the state sequence of maximum probability given the two-dimensional positions as the observation sequence; this maximum-probability sequence is the optimal state match of the position:

P(S_p | P_ob1) = (1/Z_ob1) · exp( Σ_t [ Σ_i λ_p,i f_i(S_p(t), P_ob1(t)) + Σ_l μ_p,l g_l(S_p(t-1), S_p(t)) ] )

S_P* = argmax_{S_p} P(S_p | P_ob1)

The maximum-probability state point sequence S_P* is computed with the Viterbi algorithm, where λ_p and μ_p denote the weights of the respective features in the two-dimensional map matching model (all set to 1); i and l index the characteristic functions; Z_ob1 is the normalization factor.
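The Viterbi search named in step 4.3 can be sketched generically in log-domain Python (an illustration only: the per-step feature scores are abstracted into `emit` and `trans` tables, which in the patent's model would be built from the position and azimuth features; the toy numbers are assumptions):

```python
import numpy as np

def viterbi(emit, trans):
    """emit[t][s]: log-score of state s for observation t;
    trans[s_prev][s]: log-score of the transition between states.
    Returns the highest-scoring state index sequence."""
    T, S = emit.shape
    score = emit[0].copy()                 # best log-score ending in each state
    back = np.zeros((T, S), dtype=int)     # backpointers
    for t in range(1, T):
        cand = score[:, None] + trans + emit[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # trace backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

emit = np.log(np.array([[0.9, 0.1], [0.2, 0.8], [0.1, 0.9]]))
trans = np.log(np.array([[0.7, 0.3], [0.3, 0.7]]))
print(viterbi(emit, trans))  # [0, 1, 1]
```

The same routine serves the height matching in step 5.5; only the feature tables change.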
And 5, a height information map matching algorithm based on the conditional random field algorithm.
Step 5.1: estimate the states corresponding to each walking step of the pedestrian according to the step height and the limit on the steps a pedestrian can cross.
Assuming the pedestrian crosses at most N_T stair steps per walking step, the height change of each walking step lies within (-N_T, N_T) steps. The height states for each walking step are therefore:

S_h = ((-N_T) × stair_high, (1-N_T) × stair_high, …, N_T × stair_high)
Step 5.2: establish a characteristic equation between the height observation point and its corresponding state point:

g(S_h(t_ob2), H_ob2(t_ob2)) = -(H_ob2(t_ob2) - S_h(t_ob2))^2 / (2σ_h^2)
H(S_h(t_ob2-1), S_h(t_ob2)) = S_h(t_ob2) - S_h(t_ob2-1)

where S_h(t_ob2) denotes the height of the state point at time t_ob2; H_ob2(t_ob2) denotes the height of the observation point at time t_ob2; g represents the functional relationship between the observation point and the state point height; σ_h denotes the covariance between the heights of the state point and the observation point; stair_high denotes the height of each step; B(S_h(t_ob2-1), S_h(t_ob2)) denotes the transition function between the state points at times t_ob2-1 and t_ob2; H(S_h(t_ob2-1), S_h(t_ob2)) denotes the relative height function between the state points at times t_ob2-1 and t_ob2.
Step 5.3: solve the mean square error between the height information of all previous observation points and the heights of their matched state points:

δH_ob2(t_ob2) = H_ob2(t_ob2) - S_h*(t_ob2)
δH̄_ob2 = (1/t_ob2) Σ_{k=1..t_ob2} δH_ob2(k)
S_H_ob2(t_ob2) = (1/t_ob2) Σ_{k=1..t_ob2} (δH_ob2(k) - δH̄_ob2)^2

where δH_ob2 is the error vector between the observation point heights and the matched state point heights, and δH̄_ob2 denotes the average error vector.
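The error statistics of step 5.3 amount to a few array operations (an illustrative sketch with assumed names and toy numbers, not the patent's code):

```python
import numpy as np

def height_error_stats(H_obs, S_matched):
    """Error vector between observed heights and matched state heights,
    its mean, and the mean square error used as the extra CRF feature."""
    dH = np.asarray(H_obs) - np.asarray(S_matched)  # δH_ob2
    dH_mean = dH.mean()                             # average error
    mse = ((dH - dH_mean) ** 2).mean()              # S_H_ob2
    return dH, dH_mean, mse

dH, m, mse = height_error_stats([0.14, 0.31, 0.47], [0.15, 0.30, 0.45])
# errors about [-0.01, 0.01, 0.02]; mean about 0.0067; mse about 1.56e-4
```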
Step 5.4: taking the mean square error as another feature, establish a characteristic equation according to the relationship between the heights of adjacent state points and the height difference of each state point:

g_s(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)) = -(g_c(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)))^2 / (2σ_s^2)
g_c(S_h(t_ob2), H_ob2(t_ob2), S_H_ob2(t_ob2)) = (H_ob2(t_ob2) - S_h(t_ob2)) - S_H_ob2(t_ob2)

where σ_s denotes the height error covariance and S_H_ob2(t_ob2) denotes the covariance of all observation errors prior to time t_ob2.
Step 5.5: establish the height map matching mathematical model based on the conditional random field, and obtain the state sequence of maximum probability given the heights as the observation sequence; this maximum-probability sequence is the optimal state match of the height:

P(S_h | H_ob2) = (1/Z_ob2) · exp( Σ_t [ Σ_i λ_h,i f_i(S_h(t), H_ob2(t)) + Σ_l μ_h,l g_l(S_h(t-1), S_h(t)) ] )

S_h* = argmax_{S_h} P(S_h | H_ob2)

The maximum-probability state point sequence S_h* is computed with the Viterbi algorithm, where λ_h and μ_h denote the weights of the respective features (all set to 1); i and l index the characteristic functions; Z_ob2 is the normalization factor.
Step 6: two-dimensional position and height information fusion
And 6.1, inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best two-dimensional position matching for storage, and inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best height matching for storage.
And 6.2, combining the two-dimensional optimal matching point and the height optimal matching point by using a method of adjacent time.
time = |time1(t_ob1) - time2(k_ob2)|,  k_ob2 = 1 … t_ob2

subject to 0 < t_ob1 < t_ob2. Find the minimum value of time and the corresponding k_ob2, denoted k*_ob2:

S~(t_ob1) = S_h*(k*_ob2)
S*(t_ob1) = <S_P*(t_ob1), S~(t_ob1)>

where S~(t_ob1) denotes the pedestrian height information at time t_ob1 obtained by nearest-point fusion, and S* is the final pedestrian trajectory information.
And 6.3, correcting and feeding back the three-dimensional position information output by the inertial navigation system according to the matched three-dimensional position. The mathematical model of the correction feedback is as follows:
p^n(t) = S*(t_ob1)
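The nearest-time pairing of step 6.2 can be sketched as follows (an illustration under assumed names and toy timestamps; the patent only specifies the adjacent-time rule, not this code):

```python
def fuse_nearest_time(xy_matches, xy_times, h_matches, h_times):
    """For each best-matched 2D point, attach the best-matched height
    whose sampling time is closest (the adjacent-time fusion rule)."""
    fused = []
    for (x, y), t in zip(xy_matches, xy_times):
        # index k*_ob2 of the height sample nearest in time to t
        k = min(range(len(h_times)), key=lambda i: abs(h_times[i] - t))
        fused.append((x, y, h_matches[k]))
    return fused

fused = fuse_nearest_time(
    xy_matches=[(1.0, 2.0), (2.0, 2.0)], xy_times=[10, 14],
    h_matches=[0.0, 0.15, 0.30], h_times=[9, 13, 15])
# pairs each 2D match with the height sampled nearest in time
```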
to verify the validity of the algorithm, experimental verification was performed. Taking a certain indoor office environment as an example, the experimental place comprises two indoor environments, namely a corridor and a corridor. Fig. 7 shows three-dimensional position information calculated by the inertial navigation system. Therefore, errors exist in navigation output no matter two-dimensional track or height information, and positioning accuracy is not accurate. A three-dimensional indoor map matching algorithm based on conditional random fields is shown in fig. 8. The experimental result shows that the matching result of the method has high accuracy and effectiveness.

Claims (1)

1. A pedestrian indoor three-dimensional map matching method, characterized by comprising the following steps:
step 1: data acquisition, namely preliminarily resolving the three-dimensional position and the course of an indoor pedestrian;
step 1.1, a pedestrian collects pedestrian movement data by wearing an MEMS-INS sensor, wherein the pedestrian movement data comprises: three-axis acceleration data and three-axis gyro data;
step 1.2, solving three-dimensional position and course information of the collected pedestrian motion data by using a strapdown resolving algorithm;
step 2: extracting observation points of the conditional random field model;
step 2.1, extracting horizontal two-dimensional position information of indoor pedestrians according to a fixed-length walking distance, extracting pedestrian height information at a zero-speed moment, acquiring observation points of the CRF (conditional random field) model, and respectively recording two-dimensional positions of the indoor pedestrians and sampling moments of the height information;
and step 3: establishing an indoor electronic map, creating state points according to indoor structure information, and storing state point coordinates;
step 3.1, creating an indoor electronic map by using a known indoor map, and storing the indoor electronic map in a navigation computer;
step 3.2, coordinate points with the minimum and maximum numerical values in the electronic map are obtained and used as value ranges of the state points, then the state points at equal intervals cover the whole indoor range, and state point information is stored;
step 3.3, adding a state point on the height information by taking the step height as a standard;
step 4, a two-dimensional position map matching algorithm based on a conditional random field algorithm;
step 4.1, establishing a characteristic equation according to the relationship between the two-dimensional position observation point coordinates and each state point coordinate;
step 4.2, establishing a characteristic equation according to the azimuth angle between the azimuth information of the observation point and the state point at the corresponding moment;
step 4.3, establishing a two-dimensional map matching mathematical model based on the conditional random field, and obtaining the maximum probability of the state sequence under the condition that the two-dimensional position is taken as the observation sequence, wherein the maximum probability sequence is the optimal state matching of the position;
step 5, a height information map matching algorithm based on the conditional random field algorithm;
step 5.1, dividing the walking height of each step of the pedestrian into different states according to the height of the step and the limit value of the step of the pedestrian;
step 5.2, establishing a characteristic equation which takes the height as an observation point and a state point corresponding to the observation point;
step 5.3, solving the mean square error between the height information of all the previous adjacent observation points and the height of the matched state point;
step 5.4, taking the mean square error as another characteristic, and establishing a characteristic equation according to the relation of the height difference of the adjacent state points;
step 5.5, establishing a height map matching mathematical model based on the conditional random field, and obtaining the maximum probability of a state sequence under the condition that the height is taken as an observation sequence, wherein the maximum probability sequence is the optimal state matching of the height;
step 6: two-dimensional position and height information fusion
Step 6.1, inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best matching in the two-dimensional position for storage, and inquiring the sampling time of the corresponding observation sequence by using the state sequence with the best matching in the height for storage;
step 6.2, combining the two-dimensional optimal matching point and the height optimal matching point by using a method of adjacent time;
and 6.3, correcting the three-dimensional position information output by the inertial navigation system according to the matched three-dimensional position.
CN201810176554.5A · 2018-03-03 · Pedestrian indoor 3D map matching method · Active · CN108426582B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201810176554.5A / CN108426582B (en) | 2018-03-03 | 2018-03-03 | Pedestrian indoor 3D map matching method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201810176554.5A / CN108426582B (en) | 2018-03-03 | 2018-03-03 | Pedestrian indoor 3D map matching method

Publications (2)

Publication Number | Publication Date
CN108426582A (en) | 2018-08-21
CN108426582B (en) | 2021-07-30

Family

ID=63157697

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201810176554.5A (Active) / CN108426582B (en) | Pedestrian indoor 3D map matching method | 2018-03-03 | 2018-03-03

Country Status (1)

Country | Link
CN | CN108426582B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110032709B (en)* | 2019-01-24 | 2023-04-14 | 太原理工大学 | A Method for Locating and Estimating Abnormal Points in Geographic Coordinate Transformation
CN110337065A (en)* | 2019-05-09 | 2019-10-15 | 南京工程学院 | A three-dimensional map-based intelligent hub personnel positioning monitoring and early warning system and method
CN111982132B (en)* | 2019-05-22 | 2022-06-14 | 合肥四维图新科技有限公司 | Data processing method, device and storage medium
CN110543917B (en)* | 2019-09-06 | 2021-09-28 | 电子科技大学 | Indoor map matching method by utilizing pedestrian inertial navigation track and video information
CN113720332B (en)* | 2021-06-30 | 2022-06-07 | 北京航空航天大学 | Floor autonomous identification method based on floor height model
CN115420291B (en)* | 2022-08-29 | 2025-03-14 | 卓宇智能科技有限公司 | A multi-source fusion positioning method and device in a large-scale indoor scene

Citations (6)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US8605998B2 (en) * | 2011-05-06 | 2013-12-10 | Toyota Motor Engineering & Manufacturing North America, Inc. | Real-time 3D point cloud obstacle discriminator apparatus and associated methodology for training a classifier via bootstrapping
CN104023228A (en) * | 2014-06-12 | 2014-09-03 | Self-adaptive indoor vision positioning method based on global motion estimation
CN106871894A (en) * | 2017-03-23 | 2017-06-20 | A kind of map-matching method based on condition random field
CN107179085A (en) * | 2016-03-10 | 2017-09-19 | A kind of condition random field map-matching method towards sparse floating car data
CN107635204A (en) * | 2017-09-27 | 2018-01-26 | A motion behavior-assisted indoor fusion positioning method, device, and storage medium
CN108322889A (en) * | 2018-02-01 | 2018-07-24 | A kind of method, storage medium and the intelligent terminal of multisource data fusion indoor positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mingrong Ren et al., "Indoor Pedestrian Navigation Based on Conditional Random Field Algorithm", Micromachines, vol. 8, no. 11, pp. 1-11, 2017-10-30 *
Zhang Xiaojun et al., "An indoor localization method based on speech recognition", Journal of Chinese Computer Systems (小型微型计算机系统), vol. 37, no. 8, pp. 1883-1888, 2016-08-31 *

Also Published As

Publication number | Publication date
CN108426582A (en) | 2018-08-21

Similar Documents

Publication | Publication Date | Title
CN108426582B (en) | Pedestrian indoor 3D map matching method
Zhou et al. | Activity sequence-based indoor pedestrian localization using smartphones
Philipp et al. | Mapgenie: Grammar-enhanced indoor map construction from crowd-sourced data
Cai et al. | Mobile robot localization using GPS, IMU and visual odometry
CN112639502A (en) | Robot pose estimation
KR20190064311A (en) | Method and apparatus for building map using LiDAR
CN108152831A (en) | A laser radar obstacle recognition method and system
Engel et al. | Deeplocalization: Landmark-based self-localization with deep neural networks
CN107504969A (en) | Quadrotor indoor navigation method based on vision and inertia combination
CN105652871A (en) | Repositioning method for mobile robot
CN110207704B (en) | A pedestrian navigation method based on intelligent recognition of building stair scenes
CN110208740A (en) | TDOA-IMU data adaptive fusion positioning device and method
CN111060099A (en) | Real-time positioning method for unmanned automobile
CN113554705B (en) | A robust lidar positioning method under changing scenarios
CN113744308B (en) | Pose optimization method, pose optimization device, electronic equipment, medium and program product
CN113741503B (en) | Autonomous positioning unmanned aerial vehicle and indoor path autonomous planning method thereof
CN112380314B (en) | Road network information processing method and device, storage medium and electronic equipment
CN109741372A (en) | An odometry motion estimation method based on binocular vision
CN120063235B (en) | A digital construction method based on UAV 3D mapping
CN119354189B (en) | Geomagnetic vector and INS fusion navigation method based on multidimensional constraint factor graph
CN113188557A (en) | Visual inertial integrated navigation method fusing semantic features
Ding et al. | OGI-SLAM2: A hybrid map SLAM framework grounded in inertial-based SLAM
CN113971438A (en) | Multi-sensor fusion positioning and mapping method in desert environment
CN117901126A (en) | A humanoid robot dynamic perception method and computing system
CA2894863A1 (en) | Indoor localization using crowdsourced data

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
