Technical Field
The invention relates to a Markov-based method for judging the moving direction of an object at low resolution, and belongs to the technical field of determining the moving direction of objects at low resolution.
Background Art
At present, digital imaging technology is widely used in many situations, but due to objective constraints the capturing device can sometimes only image at a low resolution, and the problem of missing image information is often encountered. How to determine the moving direction of an object accurately and efficiently from the limited image information available at low resolution has therefore become an important problem in this field.
Summary of the Invention
The invention provides a Markov-based method for judging the moving direction of an object at low resolution, which can accurately and efficiently determine the moving direction of the object from limited image information.
The technical solution of the present invention is a Markov-based method for judging the moving direction of an object at low resolution, comprising the following steps:
Step 1: establish a complete database of continuous, equal-time-interval images of the object at low resolution;
Step 2: preprocess the low-resolution image information with median filtering for denoising, and use the SIFT feature matching algorithm to obtain the image corner points, where corner points are feature points that represent the image information;
Step 3: select (m+1) frames from the processed image database as the training set, label them A0, A1, ..., Am-1, Am in chronological order, and take the image A0 with the earliest shooting time as the reference frame;
Step 4: establish a plane Cartesian coordinate system in the reference frame to divide the state space I, and again use the SIFT feature matching algorithm to match, in turn, the m frames captured after image A0 against the reference frame, obtaining the state space S of those m frames relative to the reference frame and, from it, the first-order state transition probability matrix P, thereby completing the construction of the Markov prediction model;
Step 5: if any frame Aa and its direction state with respect to the reference frame are known, predict the motion state of the object after Aa, i.e. its moving direction, from the transition probability matrix obtained in the previous step.
Specifically, Step 3 comprises:
Step 3.1: screen the training set:
Select (m+1) temporally consecutive frames from all preprocessed images and label them A0, A1, ..., Am in order of shooting time; the time point corresponding to each frame is defined as t0, t1, ..., tm, satisfying t0 < ... < tm;
Step 3.2: select the reference frame:
Select the image with the earliest shooting time as the reference frame, so A0 is the reference frame;
Specifically, Step 4 comprises:
Step 4.1: definition of the state space division
Establish a plane Cartesian coordinate system in the reference frame A0, taking the coordinates of a corner point within frame A0 as the origin, and divide the plane into n state spaces, i.e. I = {I1, I2, ..., In};
Step 4.2: principle of the state space division
Considering only planar motion of the object, the n state spaces in the plane satisfy formula (4-1),
I1 ∪ I2 ∪ ... ∪ In = [0, 360), with Ii ∩ Ij = ∅ for i ≠ j  (4-1)
where the value 360 denotes an angle in degrees; the practical meaning of each state space can then be written in turn as formula (4-2), where n ≥ 2,
Ik = [360(k-1)/n, 360k/n), k = 1, 2, ..., n  (4-2)
Step 4.3: obtain the relative state space S with the SIFT algorithm:
Use the SIFT feature matching algorithm to match the m frames against the reference frame in turn, obtaining for each of the m frames the state of its direction relative to the reference frame; this generates the relative state space S
S = {S1, S2, ..., Sm}  (4-3);
Step 4.4: obtain the state transition probability matrix P:
From the obtained relative state space S of the m frames with respect to the reference frame, the first-order Markov transition probability matrix is derived, where P denotes the first-order state transition matrix and Pij denotes the one-step transition probability from state i to state j, with i, j = 1, 2, ..., n, as shown in formula (4-4):
P = [ P11 P12 ... P1n
      P21 P22 ... P2n
      ...
      Pn1 Pn2 ... Pnn ]  (4-4)
Specifically, Step 5 comprises:
Step 5.1: the state of a certain frame is known:
If any frame Aa and its direction state with respect to the reference frame are known to be I3, where I3 indicates that the third element of the row vector is 1 and the rest are 0, then the initial state probability vector can be taken as π(a) = (0, 0, 1, 0, ..., 0), containing (n-1) zeros in total;
Step 5.2: predict the moving direction at the next time point:
From the relation between adjacent Markov states, π(a+1) = π(a)·P, the possible state vector π(a+1) of this frame at the next time point can be obtained; looking up the state space then gives the moving direction of the object at the next time point.
The beneficial effects of the present invention are:
1. The invention uses the SIFT feature matching algorithm, which not only obtains the corner point information of each frame, but also matches the corner points of each frame against those of the reference frame with high precision, so that the direction state of each frame can be obtained efficiently and accurately.
2. For continuous, equally spaced photographs of a moving object collected at low resolution, the invention uses a first-order Markov prediction model to predict the moving direction of the object, which fits well with the idea that adjacent states are correlated, and to a large extent explores the problem of predicting the moving direction of an object from low-resolution images of its known motion.
Description of the Drawings
Fig. 1 is a flow chart of the idea of the present invention;
Fig. 2 is a schematic diagram of the spatial states of the image information of the present invention;
Fig. 3 is the SIFT feature point diagram of the first frame in the illustrative example;
Fig. 4 is the SIFT feature point diagram of the second frame in the illustrative example;
Fig. 5 is the SIFT feature point matching diagram of Fig. 3 and Fig. 4.
Detailed Description
The present invention is further described below in conjunction with the accompanying drawings and specific embodiments.
Embodiment 1: As shown in Figs. 1-5, a Markov-based method for judging the moving direction of an object at low resolution proceeds as follows. First, a complete database of continuous, equal-time-interval images of the object at low resolution is established. Second, the low-resolution image information is preprocessed with median filtering for denoising, and the SIFT feature matching algorithm is used to obtain the image corner points. Third, (m+1) frames are selected from the processed image database as the training set, labeled A0, A1, ..., Am-1, Am in chronological order, and the image A0 with the earliest shooting time is taken as the reference frame. Then a plane Cartesian coordinate system is established in the reference frame to divide the state space I, and the SIFT feature matching algorithm is again used to match, in turn, the m subsequently captured frames against the reference frame, obtaining the state space S of those m frames relative to the reference frame and, from it, the state transition probability matrix P, thereby completing the construction of the Markov prediction model. Finally, if any frame Aa and its direction state with respect to the reference frame are known, the transition probability matrix obtained in the previous step can be used to predict the motion state of the object after Aa, i.e. its moving direction.
Further, the relevant parameters are set as follows:
feature points that can represent the image information are called corner points;
the total number of states in the state space is defined as n;
the number of images selected for the training set is (m+1) frames;
each frame in the selected image training set is defined as A0, A1, ..., Am, and A0 is taken as the reference frame;
the relative state space of the remaining m frames with respect to the reference frame is defined as S;
the first-order state transition matrix is defined as P;
the one-step transition probability from state i to state j is defined as Pij, with i, j = 1, 2, ..., n.
Further, the overall steps of the method of the present invention are as follows:
Step 1: establish a complete database of continuous, equal-time-interval images of the object at low resolution:
Collect image information of the moving object at low resolution. Only planar motion is considered here; camera refocusing and three-dimensional motion of the object are not considered. Moreover, the images are captured continuously and at equal time intervals during the motion of the object, so that the Markov prediction model can be constructed subsequently.
Step 2: preprocess the image information:
Step 2.1: image denoising:
Since each image is instantaneous information of the object's motion captured at low resolution, median filtering is applied to the image for denoising;
Step 2.2: obtain the image corner points:
Use the SIFT feature matching algorithm to obtain the feature points, i.e. the corner point information, of each frame.
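The following is a minimal sketch of this preprocessing (Steps 2.1 and 2.2), assuming OpenCV is used; the patent does not name a specific library, and the file name in the usage comment is purely illustrative.

```python
import cv2

def preprocess_frame(path, ksize=5):
    """Denoise a low-resolution frame with a median filter and detect its SIFT keypoints (corner points)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                     # load the frame as grayscale
    denoised = cv2.medianBlur(img, ksize)                            # median filtering for denoising (Step 2.1)
    sift = cv2.SIFT_create()                                         # SIFT detector/descriptor
    keypoints, descriptors = sift.detectAndCompute(denoised, None)   # corner points of the frame (Step 2.2)
    return denoised, keypoints, descriptors

# Example usage (illustrative file name):
# img, kps, desc = preprocess_frame("frame_00.png")
```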
Step 3: select the training set:
Step 3.1: screen the training set:
Select (m+1) temporally consecutive frames from all preprocessed images and label them A0, A1, ..., Am in order of shooting time; the time point corresponding to each frame is defined as t0, t1, ..., tm, satisfying t0 < ... < tm;
Step 3.2: select the reference frame:
Select the image with the earliest shooting time as the reference frame, so A0 is the reference frame.
Step 4: divide the state space I:
Step 4.1: definition of the state space division
Establish a plane Cartesian coordinate system in the reference frame A0, taking the coordinates of a corner point within frame A0 as the origin, and divide the plane into n state spaces, i.e. I = {I1, I2, ..., In};
Step 4.2: principle of the state space division
Considering only planar motion of the object, the n state spaces in the plane satisfy formula (4-1)
I1 ∪ I2 ∪ ... ∪ In = [0, 360), with Ii ∩ Ij = ∅ for i ≠ j  (4-1)
where the value 360 denotes an angle in degrees; the practical meaning of each state space can then be written in turn as formula (4-2), where n ≥ 2 (the plane is divided into at least two parts)
Ik = [360(k-1)/n, 360k/n), k = 1, 2, ..., n  (4-2)
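As a sketch of this division, and assuming the state spaces partition the full 360° range of direction angles into n equal sectors (consistent with formula (4-2) and the n = 4 example given later), the state index of a given direction angle can be computed as follows:

```python
def direction_state(angle_deg, n):
    """Map a direction angle in degrees to its 1-based state index k in I = {I1, ..., In},
    where Ik = [360*(k-1)/n, 360*k/n)."""
    if n < 2:
        raise ValueError("the plane must be divided into at least two parts (n >= 2)")
    angle = angle_deg % 360.0                  # normalize the angle into [0, 360)
    return int(angle // (360.0 / n)) + 1       # sector index, counted from 1

# e.g. with n = 4: direction_state(200, 4) -> 3, i.e. state I3 = [180, 270)
```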
Step 4.3: obtain the relative state space S with the SIFT algorithm:
Again use the SIFT feature matching algorithm to match the m frames against the reference frame in turn, obtaining for each of the m frames the state of its direction relative to the reference frame; this generates the relative state space S
S = {S1, S2, ..., Sm}  (4-3);
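The patent does not spell out exactly how a frame's direction state is derived from the SIFT matches; the sketch below assumes the direction is taken as the angle of the mean displacement of the matched corner points from the reference frame to the current frame, and should be read as one plausible interpretation rather than the definitive procedure.

```python
import numpy as np
import cv2

def relative_direction_state(ref_kps, ref_desc, cur_kps, cur_desc, n):
    """Match SIFT corner points of the current frame against the reference frame and return
    the 1-based direction state of the current frame relative to the reference frame.
    Assumption: the direction is the angle of the mean displacement of matched keypoints."""
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(ref_desc, cur_desc)           # queryIdx -> reference, trainIdx -> current
    disp = np.array([np.array(cur_kps[m.trainIdx].pt) - np.array(ref_kps[m.queryIdx].pt)
                     for m in matches])                   # displacement vectors of matched corner points
    dx, dy = disp.mean(axis=0)
    angle = np.degrees(np.arctan2(-dy, dx)) % 360.0       # image y grows downward, so negate dy
    return int(angle // (360.0 / n)) + 1                  # same mapping as Ik = [360(k-1)/n, 360k/n)
```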
Step 4.4: obtain the state transition probability matrix P:
From the obtained set S of direction states of the m frames relative to the reference frame, the first-order Markov transition probability matrix is derived, where P denotes the first-order state transition matrix and Pij denotes the one-step transition probability from state i to state j, with i, j = 1, 2, ..., n, as shown in formula (4-4):
P = [ P11 P12 ... P1n
      P21 P22 ... P2n
      ...
      Pn1 Pn2 ... Pnn ]  (4-4)
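A minimal sketch of estimating P from the state sequence S by counting one-step transitions and normalizing each row; leaving rows with no observed transitions uniform is an assumption of this sketch, since the patent does not specify that case.

```python
import numpy as np

def transition_matrix(states, n):
    """Estimate the first-order transition matrix P from a 1-based state sequence S = [S1, ..., Sm]."""
    counts = np.zeros((n, n))
    for i, j in zip(states[:-1], states[1:]):    # consecutive state pairs = one-step transitions
        counts[i - 1, j - 1] += 1
    P = np.full((n, n), 1.0 / n)                 # assumption: unobserved rows stay uniform
    observed = counts.sum(axis=1) > 0
    P[observed] = counts[observed] / counts[observed].sum(axis=1, keepdims=True)
    return P                                     # P[i-1, j-1] corresponds to Pij

# e.g. transition_matrix([1, 2, 2, 3, 3, 3, 4, 1, 2], n=4)
```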
Step 5: predict the moving direction of the object:
Step 5.1: the state of a certain frame is known:
If any frame Aa and its direction state with respect to the reference frame are known to be I3, where I3 indicates that the third element of the row vector is 1 and the rest are 0, then the initial state probability vector can be taken as π(a) = (0, 0, 1, 0, ..., 0), containing (n-1) zeros in total;
Step 5.2: predict the moving direction at the next time point:
From the relation between adjacent Markov states,
π(a+1) = π(a)·P,
the possible state vector π(a+1) of this frame at the next time point can be obtained; looking up the state space then gives the moving direction of the object at the next time point.
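A minimal sketch of this prediction step, assuming the predicted direction state is read off as the most probable component of the next state vector:

```python
import numpy as np

def predict_next_state(current_state, P):
    """Given the current direction state (1-based) and the one-step transition matrix P,
    return the next state probability vector and the most probable next state."""
    n = P.shape[0]
    pi = np.zeros(n)
    pi[current_state - 1] = 1.0          # initial state probability vector, e.g. (0, 0, 1, 0, ...) for I3
    pi_next = pi @ P                     # relation between adjacent Markov states: pi(a+1) = pi(a) * P
    return pi_next, int(np.argmax(pi_next)) + 1

# e.g. with a 4-state model: predict_next_state(3, P) -> next state vector and predicted state index
```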
Illustrative example:
Given 20 frames captured during the motion of a moving object, median filtering is applied to the images for denoising, and the SIFT feature matching algorithm is used to obtain the feature points of the 20 frames (the SIFT feature point diagrams of two of these frames are shown in Fig. 3 and Fig. 4).
Step 3: select the training set:
Step 3.1: screen the training set:
Select 10 temporally consecutive frames from all preprocessed images and label them A0, A1, ..., A9 in order of shooting time; the time point corresponding to each frame is defined as t0, t1, ..., t9, satisfying t0 < ... < t9;
Step 3.2: select the reference frame:
Select the image with the earliest shooting time as the reference frame, so A0 is the reference frame.
Step 4: divide the state space I:
Step 4.1: definition of the state space division
Establish a plane Cartesian coordinate system in the reference frame A0, taking the coordinates of a corner point within frame A0 as the origin, and divide the plane into 4 state spaces, i.e. I = {I1, I2, I3, I4};
Step 4.2: principle of the state space division
Considering only planar motion of the object, the 4 state spaces in the plane satisfy I1 ∪ I2 ∪ I3 ∪ I4 = [0, 360), so the practical meaning of each state space is: I1 = [0, 90), I2 = [90, 180), I3 = [180, 270), I4 = [270, 360).
Step 4.3: obtain the relative state space S with the SIFT algorithm:
Again use the SIFT feature matching algorithm to match the 9 frames against the reference frame in turn (a matching example is shown in Fig. 5), obtaining for each of the 9 frames the state of its direction relative to the reference frame; this gives the relative state space S = {S1, S2, ..., S9}.
Step 4.4: obtain the state transition probability matrix P:
From the obtained set S of direction states of the 9 frames relative to the reference frame, the first-order Markov transition probability matrix is derived, where P denotes the first-order state transition matrix and Pij denotes the one-step transition probability from state i to state j, with i, j = 1, 2, 3, 4, giving the one-step state transition probability matrix:
Step 5: predict the moving direction of the object:
If any frame Aa and its direction state with respect to the reference frame are known to be I3, the initial state probability vector can be taken as π(a) = (0, 0, 1, 0). From the relation between adjacent Markov states, π(a+1) = π(a)·P, the motion state vector of the image at the next time point can be obtained. Looking up the state space shows that the direction state of the object at the next time point is I3, i.e. [180, 270), so the moving direction is toward the lower left.
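To show how the pieces above fit together for this n = 4 example: the numerical transition matrix computed in the example is not reproduced in this text, so the matrix in the snippet below is purely illustrative and is merely chosen so that prediction from I3 again yields I3, matching the stated outcome.

```python
import numpy as np

# Purely illustrative 4x4 one-step transition matrix (NOT the one computed in this example);
# it is only chosen so that prediction from state I3 again yields I3.
P = np.array([
    [0.4, 0.3, 0.2, 0.1],
    [0.2, 0.4, 0.3, 0.1],
    [0.1, 0.2, 0.5, 0.2],
    [0.1, 0.1, 0.3, 0.5],
])

pi_a = np.array([0.0, 0.0, 1.0, 0.0])    # known state I3 -> initial state probability vector
pi_next = pi_a @ P                       # relation between adjacent Markov states
print(pi_next)                           # [0.1 0.2 0.5 0.2]
print(int(np.argmax(pi_next)) + 1)       # 3 -> I3 = [180, 270): the object moves toward the lower left
```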
The specific embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to these embodiments; various changes can be made within the scope of knowledge possessed by a person of ordinary skill in the art without departing from the gist of the present invention.