




Technical Field
The invention belongs to the field of digital image processing, and in particular relates to an image registration method based on an inertial sensor and a camera.
Background
The key problem in image registration is the estimation of the camera's motion parameters, which is usually accomplished through camera calibration. In the prior art, camera self-calibration methods include the following:
Li Lichun et al. proposed a three-dimensional reconstruction algorithm based on the eight-point method for the fundamental matrix [Li Lichun, Qiu Zhiqiang, Wang Kunpeng, et al. A three-dimensional reconstruction algorithm based on matching-measure-weighted solution of the fundamental matrix [J]. Computer Applications, 2007, 27(10): 2530-2533]. The method is complex and computationally intensive, making real-time image processing difficult to achieve.
Zhang Yang et al. proposed the FOE model to verify the translational effect of camera motion [Zhang Yang, et al. Moving object detection algorithm based on a moving camera in the visible-light range. Computer Vision and Pattern Recognition Workshops, IEEE International Conference, 2006: 130-131]. The method considers neither the camera's rotational motion nor its intrinsic parameter matrix, so its accuracy is low.
Zheng Hong et al. proposed a calibration method based on active vision, which uses hardware platforms with known motion, such as a pan-tilt unit, to provide camera motion information [Zheng Hong, Liu Zhenqiang. Self-calibration of a pan-tilt camera based on an accurate model [J]. Robot, 2013, 35(3): 1994-2013]. Although the method achieves good real-time performance, pan-tilt equipment is bulky, so it is difficult to extend to handheld and mobile devices, and its range of application is limited.
Summary of the Invention
The invention proposes an image registration method based on an inertial sensor and a camera. The method has a small computational load, high real-time performance and high accuracy, and is applicable to handheld and mobile devices.
To solve the above technical problems, the present invention provides an image registration method based on an inertial sensor and a camera, comprising the following steps:
Step 1: Synchronously control the camera and the inertial sensor, and simultaneously acquire color images I and J and depth images I' and J' of the same scene, together with the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az].
Step 2: Extract the corner points m1(x1, y1) of color image I.
Step 3: Apply Kalman filtering to the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az] to obtain the Kalman-filtered estimates [ŵx, ŵy, ŵz] and [âx, ây, âz]; use the Euler kinematic equations to convert [ŵx, ŵy, ŵz] into the Euler angles [αk, βk, γk], and use the uniformly-accelerated-motion formula to convert [âx, ây, âz] into the three-axis displacement [txk, tyk, tzk], thereby obtaining the rotation matrix R and translation vector T of the camera extrinsic parameter matrix [R, T].
Step 4: Using the affine transformation between the color image and the depth image, solve for the coordinates m1ir(x1, y1) and m2ir(x1, y1) of the corner point m1(x1, y1) of color image I in depth images I' and J'; using the proportional relationship between depth-image gray value and pixel depth, solve for the depth information u1 and u2 of corner point m1(x1, y1) in depth images I' and J'.
Step 5: Substitute the rotation matrix R and translation vector T obtained in Step 3 and the depth information u1, u2 of the color image obtained in Step 4 into the OFCM model, and compute the background optical flow m2(x2, y2) of image J. The OFCM model is shown in formula (1):

u2·m2 = u1·K·R·K⁻¹·m1 + K·T    (1)

In formula (1), K is the intrinsic parameter matrix of the camera, K⁻¹ is its inverse, and m1 and m2 are in homogeneous coordinates.
Step 6: Use the RANSAC algorithm to robustly estimate the background optical flow m2 of color image I, removing from m2 the optical flow produced by moving objects and any misregistered optical flow, thereby completing the registration of color images I and J; then use the retained correct background optical flow of color image I to initialize color image J for the registration of subsequent images.
Main principles of the invention:

1. Combining the rigid-body transformation relations of Euclidean space, the present invention proposes a relationship model between image background optical flow and camera motion parameters (OFCM). The model is derived as follows:
Assume the color CCD follows the pinhole model and that the background in the scene moves only relative to the color CCD. The camera coordinate system is established with the optical center of the color CCD as the origin O and the line of the optical axis as the Z axis; the X and Y axes are then established according to the left-hand rule.
Assuming that the camera coordinate system coincides with the world coordinate system before the camera moves, the coordinates of any point P in the spatial background satisfy:

PW = PC    (2)

In formula (2), PW is the coordinate of the point in the world coordinate system and PC is its coordinate in the camera coordinate system. A background point P and its image point on the imaging plane, with homogeneous coordinates m1(x1, y1, 1), satisfy formula (3):

u1·m1 = K·PC    (3)
In formula (3), m1(x1, y1) is the image point on the imaging plane (in model computations it is replaced by a Harris corner); u1 can be approximated as the depth of the space point corresponding to m1(x1, y1); K is the intrinsic parameter matrix of the camera. Considering the relationship between the pixel coordinate system and the physical coordinate system of the image, the five-parameter model of the camera intrinsic matrix is obtained, as shown in formula (4):

K = [ f/dx   s      u0 ]
    [ 0      f/dy   v0 ]    (4)
    [ 0      0      1  ]

In formula (4), f is the focal length, dx and dy are the physical sizes of a pixel in the x and y directions, [u0, v0] are the coordinates of the principal point in the pixel coordinate system, and s is the camera skew (distortion) coefficient.
Assume the camera undergoes translational and rotational motion in arbitrary directions; the translation is represented by the translation vector T = [tx, ty, tz]ᵀ and the rotation by a 3×3 rotation matrix R. The coordinates of background point P in the camera coordinate system then satisfy:

P′C = R·PW + T = R·PC + T    (5)

In formula (5), the rotation matrix R can be expressed in terms of the Euler angles α, β, γ of Euclidean space, as shown in formula (6) (written here for the Z–Y–X composition R = Rz(γ)·Ry(β)·Rx(α)):

R = [ cosβ·cosγ   sinα·sinβ·cosγ − cosα·sinγ   cosα·sinβ·cosγ + sinα·sinγ ]
    [ cosβ·sinγ   sinα·sinβ·sinγ + cosα·cosγ   cosα·sinβ·sinγ − sinα·cosγ ]    (6)
    [ −sinβ       sinα·cosβ                    cosα·cosβ                  ]
After the camera moves, the space point P and its image point on the imaging plane, with homogeneous coordinates m2(x2, y2, 1), satisfy:

u2·m2 = K·P′C = K·(R·PC + T)    (7)
Substituting formula (3) into formula (7) gives the optical flow of the space point P on the imaging plane before and after the camera motion, i.e., the final form of the OFCM model:

u2·m2 = u1·K·R·K⁻¹·m1 + K·T    (8)

Here u1 and u2 can be approximated as the depth of the space point P in the camera coordinate system, computed from the depth images acquired by the camera; the extrinsic parameter matrix [R, T] is acquired and processed by the hardware platform; and m1 is obtained by Harris corner detection. It is assumed that both the depth CCD and the color CCD of the camera operate at a fixed focal length f, so that the image plane and the object plane are in one-to-one correspondence. The intrinsic matrix K of the color CCD is therefore constant throughout image acquisition and, once obtained by a single calibration, can be used as an empirical matrix.
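For illustration, a minimal numerical sketch of the OFCM prediction (formula (8)) in Python/NumPy; the intrinsic values and motion parameters below are hypothetical placeholders, not calibration results of the invention:

```python
import numpy as np

def ofcm_flow(m1_px, u1, u2, K, R, T):
    """Predict the background-flow position m2 of a corner m1 via
    u2*m2 = u1*K*R*inv(K)*m1 + K*T (formula (8)); m1, m2 homogeneous."""
    m1 = np.array([m1_px[0], m1_px[1], 1.0])
    rhs = u1 * (K @ R @ np.linalg.inv(K) @ m1) + K @ T
    m2 = rhs / u2                      # homogeneous coordinates of m2
    return m2[:2] / m2[2]              # normalize back to pixel coordinates

# Hypothetical intrinsics (five-parameter model of formula (4)) and motion
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # no rotation in this toy example
T = np.array([0.01, 0.0, 0.0])         # 1 cm translation along X
print(ofcm_flow((100.0, 200.0), u1=1.5, u2=1.5, K=K, R=R, T=T))
```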
Advantages: The existing camera models are the FOE model and the MOF model. Neither considers the camera's intrinsic parameter matrix, and the FOE model supports only translational camera motion, so high accuracy cannot be maintained under complex camera motion. The OFCM model proposed by the present invention incorporates the camera's intrinsic matrix, supports three-axis translation and three-axis rotation of the camera, and can withstand abrupt changes in scene brightness. The OFCM model therefore maintains high-accuracy image registration under both complex camera motion and abrupt changes in scene brightness.
2. Hardware platform based on an inertial sensor and a camera:
The present invention uses the OFCM model to compute the background optical flow of the image. From the OFCM model, computing the background optical flow requires the camera's extrinsic parameter matrix [R, T] and the depth information u1, u2 corresponding to the image pixels. The present invention employs a hardware platform to acquire both.
The hardware platform of the present invention comprises three parts: a control device (FPGA), an inertial sensor, and a camera. The camera may be a Kinect camera; the inertial sensor comprises a gyroscope and an accelerometer; the Kinect camera comprises a depth-image acquisition CCD and a color-image acquisition CCD. The minimal systems of the FPGA and the inertial sensor are both fixed on the depth-image acquisition CCD, with the inertial sensor located directly above the Kinect camera. The gyroscope acquires the camera's three-axis angular velocity [wx, wy, wz], the accelerometer acquires the camera's three-axis acceleration [ax, ay, az], and the Kinect camera captures the color images I, J and the depth images I', J'. After the hardware platform is powered on, the FPGA controls the shutter of the Kinect camera with control pulses to acquire the color and depth images, and synchronously controls the Kinect camera and the inertial sensor over the SPI protocol, ensuring a one-to-one correspondence between the acquired image sequence and the collected camera motion parameters.
Advantages: The image registration algorithm based on this model has a small computational load, and the system has good real-time performance. Because the inertial sensor is small and accurate, it can be adopted in handheld and mobile devices far more readily than pan-tilt units and other equipment used to acquire camera motion parameters.
3. Establishment of the Kalman filter model based on the inertial sensor:
Because the camera motion parameters collected by the inertial sensor contain errors such as zero-point offset, scale-factor error, and random noise, the collected motion parameters must be filtered and denoised.

When using the classic Kalman filter equations, the error model and state-transition relation of the inertial sensor must be established according to its performance and parameters:
z′k = zk·(M + 1) + Bf + U    (9)

zk = Ψk,k−1·zk−1 + V    (10)
Here, z′k is the actual observation of the system at time k, zk is the true value of the system at time k, M is the scale-factor error, Bf is the zero-point drift, V and U are random Gaussian white noise, and Ψk,k−1 is the state-transition matrix.

From formula (9), combined with time-series analysis, the inertial sensor's scale-factor error M, zero-point drift Bf, and Gaussian white noise U and V are solved; from formula (10), combined with time-series analysis, the state-transition matrix Ψk,k−1 between the parameters is solved.
Substituting the results computed from formulas (9) and (10) into the classic Kalman filter recursion (11), the parameter at time k is filtered to obtain the estimate ẑk:

zk,k−1 = Ψk,k−1·ẑk−1
Pk,k−1 = Ψk,k−1·Pk−1·Ψk,k−1ᵀ + Q
Kk = Pk,k−1·(M+1) / [(M+1)²·Pk,k−1 + σU²]    (11)
ẑk = zk,k−1 + Kk·[z′k − (M+1)·zk,k−1 − Bf]
Pk = [1 − Kk·(M+1)]·Pk,k−1

In formula (11), Q is the system noise variance, σU² is the variance of the measurement noise U, Kk is the Kalman gain, Pk is the state covariance, and zk,k−1 is the state prediction.
Compared with the prior art, the present invention has the following significant advantages:
(1) The camera parameters are estimated in real time from the inertial sensor and the camera: the gyroscope and accelerometer in the inertial sensor acquire the camera's rotational and translational motion parameters respectively, while the camera contains a depth-image acquisition CCD (depth CCD) and a color-image acquisition CCD (color CCD), which can be combined to obtain the depth information of the color image. Although the method relies on a hardware platform, the inertial sensor's small size and high accuracy mean that, once bound to the camera, the method can readily be adopted in handheld and mobile devices.
(2) From the viewpoint of projective geometry, and combining the rigid-body transformation relations of Euclidean space, a relationship model between image background optical flow and camera motion parameters (optical flow and camera's motion) is derived, denoted the OFCM model. The model incorporates the camera's intrinsic parameter matrix and supports three-axis translation and three-axis rotation of the camera; at the same time it retains high accuracy under abrupt brightness changes, a substantial advance over the FOE model and optical-flow matching methods.
Description of the Drawings
Figure 1 is a flow chart of the image registration method based on an inertial sensor and a camera according to the present invention.

Figure 2 shows the image registration result obtained with the method of the present invention under an abrupt change in scene brightness.

Figure 3 shows the image registration result obtained with the method of the present invention when the camera undergoes translational motion.

Figure 4 shows the image registration result obtained with the method of the present invention when the camera undergoes rotational motion.

Figure 5 shows the image registration result obtained with the method of the present invention when the camera undergoes both translational and rotational motion.
Detailed Description

As shown in Figure 1, the image registration method based on an inertial sensor and a camera of the present invention comprises the following steps.
Step 1: On the hardware platform, synchronously control the camera and the inertial sensor, and simultaneously acquire color images I and J and depth images I' and J' of the same scene, together with the camera's motion parameters, i.e., the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az]. The specific implementation is as follows:
The hardware platform of the present invention comprises three parts: a control device (FPGA), an inertial sensor, and a camera. The inertial sensor comprises a gyroscope and an accelerometer; the camera comprises a depth-image acquisition CCD and a color-image acquisition CCD and captures the color images I, J and the depth images I', J'. The minimal systems of the FPGA and the inertial sensor are fixed on the depth-image acquisition CCD. The gyroscope acquires the camera's three-axis angular velocity [wx, wy, wz]; the accelerometer acquires the camera's three-axis acceleration [ax, ay, az]. After power-on, the FPGA controls the camera shutter with control pulses to acquire the color and depth images, and synchronously controls the camera and the inertial sensor over the SPI protocol to guarantee a one-to-one correspondence between the acquired image sequence and the collected camera motion parameters.
Step 2: Extract the corner points m1(x1, y1) of color image I using Harris corner detection, as follows:
2.1 Compute the correlation matrix M of pixel p[x, y] in color image I, as shown in formula (1):

M = Σ w(x, y) · [ Ix²     Ix·Iy ]
                [ Ix·Iy   Iy²   ]    (1)

In formula (1), (x, y) are the image coordinates of pixel p, w(x, y) is a Gaussian weighting function over a window of size w×w, Ix is the derivative in the row direction, and Iy is the derivative in the column direction.
2.2 Compute the corner response R(x, y) of pixel m1(x1, y1), as shown in formula (2):

R(x, y) = det M − k·(trace M)²    (2)

In formula (2), det M is the value of the determinant of the correlation matrix M, trace M is the trace of M, and k is a constant, generally 0.04–0.06.
2.3 Determine whether pixel m1(x1, y1) is a corner, using the criterion of formula (3): if the corner response R(x, y) of pixel m1(x1, y1) satisfies formula (3), then m1(x1, y1) is a corner; if it does not satisfy formula (3), m1(x1, y1) is discarded.

[R(x, y) ≥ R(x′, y′)] ∩ [R(x, y) ≥ 0.01·Rmax]    (3)

In formula (3), R(x′, y′) denotes the corner responses of the other pixels in the w×w window besides m1(x1, y1), and Rmax is the maximum corner response in the edge-feature image If.
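For reference, a compact sketch of steps 2.1–2.3 in Python/OpenCV; cv2.cornerHarris computes the response of formula (2) internally, the local-maximum test and the 0.01·Rmax threshold implement formula (3), and the image path is a placeholder:

```python
import cv2
import numpy as np

def harris_corners(gray, block_size=5, k=0.04):
    """Harris detection per steps 2.1-2.3: response map (formula (2)),
    then keep local maxima with R >= 0.01 * Rmax (formula (3))."""
    R = cv2.cornerHarris(np.float32(gray), blockSize=block_size, ksize=3, k=k)
    # Local-maximum test over the w*w window, implemented with a dilation
    local_max = R == cv2.dilate(R, np.ones((block_size, block_size), np.uint8))
    mask = local_max & (R >= 0.01 * R.max())
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys], axis=1)      # corner coordinates m1(x1, y1)

img = cv2.imread("frame_I.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
corners = harris_corners(img)
```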
Step 3: Apply Kalman filtering to the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az] to obtain the Kalman-filtered estimates [ŵx, ŵy, ŵz] and [âx, ây, âz]; use the Euler kinematic equations to convert [ŵx, ŵy, ŵz] into the Euler angles [αk, βk, γk], and use the uniformly-accelerated-motion formula to convert [âx, ây, âz] into the three-axis displacement [txk, tyk, tzk], thereby obtaining the rotation matrix R and translation vector T of the camera extrinsic parameter matrix [R, T].

3.1 Apply Kalman filtering to the camera's motion parameters, i.e., the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az], to obtain the Kalman-filtered estimates [ŵx, ŵy, ŵz] and [âx, ây, âz], computed as follows:
3.1.1 Establish the error model and state-transition relation of the inertial sensor according to its performance and motion parameters, and use time-series analysis to solve for the inertial sensor's scale-factor error M, zero-point drift Bf, Gaussian white noise U, the state-transition matrix Ψk,k−1 between motion parameters, and the Gaussian white noise V.
The error model is shown in formula (4) and the state-transition relation in formula (5):

z′k = zk·(M + 1) + Bf + U    (4)

zk = Ψk,k−1·zk−1 + V    (5)

In formulas (4) and (5), z denotes one motion parameter among the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az]; z′k is the measured value of that motion parameter from the inertial sensor at time k; zk is its true value at time k; M is the scale-factor error; Bf is the zero-point drift; V and U are random Gaussian white noise; and Ψk,k−1 is the state-transition matrix.
From formula (4), combined with time-series analysis, the inertial sensor's scale-factor error M, zero-point drift Bf, and Gaussian white noise U and V are solved; from formula (5), combined with time-series analysis, the state-transition matrix Ψk,k−1 between motion parameters is solved.
3.1.2 Substitute the results of step 3.1.1 into the classic Kalman filter recursion (6) to filter the motion parameter zk at time k and obtain its estimate ẑk:

zk,k−1 = Ψk,k−1·ẑk−1
Pk,k−1 = Ψk,k−1·Pk−1·Ψk,k−1ᵀ + Q
Kk = Pk,k−1·(M+1) / [(M+1)²·Pk,k−1 + σU²]    (6)
ẑk = zk,k−1 + Kk·[z′k − (M+1)·zk,k−1 − Bf]
Pk = [1 − Kk·(M+1)]·Pk,k−1

In formula (6), Q is the system noise variance, σU² is the variance of the measurement noise U, Kk is the Kalman gain, Pk is the state covariance, and zk,k−1 is the state prediction.
Kalman filtering is applied in this way to each component of the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az], yielding [ŵx, ŵy, ŵz] and [âx, ây, âz].
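As a concrete reference, a minimal scalar sketch of the filter of steps 3.1.1–3.1.2, assuming that the scale factor M, drift Bf, noise variances, and state-transition coefficient have already been identified by time-series analysis; all numeric values are illustrative only:

```python
import numpy as np

def kalman_filter(z_meas, psi, M, Bf, Q, sigma_u2, z0=0.0, P0=1.0):
    """Scalar Kalman filter for z'_k = z_k*(M+1) + Bf + U and
    z_k = psi*z_{k-1} + V (formulas (4)-(6)); returns estimates z_hat_k."""
    H = M + 1.0                                     # observation coefficient
    z_hat, P, out = z0, P0, []
    for zp in z_meas:
        z_pred = psi * z_hat                        # state prediction z_{k,k-1}
        P_pred = psi * P * psi + Q                  # predicted covariance
        K = P_pred * H / (H * H * P_pred + sigma_u2)        # Kalman gain K_k
        z_hat = z_pred + K * (zp - H * z_pred - Bf)         # filtered estimate
        P = (1.0 - K * H) * P_pred                  # updated covariance P_k
        out.append(z_hat)
    return np.array(out)

# Illustrative run on noisy gyro samples (all parameters hypothetical)
raw_wx = np.random.normal(0.1, 0.02, 200)
wx_hat = kalman_filter(raw_wx, psi=1.0, M=0.01, Bf=0.002, Q=1e-5, sigma_u2=4e-4)
```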
3.2 Use the Euler kinematic equations to convert the filtered three-axis angular velocities wxk, wyk, wzk at time k into the Euler angles αk, βk, γk of the Euclidean coordinate system, and substitute the resulting αk, βk, γk into the rotation matrix R to complete the computation of the color-CCD rotation matrix R.

The Euler kinematic equations are shown in formula (7) (written here for the Z–Y–X Euler convention):

α̇ = wxk + (wyk·sinα + wzk·cosα)·tanβ
β̇ = wyk·cosα − wzk·sinα    (7)
γ̇ = (wyk·sinα + wzk·cosα) / cosβ

In formula (7), α̇, β̇, γ̇ are the first derivatives of the Euler angles αk, βk, γk, and the values αk−1, βk−1, γk−1 obtained at time k−1 serve as the initial values of formula (7).
The rotation matrix R is shown in formula (8), the Z–Y–X composition R = Rz(γk)·Ry(βk)·Rx(αk):

R = [ cosβk·cosγk   sinαk·sinβk·cosγk − cosαk·sinγk   cosαk·sinβk·cosγk + sinαk·sinγk ]
    [ cosβk·sinγk   sinαk·sinβk·sinγk + cosαk·cosγk   cosαk·sinβk·sinγk − sinαk·cosγk ]    (8)
    [ −sinβk        sinαk·cosβk                       cosαk·cosβk                     ]
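A sketch of step 3.2 under the Z–Y–X convention assumed above: the filtered body rates are propagated through the kinematic equations (7) over one sampling period and then assembled into R per formula (8); the sampling period and rate values are illustrative:

```python
import numpy as np

def integrate_euler(angles, w, dt):
    """One step of the Euler kinematic equations (formula (7)): propagate
    (alpha, beta, gamma) using filtered body rates w = (wx, wy, wz)."""
    a, b, _ = angles
    wx, wy, wz = w
    da = wx + (wy * np.sin(a) + wz * np.cos(a)) * np.tan(b)
    db = wy * np.cos(a) - wz * np.sin(a)
    dg = (wy * np.sin(a) + wz * np.cos(a)) / np.cos(b)
    return angles + dt * np.array([da, db, dg])

def rotation_matrix(a, b, g):
    """Assemble R = Rz(g) @ Ry(b) @ Rx(a) (formula (8))."""
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(g), -np.sin(g), 0], [np.sin(g), np.cos(g), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

angles = integrate_euler(np.zeros(3), w=(0.02, -0.01, 0.005), dt=1.0 / 40)
R = rotation_matrix(*angles)
```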
3.3 Use the velocity–displacement formula shown in formula (9) to convert the three-axis accelerations axk, ayk, azk into the three-axis displacements txk, tyk, tzk. Because the camera motion parameters are sampled at a relatively high rate, the camera's translational motion can be approximated as uniformly accelerated motion, with the computation shown in formula (9):

tk = vk−1·T + (1/2)·ak−1·T²
vk = vk−1 + ak−1·T    (9)

In formula (9), tk denotes the displacement along any one coordinate axis among the three-axis displacements txk, tyk, tzk; T is the shooting period of the camera; ak−1 is the acceleration along the corresponding axis at time k−1; and vk−1 and vk are the initial velocities at times k−1 and k, respectively. The invention takes the initial velocity of the system at the initial time to be zero, i.e., v0 = 0. Substituting the three-axis accelerations axk, ayk, azk into formula (9) yields the translation vector T = [txk, tyk, tzk] of the three-axis translational motion.
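A short sketch of step 3.3, accumulating per-frame displacement under the uniform-acceleration assumption of formula (9); the 40 frames/s period matches the acquisition rate used in the experiments below, and the accelerometer samples are hypothetical:

```python
import numpy as np

def accel_to_displacement(acc, dt=1.0 / 40, v0=None):
    """Integrate filtered 3-axis accelerations per formula (9):
    t_k = v_{k-1}*T + 0.5*a_{k-1}*T^2 and v_k = v_{k-1} + a_{k-1}*T."""
    v = np.zeros(3) if v0 is None else np.asarray(v0, float)   # v0 = 0
    disp = []
    for a in acc:
        disp.append(v * dt + 0.5 * a * dt * dt)   # per-frame displacement
        v = v + a * dt                            # velocity update
    return np.array(disp)

acc = np.tile([0.05, 0.0, -0.02], (5, 1))   # hypothetical accelerometer samples
T_vec = accel_to_displacement(acc)          # rows are [txk, tyk, tzk]
```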
Step 4: Using the affine transformation between the color image and the depth image, solve for the coordinates m1ir(x1, y1) and m2ir(x1, y1) of the corner point m1(x1, y1) of color image I in depth images I' and J'; then, using the proportional relationship between gray value and pixel depth in the depth image, solve for the depth information u1 and u2 of corner point m1(x1, y1) in depth images I' and J'. The specific implementation is as follows:

4.1 Using the affine transformation between the color image and the depth image, solve for the coordinates m1ir(x1, y1) and m2ir(x1, y1) of corner point m1(x1, y1) of color image I in depth images I' and J', as follows:
The affine transformation between the color image and the depth image is computed with the direct linear transformation (DLT) algorithm, as shown in formula (10):

mir = H·m    (10)

In formula (10), mir is the pixel coordinate in the depth image and m is the pixel coordinate in the color image. Substituting the corner point m1(x1, y1) of color image I into formula (10) yields its coordinates m1ir(x1, y1) and m2ir(x1, y1) in depth images I' and J'.
4.2 Using the proportional relationship between gray value and pixel depth in the depth image, solve for the depth information u1 and u2 of the color-image corner point m1(x1, y1) in depth images I' and J'. The computation is as follows:

The proportional relationship between gray value and pixel depth is given by the empirical relation obtained by reverse engineering, as shown in formula (11):

u = l × m    (11)

In formula (11), m(x, y) is the gray value at pixel coordinate (x, y), u is the corresponding depth value at (x, y), and l is the proportionality coefficient. Substituting the points m1ir(x1, y1) and m2ir(x1, y1) solved in step 4.1 into formula (11) yields the corresponding depth information u1 and u2.
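A sketch of step 4 under stated assumptions: H is a color-to-depth mapping previously calibrated by DLT and l is a previously fitted proportionality coefficient; the placeholder values below stand in for both:

```python
import numpy as np

def corner_depth(corner, H, depth_img, l):
    """Map a color-image corner into the depth image (formula (10)),
    then convert the gray value found there to depth (formula (11))."""
    m = np.array([corner[0], corner[1], 1.0])
    mir = H @ m
    x, y = (mir[:2] / mir[2]).round().astype(int)   # depth-image pixel
    return l * float(depth_img[y, x])               # u = l * m(x, y)

H = np.eye(3)              # placeholder for the DLT-calibrated mapping
l = 2.0 / 255.0            # placeholder proportionality coefficient
depth_I = np.random.randint(0, 256, (480, 640))     # stand-in depth image
u1 = corner_depth((100, 200), H, depth_I, l)
```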
Step 5: Substitute the extrinsic motion matrices R and T obtained in Step 3 and the depth information u1, u2 of the color image obtained in Step 4 into the OFCM model, and compute the background optical flow m2(x2, y2) of image J, as follows:

Substitute R and T from Step 3 and u1, u2 from Step 4 into the OFCM model shown in formula (12), and compute the background optical flow m2 in frame k+1 of the corner m1 of frame k:

u2·m2 = u1·K·R·K⁻¹·m1 + K·T    (12)

In formula (12), K is the intrinsic parameter matrix of the camera and K⁻¹ is its inverse.
Step 6: Use the RANSAC algorithm to robustly estimate the background optical flow m2 of color image I, removing from m2 the optical flow produced by moving objects and any misregistered optical flow, thereby completing the registration of color images I and J; then use the retained correct background optical flow of color image I to initialize color image J for the registration of subsequent images. The specific implementation is as follows:

The RANSAC computation for color image I is:

6.1 Initialize all current corner points m1(x1, y1), m2(x2, y2) as inliers;
6.2 Compute the affine transformation matrix H1 with the DLT algorithm, and estimate the corner positions m2′(x2′, y2′) from the corners m1(x1, y1) according to formula (13):

m2′ = H1·m1    (13)
6.3 Compute the Euclidean distance d between the corner m2(x2, y2) and the estimate m2′(x2′, y2′) according to formula (14). If d is less than the threshold d′, set the corner pair m1(x1, y1), m2(x2, y2) as inliers (i.e., background optical flow); if d is greater than or equal to d′, set them as outliers, where outliers correspond to moving-object optical flow or misregistered optical flow. After each completed RANSAC pass, the current inlier set is re-initialized and steps 6.2 and 6.3 are repeated; when no new inliers are produced, or the prescribed number of repetitions is reached, the RANSAC algorithm terminates. A sketch of this loop is given after formula (14).

d = sqrt((x2 − x2′)² + (y2 − y2′)²)    (14)
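The sketch referenced above: a compact Python rendering of steps 6.1–6.3, assuming matched (N, 2) float corner arrays and an illustrative distance threshold; cv2.findHomography with method 0 stands in for the hand-written DLT fit of formula (13):

```python
import cv2
import numpy as np

def ransac_background_flow(m1, m2, d_thresh=2.0, max_iters=30):
    """Steps 6.1-6.3: fit H1 on the current inliers (formula (13)) and
    re-classify each pair by the Euclidean distance of formula (14)."""
    inliers = np.ones(len(m1), dtype=bool)        # 6.1: all pairs start as inliers
    for _ in range(max_iters):                    # 30 passes, as in the experiments
        if inliers.sum() < 4:                     # the DLT fit needs >= 4 pairs
            break
        H1, _ = cv2.findHomography(m1[inliers], m2[inliers], 0)   # least-squares DLT
        if H1 is None:
            break
        proj = cv2.perspectiveTransform(m1.reshape(-1, 1, 2), H1).reshape(-1, 2)
        d = np.linalg.norm(proj - m2, axis=1)     # formula (14)
        new_inliers = d < d_thresh                # 6.3: inlier = background flow
        if np.array_equal(new_inliers, inliers):  # no new inliers -> stop
            break
        inliers = new_inliers
    return inliers    # False entries: moving-object or misregistered flow

# m1: corners of frame k; m2: their predicted/observed end-points (float64 arrays)
```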
The initialization of color image J proceeds as follows: first traverse all corners of the frame; if a corner coincides with the correct background optical flow retained from color image I, take that corner as an initial inlier for the RANSAC computation. Repeat steps 6.1, 6.2, and 6.3 to complete the RANSAC estimation frame by frame.

Following the above steps, the moving-object optical flow and misregistered optical flow in the image are continually removed, and the corner optical flow extracted by the RANSAC algorithm is used to complete the image registration.
The beneficial effects of the present invention are further illustrated by the following experiment:
The main control device used in this experiment is an ALTERA FPGA, model EP2C5Q208C6. The sensing part comprises two components: an ADIS16405 inertial sensor and a Kinect camera, in which the gyroscope and accelerometer of the inertial sensor acquire the camera's rotational and translational motion parameters respectively, and the Kinect camera contains a depth-image acquisition CCD (depth CCD) and a color-image acquisition CCD (color CCD), which can be combined to obtain the depth information of the color image. Since the inertial sensor measures only 35 mm × 30 mm, the hardware platform is compact and can be adopted in mobile devices.

The Kinect camera captures color and depth images, both of size 640×480. Based on a real-time analysis of the PC-side algorithm, this embodiment sets both the image acquisition rate of the hardware platform and the parameter acquisition rate of the inertial sensor to 40 frames/s. The window for Harris corner detection is 5×5, and the number of RANSAC iterations is set to 30.
To better verify the effect of the OFCM model proposed by the method of the present invention, the experiment registered images of the same scene with the method of the present invention while the Kinect camera underwent, respectively, an abrupt brightness change, random translational motion, random rotational motion, and complex random translation-plus-rotation; the resulting registration images are shown in Figures 2, 3, 4, and 5. In each figure, the left half is the left image and the right half is the right image, the two being 10 frames apart: the left image is frame k and the right image is frame k+10. As Figures 2–5 show, the method of the present invention completes image registration accurately.
Table 1 gives the statistical analysis of PC-side image registration performance under the above camera motion conditions. Comparing the indicators within each motion type, the method of the present invention achieves good registration results under both translational and rotational motion of the Kinect camera, with an average processing accuracy of up to 69.5%. Comparing the indicators across motion types, the registration accuracy for a single Kinect camera motion is higher than for mixed motion; the error arises mainly from the superposition of gyroscope and accelerometer errors. In terms of real-time performance, the average processing time on the PC is about 22.92 ms; the differences in time arise mainly because mixed Kinect camera motion increases the probability of mismatched points, making the RANSAC estimation relatively time-consuming. Based on the PC-side timing estimate and the practical difficulty of hardware acquisition, the image frame rate of the Kinect camera and the synchronized acquisition rate of the inertial-sensor parameters are set to 30 Hz.

Table 1. Statistics of PC-side image registration performance