CN103745474A - Image registration method based on inertial sensor and camera - Google Patents

Image registration method based on inertial sensor and camera

Info

Publication number
CN103745474A
CN103745474A
Authority
CN
China
Prior art keywords
formula
camera
image
sin
inertial sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410027349.4A
Other languages
Chinese (zh)
Other versions
CN103745474B (en)
Inventor
顾国华
韩鲁
刘恒建
余明
廖逸琪
钱惟贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201410027349.4A
Publication of CN103745474A
Application granted
Publication of CN103745474B
Expired - Fee Related
Anticipated expiration


Abstract

The invention proposes an image registration method based on an inertial sensor and a camera. The camera and the inertial sensor are controlled synchronously so as to simultaneously acquire the color image and depth image of the same scene together with the camera's three-axis angular velocity and three-axis acceleration. Kalman filtering is applied to the three-axis angular velocity and three-axis acceleration; the Euler dynamic equations convert the three-axis angular velocity into Euler angles, and the uniform-acceleration motion formula converts the three-axis acceleration into a three-axis displacement, from which the extrinsic parameter matrix of the camera is obtained. Using the affine transformation relationship between the color image and the depth image and the proportional relationship between depth-image gray value and pixel depth, the depth information of the corner points in the depth image is solved. An OFCM (optical flow and camera's motion) model is used to compute the background optical flow of the image, and the RANSAC (Random Sample Consensus) algorithm completes the image registration. The method has a low computational load, high real-time performance, and high precision, and is applicable to handheld and mobile devices.

Description

Translated from Chinese
Image Registration Method Based on Inertial Sensor and Camera

Technical Field

The invention belongs to the field of digital image processing, and in particular relates to an image registration method based on an inertial sensor and a camera.

Background Art

The key problem in image registration is the estimation of the camera motion parameters, which is usually accomplished through camera calibration. In the prior art, camera self-calibration methods include the following.

Li Lichun et al. proposed a three-dimensional reconstruction algorithm based on the eight-point fundamental matrix [Li Lichun, Qiu Zhiqiang, Wang Kunpeng, et al. A three-dimensional reconstruction algorithm based on matching-measure-weighted solution of the fundamental matrix [J]. Computer Applications, 2007, 27(10): 2530-2533]. This method is complex and computationally intensive, making real-time image processing difficult to achieve.

Zhang Yang et al. proposed the FOE model and verified the translation effect of camera motion [Zhang Yang, et al. Moving object detection algorithm based on a moving camera in the visible-light range. Computer Vision and Pattern Recognition Workshops, IEEE International Conference, 2006: 130-131]. This method considers neither the rotational motion of the camera nor the camera's intrinsic parameter matrix, so its accuracy is low.

Zheng Hong et al. proposed a calibration method based on active vision, which mainly uses hardware platforms with known motion, such as pan-tilt units, to provide the camera motion information [Zheng Hong, Liu Zhenqiang. Self-calibration of pan-tilt cameras based on an accurate model [J]. Robot, 2013, 35(3): 1994-2013]. Although this method achieves good real-time performance, the pan-tilt equipment is bulky, so the method is difficult to popularize in handheld and mobile devices, and its range of application is limited.

Summary of the Invention

The invention proposes an image registration method based on an inertial sensor and a camera. The method has a small computational load, high real-time performance, and high precision, and can be applied in handheld and mobile devices.

To solve the above technical problems, the invention provides an image registration method based on an inertial sensor and a camera, comprising the following steps:

Step 1. Synchronously control the operation of the camera and the inertial sensor, simultaneously acquiring the color images I and J and depth images I′ and J′ of the same scene, together with the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az];

Step 2. Extract the corner points m1(x1, y1) of color image I;

Step 3. Apply Kalman filtering to the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az] to obtain the Kalman-filtered estimates of the three-axis angular velocity, [ŵx, ŵy, ŵz], and of the three-axis acceleration, [âx, ây, âz]; use the Euler dynamic equations to convert [ŵx, ŵy, ŵz] into the Euler angles [αk, βk, γk], and use the uniform-acceleration motion formula to convert [âx, ây, âz] into the three-axis displacement [txk, tyk, tzk], thereby obtaining the rotation matrix R and the translation matrix T of the camera extrinsic parameter matrix [R, T];

Step 4. Use the affine transformation relationship between the color image and the depth image to solve the coordinate positions m1ir(x1, y1) and m2ir(x1, y1) of the corner m1(x1, y1) of color image I in the depth images I′ and J′; use the proportional relationship between depth-image gray value and pixel depth to solve the depth information u1 and u2 of the corner m1(x1, y1) in the depth images I′ and J′;

Step 5. Substitute the rotation matrix R and translation matrix T obtained in Step 3 and the depth information u1, u2 of the color image obtained in Step 4 into the OFCM model, and compute the background optical flow m2(x2, y2) of image J; the OFCM model is shown in formula (1),

$$ m_2 = \frac{u_1}{u_2} K R K^{-1} m_1 + \frac{1}{u_2} K T \qquad (1) $$

In formula (1), K is the intrinsic parameter matrix of the camera, and K^{-1} is the inverse of K;

Step 6. Use the RANSAC algorithm to perform robust estimation on the background optical flow m2 of color image I, removing from m2 the optical flow produced by moving objects and the mis-registered optical flow, thereby completing the image registration of color image I and color image J; then initialize color image J with the retained correct background optical flow of color image I and carry out the registration of subsequent images.

Main principles of the invention:

1. Combining the rigid-body transformation relationships of Euclidean space, the invention proposes a relationship model between the image background optical flow and the camera motion parameters (OFCM). The model is derived as follows.

Assume the color CCD follows the pinhole model and that the background in the scene moves only relative to the color CCD. The camera coordinate system is established with the optical center of the color CCD as the origin O and the line containing the optical axis as the Z axis; the X and Y axes are then established according to the left-hand rule.

Assuming the camera coordinate system coincides with the world coordinate system before the camera moves, the coordinates of any point P in the spatial background satisfy:

PW = PC    (2)

In formula (2), PW is the world-coordinate-system coordinate of the spatial point and PC is its camera-coordinate-system coordinate. The spatial background point P and its image point on the imaging plane, with homogeneous coordinates m1(x1, y1, 1), satisfy formula (3):

u1 m1 = K PC    (3)

In formula (3), m1(x1, y1) is the image point on the imaging plane, replaced by a Harris corner when computing with the model; u1 can be approximated as the depth information of the spatial point corresponding to m1(x1, y1); K is the intrinsic parameter matrix of the camera. Considering the relationship between the pixel coordinate system and the physical coordinate system of the image, the five-parameter model of the camera intrinsic matrix is obtained, as shown in formula (4),

$$ K = \begin{bmatrix} f_x & s & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (4) $$

In formula (4), fx = f/dx and fy = f/dy, where dx is the physical size of a pixel in the x direction, dy is the physical size of a pixel in the y direction, [u0, v0] are the coordinates of the principal point in the pixel coordinate system, and s is the camera distortion (skew) coefficient.
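
As an illustration of formula (4), the sketch below assembles K from hypothetical placeholder values (they are not calibration results of the actual system):

```python
import numpy as np

# Hypothetical calibration values (placeholders): focal length f and pixel
# sizes dx, dy in millimeters, principal point [u0, v0] in pixels, skew s.
f, dx, dy = 4.0, 0.005, 0.005
u0, v0, s = 320.0, 240.0, 0.0

fx, fy = f / dx, f / dy              # fx = f/dx, fy = f/dy as in formula (4)
K = np.array([[fx, s,  u0],
              [0., fy, v0],
              [0., 0., 1.]])
K_inv = np.linalg.inv(K)             # K^-1, needed later by the OFCM model
```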

Assume the camera undergoes translational and rotational motion in arbitrary directions. The translational motion can be represented by a translation vector T = [tx, ty, tz]^T and the rotational motion by a 3×3 rotation matrix R. The coordinates of the spatial background point P in the camera coordinate system then satisfy:

P′C = R × PW + T = R × PC + T    (5)

In formula (5), the rotation matrix R can be expressed by the Euler angles α, β, γ of Euclidean space, as shown in formula (6),

$$ R = \begin{bmatrix} \cos\gamma\cos\beta & \cos\gamma\sin\beta\sin\alpha - \sin\gamma\cos\alpha & \cos\gamma\sin\beta\cos\alpha - \sin\alpha\sin\gamma \\ \sin\gamma\cos\beta & \sin\alpha\sin\beta\sin\gamma + \cos\gamma\cos\alpha & \sin\gamma\sin\beta\cos\alpha + \cos\gamma\sin\alpha \\ -\sin\beta & \cos\beta\sin\alpha & \cos\beta\cos\alpha \end{bmatrix} \qquad (6) $$

After the camera moves, the spatial point P and its image point on the imaging plane, with homogeneous coordinates m2(x2, y2, 1), satisfy the relation:

u2 m2 = K P′C = K(R × PC + T)    (7)

Substituting formula (3) into formula (7), the optical flow of the spatial point P on the imaging plane before and after the camera motion, i.e. the final form of the OFCM model, is obtained:

$$ m_2 = \frac{u_1}{u_2} K R K^{-1} m_1 + \frac{1}{u_2} K T \qquad (8) $$

Here u1 and u2 can be approximated as the depth information of the spatial point P in the camera coordinate system, computed from the depth images acquired by the camera. The extrinsic parameter matrix [R, T] is acquired and processed by the hardware platform, and m1 is obtained by Harris corner detection. Both the depth CCD and the color CCD of the camera are assumed to work at a fixed focal length f, in which case the image plane and the object plane are in one-to-one correspondence. The intrinsic matrix K of the color CCD is therefore a constant quantity throughout image acquisition and, once obtained by a single calibration, can be used as an empirical matrix.
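
As a concrete illustration of formula (8), a minimal sketch follows; the function name and the final homogeneous normalization (a safeguard against rounding, since with exact depths the third component is already 1) are choices of this sketch, not of the patent.

```python
import numpy as np

def ofcm_flow(m1_xy, u1, u2, K, R, T):
    """Background optical flow of one corner via the OFCM model, formula (8):
    m2 = (u1/u2) * K R K^-1 * m1 + (1/u2) * K T."""
    m1 = np.array([m1_xy[0], m1_xy[1], 1.0])   # homogeneous coordinates of the corner
    m2 = (u1 / u2) * (K @ R @ np.linalg.inv(K) @ m1) + (1.0 / u2) * (K @ T)
    return m2[:2] / m2[2]                      # normalize back to pixel coordinates
```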

Advantages: Existing camera models include the FOE model and the MOF model. Neither considers the camera intrinsic parameter matrix, and the FOE model supports only translational camera motion, so high accuracy cannot be maintained under complex camera motion. The OFCM model proposed by the invention incorporates the camera intrinsic matrix, supports three-axis translational and three-axis rotational camera motion, and can overcome abrupt changes in scene brightness. The OFCM model therefore maintains high-precision image registration under both complex camera motion and abrupt changes in scene brightness.

2. Hardware platform based on an inertial sensor and a camera:

The invention uses the OFCM model to compute the background optical flow of the image. As the OFCM model shows, computing the background optical flow requires the camera extrinsic parameter matrix [R, T] and the depth information u1, u2 corresponding to the image pixels. The invention uses a hardware platform to acquire the camera extrinsic matrix and the pixel depth information.

The hardware platform of the invention comprises three parts: a control device (FPGA), an inertial sensor, and a camera. The camera can be a Kinect camera; the inertial sensor comprises a gyroscope and an accelerometer; the Kinect camera comprises a depth-image acquisition CCD and a color-image acquisition CCD. The minimal systems of the FPGA and the inertial sensor are both fixed on the depth-image acquisition CCD, with the inertial sensor located directly above the Kinect camera. The gyroscope acquires the camera's three-axis angular velocity [wx, wy, wz], the accelerometer acquires the camera's three-axis acceleration [ax, ay, az], and the Kinect camera captures the color images I, J and the depth images I′, J′. After the hardware platform is powered on, the FPGA controls the shutter of the Kinect camera through control pulses to complete the acquisition of the color and depth images, and synchronously controls the Kinect camera and the inertial sensor through the SPI protocol, guaranteeing a one-to-one correspondence between the acquired image sequence and the collected camera motion parameters.
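
On the host side, the one-to-one correspondence guaranteed by the synchronized triggering can be consumed as a simple index-wise pairing. The sketch below assumes a data layout with per-frame dictionaries; it is an illustration, not the actual FPGA/SPI interface.

```python
def pair_streams(frames, imu_samples):
    # Synchronous triggering implies equal counts: frame k matches IMU sample k.
    assert len(frames) == len(imu_samples)
    return [
        {"color": f["color"], "depth": f["depth"],   # images of frame k
         "w": s["gyro"], "a": s["accel"]}            # [wx,wy,wz], [ax,ay,az] at frame k
        for f, s in zip(frames, imu_samples)
    ]
```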

Advantages: The image registration algorithm based on this model has a small computational load and good real-time performance. Because the inertial sensor is small and accurate, it can be popularized in handheld and mobile devices far more easily than pan-tilt units and similar equipment for acquiring camera motion parameters.

3. Establishment of the Kalman filter model based on the inertial sensor:

Because the camera motion parameters collected by the inertial sensor contain errors such as zero offset, scale factor error, and random noise, the collected motion parameters must be filtered and denoised.

When using the classical Kalman filter formulas, the error model and the state-transition relation of the inertial sensor must be established from the sensor's performance and parameters:

z′k = zk(M+1) + Bf + U    (9)

zkk,k-1zk-1+V         (10)zkk,k-1 zk-1 +V (10)

Here z′k is the actual observation of the system at time k, zk is the true value of the system at time k, M is the scale factor error, Bf is the zero drift, V and U are random Gaussian white noise, and Ψk,k-1 is the state transition matrix.

From formula (9) combined with the time-series analysis method, the scale factor error M, the zero drift Bf, and the Gaussian white noise V and U of the inertial sensor are solved; from formula (10) combined with the time-series analysis method, the state transition matrix Ψk,k-1 between the parameters is solved.

Substituting the results computed from formulas (9) and (10) into the classical Kalman filter formula (11), the parameters at time k are filtered and the estimate ẑk is obtained:

$$ \begin{aligned} z_{k,k-1} &= \Psi_{k,k-1} z_{k-1} \\ \hat{z}_k &= z_{k,k-1} + K_k \left[ z'_k - (M+1) z_{k,k-1} \right] \\ P_{k,k-1} &= \Psi_{k,k-1} P_{k-1} \Psi_{k,k-1}^{T} + Q_{k-1} \\ K_k &= P_{k,k-1} (M+1)^{T} P_k^{-1} \\ P_k &= \left[ I - K_k (M+1) \right] P_{k,k-1} \end{aligned} \qquad (11) $$

Here Q is the noise variance of the system, Kk is the Kalman gain, Pk is the state covariance, and zk,k-1 is the state prediction.
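
Formulas (9)-(11) can be exercised with a minimal scalar sketch. The sketch below is written in the standard predict/update form: the gain uses the usual innovation-variance expression rather than the compact form printed in (11), and the measurement-noise variance Rn and the subtraction of the bias Bf from the raw reading are assumptions, since the text does not spell them out.

```python
def kalman_filter(z_meas, Psi, M, Bf, Q, Rn, z0=0.0, P0=1.0):
    """Scalar Kalman filter for the model z'_k = (M+1) z_k + Bf + U,
    z_k = Psi * z_{k-1} + V (formulas (9) and (10))."""
    H = M + 1.0                                   # observation coefficient
    z_hat, P, out = z0, P0, []
    for zp in z_meas:
        z_pred = Psi * z_hat                      # state prediction z_{k,k-1}
        P_pred = Psi * P * Psi + Q                # covariance prediction P_{k,k-1}
        Kk = P_pred * H / (H * P_pred * H + Rn)   # gain (standard form, an assumption)
        z_hat = z_pred + Kk * ((zp - Bf) - H * z_pred)  # update with bias-corrected reading
        P = (1.0 - Kk * H) * P_pred               # P_k = [1 - K_k (M+1)] P_{k,k-1}
        out.append(z_hat)
    return out
```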

Compared with the prior art, the significant advantages of the invention are:

(1) The camera parameters are estimated in real time by the inertial sensor and the camera: the gyroscope and accelerometer in the inertial sensor acquire the rotational and translational motion parameters of the camera respectively, while the camera contains a depth-image acquisition CCD (depth CCD) and a color-image acquisition CCD (color CCD), which together provide the depth information of the color image. Although the method relies on a hardware platform, the inertial sensor's small size and high accuracy mean that, once bound to the camera, it can be widely deployed in handheld and mobile devices.

(2) Starting from projective geometry and combining the rigid-body transformation relationships of Euclidean space, a relationship model between the image background optical flow and the camera motion parameters (optical flow and camera's motion) is derived, denoted the OFCM model. This model incorporates the camera intrinsic matrix and supports three-axis translational and three-axis rotational camera motion; at the same time it retains high accuracy under abrupt brightness changes, a substantial improvement over the FOE model and the optical-flow matching method.

Description of the Drawings

Figure 1 is the flow chart of the image registration method based on an inertial sensor and a camera of the present invention.

Figure 2 shows the image registration results obtained with the method of the present invention under an abrupt change in scene brightness.

Figure 3 shows the image registration results obtained with the method of the present invention when the camera undergoes translational motion.

Figure 4 shows the image registration results obtained with the method of the present invention when the camera undergoes rotational motion.

Figure 5 shows the image registration results obtained with the method of the present invention when the camera undergoes both translational and rotational motion.

Detailed Description of the Embodiments

As shown in Figure 1, the image registration method based on an inertial sensor and a camera of the present invention comprises the following steps.

Step 1. In the hardware platform, synchronously control the operation of the camera and the inertial sensor, simultaneously acquiring the color images I and J and depth images I′ and J′ of the same scene together with the camera motion parameters, i.e. the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az]. The specific implementation is as follows:

The hardware platform of the invention comprises three parts: a control device (FPGA), an inertial sensor, and a camera. The inertial sensor comprises a gyroscope and an accelerometer; the camera comprises a depth-image acquisition CCD and a color-image acquisition CCD, used to capture the color images I, J and the depth images I′, J′. The minimal systems of the FPGA and the inertial sensor are both fixed on the depth-image acquisition CCD. The gyroscope acquires the camera's three-axis angular velocity [wx, wy, wz], and the accelerometer acquires the camera's three-axis acceleration [ax, ay, az]. After the hardware platform is powered on, the FPGA controls the camera shutter through control pulses to complete the acquisition of the color and depth images, and synchronously controls the camera and the inertial sensor through the SPI protocol, guaranteeing a one-to-one correspondence between the image sequence and the camera motion parameters acquired in this step.

Step 2. Extract the corners m1(x1, y1) of color image I using Harris corner detection. The specific implementation is as follows:

2.1 Compute the correlation matrix M of pixel p[x, y] in color image I, as shown in formula (1),

$$ M = \sum_{x,y} w(x,y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} = w(x,y) \otimes \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix} \qquad (1) $$

In formula (1), (x, y) are the image coordinates of pixel p, w(x, y) is a Gaussian weighting function with window size w*w, Ix denotes the derivative in the row direction, and Iy denotes the derivative in the column direction;

2.2 Compute the corner response R(x, y) of pixel m1(x1, y1), as shown in formula (2),

R(x, y) = det M − k*(trace M)²    (2)

In formula (2), det M denotes the value of the determinant of the correlation matrix M, trace M denotes the trace of the correlation matrix M, and k is a constant, generally 0.04-0.06;

2.3 Judge whether pixel m1(x1, y1) is a corner. The judgment is given by formula (3): if the corner response R(x, y) of pixel m1(x1, y1) satisfies formula (3), then m1(x1, y1) is a corner; if it does not satisfy formula (3), m1(x1, y1) is discarded;

[R(x, y) ≥ R(x′, y′)] ∩ [R(x, y) ≥ 0.01 Rmax]    (3)

In formula (3), R(x′, y′) denotes the corner responses of the pixels in the w*w window other than m1(x1, y1), and Rmax is the maximum corner response in the edge feature image If.
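
A compact sketch of steps 2.1-2.3 follows, using OpenCV's built-in Harris response in place of the explicit matrix computation (an implementation convenience); the window size and k value match those given in this document.

```python
import cv2
import numpy as np

def harris_corners(gray, win=5, k=0.04):
    """Steps 2.1-2.3: corner response R = det(M) - k*trace(M)^2, keep pixels
    that are local maxima over the w*w window and satisfy R >= 0.01*R_max."""
    R = cv2.cornerHarris(np.float32(gray), blockSize=win, ksize=3, k=k)
    local_max = cv2.dilate(R, np.ones((win, win), np.uint8))  # max over the window
    mask = (R >= local_max) & (R >= 0.01 * R.max())           # formula (3)
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys], axis=1)                         # corners m1(x1, y1)
```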

Step 3. Apply Kalman filtering to the camera's three-axis angular velocity [wx, wy, wz] and three-axis acceleration [ax, ay, az] to obtain the Kalman-filtered estimates [ŵx, ŵy, ŵz] and [âx, ây, âz]; use the Euler dynamic equations to convert [ŵx, ŵy, ŵz] into the Euler angles [αk, βk, γk], and use the uniform-acceleration motion formula to convert [âx, ây, âz] into the three-axis displacement [txk, tyk, tzk], thereby obtaining the rotation matrix R and the translation matrix T of the camera extrinsic parameter matrix [R, T].

3.1 Apply Kalman filtering to the camera motion parameters, i.e. the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az], to obtain the Kalman-filtered estimates [ŵx, ŵy, ŵz] and [âx, ây, âz]. The specific computation is as follows:

3.1.1 Establish the error model and the state-transition relation of the inertial sensor according to its performance and motion parameters, and solve, by the time-series analysis method, the scale factor error M, the zero drift Bf, the Gaussian white noise U, the state transition matrix Ψk,k-1 between the motion parameters, and the Gaussian white noise V;

The error model is shown in formula (4), and the state-transition relation in formula (5),

z′k = zk(M+1) + Bf + U    (4)

zkk,k-1zk-1+V         (5)zkk,k-1 zk-1 +V (5)

In formulas (4) and (5), z denotes one of the motion parameters among the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az]; z′k is the measured value of that motion parameter obtained by the inertial sensor at time k; zk is its true value at time k; M is the scale factor error; Bf is the zero drift; V and U are random Gaussian white noise; and Ψk,k-1 is the state transition matrix;

From formula (4) combined with the time-series analysis method, the scale factor error M, the zero drift Bf, and the Gaussian white noise V and U of the inertial sensor are solved; from formula (5) combined with the time-series analysis method, the state transition matrix Ψk,k-1 between the motion parameters is solved.

3.1.2 Substitute the results obtained in step 3.1.1 into the classical Kalman filter formula (6) to filter the motion parameter zk at time k and obtain the estimate ẑk of that motion parameter:

$$ \begin{aligned} z_{k,k-1} &= \Psi_{k,k-1} z_{k-1} \\ \hat{z}_k &= z_{k,k-1} + K_k \left[ z'_k - (M+1) z_{k,k-1} \right] \\ P_{k,k-1} &= \Psi_{k,k-1} P_{k-1} \Psi_{k,k-1}^{T} + Q_{k-1} \\ K_k &= P_{k,k-1} (M+1)^{T} P_k^{-1} \\ P_k &= \left[ I - K_k (M+1) \right] P_{k,k-1} \end{aligned} \qquad (6) $$

In formula (6), Q is the noise variance of the system, Kk is the Kalman gain, Pk is the state covariance, and zk,k-1 is the state prediction.

Kalman filtering is performed on the three-axis angular velocity [wx, wy, wz] and the three-axis acceleration [ax, ay, az] separately according to the above method, yielding [ŵxk, ŵyk, ŵzk] and [âxk, âyk, âzk].

3.2 Use the Euler dynamic equations to convert the filtered three-axis angular velocities wxk, wyk, wzk at time k into the Euler angles αk, βk, γk of the Euclidean coordinate system, and substitute the resulting αk, βk, γk into the rotation matrix R, completing the computation of the color-CCD rotation matrix R;

The Euler dynamic equations are shown in formula (7),

$$ \begin{bmatrix} \dot{\alpha}_k \\ \dot{\beta}_k \\ \dot{\gamma}_k \end{bmatrix} = \begin{bmatrix} -\sin\alpha_k/\tan\beta_k & \cos\alpha_k/\tan\beta_k & 1 \\ \cos\alpha_k & \sin\alpha_k & 0 \\ \sin\alpha_k/\sin\beta_k & -\cos\alpha_k/\sin\beta_k & 0 \end{bmatrix} \begin{bmatrix} w_{xk} \\ w_{yk} \\ w_{zk} \end{bmatrix} \qquad (7) $$

In formula (7), α̇k, β̇k, γ̇k are the first derivatives of the Euler angles αk, βk, γk, and the parameters αk-1, βk-1, γk-1 obtained at time k-1 are used as the initial values of formula (7);

The rotation matrix R is shown in formula (8):

$$ R = \begin{bmatrix} \cos\gamma_k\cos\beta_k & \cos\gamma_k\sin\beta_k\sin\alpha_k - \sin\gamma_k\cos\alpha_k & \cos\gamma_k\sin\beta_k\cos\alpha_k - \sin\alpha_k\sin\gamma_k \\ \sin\gamma_k\cos\beta_k & \sin\alpha_k\sin\beta_k\sin\gamma_k + \cos\gamma_k\cos\alpha_k & \sin\gamma_k\sin\beta_k\cos\alpha_k + \cos\gamma_k\sin\alpha_k \\ -\sin\beta_k & \cos\beta_k\sin\alpha_k & \cos\beta_k\cos\alpha_k \end{bmatrix} \qquad (8) $$

3.3 Use the velocity-displacement formula shown in formula (9) to convert the three-axis accelerations axk, ayk, azk into the three-axis displacements txk, tyk, tzk. Because the camera motion parameters are sampled at a relatively high frequency, the translational motion of the camera can be approximated as uniformly accelerated motion; the computation is shown in formula (9):

$$ t_k = v_k T + \tfrac{1}{2} a_k T^2, \qquad v_k = v_{k-1} + a_{k-1} T \qquad (9) $$

In formula (9), tk denotes the displacement along any one coordinate axis among txk, tyk, tzk; T is the shooting period of the camera; ak-1 is the acceleration along the corresponding coordinate axis at time k-1; and vk-1 and vk are the initial velocities at times k-1 and k respectively. The invention sets the initial velocity at the initial time of the system to 0, i.e. v0 = 0. Substituting the three-axis accelerations axk, ayk, azk into formula (9) yields the translation vector of the three-axis translational motion, T = [txk, tyk, tzk].
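
For illustration, the following sketch advances one sampling period of steps 3.2 and 3.3. The forward-Euler integration of formula (7) is an assumed scheme (the patent does not fix one), and the code assumes βk stays away from 0, where the tan/sin terms of formula (7) are singular.

```python
import numpy as np

def euler_rates(alpha, beta, w):
    """Formula (7): map the filtered body angular velocity [wx, wy, wz]
    to the Euler angle rates [d(alpha), d(beta), d(gamma)]."""
    wx, wy, wz = w
    a_dot = -np.sin(alpha) / np.tan(beta) * wx + np.cos(alpha) / np.tan(beta) * wy + wz
    b_dot = np.cos(alpha) * wx + np.sin(alpha) * wy
    g_dot = np.sin(alpha) / np.sin(beta) * wx - np.cos(alpha) / np.sin(beta) * wy
    return np.array([a_dot, b_dot, g_dot])

def step_pose(angles, v_prev, a_prev, a_k, w_k, T):
    """One period T: forward-Euler integration of formula (7), then formula (9):
    v_k = v_{k-1} + a_{k-1}*T and t_k = v_k*T + 0.5*a_k*T^2 (per axis)."""
    angles = angles + euler_rates(angles[0], angles[1], w_k) * T
    v_k = v_prev + a_prev * T
    t_k = v_k * T + 0.5 * a_k * T ** 2
    return angles, v_k, t_k       # t_k = [txk, tyk, tzk]
```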

Step 4. Use the affine transformation relationship between the color image and the depth image to solve the coordinate positions m1ir(x1, y1) and m2ir(x1, y1) of the corner m1(x1, y1) of color image I in the depth images I′ and J′; use the proportional relationship between gray value and pixel depth in the depth image to solve the depth information u1 and u2 of the corner m1(x1, y1) in the depth images I′ and J′. The specific implementation is as follows:

4.1 Use the affine transformation relationship between the color image and the depth image to solve the coordinate positions m1ir(x1, y1) and m2ir(x1, y1) of the corner m1(x1, y1) of color image I in the depth images I′ and J′. The specific implementation is as follows:

The affine transformation relationship between the color image and the depth image is computed by the direct linear transformation algorithm, as shown in formula (10):

mir = H m    (10)

In formula (10), mir is the pixel coordinate in the depth image and m is the pixel coordinate in the color image. Substituting the corner m1(x1, y1) of color image I into formula (10) yields its coordinate positions m1ir(x1, y1) and m2ir(x1, y1) in the depth images I′ and J′.

4.2 Use the proportional relationship between gray value and pixel depth in the depth image to solve the depth information u1 and u2 of the color-image corner m1(x1, y1) in the depth images I′ and J′. The specific computation process is as follows:

The proportional relationship between gray value and pixel depth can be expressed by an empirical relation obtained through reverse engineering, as shown in formula (11):

u = l × m    (11)

In formula (11), m(x, y) is the gray value at pixel coordinate (x, y), u is the corresponding depth value at pixel coordinate (x, y), and l is a proportionality coefficient. Substituting m1ir(x1, y1) and m2ir(x1, y1) solved in step 4.1 into formula (11) yields the corresponding depth information u1 and u2.
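
Steps 4.1 and 4.2 reduce to one matrix-vector product and one table lookup. A minimal sketch follows, assuming the affine matrix H and the proportionality coefficient l are already known; the nearest-pixel rounding is an illustrative choice.

```python
import numpy as np

def corner_depth(m1_xy, H, depth_img, l):
    """Map a color-image corner into the depth image via m_ir = H*m
    (formula (10)), then convert the gray value to depth via u = l*m
    (formula (11))."""
    m_ir = H @ np.array([m1_xy[0], m1_xy[1], 1.0])
    x_ir, y_ir = m_ir[:2] / m_ir[2]                       # normalize homogeneous result
    gray = depth_img[int(round(y_ir)), int(round(x_ir))]  # nearest-pixel lookup
    return l * float(gray)                                # depth u
```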

Step 5. Substitute the extrinsic motion matrices R and T obtained in Step 3 and the depth information u1, u2 of the color image obtained in Step 4 into the OFCM model, and compute the background optical flow m2(x2, y2) of image J. The specific implementation is as follows:

Substituting the extrinsic motion matrices R and T obtained in Step 3 and the depth information u1, u2 of the color image obtained in Step 4 into the OFCM model shown in formula (12) yields the background optical flow m2 of the corner m1 of frame k in the image of frame k+1,

$$ m_2 = \frac{u_1}{u_2} K R K^{-1} m_1 + \frac{1}{u_2} K T \qquad (12) $$

In formula (12), K is the intrinsic parameter matrix of the camera, and K^{-1} is the inverse of K.

Step 6. Use the RANSAC algorithm to perform robust estimation on the background optical flow m2 of color image I, removing from m2 the optical flow produced by moving objects and the mis-registered optical flow, thereby completing the image registration of color image I and color image J; then initialize color image J with the retained correct background optical flow of color image I and carry out the registration of subsequent images. The specific implementation is as follows:

The RANSAC computation process for color image I is:

6.1 Initialize all current corners m1(x1, y1), m2(x2, y2) as inliers;

6.2 Compute the affine transformation matrix H1 with the DLT algorithm, and estimate the corner estimate m2′(x2′, y2′) from the corner m1(x1, y1) according to formula (13):

m2′ = H1 m1    (13)

6.3 Compute the Euclidean distance d between the corner m2(x2, y2) and the corner estimate m2′(x2′, y2′) according to formula (14). If d is less than the threshold d′, set the corners m1(x1, y1), m2(x2, y2) as inliers (i.e. background optical flow); if d is greater than or equal to the threshold d′, set them as outliers, the outliers being moving-object optical flow or mis-registered optical flow. After one RANSAC computation is completed, initialize the current inliers as outliers and repeat steps 6.2 and 6.3; when no new inliers are produced or the prescribed number of repetitions is reached, end the current RANSAC algorithm,

d = sqrt((x2 − x2′)² + (y2 − y2′)²)    (14)

The process of initializing color image J is: first traverse all corners of the frame; if a corner coincides with the retained correct background optical flow of color image I, take that corner as an initial inlier of the RANSAC computation. Repeat steps 6.1, 6.2, and 6.3 to complete the RANSAC estimation frame by frame.

Following the above steps, the moving-object optical flow and mis-registered optical flow in the image are continuously removed, and the corner optical flow extracted by the RANSAC algorithm is used to complete the image registration.
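
A minimal sketch of the RANSAC loop of steps 6.1-6.3 follows. OpenCV's least-squares affine estimator stands in for a hand-written DLT (an implementation substitution), the 30 iterations match this embodiment, the pixel threshold d′ is an assumed value, and the early stop on an unchanged inlier set is a simplification of the stopping rule described above.

```python
import cv2
import numpy as np

def ransac_background_flow(m1_pts, m2_pts, d_thresh=3.0, n_iter=30):
    """Steps 6.1-6.3: re-fit the affine H1 on the current inliers, then
    re-classify every flow vector by the distance of formula (14)."""
    m1_pts = np.asarray(m1_pts, dtype=np.float32)
    m2_pts = np.asarray(m2_pts, dtype=np.float32)
    inliers = np.ones(len(m1_pts), dtype=bool)        # 6.1: start with all corners
    for _ in range(n_iter):
        H1, _ = cv2.estimateAffine2D(m1_pts[inliers], m2_pts[inliers])
        if H1 is None:
            break
        m2_est = m1_pts @ H1[:, :2].T + H1[:, 2]      # 6.2: m2' = H1 * m1
        d = np.linalg.norm(m2_pts - m2_est, axis=1)   # 6.3: formula (14)
        new_inliers = d < d_thresh
        if np.array_equal(new_inliers, inliers):      # inlier set unchanged: stop
            break
        inliers = new_inliers
    return inliers                                    # True = background optical flow
```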

The beneficial effects of the invention can be further illustrated by the following experiments.

The main control device used in this experiment is an ALTERA EP2C5Q208C6 FPGA. The sensing part comprises an ADIS16405 inertial sensor and a Kinect camera: the gyroscope and accelerometer in the inertial sensor acquire the rotational and translational motion parameters of the camera respectively, and the Kinect camera contains a depth-image acquisition CCD (depth CCD) and a color-image acquisition CCD (color CCD), which together provide the depth information of the color image. Since the inertial sensor measures 35 mm × 30 mm, the hardware platform is compact and can be popularized in mobile devices.

Color images and depth images of size 640×480 were collected with the Kinect camera. Based on the real-time analysis of the PC-side algorithm, in this embodiment both the image acquisition frequency of the hardware platform and the parameter acquisition frequency of the inertial sensor were set to 40 frames/s. The Harris corner detection window was 5×5, and the number of RANSAC computations was set to 30.

To better verify the effect of the OFCM model proposed by the method of the invention, the experiment registered the same scene with the method of the invention while the Kinect camera underwent abrupt brightness change, random translational motion, random rotational motion, and complex random translation-plus-rotation, obtaining the results shown in Figures 2, 3, 4, and 5 respectively. In each of Figures 2-5, the left half is the left image and the right half is the right image, 10 frames apart: the left image is frame k and the right image is frame k+10. Figures 2-5 show that the method of the invention completes the image registration accurately.

Table 1 gives the statistical analysis of PC-side image registration performance under the above camera motion conditions. Comparing the indices within each motion type, the method of the invention achieves good registration results under both translational and rotational motion of the Kinect camera, with an average processing accuracy of up to 69.5%. Comparing across motion types, the registration accuracy of the method for single motions of the Kinect camera is higher than for mixed motion; this error is mainly caused by the superposition of gyroscope and accelerometer errors. In terms of real-time performance, the average processing time on the PC is about 22.92 ms; the difference in time arises mainly because mixed camera motion increases the probability of mismatched points, making the RANSAC estimation relatively time-consuming. Based on the PC-side time estimate and the practical difficulty of hardware acquisition, the synchronized acquisition frequency of the Kinect camera's image frames and the inertial sensor's parameters was set to 30 Hz.

Table 1. Statistics of PC-side image registration performance


Claims (8)

Translated fromChinese
1.基于惯性传感器和摄像机的图像配准方法,其特征在于,包括以下步骤:1. The image registration method based on inertial sensor and video camera, is characterized in that, comprises the following steps:步骤一、同步控制摄像机和惯性传感器工作,同时获取同一场景的彩色图像I和J、深度图像I’和J’以及摄像机的三轴运动角速度[wx,wy,wz]和三轴运动加速度[ax,ay,az];Step 1. Synchronously control the work of the camera and the inertial sensor, and simultaneously acquire the color images I and J, depth images I' and J' of the same scene, as well as the camera's three-axis motion angular velocity [wx , wy , wz ] and three-axis motion acceleration[ax , ay , az ];步骤二、提取彩色图像I的角点m1(x1,y1);Step 2, extracting the corner point m1 (x1 , y1 ) of the color image I;步骤三、利用卡尔曼滤波对摄像机的三轴角速度[wx,wy,wz]和三轴加速度[ax,ay,az]进行滤波,获得卡尔曼滤波估计三轴角速度
Figure FDA0000459694700000012
和卡尔曼滤波估计三轴加速度;使用欧拉动力学方程式将
Figure FDA0000459694700000013
转换成欧拉角[αk,βk,γk],使用匀加速运动公式将
Figure FDA0000459694700000015
转换成三轴位移[txk,tyk,tzk],从而获得摄像机外参矩阵[R,T]中的旋转矩阵R和平移矩阵T;Step 3: Use the Kalman filter to filter the three-axis angular velocity [wx , wy , wz ] and the three-axis acceleration [ax , ay , az ] of the camera to obtain the estimated three-axis angular velocity of the Kalman filter
Figure FDA0000459694700000012
and Kalman filtering to estimate the three-axis acceleration ; using the Euler kinetic equation will
Figure FDA0000459694700000013
Converted to Euler angles [αk , βk , γk ], using the uniform acceleration motion formula to
Figure FDA0000459694700000015
Convert to three-axis displacement [txk , tyk , tzk ], so as to obtain the rotation matrix R and translation matrix T in the camera extrinsic parameter matrix [R, T];步骤四、利用彩色图像和深度图像的仿射变换关系,求解出彩色图像I的角点m1(x1,y1)在深度图像I’和J’中的坐标位置m1ir(x1,y1)和m2ir(x1,y1);利用深度图像灰度值与像素深度信息的比例关系求解出角点m1(x1,y1)在深度图像I’和J’中的深度信息u1和u2Step 4. Using the affine transformation relationship between the color image and the depth image, solve the coordinate position m1ir (x1 , y1 ) of the corner point m1 (x 1 , y1 ) of the color image I in the depth images I' and J' y1 ) and m2ir (x1 ,y1 ); use the proportional relationship between the gray value of the depth image and the depth information of the pixel to solve the corner point m1 (x1 ,y1 ) in the depth image I' and J' Depth information u1 and u2 ;步骤五、将步骤三获得的旋转矩阵R和平移矩阵T以及步骤四获得的彩色图像的深度信息u1、u2带入OFCM模型,计算获得图像J的背景光流m2(x2,y2),OFCM模型如公式(1)所示,Step 5. Bring the rotation matrix R and translation matrix T obtained in Step 3 and the depth information u1 and u2 of the color image obtained in Step 4 into the OFCM model, and calculate the background optical flow m2 (x2 ,y2 ), the OFCM model is shown in formula (1),mm22==uu11uu22KRKKRK--11mm11++11uu22KTKT------((11))式(1)中,K为摄像机的内参矩阵,K-1为K的逆矩阵;In formula (1), K is the internal reference matrix of the camera, and K-1 is the inverse matrix of K;步骤六、使用RANSAC算法对彩色图像I的背景光流m2进行鲁棒性估计,将背景光流m2中存在的由运动物体产生的光流和误配准光流剔除掉,从而完成彩色图像I和彩色图像J的图像配准,然后利用保留的彩色图像I的正确背景光流对彩色图像J进行初始化,进行后续图像的配准。Step 6. Use the RANSAC algorithm to robustly estimate the background optical flow m2 of the color image I, and remove the optical flow and misalignment optical flow generated by moving objects in the background optical flow m2 to complete the color image I. Image registration of image I and color image J, and then use the correct background optical flow of color image I to initialize color image J for registration of subsequent images.2.如权利要求1所述的基于惯性传感器和摄像机的图像配准方法,其特征在于,步骤一中,FPGA通过SPI协议同步控制摄像机和惯性传感器,惯性传感器包括陀螺仪和加速度计,摄像机包括深度图像获取CCD和彩色图像获取CCD,FPGA和惯性传感器的最小系统均固定在深度图像获取CCD上。2. The image registration method based on inertial sensor and camera as claimed in claim 1, wherein in step 1, FPGA controls camera and inertial sensor synchronously through SPI protocol, inertial sensor includes gyroscope and accelerometer, and camera includes The minimum system of depth image acquisition CCD and color image acquisition CCD, FPGA and inertial sensor are fixed on the depth image acquisition CCD.3.如权利要求1所述的基于惯性传感器和摄像机的图像配准方法,其特征在于,步骤二中使用Harris角点检测提取彩色图像I的角点m1(x1,y1),计算过程如下:3. 
the image registration method based on inertial sensor and video camera as claimed in claim 1, it is characterized in that, use Harris corner point detection to extract the corner point m1 (x1 , y1 ) of color image I in step 2, calculate The process is as follows:3.1计算彩色图像I中像素点p[x,y]的相关矩阵M,计算方法如公式(2)所示,3.1 Calculate the correlation matrix M of the pixel point p[x, y] in the color image I, the calculation method is shown in formula (2),Mm==ΣΣxx,,ythe yww((xx,,ythe y))IIxx22IIxxIIythe yIIxxIIythe yIIythe y22==ww((xx,,ythe y))⊗⊗IIxx22IIxxIIythe yIIxxIIythe yIIythe y22------((22))式(2)中,(x,y)为像素点p的图像坐标,w(x,y)是窗口大小为w*w的高斯权重函数,Ix代表行方向的导数,Iy代表列方向的导数;In formula (2), (x, y) is the image coordinate of pixel point p, w(x, y) is a Gaussian weight function with a window size of w*w, Ix represents the derivative in the row direction, and Iy represents the column direction derivative of3.2计算像素点m1(x1,y1)的角点响应R(x,y),计算方法如公式(3)所示,3.2 Calculate the corner response R(x,y) of pixel m1 (x1 ,y1 ), the calculation method is shown in formula (3),R(x,y)=detM-k*(traceM)2           (3)R(x,y)=detM-k*(traceM)2 (3)式(3)中,detM代表相关矩阵M行列式的值,traceM代表相关矩阵M的迹,k为常数,一般为0.04-0.06;In the formula (3), detM represents the value of the determinant of the correlation matrix M, traceM represents the trace of the correlation matrix M, and k is a constant, generally 0.04-0.06;3.3判断像素点m1(x1,y1)是否为角点,判断方法为公式(4)所示,若像素点m1(x1,y1)的角点响应R(x,y)满足公式(4),则像素点m1(x1,y1)为角点;若像素点m1(x1,y1)的角点响应R(x,y)不满足公式(4),则将m1(x1,y1)舍弃;3.3 Judging whether the pixel point m1 (x1 ,y1 ) is a corner point, the judgment method is shown in the formula (4), if the corner point response R(x,y) of the pixel point m1 (x1 ,y1 ) If the formula (4) is satisfied, the pixel point m1 (x1 ,y1 ) is a corner point; if the corner point response R(x,y) of the pixel point m1 (x1 ,y1 ) does not satisfy the formula (4) , discard m1 (x1 ,y1 );[R(x,y)≥R(x',y')]∩[R(x,y)≥0.01Rmax]        (3)[R(x,y)≥R(x',y')]∩[R(x,y)≥0.01Rmax ] (3)式(4)中,R(x',y')代表窗口w*w中除像素点m1(x1,y1)之外的其它像素点的角点响应,Rmax为边缘特征信息图像If中的最大角点响应。In formula (4), R(x',y') represents the corner response of other pixels in the window w*w except pixel m1 (x1 ,y1 ), and Rmax is the edge feature information image Maximum corner response inIf .4.如权利要求1所述的基于惯性传感器和摄像机的图像配准方法,其特征在于,步骤三中,所述获得卡尔曼滤波估计三轴角速度
Figure FDA0000459694700000022
和卡尔曼滤波估计三轴加速度的计算过程为:
4. the image registration method based on inertial sensor and video camera as claimed in claim 1, is characterized in that, in step 3, described obtaining Kalman filter estimates triaxial angular velocity
Figure FDA0000459694700000022
and Kalman filtering to estimate the three-axis acceleration The calculation process is:
4.1根据惯性传感器的性能与运动参数建立惯性传感器的误差模型和状态转移关系式,结合时间序列分析法求解出惯性传感器的刻度因子误差M、零点漂移Bf、高斯白噪声U、运动参数间的状态转移矩阵Ψk,k-1以及高斯白噪声V;4.1 Establish the error model and state transition relational formula of the inertial sensor according to the performance and motion parameters of the inertial sensor, and solve the scale factor error M, zero drift Bf , Gaussian white noise U, and the relationship between the motion parameters of the inertial sensor by combining the time series analysis method State transition matrix Ψk, k-1 and Gaussian white noise V;误差模型如公式(4)所示,状态转移关系式公式(5)所示,The error model is shown in formula (4), and the state transition relation formula is shown in formula (5),z'k=zk(M+1)+Bf+U       (4)z'k =zk (M+1)+Bf +U (4)zk=Ψk,k-1zk-1+V              (5)zk =Ψk,k-1 zk-1 +V (5)式(4)和式(5)中,z代表三轴角速度[wx,wy,wz]和三轴加速度[ax,ay,az]中的某一运动参数,z′k为k时刻惯性传感器测量获得的某一运动参数的测量值,zk为k时刻某一运动参数的真实值;In formula (4) and formula (5), z represents a motion parameter in three-axis angular velocity [wx , wy , wz ] and three-axis acceleration [ax , ay , az ], z′k is the measured value of a certain motion parameter measured by the inertial sensor at time k, and zk is the real value of a certain motion parameter at time k;4.2将步骤4.1所求取的结果带入卡尔曼滤波公式对k时刻的运动参数zk进行滤波,求出该运动参数的估计值
Figure FDA0000459694700000032
卡尔曼滤波公式如公式(6)所示,
4.2 Bring the result obtained in step 4.1 into the Kalman filter formula to filter the motion parameter zk at time k, and obtain the estimated value of the motion parameter
Figure FDA0000459694700000032
The Kalman filter formula is shown in formula (6),
zzkk,,kk--11==ΨΨkk,,kk--11zzkk--11zzkk^^==zzkk,,kk--11++KKkk[[zz′′kk((Mm++11))zzkk,,kk--11]]PPkk,,kk--11==ΨΨkk,,kk--11PPkk--11ΨΨkk,,kk--11TT++QQkk--11KKkk==PPkk,,kk--11((Mm++11))TTPPkk--11PPkk==[[II--KKkk((Mm++11))]]PPkk,,kk--11------((66))式(6)中,Q为系统的噪声方差,Kk为卡尔曼增益,Pk为状态协方差,zk,k-1为状态预估值。In formula (6), Q is the noise variance of the system, Kk is the Kalman gain, Pk is the state covariance, and zk,k-1 is the estimated value of the state.
5.如权利要求1所述的基于惯性传感器和摄像机的图像配准方法,其特征在于,步骤三中,所述获得摄像机旋转矩阵R和平移矩阵T的计算过程为:5. the image registration method based on inertial sensor and camera as claimed in claim 1, is characterized in that, in step 3, the calculation process of described obtaining camera rotation matrix R and translation matrix T is:5.1使用欧拉动力学方程将k时刻滤波后的三轴运动角速度wxk,wyk,wzk转换成欧式坐标系中的欧拉角αk,βk,γk,将所得的αk,βk,γk带入旋转矩阵R中,完成彩色CCD旋转矩阵R的计算;5.1 Use the Euler dynamic equation to convert the filtered three-axis motion angular velocity wxk , wyk , wzk at time k into Euler angles αk , βk , γk in the Euclidean coordinate system, and convert the obtained αk , βk and γk are brought into the rotation matrix R to complete the calculation of the color CCD rotation matrix R;所述欧拉动力学方程如公式(7)所示,Described Euler kinetic equation is as shown in formula (7),αα··kkββ·&Center Dot;kkγγ·&Center Dot;kk==--sinsinααkk//tanthe tanββkkcoscosααkk//tanthe tanββkk11coscosααkksinsinααkk00sinsinααkk//sinsinββkk--coscosααkk//sinsinββkk00**wwxkxkwwykykwwzkzk------((77))式(7)中,
Figure FDA0000459694700000034
为欧拉角的一阶导数,将k-1时刻求得的参量αk-1,βk-1,γk-1作为公式(7)的初值;
In formula (7),
Figure FDA0000459694700000034
is the Euler angle The first-order derivative of , the parameters αk-1 , βk-1 , and γk-1 obtained at time k-1 are taken as the initial value of formula (7);
所述旋转矩阵R如公式(8)所示:The rotation matrix R is shown in formula (8):RR==coscosγγkkcoscosββkkcoscosγγkksinsinββkksinsinααkk--sinsinγγkkcoscosγγkksinsinββkkcoscosααkk--sinsinααkksinsinγγkksinsinγγkkcoscosββkksinsinααkksinsinββkksinsinγγkk++coscosγγkkcoscosααkksinsinγγkksinsinββkkcoscosααkk++coscosγγkksinsinααkk--sinsinββkkcoscosββkksinsinααkkcoscosββkkcoscosααkk------((88))5.2使用如公式(9)所示的速度位移公式将三轴加速度axk,ayk,azk转换成三轴位移txk,tyk,tzk5.2 Use the velocity displacement formula shown in formula (9) to convert the three-axis acceleration axk , ayk , azk into three-axis displacement txk , tyk , tzk ,ttkk==vvkkTT++1122aakkTT22vvkk==vvkk--11++aakk--11××TT------((99))式(9)中,tk代表三轴位移txk,tyk,tzk中任意一坐标轴方向的位移,T为摄像机的拍摄周期,ak-1为k-1时刻对应坐标轴方向的加速度,vk-1和vk分别为k-1时刻和k时刻的初始速度,给定系统初始时刻的初始速度为0。In formula (9), tk represents the displacement of any one of the three-axis displacements txk , tyk , tzk in the direction of the coordinate axis, T is the shooting period of the camera, and ak-1 is the displacement in the direction of the corresponding coordinate axis at time k-1 Acceleration, vk-1 and vk are the initial velocities at k-1 time and k time respectively, and the initial speed of the given system at the initial time is 0.
6. The image registration method based on an inertial sensor and a camera according to claim 1, wherein in step four the affine transformation relation between the color image and the depth image is computed with the direct linear transformation (DLT) algorithm, the affine relation being given by formula (10):

mir = Hm         (10)

In formula (10), mir is a pixel coordinate in the depth image and m is the corresponding pixel coordinate in the color image.

7. The image registration method based on an inertial sensor and a camera according to claim 1, wherein in step four the proportional relation between gray value and pixel depth information is obtained from an empirical relation derived by reverse engineering, as given by formula (11):

u = l × m         (11)

In formula (11), m(x, y) is the gray value at pixel coordinate (x, y), u is the depth value corresponding to pixel coordinate (x, y), and l is the proportionality coefficient.

8. The image registration method based on an inertial sensor and a camera according to claim 1, wherein in step six the robust estimation of the background optical flow m2 of color image I with the RANSAC algorithm is computed as follows:

8.1 Initialize all current corner points m1(x1, y1), m2(x2, y2) as inliers;

8.2 Compute the affine transformation matrix H1 with the DLT algorithm, and estimate the corner point m2′(x2′, y2′) from the corner point m1(x1, y1) according to formula (12):

m2′ = H1m1         (12)

8.3 Compute the Euclidean distance d between the corner point m2(x2, y2) and the corner estimate m2′(x2′, y2′) according to formula (13); if d is smaller than the threshold d′, set the corner points m1(x1, y1), m2(x2, y2) as inliers (i.e. background optical flow); if d is greater than or equal to the threshold d′, set the corner points m1(x1, y1), m2(x2, y2) as outliers. After one RANSAC pass is completed, reset the current inliers to outliers and repeat steps 8.2 and 8.3; when no new inliers are produced or the prescribed number of repetitions is reached, terminate the current RANSAC algorithm,

d = sqrt((x2 − x2′)² + (y2 − y2′)²)         (13)

The initialization for color image J proceeds as follows: first traverse all corner points of the frame; if a corner point coincides with a correct background optical flow retained from color image I, take that corner point as an initial inlier for the RANSAC computation. Repeat the method of steps 8.1, 8.2 and 8.3 to complete the RANSAC estimation frame by frame.
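For orientation, here is a schematic NumPy sketch of the inlier-selection loop of steps 8.1–8.3. It is not the patent's reference implementation: the least-squares affine fit stands in for the DLT solve of H1, the parameters d_thresh and max_iters are illustrative, and the optional init_mask reproduces the seeding described for color image J:

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares affine fit dst ~ [src | 1] @ A (stand-in for the DLT solve of H1)."""
    X = np.hstack([src, np.ones((len(src), 1))])   # N x 3 homogeneous coordinates
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)    # 3 x 2 affine parameters
    return A

def ransac_background_flow(m1, m2, d_thresh=2.0, max_iters=20, init_mask=None):
    """Steps 8.1-8.3: keep the corner pairs consistent with the background motion.

    m1, m2    : (N, 2) arrays of matched corner coordinates in the two frames.
    init_mask : optional boolean seed (corners coinciding with the previous
                frame's retained background flow); defaults to all-inlier (8.1).
    Returns a boolean mask marking the background optical flow.
    """
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    inliers = np.ones(len(m1), bool) if init_mask is None else np.asarray(init_mask, bool)
    ones = np.ones((len(m1), 1))
    for _ in range(max_iters):
        A = fit_affine(m1[inliers], m2[inliers])   # 8.2: estimate H1 from current inliers
        m2_est = np.hstack([m1, ones]) @ A         # m2' = H1 m1, formula (12)
        d = np.linalg.norm(m2 - m2_est, axis=1)    # Euclidean distance, formula (13)
        new_inliers = d < d_thresh                 # 8.3: threshold against d'
        if np.array_equal(new_inliers, inliers):   # no new inliers produced: stop
            break
        inliers = new_inliers
    return inliers
```

Because every corner pair starts as an inlier and the set is re-estimated until it stops changing, the loop converges toward the dominant (background) motion, while corners on moving objects exceed the distance threshold d′ and are discarded.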
CN201410027349.4A | Priority date: 2014-01-21 | Filing date: 2014-01-21 | Image registration method based on inertial sensor and camera | Expired - Fee Related | Granted as CN103745474B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410027349.4A | 2014-01-21 | 2014-01-21 | Image registration method based on inertial sensor and camera

Publications (2)

Publication Number | Publication Date
CN103745474A | 2014-04-23
CN103745474B | 2017-01-18

Family

Family ID: 50502489

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410027349.4A (Expired - Fee Related; granted as CN103745474B (en)) | 2014-01-21 | 2014-01-21 | Image registration method based on inertial sensor and camera

Country Status (1)

Country | Link
CN (1) | CN103745474B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101630406A (en)* | 2008-07-14 | 2010-01-20 | 深圳华为通信技术有限公司 | Camera calibration method and camera calibration device
WO2013084782A1 (en)* | 2011-12-09 | 2013-06-13 | 株式会社日立国際電気 | Image processing device
CN102999759A (en)* | 2012-11-07 | 2013-03-27 | 东南大学 | Light stream based vehicle motion state estimating method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUAN Mengdi et al., "Research and implementation of real-time image registration in an infrared search and track system", Infrared Technology (红外技术)*
CHEN Chen et al., "Drift compensation of MEMS gyroscope based on Kalman filtering", Journal of Mechanical & Electrical Engineering (机电工程)*

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105096337B (en)* | 2014-05-23 | 2018-05-01 | 南京理工大学 | A kind of image global motion compensation method based on gyroscope hardware platform
CN105096337A (en)* | 2014-05-23 | 2015-11-25 | 南京理工大学 | Image global motion compensation method based on hardware platform of gyroscope
CN105303518A (en)* | 2014-06-12 | 2016-02-03 | 南京理工大学 | Region feature based video inter-frame splicing method
CN104717426B (en)* | 2015-02-28 | 2018-01-23 | 深圳市德赛微电子技术有限公司 | A kind of multiple-camera video synchronization device and method based on external sensor
CN104717426A (en)* | 2015-02-28 | 2015-06-17 | 深圳市德赛微电子技术有限公司 | Multi-camera video synchronization device and method based on external sensor
CN106485245A (en)* | 2015-08-24 | 2017-03-08 | 南京理工大学 | A kind of round-the-clock object real-time tracking method based on visible ray and infrared image
CN106251342A (en)* | 2016-07-26 | 2016-12-21 | 杭州好好开车科技有限公司 | Camera calibration method based on sensing technology
CN106447766A (en)* | 2016-09-28 | 2017-02-22 | 成都通甲优博科技有限责任公司 | Scene reconstruction method and apparatus based on mobile device monocular camera
CN106447766B (en)* | 2016-09-28 | 2019-07-09 | 成都通甲优博科技有限责任公司 | A kind of scene reconstruction method and device based on mobile device monocular camera
CN108225309B (en)* | 2016-12-21 | 2023-09-26 | 波音公司 | Method and apparatus for enhancing multiple raw sensor images by geographic registration
CN108225309A (en)* | 2016-12-21 | 2018-06-29 | 波音公司 | Enhance the method and apparatus of multiple raw sensor images by geographic registration
CN108668087A (en)* | 2017-03-27 | 2018-10-16 | 展讯通信(上海)有限公司 | image alignment method and device, mobile terminal
CN107507235A (en)* | 2017-08-31 | 2017-12-22 | 山东大学 | A kind of method for registering of coloured image and depth image based on the collection of RGB D equipment
CN107507235B (en)* | 2017-08-31 | 2020-11-10 | 山东大学 | Registration method of color image and depth image acquired based on RGB-D equipment
WO2019051961A1 (en)* | 2017-09-18 | 2019-03-21 | 深圳大学 | Pipeline detection method and apparatus, and storage medium
US10931871B2 (en) | 2017-09-18 | 2021-02-23 | Shenzhen University | Pipeline detection method and apparatus, and storage medium
CN107729847A (en)* | 2017-10-20 | 2018-02-23 | 阿里巴巴集团控股有限公司 | A kind of certificate verification, auth method and device
US10783369B2 (en) | 2017-10-20 | 2020-09-22 | Alibaba Group Holding Limited | Document verification system, device, and method using a classification model
CN107729847B (en)* | 2017-10-20 | 2020-08-04 | 阿里巴巴集团控股有限公司 | Certificate verification and identity verification method and device
CN108232451A (en)* | 2017-12-28 | 2018-06-29 | 无锡人人行拍网络科技有限公司 | A kind of double-movement terminal data transmission antenna automatic bidirectional tracking system
CN108388830A (en)* | 2018-01-09 | 2018-08-10 | 中国农业大学 | Animal shaped detection method and device based on convolutional Neural net
CN108388830B (en)* | 2018-01-09 | 2020-08-14 | 中国农业大学 | Method and device for animal body shape detection based on convolutional neural network
CN109655057A (en)* | 2018-12-06 | 2019-04-19 | 深圳市吉影科技有限公司 | A kind of six push away the filtering optimization method and its system of unmanned plane accelerator measured value
CN109683710B (en)* | 2018-12-20 | 2019-11-08 | 北京字节跳动网络技术有限公司 | A kind of palm normal vector determines method, apparatus, equipment and storage medium
CN109683710A (en)* | 2018-12-20 | 2019-04-26 | 北京字节跳动网络技术有限公司 | A kind of palm normal vector determines method, apparatus, equipment and storage medium
CN109900338A (en)* | 2018-12-25 | 2019-06-18 | 西安中科天塔科技股份有限公司 | A kind of road surface pit slot volume measuring method and device
CN109902725A (en)* | 2019-01-31 | 2019-06-18 | 北京达佳互联信息技术有限公司 | Mobile mesh object detection method, device and electronic equipment and storage medium
US11176687B2 (en) | 2019-01-31 | 2021-11-16 | Beijing Dajia Internet Information Technology Co., Ltd | Method and apparatus for detecting moving target, and electronic equipment
CN110602377B (en)* | 2019-03-18 | 2021-04-23 | 上海立可芯半导体科技有限公司 | Video image stabilizing method and device
CN110602377A (en)* | 2019-03-18 | 2019-12-20 | 上海立可芯半导体科技有限公司 | Video image stabilizing method and device
CN111750850B (en)* | 2019-03-27 | 2021-12-14 | 杭州海康威视数字技术股份有限公司 | Angle information acquisition method, device and system
CN111750850A (en)* | 2019-03-27 | 2020-10-09 | 杭州海康威视数字技术股份有限公司 | Angle information acquisition method, device and system
CN110414392B (en)* | 2019-07-15 | 2021-07-20 | 北京天时行智能科技有限公司 | Method and device for determining distance between obstacles
CN110414392A (en)* | 2019-07-15 | 2019-11-05 | 北京天时行智能科技有限公司 | A kind of determination method and device of obstacle distance
WO2021035524A1 (en)* | 2019-08-27 | 2021-03-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium
WO2021035525A1 (en)* | 2019-08-27 | 2021-03-04 | Oppo广东移动通信有限公司 | Image processing method and apparatus, and electronic device and computer-readable storage medium
US12141947B2 (en) | 2019-08-27 | 2024-11-12 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, electronic device, and computer-readable storage medium
US12149826B2 (en) | 2019-08-27 | 2024-11-19 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method and device, electronic device, and computer-readable storage medium
CN111462177B (en)* | 2020-03-14 | 2023-04-07 | 华中科技大学 | Multi-clue-based online multi-target tracking method and system
CN111462177A (en)* | 2020-03-14 | 2020-07-28 | 华中科技大学 | Multi-clue-based online multi-target tracking method and system
CN111540003A (en)* | 2020-04-27 | 2020-08-14 | 浙江光珀智能科技有限公司 | Depth image generation method and device
CN111899305A (en)* | 2020-07-08 | 2020-11-06 | 深圳市瑞立视多媒体科技有限公司 | A kind of camera automatic calibration optimization method and related system and equipment
CN114708141A (en)* | 2022-03-31 | 2022-07-05 | 湖北久之洋信息科技有限公司 | Image superposition method and system based on motion sensor

Also Published As

Publication number | Publication date
CN103745474B (en) | 2017-01-18

Similar Documents

Publication | Title
CN103745474B (en) | Image registration method based on inertial sensor and camera
CN106251305B (en) | A real-time electronic image stabilization method based on inertial measurement unit IMU
CN103761737B (en) | Robot motion's method of estimation based on dense optical flow
CN110146099B (en) | Synchronous positioning and map construction method based on deep learning
US9967463B2 (en) | Method for camera motion estimation and correction
CN102842117B (en) | Method for correcting kinematic errors in microscopic vision system
US9888235B2 (en) | Image processing method, particularly used in a vision-based localization of a device
CN104318561B (en) | Method for detecting vehicle motion information based on integration of binocular stereoscopic vision and optical flow
KR102416523B1 (en) | A 3D skeleton generation method using calibration based on joints acquired from multi-view camera
CN107357286A (en) | Vision positioning guider and its method
CN110520694A (en) | A kind of visual odometry and its implementation
Huai et al. | Continuous-time spatiotemporal calibration of a rolling shutter camera-IMU system
CN107222662A (en) | A kind of electronic image stabilization method based on improved KLT and Kalman filtering
CN107767425A (en) | A kind of mobile terminal AR methods based on monocular vio
CN114979489A (en) | Gyroscope-based heavy equipment production scene video monitoring and image stabilizing method and system
CN114219852B (en) | Multi-sensor calibration method and device for automatic driving vehicle
CN106709222B (en) | IMU drift compensation method based on monocular vision
CN103903263A (en) | Algorithm for 360-degree omnibearing distance measurement based on Ladybug panorama camera images
CN105303518A (en) | Region feature based video inter-frame splicing method
CN117470248B (en) | Indoor positioning method for mobile robot
EP2800055A1 (en) | Method and system for generating a 3D model
CN105096337A (en) | Image global motion compensation method based on hardware platform of gyroscope
CN111105467A (en) | Image calibration method and device and electronic equipment
JP7033294B2 (en) | Imaging system, imaging method
TWI502162B (en) | Twin image guiding-tracking shooting system and method

Legal Events

Code | Title
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
C14 | Grant of patent or utility model
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2017-01-18
Termination date: 2020-01-21

