

Technical Field
The present invention relates to a method for matching and fusing slope radar images with optical photographs assisted by oblique photography data, and belongs to the technical field of slope deformation monitoring in open-pit mines.
Background
Among geological hazards, landslides are second only to earthquakes in frequency of occurrence and severity of losses. After a landslide disaster, the site is crowded with emergency search-and-rescue personnel and engineering vehicles; if a secondary failure occurs at that moment, the losses would be immeasurable. Continuously monitoring the residual rock mass of the landslide, analyzing its deformation characteristics, and having experts locate dangerous deformation zones and issue timely warnings is the widely recognized technical route for post-disaster on-site emergency monitoring. Ground-based interferometric synthetic aperture radar (GB-InSAR) has been a hot technology in recent years and has proven to be a powerful tool for regional deformation monitoring, with a short revisit interval (on the order of minutes). Existing landslide monitoring cases show that its measurement accuracy can reach the sub-millimeter level, giving GB-InSAR great prospects in landslide emergency monitoring and early warning. However, the radar images in a polar coordinate system, and its results must be mapped into three-dimensional space before the human eye can perceive and locate dangerous areas; they are therefore unsuitable for direct use by safety monitoring personnel, and the measurement results are often required to be matched to photographs taken by the monitoring personnel.
Existing matching approaches have low accuracy: they match the ground-based radar image to the captured photograph through a two-dimensional image transformation, but the imaging principles of the two differ so greatly that precise correspondence is impossible, making the result difficult to use as an interpretation reference. Slope sites, however, often have three-dimensional data from terrestrial laser scanning (LiDAR) and UAV oblique photography. On the one hand, such 3D data can be matched to the radar image by geometric mapping through range-Doppler analysis; on the other hand, an instantaneous-view image rendered from the oblique photography can be matched to the captured photograph. Oblique photography can therefore serve as a bridge for matching and fusing the ground-based slope deformation monitoring radar image with the captured photograph.
Summary of the Invention
To overcome the deficiencies of the prior art, the present invention provides a method that uses oblique photography as intermediate data, matching it separately to the slope deformation monitoring radar image and to the captured photograph, thereby achieving matching and fusion of the slope radar deformation image with the optical photograph.
To solve the above technical problem, the present invention adopts the following technical solution:
A method for matching and fusing slope radar images with optical photographs assisted by oblique photography data comprises the following specific steps:
Step 1: continuously measure the antenna center coordinates at each stop position during the step-and-stop motion of the ground-based radar to obtain the point set P_rail, and fit a straight line to P_rail to obtain the antenna track vector v_rail;
Step 2: compute the relative slant range and relative azimuth angle between each point of the three-dimensional oblique photography data point set P_map and the radar, forming the two-dimensional point set P_map3D;
Step 3: compute the Euclidean distance between each point of P_map3D and each pixel of the radar image; the pixels corresponding to the minimum Euclidean distances form the nearest-neighbor pixel set P_I, yielding the one-to-one mapping table T_I between P_map3D and P_I;
Step 4: read the geographic coordinates P_shot of the shooting point from the optical photograph I_shot, place the observation point of P_map at P_shot, adjust the viewing angle of the three-dimensional oblique photography data by projection transformation to obtain the instantaneous image I_temp, and save the mapping table T_temp between the points of P_map and I_temp;
Step 5: visually interpret the two-dimensional coordinates of salient ground objects in I_temp and I_shot to obtain the coarse-matching common point set P_co, where P_co consists of two subsets: one holds the coordinates of the salient objects in I_temp, and the other holds the coordinates of the same salient objects in I_shot;
Step 6: input P_co into the image affine transformation equation f to obtain initial values of its transformation parameters, the parameters comprising a rotation factor, translation factors, and a scaling factor;
Step 7: substitute the initial parameter values from step 6 back into f to form the transformation equation f1, substitute all pixels of I_shot into f1 to obtain the coarse-matched image I_rough, and form the coarse-matching mapping table T_rough between the pixels of I_shot and I_rough, completing the coarse matching;
Step 8: extract image features from I_temp and I_rough with a feature extraction method to obtain a number of feature points common to both, forming the fine-matching common point pair set P_fine, whose elements are the pixel coordinates of the same feature points in I_temp and I_rough;
Step 9: using P_fine from step 8 and the initial parameter values from step 6, estimate the optimal transformation parameters of the image affine transformation equation f by iterative least squares, and substitute the optimal parameters back into f to form the transformation equation f2;
Step 10: substitute I_rough into the transformation equation f2 to obtain I_fine, forming the fine-matching mapping table T_fine between the pixels of I_rough and I_fine; the mapping relation T_final between I_shot and the radar image P_I is then obtained by resampling and interpolation;
Step 11: look up the T_final mapping table and fuse the deformation value of each radar image pixel with the RGB color channels of I_shot by an image fusion method to obtain the fused image I_fusion, completing the matching and fusion process.
Further, the direction of v_rail in step 1 points from the start point of the radar's travel to its end point.
Further, in step 2 the range-Doppler algorithm is used to compute the relative slant range and relative azimuth angle between each point of P_map and the radar.
Further, the relative slant range between the i-th vertex A_i of P_map and the radar is:

$$R_{3D}^{i}=\sqrt{(x_i-x_s)^2+(y_i-y_s)^2+(z_i-z_s)^2},\quad i=1,2,\dots,N$$

where N is the number of vertices in P_map, (x_i, y_i, z_i) are the three-dimensional coordinates of A_i, and (x_s, y_s, z_s) are the three-dimensional coordinates of the radar synthetic aperture center O_s.
The relative azimuth angle between the i-th vertex A_i of P_map and the radar is:

$$\theta_{3D}^{i}=\arcsin\left(\frac{\overrightarrow{O_sA_i'}\cdot\overrightarrow{O_sP_2}}{\left|\overrightarrow{O_sA_i}\right|\left|\overrightarrow{O_sP_2}\right|}\right)$$

where A_i' is the foot of the perpendicular from A_i onto the antenna track vector v_rail, |·| denotes the vector norm, and P_2 is the end point of the radar's travel.
Further, the image affine transformation equation f in step 6 is:

$$f:\ \begin{pmatrix}x'\\y'\end{pmatrix}=\rho R\begin{pmatrix}x\\y\end{pmatrix}+\begin{pmatrix}T_x\\T_y\end{pmatrix}$$

where (x', y') are the target coordinates, (x, y) are the coordinates to be matched, R is the rotation factor, T_x and T_y are the translation factors, and ρ is the scaling factor.
Compared with the prior art, the above technical solution of the present invention has the following technical effects:
(1) The three-dimensional information of the oblique imagery is used to assist the matching of the ground-based radar image with the optical photograph, avoiding the large matching errors of conventional methods that apply a direct two-dimensional image transformation and ignore the difference in imaging principles.
(2) Salient ground objects in the scene are first identified to coarsely match the instantaneous oblique-imagery view with the optical photograph, after which a large number of common points found by feature recognition are used to optimize the affine transformation parameters, solving the difficulty of choosing initial values in conventional transformation methods.
(3) The transformation parameters are optimized by iterative least squares over the many common points found by feature recognition; the method is easy to implement, computationally fast, and achieves pixel-level matching and fusion accuracy.
(4) The deformation measurement image of the slope deformation monitoring radar is matched in real time to photographs taken by geological hazard surveyors; the result is easy to use and provides data support for landslide risk assessment and disaster early warning.
Description of Drawings
Fig. 1 is a schematic diagram of the matching and fusion of a slope deformation monitoring radar image with an optical photograph assisted by oblique photography data;
Fig. 2 is a flowchart of a method for matching and fusing a slope deformation monitoring radar image with an optical photograph assisted by oblique photography data, provided by an embodiment of the present invention.
Detailed Description
The content of the invention is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In the method of the present invention for matching and fusing slope radar images with optical photographs assisted by oblique photography data, the geometric mapping is first completed using the station coordinates and pitch angle of the ground-based radar together with the spatial relations of the oblique imagery. Then, an instantaneous-view image is rendered from the oblique photography data by perspective projection according to the coordinates and orientation of the photograph's shooting point; common points between the instantaneous-view image and the captured optical photograph are extracted, and a least-squares-optimized affine transformation is used for matching, yielding the pixel-to-pixel mapping between the radar image and the optical photograph. Finally, an image fusion algorithm fuses the slope deformation monitoring radar image with the color space of the optical photograph to obtain a photograph-deformation fusion map.
Embodiment
Fig. 2 shows the flowchart of the method for matching and fusing a slope deformation monitoring radar image with an optical photograph assisted by oblique photography data provided by this embodiment.
Common auxiliary surveying equipment such as a total station, terrestrial laser scanning, or a static global navigation satellite system (GPS/GNSS) is used to continuously measure the antenna center coordinates at each stop position during the step-and-stop motion of the ground-based radar. Denote the measurement point set of the radar's first run as P^(1) and that of the n-th run as P^(n); their union

$$P_{rail}=P^{(1)}\cup P^{(2)}\cup\cdots\cup P^{(n)}$$

gives the point set P_rail.
The radar antenna track vector v_rail is obtained by fitting the point set P_rail with a conventional straight-line fitting method; the direction of the vector points from the start point of the radar's travel to its end point.
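As an illustration only, the following Python sketch shows one conventional way such a line fit can be done, via the principal direction of the pooled stop positions; the function name and sample values are hypothetical and not part of the invention.

```python
import numpy as np

def fit_antenna_track(p_rail):
    """Fit a 3D line to the pooled antenna stop positions P_rail.

    p_rail: (M, 3) array of measured antenna center coordinates.
    Returns the line centroid and a unit direction vector v_rail
    oriented from the first measured stop toward the last one.
    """
    centroid = p_rail.mean(axis=0)
    # First right-singular vector = principal direction of the point cloud.
    _, _, vt = np.linalg.svd(p_rail - centroid)
    v_rail = vt[0]
    # Orient the vector from the radar's start point toward its end point.
    if np.dot(p_rail[-1] - p_rail[0], v_rail) < 0:
        v_rail = -v_rail
    return centroid, v_rail

# Example with synthetic stops along a roughly 2 m rail (invented values):
stops = np.linspace(0.0, 2.0, 11)[:, None] * np.array([0.98, 0.20, 0.0])
noisy = stops + np.random.default_rng(0).normal(0, 1e-3, stops.shape)
centroid, v_rail = fit_antenna_track(noisy)
```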
The antenna track vector v_rail and the three-dimensional oblique photography data point set P_map are the input parameters of the geometric mapping algorithm; the relative slant range R_3D and relative azimuth angle θ_3D between each point of P_map and the radar are computed with the conventional range-Doppler algorithm.
Let the radar synthetic aperture center be O_s, and traverse the Euclidean distance of each vertex (x_i, y_i, z_i) of P_map relative to the aperture center O_s(x_s, y_s, z_s):

$$R_{3D}^{i}=\sqrt{(x_i-x_s)^2+(y_i-y_s)^2+(z_i-z_s)^2},\quad i=1,2,\dots,N$$

where R_3D^i is the slant range of the i-th point relative to the aperture center O_s and N is the total number of points of the P_map model.
Traverse the azimuth angle of each model vertex of P_map relative to the aperture center O_s, taking the azimuth negative on the left of the vertical center plane:

$$\theta_{3D}^{i}=\arcsin\left(\frac{\overrightarrow{O_sA_i'}\cdot\overrightarrow{O_sP_2}}{\left|\overrightarrow{O_sA_i}\right|\left|\overrightarrow{O_sP_2}\right|}\right)$$

where θ_3D^i is the azimuth angle of the i-th point relative to the aperture center, A_i' is the foot of the perpendicular from the i-th point onto the track axis v_rail, and |·| denotes the vector norm. This forms the two-dimensional point set P_map3D: {(R_3D, θ_3D)_1, (R_3D, θ_3D)_2, ..., (R_3D, θ_3D)_N}. Compute the Euclidean distance between each point of P_map3D and each grid point of the radar image I(r, θ), and take the minimum-distance grid point to obtain the nearest-neighbor pixel set P_I: {(R_2D, θ_2D)_1, (R_2D, θ_2D)_2, ..., (R_2D, θ_2D)_N}, yielding the one-to-one mapping table T_I between P_map3D and P_I.
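A minimal NumPy/SciPy sketch of this geometric mapping follows, assuming the arcsin form of the azimuth formula reconstructed above; the nearest-neighbor search over the (r, θ) grid stands in for the construction of the mapping table T_I.

```python
import numpy as np
from scipy.spatial import cKDTree

def geometric_mapping(p_map, o_s, v_rail, radar_r, radar_theta):
    """Map 3D oblique-photography points into the radar (range, azimuth)
    plane and find, for each point, the nearest radar image grid point.

    p_map       : (N, 3) model vertices.
    o_s         : (3,) synthetic aperture center.
    v_rail      : (3,) unit antenna track vector (start -> end point P2).
    radar_r     : (K,) slant ranges of the radar image grid points.
    radar_theta : (K,) azimuth angles of the radar grid points (radians).
    """
    d = p_map - o_s
    r3d = np.linalg.norm(d, axis=1)                  # relative slant range
    # Signed along-track component gives the sign convention: points left
    # of the vertical center plane receive a negative azimuth angle.
    along = d @ v_rail
    theta3d = np.arcsin(np.clip(along / r3d, -1.0, 1.0))
    # Minimum Euclidean distance in the (r, theta) plane -> table T_I.
    tree = cKDTree(np.column_stack([radar_r, radar_theta]))
    _, nearest_idx = tree.query(np.column_stack([r3d, theta3d]))
    return r3d, theta3d, nearest_idx
```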
In this embodiment the geographic coordinates of the shooting location of the optical photograph I_shot are obtained by GPS/GNSS. Read the geographic coordinates P_shot of the shooting point and set the observation point of the oblique photography data at P_shot; once the viewing direction is determined, the instantaneous two-dimensional image I_temp of the oblique photography data is determined. In this embodiment the viewing direction is the vector from the observation point P_shot to the scene center P_R0. The projection transformation adjusts the viewing angle of the oblique photography data until the instantaneous image I_temp is roughly consistent in content with the captured photograph I_shot.

For the projection transformation in this embodiment, the oblique photography is initially in the world coordinate system O-XYZ and is first transformed into the eye coordinate system P_shot-X_eY_eZ_e centered on the observation point, with the negative Z_e axis pointing in the viewing direction; both the world and eye coordinate systems are right-handed three-dimensional rectangular coordinate systems. Each point of P_map is then projected into screen space; this projection means that transforming the object description from the world coordinate system to the eye coordinate system is equivalent to superimposing the eye coordinate system onto the world coordinate system by translation and rotation operations, after which perspective projection is applied. Taking the plane parallel to the X_eO_eY_e plane at distance f from the viewpoint as the projection plane of the instantaneous image I_temp, the grid point coordinates (X_m, Y_m) of a point on the I_temp projection plane are computed as:

$$\begin{pmatrix}X_e\\Y_e\\Z_e\end{pmatrix}=R_x(\alpha)\,R_z(\theta)\begin{pmatrix}X_M-X_s\\Y_M-Y_s\\Z_M-Z_s\end{pmatrix},\qquad X_m=-f\,\frac{X_e}{Z_e},\quad Y_m=-f\,\frac{Y_e}{Z_e}$$

where (X_s, Y_s, Z_s) are the coordinates of P_shot in the world coordinate system O-XYZ, θ and α are the azimuth and pitch angles of the I_temp projection plane relative to the plane XOY, f is the distance from the observation point P_shot to the I_temp projection plane, and (X_M, Y_M, Z_M) is an arbitrary target point in the world coordinate system. The mapping table T_temp between P_map and the grid points of I_temp is saved.
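A hedged sketch of this perspective projection, assuming the rotation order R_x(α)·R_z(θ) of the reconstructed equation above; a concrete renderer may use different axis conventions.

```python
import numpy as np

def project_to_view(points, p_shot, azimuth, pitch, f):
    """Perspective-project world points onto the instantaneous image plane.

    points  : (N, 3) world coordinates (X_M, Y_M, Z_M).
    p_shot  : (3,) observation point (X_s, Y_s, Z_s).
    azimuth : rotation about the world Z axis (radians).
    pitch   : rotation about the eye X axis (radians).
    f       : distance from the viewpoint to the projection plane.
    """
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rz = np.array([[ca, sa, 0], [-sa, ca, 0], [0, 0, 1]])   # azimuth rotation
    rx = np.array([[1, 0, 0], [0, cp, sp], [0, -sp, cp]])   # pitch rotation
    eye = (points - p_shot) @ (rx @ rz).T                   # eye coordinates
    # The negative Z_e axis looks toward the scene, so visible points have
    # Z_e < 0; perspective divide onto the plane at distance f.
    xm = -f * eye[:, 0] / eye[:, 2]
    ym = -f * eye[:, 1] / eye[:, 2]
    return np.column_stack([xm, ym])
```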
Visually interpret the two-dimensional coordinates of the salient ground objects in the instantaneous image I_temp and the captured photograph I_shot to obtain the coarse-matching common point set P_co. The point set P_co includes the subset of coordinates of the salient objects in I_temp, and also the subset of coordinates of the same objects in I_shot, for a total of n_p pairs of common points.
Input the coarse-matching common point set P_co into the image affine transformation equation

$$f:\ \begin{pmatrix}x'\\y'\end{pmatrix}=\rho R\begin{pmatrix}x\\y\end{pmatrix}+\begin{pmatrix}T_x\\T_y\end{pmatrix}$$

to obtain the initial parameter values ρ_0, R_0, T_x0 and T_y0. In the equation f, (x', y') are the target coordinates, (x, y) are the coordinates to be matched, R is the rotation factor, T_x and T_y are the translation factors, and ρ is the scaling factor.
Substituting the initial parameter values back into equation f forms the transformation equation f1; substituting all image grid points of I_shot into f1 yields the coarse-matched image grid I_rough and the coarse-matching mapping table T_rough, completing the coarse matching.
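One possible way to obtain the initial parameter values is OpenCV's 4-degree-of-freedom estimator, which solves exactly the ρ·R + T model; the point coordinates and file name below are invented placeholders.

```python
import cv2
import numpy as np

# Coarse common points from visual interpretation (hypothetical values);
# rows are (x, y) pixel coordinates of the same salient object in each image.
pts_shot = np.array([[120, 80], [640, 95], [400, 420], [90, 500]], np.float32)
pts_temp = np.array([[150, 60], [700, 88], [430, 410], [110, 530]], np.float32)

# A 4-DoF similarity (uniform scale + rotation + translation) corresponds
# to the rho * R + T model of equation f.
m, _ = cv2.estimateAffinePartial2D(pts_shot, pts_temp)
rho0 = float(np.hypot(m[0, 0], m[1, 0]))     # initial scaling factor
phi0 = float(np.arctan2(m[1, 0], m[0, 0]))   # initial rotation angle
tx0, ty0 = float(m[0, 2]), float(m[1, 2])    # initial translation factors

# Applying f1 to every pixel of I_shot gives the coarse-matched image I_rough:
i_shot = cv2.imread("i_shot.png")            # placeholder file name
h, w = i_shot.shape[:2]
i_rough = cv2.warpAffine(i_shot, m, (w, h))
```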
Under the control of the coarse-matching result, a feature extraction method is used to extract point, line, and surface features in I_temp and I_rough, and a large number of identical feature points are obtained as the fine-matching common point set P_fine.
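The embodiment does not fix a particular detector; as one stand-in, the sketch below uses ORB descriptors with Lowe's ratio test. SIFT or other OpenCV detectors would work the same way.

```python
import cv2

def fine_match_points(i_temp, i_rough, ratio=0.75):
    """Build the fine-matching point set P_fine between I_temp and I_rough."""
    orb = cv2.ORB_create(nfeatures=5000)
    kp1, des1 = orb.detectAndCompute(i_temp, None)
    kp2, des2 = orb.detectAndCompute(i_rough, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    p_fine = []
    for pair in matches:
        # Ratio test keeps only distinctive correspondences.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            p_fine.append((kp1[pair[0].queryIdx].pt,
                           kp2[pair[0].trainIdx].pt))
    return p_fine
```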
Using the fine-matching common point set P_fine and the initial parameter values ρ_0, R_0, T_x0 and T_y0, the parameters of the affine transformation equation f1 are refined by iterative least squares. In this embodiment the least-squares affine transformation is optimized iteratively; taking a planar right-handed rectangular coordinate system as an example, the affine transformation is followed by a bilinear interpolation, with the transformation model:

$$\begin{cases}x'=\rho\,(x\cos\varphi-y\sin\varphi)+T_x\\ y'=\rho\,(x\sin\varphi+y\cos\varphi)+T_y\end{cases}$$

The error equation is established as:

$$V=B\hat{X}-L\qquad(4)$$

where V is the residual vector, B is the coefficient (design) matrix of the linearized model, X̂ is the vector of parameter corrections, L is the observation vector formed from the n common points of P_fine, and a least-squares solution of the model parameters is

$$\hat{X}=(B^{T}PB)^{-1}B^{T}PL$$

with P an equal-weight matrix. The least-squares iteration minimizes the error of equation (4). Rounding error is controlled through the iterative computation by monitoring the unit-weight standard error

$$\sigma_0=\sqrt{\frac{V^{T}PV}{r}}$$

where the degrees of freedom r is the difference between the number of error equations and the number of unknowns actually required to solve them; each control point is treated as an independent observation and the weight matrix P is taken as the identity matrix. The iteration produces the best estimates of the transformation parameters, which are substituted back into f to form the transformation equation f2.
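A compact Gauss-Newton sketch of this iterative refinement with equal weights (P = I), matching the 4-parameter model above; the iteration count and convergence threshold are arbitrary choices.

```python
import numpy as np

def refine_similarity(src, dst, rho, phi, tx, ty, iters=50, tol=1e-10):
    """Iterative least-squares refinement of (rho, phi, tx, ty), starting
    from the initial values of step 6 and using the point pairs P_fine.

    src, dst: (n, 2) matched pixel coordinates (src maps onto dst).
    """
    x, y = src[:, 0], src[:, 1]
    for _ in range(iters):
        c, s = np.cos(phi), np.sin(phi)
        fx = rho * (x * c - y * s) + tx
        fy = rho * (x * s + y * c) + ty
        l = np.concatenate([dst[:, 0] - fx, dst[:, 1] - fy])  # residuals L
        # Design matrix B: columns are d/d(rho), d/d(phi), d/d(tx), d/d(ty).
        bx = np.column_stack([x * c - y * s, -rho * (x * s + y * c),
                              np.ones_like(x), np.zeros_like(x)])
        by = np.column_stack([x * s + y * c, rho * (x * c - y * s),
                              np.zeros_like(x), np.ones_like(x)])
        b = np.vstack([bx, by])
        dx = np.linalg.lstsq(b, l, rcond=None)[0]             # corrections
        rho, phi, tx, ty = rho + dx[0], phi + dx[1], tx + dx[2], ty + dx[3]
        if np.linalg.norm(dx) < tol:
            break
    # Unit-weight standard error from the last residuals; dof = 2n - 4.
    sigma0 = np.sqrt(l @ l / (2 * len(src) - 4))
    return (rho, phi, tx, ty), sigma0
```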
Substituting I_rough into the transformation equation f2 yields I_fine, with the mapping relation denoted T_fine. At this point the one-to-one mapping table T_I between the slope deformation monitoring radar image P_I and P_map3D has been obtained; P_map3D corresponds one-to-one with P_map; the mapping table T_temp links the points of P_map and I_temp; T_rough is the coarse-matching table between I_shot and I_rough; and T_fine is the fine-matching table between I_rough and I_fine. The mapping relation T_final between the I_shot optical photograph and the radar image P_I is then obtained by conventional resampling and interpolation.
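One way this resampling step might look, assuming the sparse correspondences have already been composed through the chain T_temp, T_rough, T_fine, and T_I; nearest-neighbor fill is one simple choice for interpolating discrete pixel indices into a dense T_final.

```python
import numpy as np
from scipy.interpolate import griddata

def densify_mapping(shot_xy, radar_idx, shot_shape):
    """Resample chained sparse correspondences (I_shot pixel -> radar image
    pixel index) into a dense per-pixel lookup table T_final.

    shot_xy    : (N, 2) pixel coordinates (x, y) in I_shot with known mapping.
    radar_idx  : (N,) index of the matched radar image pixel.
    shot_shape : (height, width) of I_shot.
    """
    h, w = shot_shape
    gy, gx = np.mgrid[0:h, 0:w]
    # Nearest-neighbor resampling avoids blending discrete indices.
    t_final = griddata(shot_xy, radar_idx.astype(float), (gx, gy),
                       method='nearest')
    return t_final.astype(int)
```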
Look up the T_final mapping table and fuse the deformation values of the slope deformation radar's image grid points with the red-green-blue (RGB) color channels of the I_shot optical photograph by an image fusion method. This embodiment adopts the common fusion method of replacing the red (R) channel of the optical photograph with the radar-measured deformation:

$$I_{fusion}=\begin{pmatrix}I_R\\ I_{VG}\\ I_{VB}\end{pmatrix}$$

where I_fusion is the transformed image with red, green, and blue channels: the red channel uses the deformation values I_R of the radar image P_I, while the green and blue channels use the green channel I_VG and blue channel I_VB of the I_shot optical photograph. This yields the fused image I_fusion and completes the matching and fusion process. Fig. 1 shows the schematic diagram of the matching and fusion of the slope deformation monitoring radar image with the optical photograph assisted by oblique photography data.
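The channel-substitution fusion of step 11 could be sketched as follows, assuming RGB channel ordering and a user-chosen display range for stretching the deformation values to 8 bits; both assumptions are illustrative, not prescribed by the embodiment.

```python
import numpy as np

def fuse_deformation(i_shot, deformation, t_final, d_min=-20.0, d_max=20.0):
    """Replace the red channel of the optical photo with radar deformation
    values looked up through T_final (channel-substitution fusion).

    i_shot      : (H, W, 3) uint8 RGB optical photograph.
    deformation : (K,) deformation value of each radar image pixel (e.g. mm).
    t_final     : (H, W) int lookup table into the radar image pixels.
    d_min/d_max : assumed display range used to stretch deformation to 0-255.
    """
    i_r = deformation[t_final]                         # per-pixel deformation
    i_r = np.clip((i_r - d_min) / (d_max - d_min), 0.0, 1.0) * 255.0
    i_fusion = i_shot.copy()
    i_fusion[..., 0] = i_r.astype(np.uint8)            # red <- deformation I_R
    # Green and blue channels (I_VG, I_VB) are kept from the optical photo.
    return i_fusion
```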