
Method for calibrating bow direction based on geographic azimuth under ship static condition in ship lock

Info

Publication number: CN117974773A
Application number: CN202311242641.3A
Authority: CN (China)
Prior art keywords: ship, image, point, points, feature
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 齐俊麟, 王东, 陈学文, 吴勇, 苏丽, 李廷秋, 柳晨光, 胡树华, 兰加芬, 郑茂, 胡玮, 李乾
Current Assignee (the listed assignees may be inaccurate; Google has not performed a legal analysis): Wuhan Changjiang Ship Design Institute Co ltd, Wuhan University of Technology WUT, Three Gorges Navigation Authority
Original Assignee: Wuhan Changjiang Ship Design Institute Co ltd, Wuhan University of Technology WUT, Three Gorges Navigation Authority
Application filed by Wuhan Changjiang Ship Design Institute Co ltd, Wuhan University of Technology WUT, and Three Gorges Navigation Authority
Priority to CN202311242641.3A
Publication of CN117974773A


Abstract

A method for calibrating the bow heading of a stationary ship in a ship lock, based on geographic azimuth, comprises the following steps. Step 1: identify the ship region with a camera, extract the ship-region image, and obtain the edge of the ship's outer contour. Step 2: acquire a three-dimensional point cloud of the ship with a lidar, project the point cloud onto a two-dimensional plane, segment it with the DBSCAN clustering algorithm to form a set of ship targets, and extract the edge of each target. Step 3: fuse the video data with the laser point-cloud data. Step 4: obtain an accurate, complete ship contour and derive the bow heading with a rectangular bounding-box identification method. The method is efficient, effectively overcomes the effects of data transmission inside the lock chamber, and avoids scraping accidents caused by bow deviation in the chamber.

Description

Translated from Chinese

Method for calibrating the bow heading of a ship based on geographic azimuth while the ship is stationary in a ship lock

Technical Field

The present invention relates to the technical field of ship lock-passage control, and in particular to a method for calibrating a ship's bow heading based on geographic azimuth while the ship is stationary in a lock.

Background Art

With the trend toward larger, standardized ships on the Yangtze River, large vessels account for a growing share of the traffic through the Three Gorges and Gezhouba ship locks. Large ships maneuver poorly inside the lock chamber, and the ship-handling skill of lock-transiting crews is generally limited, so stopping operations in the chamber are often unsatisfactory. Ships sometimes brake by hanging cables on the floating mooring bollards in the chamber, damaging the bollards and raising navigation risks such as forced lock closures for repair and increased traffic pressure in the dam area. At the same time, the difficulty of ship handling and the demand for rapid shifting through the lock increase the risk of ships striking the lock gates. In addition, bow-heading identification and calibration suffer because the lock chamber blocks positioning signals during entry, which introduces errors into the compass data. Therefore, how to perceive the navigation dynamics of lock-transiting ships accurately and achieve precise stopping control of large ships is a technical problem that urgently needs to be solved for navigation services and ship control at the Three Gorges-Gezhouba ship locks.

Summary of the Invention

The present invention proposes a method for calibrating the bow heading based on geographic azimuth while a ship is stationary in a lock, which effectively realizes identification and calibration of the bow heading.

The technical solution adopted by the present invention is as follows:

The method for calibrating the bow heading of a ship based on geographic azimuth while the ship is stationary in a lock comprises the following steps:

Step 1: identify the ship region with a camera, extract the ship-region image, and obtain the edge of the ship's outer contour.

Step 2: acquire a three-dimensional point cloud of the ship with a lidar, project the point cloud onto a two-dimensional plane, segment it with the DBSCAN clustering algorithm to form a set of ship targets, and extract the edge of each ship target.

Step 3: fuse the video data with the laser point-cloud data.

Step 4: obtain an accurate, complete ship contour and derive the bow heading with a rectangular bounding-box identification method.

Step 1 comprises the following sub-steps:

S1.1: First, apply deep-learning semantic segmentation to the image data from the high-mounted cameras on the lock chamber to identify the ship region, the chamber-wall region, and the water region.

S1.2: Next, extract the ship-region image and obtain the edge of the ship's outer contour.

S1.3: Finally, smooth the edge of the ship's outer-contour image.

Step 2 comprises the following sub-steps:

S2.1: First, project the three-dimensional point cloud onto a two-dimensional plane.

To capture as much of the lock's water area as possible, the lidar must be installed rotated by a certain angle. The lidar coordinate system, whose origin is the lidar's center, therefore has to be converted into the earth coordinate system. The pitch angle φ, roll angle θ, and heading angle ψ of the installed lidar are measured, and the rotation matrices of the laser point cloud about the x, y, and z axes, Rx,φ, Ry,θ, and Rz,ψ, are computed. Formula (4) then converts the lidar coordinate system into the earth coordinate system, where (x, y, z) are the point-cloud coordinates in the lidar frame and (X, Y, Z) are the corresponding coordinates in the lock frame. Finally, the three-dimensional point cloud is projected onto the XY plane by setting Z = 0 (see the sketch after the figure references below).

FIG. 13 is a schematic diagram of the point-cloud coordinate transformation; FIG. 14 is a schematic diagram of the projection of the three-dimensional point cloud onto a two-dimensional plane.
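Since formula (4) and the individual rotation matrices are not reproduced in this text, the following is a minimal sketch of S2.1; the composition order Rz,ψ·Ry,θ·Rx,φ is an assumption, and the function names and numpy representation are illustrative only.

```python
import numpy as np

def rotation_matrices(phi, theta, psi):
    """Per-axis rotation matrices Rx(phi), Ry(theta), Rz(psi) built from the
    measured pitch, roll, and heading angles of the lidar (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi),  np.cos(phi)]])
    Ry = np.array([[ np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
    Rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    return Rx, Ry, Rz

def lidar_to_lock_plane(points, phi, theta, psi):
    """Rotate an (N, 3) lidar point cloud into the lock frame, then project
    onto the XY plane by zeroing Z, as described in S2.1."""
    Rx, Ry, Rz = rotation_matrices(phi, theta, psi)
    R = Rz @ Ry @ Rx        # composition order is an assumption; eq. (4) is not reproduced here
    world = points @ R.T    # (X, Y, Z) for each (x, y, z)
    world[:, 2] = 0.0       # project to 2D: let Z = 0
    return world
```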

S2.2: Next, filter out noise points with conditional filtering and a downsampling algorithm.

Conditional filtering applies threshold conditions, much like a piecewise function: a point is kept if it lies within the specified range and discarded otherwise. Based on survey data of the lock's water area, thresholds on the X and Y coordinates are set; points inside the thresholds are retained and points outside are deleted. A downsampling algorithm then builds a voxel grid over the cloud and represents all points inside each voxel by their centroid. This preserves the cloud's features while removing redundant points, greatly reducing the point count and improving computational efficiency. FIG. 16(b) shows the point cloud after voxel filtering.
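A sketch of the conditional filter and centroid-based voxel downsampling described above; the voxel size and threshold ranges are placeholders, since the text gives no numeric values for them.

```python
import numpy as np

def conditional_filter(points, x_range, y_range):
    """Keep only points whose X/Y coordinates fall inside the thresholds
    chosen from the lock-water geometry; everything else is discarded."""
    keep = ((points[:, 0] >= x_range[0]) & (points[:, 0] <= x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] <= y_range[1]))
    return points[keep]

def voxel_downsample(points, voxel=0.5):
    """Replace all points falling inside one voxel by their centroid,
    thinning the cloud while keeping its shape. voxel=0.5 m is a placeholder."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((counts.size, points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```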

The DBSCAN clustering algorithm is then used to segment the point cloud, as follows:

DBSCAN is a density-based clustering algorithm that can discover clusters of arbitrary shape and is robust to noisy data sets (isolated points or outliers). DBSCAN defines a cluster as the largest set of density-connected points; it partitions sufficiently dense regions into clusters and can find arbitrarily shaped clusters in a noisy spatial database.

DBSCAN takes two user parameters: a radius (Eps), defining the circular neighborhood around a given point P, and the minimum number of points (MinPts) required within that neighborhood. For ship laser point clouds, Eps is set to 1 and MinPts to 4. The ship targets formed after clustering are shown in FIG. 17, which shows the laser point cloud of a passenger ship scanned by the lidar from the bow direction.
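A minimal sketch of this clustering step using scikit-learn's DBSCAN with the Eps = 1 and MinPts = 4 values stated above; the helper name is illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_ships(points_2d):
    """Segment the projected 2D cloud into ship targets.
    eps=1.0 and min_samples=4 are the values given in the text."""
    labels = DBSCAN(eps=1.0, min_samples=4).fit_predict(points_2d)
    # label -1 marks noise; every other label is one ship target
    return [points_2d[labels == k] for k in set(labels) if k != -1]
```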

S2.3: Extract the edge of each ship target with the convex hull algorithm, as follows:

The convex hull problem can be stated as: given a point set P, find the smallest point set S whose shape contains all points of P. The convex hull is defined as follows: a subset S of the plane is called "convex" if and only if, for any two points p, q ∈ S, the line segment pq lies entirely within S. A property follows: a straight line that intersects (rather than is tangent to) the convex hull crosses at most two edges or two faces. As shown in FIG. 18, the red polygon enclosed by points p0, p1, p3, p10, and p12 is the convex hull.

The convex hull is computed with the Graham scan method (a code sketch follows the steps below):

(1) Take the lock-entry normal direction as the y-axis and the direction along the gate, perpendicular to the y-axis, as the x-axis, and place all points of the laser point cloud in the xOy plane.

(2) For any ship's laser point cloud, select the point H with the smallest y coordinate among all points as the base point. If several points share the smallest y coordinate, select the one with the smallest x coordinate. Points with identical coordinates are excluded.

(3) Sort the remaining points p by the angle between the vector <H, p> and the x-axis. A clockwise scan processes the angles from largest to smallest; a counterclockwise scan, the reverse. The implementation does not need the angle itself; the cosine of the angle, obtained from the law of cosines, suffices. Taking FIG. 19 as an example with base point H, sorting by angle from smallest to largest gives H, K, C, D, L, F, G, E, I, B, A, J. The scan below proceeds counterclockwise.

(4) The segment <H, K> is certainly on the hull, so C is added next. Suppose the segment <K, C> were also on the hull; for the three points H, K, C alone, their convex hull is indeed formed by these three points. But when D is added, it turns out that the segment <K, D> is on the hull instead, so <K, C> is excluded and C cannot be a hull vertex.

(5) In other words, whenever a point is added, one must check whether the preceding segment still lies on the hull. Starting from the base point, adjacent hull segments must all turn in the same direction, opposite to the scan direction. If a newly added point makes the turn between the new segment and the previous one change direction, the previous point cannot be on the hull. In the implementation this is judged with the vector cross product: let the new point be p_{n+1}, the previous point p_n, and the one before that p_{n-1}. When scanning clockwise, if the cross product of the vectors {p_{n-1}, p_n} and {p_n, p_{n+1}} is positive (for a counterclockwise scan, test whether it is negative), delete the previous point. The deletion backtracks, removing every earlier point whose cross product has the wrong sign, and the new point is then added to the hull.

In FIG. 19, when K is added, the segment <H, C> would have to rotate clockwise to reach <H, K>, so C is not on the hull and is deleted while K is kept. Next D is added; the segment <K, D> rotates counterclockwise relative to <H, K>, so D is kept. Scanning continues in this way until every point in the set has been visited, yielding the convex hull.
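A compact counterclockwise Graham scan consistent with steps (1)-(5); the cross-product sign test replaces the explicit angle comparison, as the text itself suggests. Input is assumed to be a list of (x, y) tuples.

```python
import math

def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive means a counterclockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def graham_scan(points):
    """Counterclockwise Graham scan over a list of (x, y) tuples."""
    pts = sorted(set(points))                    # drop duplicate coordinates, per step (2)
    base = min(pts, key=lambda p: (p[1], p[0]))  # lowest y, then lowest x
    others = [p for p in pts if p != base]
    # sort by polar angle around the base point (equivalent to the cosine ordering)
    others.sort(key=lambda p: math.atan2(p[1] - base[1], p[0] - base[0]))
    hull = [base]
    for p in others:
        # backtrack while the new point would make a clockwise (non-left) turn
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull
```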

Step 3 comprises the following sub-steps:

S3.1: First, time alignment: using 1 s as the time base, align the video data with the laser point-cloud data.

Because the camera and the lidar differ in detection start time, detection period, and position relative to the target, their measurements of the same target are not obtained at the same instant; there is a time offset between the two data streams. The video is captured at 50 Hz and the lidar at 5 Hz. Taking 1 s as the acquisition interval aligns the two streams in time: every second, the camera frame and the lidar sweep belonging to the same instant are collected together.

S3.2: Next, spatial alignment: convert both the video coordinates and the laser point-cloud coordinates into the lock-chamber plane coordinates.

Because the lidar and video data are not in the same coordinate system, their coordinate systems must be converted into a common one. The lidar point cloud is converted as in S2.1, and the video data is transformed using the coordinate conversions detailed under Step 3.

S3.3: Finally, fuse the video data with the laser data using the covariance intersection method. Specifically:

Covariance intersection addresses fusion of data that contain errors. When the lidar and the video measure the same ship, two different measurements A and B are obtained whose correlation is unknown. The covariance intersection matrix CI of A and B is computed, and the weight distribution that minimizes the trace of CI is found; the fused value is then obtained by weighting measurements A and B with those coefficients. The formula is as follows:

where x̂ is the fused estimate, P1 is the lidar's estimation error variance, P2 is the video's estimation error variance, P_CI is the fused estimation error variance, and ω1 and ω2 are weight coefficients with 0 ≤ ω1 ≤ 1, 0 ≤ ω2 ≤ 1, ω1 + ω2 = 1. The weights ω1 and ω2 are obtained by minimizing the objective J = tr(P_CI), where tr(P_CI) is the trace of the matrix P_CI.
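The equation itself is not reproduced in this text. Stated as a reconstruction from the symbol definitions above rather than a quotation of the patent, the standard covariance intersection form is:

```latex
P_{CI}^{-1} = \omega_1 P_1^{-1} + \omega_2 P_2^{-1}, \qquad
\hat{x} = P_{CI}\left(\omega_1 P_1^{-1} A + \omega_2 P_2^{-1} B\right), \qquad
\omega_1 + \omega_2 = 1,
```

with ω1 (and hence ω2) chosen to minimize J = tr(P_CI).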

In Step 3, the video is stitched together from the feeds of multiple cameras. The stitching method comprises the following steps:

coordinate transformation, image preprocessing, image registration, and image fusion.

In the coordinate transformation:

①: Converting the world coordinate system to the camera coordinate system:

The world coordinate system and the camera coordinate system are both three-dimensional and can be converted into each other by translation and rotation. Therefore, if a point's coordinates in the world coordinate system are known, its coordinates in the camera coordinate system can be computed.

The coordinate transformation relationship is shown in formulas (1) and (2):

where x, y, z are the ship's coordinates in the camera coordinate system, R1 is the rotation matrix about the z-axis, x1, y1, z1 are the ship's coordinates in the world coordinate system, and α is the rotation angle of the world coordinate system about the z-axis.

Rotating about the other coordinate axes by the corresponding angles yields the analogous rotation matrices; rotations about the x-axis and y-axis give R2 and R3 in the same way:

From the above formulas, the overall rotation matrix R = R1R2R3 is obtained, and with it the coordinates of point P in the camera coordinate system.

R2 and R3 are the rotation matrices about the x-axis and y-axis, respectively.

②: Converting the camera coordinate system to the image coordinate system:

For a point P(Xw, Yw, Zw) in the camera coordinate system, the corresponding projection point in the image-plane coordinate system is p(x, y). The transformation from the camera coordinate system to the image-plane coordinate system follows from similar triangles.

where x and y are the ship's coordinates in the image coordinate system, f is the camera's focal length, and Xw, Yw, Zw are the ship's coordinates in the camera coordinate system.

③: Converting the image coordinate system to the image-pixel coordinate system:

For a point p(x, y) in the image coordinate system, with o(u0, v0) the corresponding origin of the image-pixel coordinate system, the conversion between p's coordinates in the image coordinate system and in the pixel coordinate system is as follows:

where dx and dy are the width of one pixel along the x-axis and y-axis, respectively; (u0, v0) is the camera's principal point, i.e., the pixel coordinates of the intersection o of the optical axis with the image plane; and u, v are the ship's coordinates in the pixel coordinate system.

Combining the formulas above, the conversion between a world coordinate point (XW, YW, ZW) and a pixel coordinate point (u, v) is derived as:

where the intrinsic parameter matrix of the high-definition camera can be obtained by measurement; the extrinsic parameters and the projection matrix T describe the camera pose; after calibration the projection matrix P is obtained, and from the pixel coordinates of a target in a captured image its real-world position can be back-calculated. zc is a position variable related to the camera's installation height and attitude angles; fx and fy are internal camera parameters; and R is the rotation matrix from the world coordinate system to the camera coordinate system.
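Formulas (1)-(9) are not reproduced in this text, so the following is a hedged sketch of the full world-to-pixel chain using the standard pinhole model; fx = f/dx and fy = f/dy are assumptions consistent with the symbol definitions above, and the function name is illustrative.

```python
import numpy as np

def world_to_pixel(Pw, R, t, fx, fy, u0, v0):
    """Project a world point into pixel coordinates:
    world -> camera via (R, t); camera -> image plane via the focal length;
    image -> pixel via fx = f/dx, fy = f/dy and the principal point (u0, v0)."""
    Xc, Yc, Zc = R @ Pw + t          # camera coordinates
    u = fx * Xc / Zc + u0            # perspective division, then pixel scaling
    v = fy * Yc / Zc + v0
    return u, v
```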

Image preprocessing includes:

a: Median filtering: the median of the gray values of all pixels covered by the filter template replaces the gray value of the center pixel, keeping neighboring gray values close to the true values and filtering out image noise. Let the template be of size M*M with pixel gray values f1, f2, f3, ..., fM*M inside the template; after median filtering, the gray value F at the center is:

F = med{f1, f2, f3, ..., fM*M}    (10);

f1, f2, f3, ..., fM*M are the gray values of each pixel within the template.

b: Gaussian filtering: the gray value at the center point is replaced by the weighted average of the gray values of the pixels covered by the Gaussian filter template.

Suppose a 3*3 filter template is used for smoothing. The template center is taken as the sampling origin and the surrounding pixels are sampled; the template coefficients at all positions follow from the Gaussian function. With σ = 1.5 in the Gaussian function, the computed coefficients are normalized to obtain the Gaussian template, and the weighted average of the neighboring pixel values then replaces the pixel value at the template center.
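A minimal sketch of the combined median-plus-Gaussian preprocessing using OpenCV; the 3*3 Gaussian kernel and σ = 1.5 follow the example in the text, while the median kernel size is an assumption.

```python
import cv2

def preprocess(gray):
    """Denoise a grayscale frame: median filter first (salt-and-pepper noise),
    then a 3x3 Gaussian blur with sigma = 1.5 (Gaussian noise)."""
    denoised = cv2.medianBlur(gray, 3)                  # median of each 3x3 window
    smoothed = cv2.GaussianBlur(denoised, (3, 3), 1.5)  # weighted average, sigma = 1.5
    return smoothed
```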

Image registration uses feature-based registration, comprising feature extraction, feature matching, and parameter estimation of the image transformation model.

1) The SIFT algorithm is used for feature extraction: extreme points found in scale space serve as candidate feature points, and those that remain stable under changes of image scale, position, and rotation are selected as the final feature points.

SIFT (Scale-Invariant Feature Transform) is a local image-feature extraction algorithm: it locates extreme points (feature points, or keypoints) precisely across different scale spaces, determines their principal orientations, and builds keypoint descriptors to extract features. The specific steps for extracting ship-image features with SIFT are:

(1) Scale-space extremum detection: search the image over all scales, using Gaussian derivative functions to identify candidate interest points that are invariant to scale and rotation.

(2) Keypoint localization: at each candidate location, a fine model is fitted to determine position and scale, and keypoints are selected according to their stability.

(3) Orientation assignment: based on the local gradient directions of the image, one or more orientations are assigned to each keypoint location; all subsequent operations are performed relative to the keypoint's orientation, scale, and position, providing invariance to these factors.

(4) Keypoint description: within a neighborhood around each keypoint, local image gradients are measured at the selected scale and transformed into a representation that tolerates fairly large local shape deformation and illumination change.

2) The KNN algorithm pairs corresponding feature points found in the input images; the resulting feature matches are used to compute the image transformation model, as follows:

Each feature point has a unique descriptor; if the descriptors of two feature points are highly similar, they are declared a matching pair, and otherwise not. First, the distances are computed from the descriptor of every feature point in image I1's feature set A′ = {a1, a2, a3, ..., an} to the descriptors of all feature points in image I2's feature set B′ = {b1, b2, b3, ..., bn}. Then, for each feature point, the K nearest points are chosen as candidate matches; each feature point keeps its two nearest candidates, and the quotient of the nearest and second-nearest distances gives a ratio. When this ratio indicates a sufficiently distinctive nearest match, the pair at the nearest distance is selected as a final feature match.
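A sketch of the SIFT extraction and KNN (K = 2) ratio matching described above, using OpenCV; the ratio threshold 0.75 is an assumption, since the text only says the nearest and second-nearest distances are compared by their quotient.

```python
import cv2

def match_features(img1, img2, ratio=0.75):
    """SIFT keypoints and descriptors, then KNN matching with K = 2 and a
    nearest/second-nearest distance ratio test. ratio=0.75 is a placeholder."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < ratio * n.distance:  # nearest clearly better than second-nearest
            good.append(m)
    return kp1, kp2, good
```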

3) The coordinate relationship between the feature matches of the images to be registered is expressed with a mathematical model:

A projective transformation model is used to solve for the homography. Suppose image 1 and image 2 can be projected into a common projection plane through homography matrices H1 and H2, respectively, and only rotation and translation relate the two images; the transformation between image 1 and image 2 can then be merged into a single homography matrix H3. Let a point p in the image have homogeneous coordinates (x, y, 1)^T; after the projective transformation it becomes p′ = (x′, y′, 1)^T, with transformation expression:

x′ and y′ are the pixel coordinates after the projective transformation, and a1 to a8 are the homography parameters, determined by the camera's intrinsics, rotation, translation, and the parameters of the projection plane.
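A sketch of solving the eight-parameter homography from the matched pairs; cv2.findHomography with RANSAC is one standard way to do this and needs at least four matches.

```python
import cv2
import numpy as np

def estimate_homography(kp1, kp2, good):
    """Solve for the homography (parameters a1..a8) from matched keypoints;
    RANSAC with a 5-pixel reprojection threshold rejects outlier pairs."""
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # use with cv2.warpPerspective to map image 1 into image 2's plane
```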

4) The optimal seam-line fusion algorithm finds a seam within a region where the image structures are similar, and the parts of the input images on either side of the seam form the final fused result.

The optimal seam-line fusion algorithm is as follows:

The optimal seam line effectively removes the ghosting that moving objects produce in a stitched result by choosing an optimal path that minimizes discontinuity within the stitching region. The main steps are: ① obtain the overlap region of the images; ② traverse all pixels of the overlap region and compute each pixel's energy value; ③ starting from the first row, add to each pixel the minimum energy among its four neighboring pixels in the next row, accumulating until the last row; ④ each pixel of the first row thus starts one seam line, and the seam with the smallest accumulated energy is selected; ⑤ the optimal seam line is obtained; ⑥ the images are fused along the seam line. FIG. 20 is a schematic diagram of optimal seam-line fusion.
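A dynamic-programming sketch of the seam search in steps ②-④; it uses the common three-neighbor variant (the text mentions four neighbors in the next row), and the per-pixel energy map over the overlap region is assumed to be precomputed.

```python
import numpy as np

def best_seam(energy):
    """Accumulate, row by row, the minimum energy reachable at each pixel,
    then backtrack the path with the smallest total (steps 3-4 above)."""
    h, w = energy.shape
    cost = energy.astype(np.float64).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)   # neighbors in the row above
            cost[r, c] += cost[r - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam[r] = lo + int(np.argmin(cost[r, lo:hi]))
    return seam  # one column index per row; blend left/right of this line
```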

In Step 3, on top of image stitching, a semantic-segmentation-based background-information extraction algorithm estimates the camera motion path from the temporal feature matches in the background; this excludes interference from the moving foreground and yields a smoother video-stabilization result.

First, the input image is semantically segmented: a deep-learning model labels the different regions of the image with semantic categories such as background and ship. From the segmentation result a background mask is generated that contains only the pixels labeled as background; this mask separates the background elements from the foreground, so foreground objects such as ships can be removed from the image.

From the spatial feature matches in the background, the image transformation model is computed to achieve seamless stitching of the background, as follows:

Features are extracted with the SIFT algorithm of Step 3, as shown in FIG. 21(a) and FIG. 21(b). Then, as in Step 3, the KNN algorithm pairs the corresponding feature points found in the input images, and the resulting matches are used to compute the image transformation model so that the images are projected into a common projection plane.

Then, the optimal seam-line fusion algorithm finds a seam in the background region and the images are stitched together along it, as shown in FIG. 22(c), which improves the stitching accuracy of corresponding video frames in the presence of parallax.

In Step 4:

Find the circumscribed rectangle enclosing the ship contour; select the fitted rectangle according to the criteria of minimum rectangle area, maximum proximity to the edges, and minimum squared error to the edges; and take the orientation of the circumscribed rectangle as the ship's bow heading. Specifically:

As shown in FIG. 23 and FIG. 24, all directions are traversed with a fixed search-angle step to find the circumscribed rectangle enclosing the ship contour. In each iteration, a rectangle oriented along the current direction and containing all points of the cloud is found; the rectangle's area, the distances from the points to the rectangle's edges, and the squared point-to-edge error are then computed. The rectangle satisfying minimum area, maximum proximity to the edges (i.e., minimum point-to-edge distance), and minimum squared edge error is found; evaluation criteria are set and the correct rectangle is selected. The search angle of that rectangle is taken as the ship's heading angle, i.e., the orientation of the circumscribed rectangle characterizes the bow heading.
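A simplified sketch of this heading search that scores candidate rectangles by area only; the text additionally weighs edge proximity and squared edge error, and the angle step here is a placeholder.

```python
import numpy as np

def heading_by_bounding_box(points_2d, step_deg=0.5):
    """Sweep candidate headings; at each angle, rotate the fused contour points
    and measure the axis-aligned bounding box. The angle whose box has the
    smallest area is returned as the bow heading (degrees)."""
    best_angle, best_area = 0.0, np.inf
    for deg in np.arange(0.0, 180.0, step_deg):
        a = np.radians(deg)
        R = np.array([[np.cos(a), -np.sin(a)],
                      [np.sin(a),  np.cos(a)]])
        rotated = points_2d @ R.T
        w = rotated[:, 0].max() - rotated[:, 0].min()
        h = rotated[:, 1].max() - rotated[:, 1].min()
        if w * h < best_area:
            best_area, best_angle = w * h, deg
    return best_angle  # orientation of the circumscribed rectangle
```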

The method of the present invention for calibrating the bow heading based on geographic azimuth while a ship is stationary in a lock has the following technical effects:

1) The invention acquires bow-heading data through video-based ship-edge recognition, laser point-cloud ship-edge recognition, and video/laser point-cloud fusion, and improves the accuracy of the compass heading data through shore-based auxiliary correction.

2) The method is not only efficient but also effectively overcomes the effects of data transmission inside the lock chamber, avoiding scraping accidents caused by bow deviation in the chamber.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the camera and radar installation layout in an embodiment of the present invention.

FIG. 2 is a schematic diagram of the system architecture in an embodiment of the present invention.

FIG. 3 is a schematic diagram of video image acquisition in an embodiment of the present invention.

FIG. 4 is a schematic diagram of lidar data in an embodiment of the present invention.

FIG. 5 is a flow chart of the ship control algorithm of the present invention.

FIG. 6 is a schematic diagram of the lidar monitoring-range calculation.

FIG. 7 is a schematic diagram of the rotation angle α in the world coordinate system.

FIG. 8 is a schematic diagram of the image coordinate system and the pixel coordinate system.

FIG. 9 is a schematic diagram of the Gaussian filter template.

FIG. 10 is a schematic diagram of the homography transformation between two images.

FIG. 11 is a schematic diagram of optimal seam-line fusion.

FIG. 12 is a schematic diagram of a triangular-wave frequency-modulated continuous wave.

FIG. 13 is a schematic diagram of the point-cloud coordinate transformation.

FIG. 14 is a schematic diagram of projecting a three-dimensional point cloud onto a two-dimensional plane.

FIG. 15 is a schematic diagram of pass-through filtering.

FIG. 16(a) is a schematic diagram of the original point cloud;

FIG. 16(b) is a schematic diagram of the point cloud after voxel filtering.

FIG. 17 is a laser point-cloud image of a passenger ship scanned by the lidar from the bow direction.

FIG. 18 is a schematic diagram of the convex hull enclosed by points p0, p1, p3, p10, and p12.

FIG. 19 is a schematic diagram of computing the cosine of the vector angles by the law of cosines.

FIG. 20 is a schematic diagram of optimal seam-line fusion.

FIG. 21(a) is a schematic diagram of spatial feature matching points (feature extraction from image 1);

FIG. 21(b) is a schematic diagram of spatial feature matching points (feature extraction from image 2).

FIG. 22(a) is image 1;

FIG. 22(b) is image 2;

FIG. 22(c) is the stitched image of image 1 and image 2.

FIG. 23 is a schematic diagram of the circumscribed rectangle.

FIG. 24 is a schematic diagram of the search angle of the rectangular box.

DETAILED DESCRIPTION

The present invention identifies the ship's contour with a shore-based lidar and cameras to obtain the ship's bow heading; this heading is then sent to the ship so that it can correct its onboard compass equipment.

Specifically, as shown in FIG. 1 and FIG. 2: FIG. 1 is a schematic diagram of the camera and radar installation layout of this embodiment, and FIG. 2 is a schematic diagram of the system architecture. In this embodiment, the lidar and cameras identify the ship contour through video-based ship-edge recognition and laser point-cloud ship-edge recognition.

The video-based ship-edge recognition proceeds as follows:

First, deep-learning semantic segmentation is applied to the image data from the high-mounted cameras on the lock chamber to identify the ship region, the chamber-wall region, and the water region. Next, the ship-region image is extracted and the edge of the ship's outer contour is obtained. Finally, the contour edge is smoothed. FIG. 3 shows a ship video image collected inside the lock chamber.

The laser point-cloud ship-edge recognition proceeds as follows:

First, the three-dimensional point cloud is projected onto a two-dimensional plane for analysis, which both reduces the influence of noise points and cuts the computational load, shortening the output time. Next, conditional filtering and downsampling remove noise points, and the DBSCAN clustering algorithm segments the point cloud into a set of ship targets. Then the convex hull algorithm extracts the edge of each ship target. As shown in FIG. 4, ship data collected by the lidar determines the ship's edges.

The video and laser point cloud are fused as follows:

First, time alignment: using 1 s as the time base, the video data is aligned with the laser data. Next, spatial alignment: the video coordinates and the laser point-cloud coordinates are both converted into the lock-chamber plane coordinates. Finally, the covariance intersection method fuses the video and laser data.

Bow-heading identification and calibration proceed as follows:

The fused video and laser point-cloud data yield an accurate, complete ship contour, which is identified with the rectangular bounding-box method to obtain the bow heading. The circumscribed rectangle enclosing the ship contour is found; the fitted rectangle is selected according to criteria such as minimum rectangle area, maximum proximity to the edges, and minimum squared edge error; and the orientation of the circumscribed rectangle characterizes the bow heading.

As shown in FIG. 5, the present invention adopts a PID control method: the input signals measured by the sensors are combined, the control system computes a commanded rudder-angle signal, and the ship's rudder blade is turned accordingly, achieving effective control of the ship inside the lock chamber. Integral, proportional, and derivative operations on the difference between the actual output and the preset value produce the control output for the ship's steering gear, which is then sent to the steering gear. Through this process, combined with a dynamic bird's-eye view displayed aboard the ship, control of the ship is achieved.
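A minimal PID sketch of the rudder-control loop described above; the gains are placeholders, not values from the patent.

```python
class RudderPID:
    """Proportional, integral, and derivative terms on the error between the
    preset heading and the measured heading; their sum is the commanded
    rudder angle sent to the steering gear."""
    def __init__(self, kp=1.0, ki=0.1, kd=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd  # placeholder gains
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured, dt):
        error = setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```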

As shown in FIG. 1, the construction layout diagram, the system of this embodiment consists of cameras, lidar, millimeter-wave radar, distribution boxes, network bridges, and other equipment. One lidar and two cameras, with a distribution box, are installed on the south side of the top of the emergency maintenance gate; they are fixed to a stainless-steel pipe of 5.5 cm diameter and 0.1 cm wall thickness mounted on the railing on the gate's south side. In addition, one camera, one millimeter-wave radar, and a small distribution box are installed on each of the utility poles on the two sides of the ship lock.

A wide-area, long-range millimeter-wave radar with a distinctive three-beam (far, medium, near) design acquires, in real time, the position, distance, movement direction, and speed of targets within a longitudinal range of 500 m. A floating structure allows target monitoring at both high and low water levels. The lidar mainly monitors the state of ships when the chamber is at low water. Taking the lidar on the upstream emergency maintenance gate as an example, it is installed rotated 90° about the X-axis and then rotated by a certain angle about the Z-axis; the calculation here uses a rotation of 55°. As FIG. 6 shows, at low water the upstream lidar has a minimum lateral monitoring range of 32 m and a maximum longitudinal range of 294 m; stitched together with the lidars on the two downstream sides, it covers the entire lock chamber.

The video is stitched together after acquisition by multiple cameras. Since video stitching can be viewed as stitching a temporally continuous series of images, the quality of the image-stitching algorithm directly determines the video-stitching result. In this embodiment, the image-stitching algorithm comprises coordinate transformation, image preprocessing, image registration, and image fusion.

During image stitching and video stitching, the input images must be transformed into a common coordinate system before being stitched. The camera's intrinsic and extrinsic parameter matrices therefore realize the conversions among the world, camera, imaging, and pixel coordinate systems, ultimately linking physical points in the real world with image pixels.

① Converting the world coordinate system to the camera coordinate system:

The world coordinate system and the camera coordinate system are both three-dimensional and can be converted into each other by translation and rotation; hence, if a point's world coordinates are known, its camera coordinates can be computed. The rotation of the world coordinate system about the z-axis is shown in FIG. 7.

The coordinate transformation relationship is shown in formulas (1) and (2):

Rotating about the other coordinate axes by the corresponding angles yields the analogous rotation matrices; rotations about the x-axis and y-axis give R2 and R3 in the same way.

The rotation matrix R = R1R2R3 is thus obtained, and with it the coordinates of point P in the camera coordinate system.

② Converting the camera coordinate system to the image coordinate system:

For a point P(Xw, Yw, Zw) in the camera coordinate system, the corresponding projection point in the image-plane coordinate system is p(x, y); the transformation from the camera coordinate system to the image-plane coordinate system follows from similar triangles.

③ Converting the image coordinate system to the image-pixel coordinate system:

The pixel coordinate system and the image coordinate system both lie on the imaging plane; only their origins and units differ. The origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane, and its unit is the millimeter, a physical unit, whereas the unit of the pixel coordinate system is the pixel. The relationship between the two is shown in FIG. 8.

For a point p(x, y) in the image coordinate system, with o(u0, v0) the corresponding origin of the image-pixel coordinate system, the conversion between p's image coordinates and pixel coordinates is as follows:

where dx and dy are the width of one pixel along the x-axis and y-axis, respectively, and (u0, v0) is the camera's principal point, i.e., the pixel coordinates of the intersection o of the optical axis with the image plane. Combining the formulas above, the conversion between a world coordinate point (XW, YW, ZW) and a pixel coordinate point (u, v) is derived as:

where the intrinsic parameter matrix of the high-definition camera can be obtained by measurement, the extrinsic parameters and the projection matrix T describe the camera pose; after calibration the projection matrix P is obtained, and from the pixel coordinates of a target in a captured image its real-world position can be back-calculated.

Image acquisition is easily affected by camera hardware, the environment, and other factors, producing distortion, exposure differences, and blur; preprocessing improves image quality to a degree and lays the groundwork for accurate registration and fusion. The present invention uses a combination of median filtering and Gaussian filtering for noise removal. The median of the gray values of all pixels covered by the filter template replaces the gray value of the center pixel, keeping neighboring gray values close to the true values and filtering out image noise. Let the template be of size M*M with pixel gray values f1, f2, f3, ..., fM*M; after median filtering, the gray value F at the center is:

F = med{f1, f2, f3, ..., fM*M}    (10)

When processing noise-contaminated images, median filtering smooths noise while preserving sharpness, but for images corrupted by Gaussian noise the filtered pixels become discontinuous. Gaussian filtering is then applied: the weighted average of the gray values of the pixels covered by the Gaussian template replaces the gray value at the center. Suppose a 3*3 template is used for smoothing, with the template center as the sampling origin and the surrounding pixels sampled; the template coefficients at all positions follow from the Gaussian function. With σ = 1.5, normalizing the results gives the Gaussian template shown in FIG. 9; the weighted average of the neighborhood pixel values then replaces the center pixel value. Because the Gaussian function is rotationally symmetric, Gaussian filtering smooths the image uniformly in all directions, preserving the direction of the original edges. Moreover, the Gaussian peak lies at the origin and the function decays with distance, so the center pixel is not unduly influenced by distant pixels, protecting the original feature points and edges.

Image registration uses the feature matches in the images to obtain an image transformation model that maps the target image into the coordinate system of the reference image, aligning the two. The registration algorithm directly determines the accuracy of the stitching result and is central to image stitching. Registration methods divide into gray-level-based and feature-based; gray-level methods are not robust to translation, scaling, rotation, illumination, and viewpoint change, and are computationally heavy and slow, so they are not widely used. Feature-based registration has developed rapidly in recent years; its main stages are feature extraction, feature matching, and parameter estimation of the image transformation model.

The present invention uses the SIFT algorithm for feature extraction: extreme points found in scale space serve as candidate feature points, from which points that remain stable under changes of image scale, position, and rotation are selected as the final feature points. These features are highly salient and strongly robust to noise, lighting, and viewpoint changes.

Next, the KNN algorithm pairs the corresponding feature points found in the input images; the resulting matches are used to compute the image transformation model. Each feature point has a unique descriptor; if the descriptors of two feature points are highly similar they are declared a matching pair, and otherwise not. First, the distances are computed from the descriptors of all feature points in image I1's feature set A′ = {a1, a2, a3, ..., an} to the descriptors of all feature points in image I2's feature set B′ = {b1, b2, b3, ..., bn}. Then, for each feature point, the K nearest points are chosen as candidate matches; each feature point keeps its two nearest candidates, and the quotient of the nearest and second-nearest distances gives a ratio. When this ratio indicates a sufficiently distinctive nearest match, the pair at the nearest distance is selected as a final feature match.

Finally, the coordinate relationship between the feature matches of the images to be registered is expressed with a mathematical model, aligning the overlap region and reducing blur and ghosting. The present invention uses a projective transformation model to solve for the homography. As shown in FIG. 10, suppose image 1 and image 2 can be projected into a common projection plane through homography matrices H1 and H2, respectively, with only rotation and translation relating them; the transformation between the two images can then be merged into a single homography matrix H3. Let a point p in the image have homogeneous coordinates (x, y, 1)^T; after the projective transformation it becomes p′ = (x′, y′, 1)^T, with transformation expression:

由于投影变换公式中存在八个未知数,因此最少需要四对特征匹配点进行影变换矩阵参数求解。Since there are eight unknowns in the projection transformation formula, at least four pairs of feature matching points are required to solve the projection transformation matrix parameters.
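A sketch of this model-fitting step, reusing kp1, kp2 and good from the sketches above; the RANSAC option is a common robust-estimation choice assumed for the example, not a step prescribed by the invention.

```python
import numpy as np
import cv2

# Pixel coordinates of the matched feature points in each image.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# The homography has eight free parameters, so at least four point
# pairs are required; RANSAC discards mismatched pairs.
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp image 1 into image 2's coordinate frame, ready for blending.
h, w = img2.shape[:2]
warped = cv2.warpPerspective(img1, H, (w, h))
```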

When the images to be stitched differ in exposure or contain registration errors, directly overlaying the transformed images produces ghosting in the stitched result. The present invention intends to use an optimal seam fusion algorithm: a seam is found in a region where the image structures are similar, and the parts of the input images lying on either side of the seam are taken to form the final fusion result, as shown in Figure 11. This removes the brightness difference around the seam, corrects small residual errors in the registration result, and smooths the overlapping region.
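One standard way to realise such a seam is dynamic programming over a pixel-difference cost in the overlap region; the single-channel sketch below illustrates that idea under simplified assumptions and is not necessarily the exact criterion used by the invention.

```python
import numpy as np

def optimal_seam(overlap1, overlap2):
    """Vertical seam through the shared overlap of two aligned images.

    overlap1, overlap2: float arrays (H, W) covering the same region.
    The seam runs top to bottom through pixels where the two images
    agree best, i.e. where the squared difference is smallest.
    """
    cost = (overlap1 - overlap2) ** 2
    acc = cost.copy()
    h, w = cost.shape
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            acc[r, c] += acc[r - 1, lo:hi].min()  # cheapest of 3 parents

    seam = np.empty(h, dtype=int)
    seam[-1] = int(acc[-1].argmin())              # cheapest endpoint
    for r in range(h - 2, -1, -1):                # backtrack upwards
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(acc[r, lo:hi].argmin())
    return seam

def blend_along_seam(overlap1, overlap2, seam):
    """Take overlap1 left of the seam and overlap2 right of it."""
    out = overlap2.copy()
    for r, c in enumerate(seam):
        out[r, :c] = overlap1[r, :c]
    return out
```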

A video can be regarded as a temporally continuous sequence of frames, so video stitching builds on image stitching. For dynamic video stitching, however, the cameras differ considerably in position and pose, and the scene contains multiple objects at different depths, so there is substantial parallax between corresponding frames of the videos to be stitched; stitching corresponding frames with image-stitching techniques alone produces jitter in the result. To obtain a stable stitched video, video stabilization must therefore also be considered. On top of image stitching, the present invention intends to use a semantic-segmentation-based background information extraction algorithm: the camera motion path is estimated from the temporal feature matches in the background, excluding interference from the moving foreground and giving a smoother stabilization effect; the image transformation model is computed from the spatial feature matches in the background, achieving seamless stitching there; a multi-perception seam fusion algorithm then finds a seam in the background region, and the images are stitched together along it, improving the stitching accuracy of corresponding frames in the presence of parallax.
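As an illustrative sketch of this background-restricted motion estimation, the function below assumes a background mask already produced by a semantic-segmentation step (the patent does not tie the method to a specific network) and estimates inter-frame camera motion from background points only.

```python
import cv2

def camera_motion(prev_gray, cur_gray, background_mask):
    """Inter-frame camera motion estimated from background points only.

    background_mask: uint8 mask (255 = background) from a semantic
    segmentation step; the moving foreground (e.g. the ship) is masked
    out so it cannot bias the motion estimate.
    """
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8,
                                  mask=background_mask)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    ok = status.ravel() == 1
    # Similarity transform (rotation + translation + scale) between frames;
    # accumulating it frame by frame gives the camera motion path.
    M, _ = cv2.estimateAffinePartial2D(pts[ok], nxt[ok], method=cv2.RANSAC)
    return M  # 2x3 matrix
```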

Take a triangular frequency-modulated continuous wave as an example, as shown in Figure 12: the light curve is the transmitted signal frequency and the dark curve is the received signal frequency, the sweep period is T and the sweep bandwidth is B. The echo reflected by the ship arrives at the shore-based radar with a delay, and within the triangular frequency sweep the ship's range can be measured on both the rising edge and the falling edge.

If there is no Doppler shift, the frequency difference measured during the rising edge equals that measured during the falling edge. For a moving ship target, the frequency differences on the rising and falling edges differ, and both range and speed can be obtained from the two values. Writing △f1 for the rising-edge and △f2 for the falling-edge frequency difference, they satisfy Kr·△t = (△f1 + △f2)/2 and fd = (△f2 − △f1)/2, so the target range R is:

R = c·△t/2 = c·(△f1 + △f2)/(4·Kr)

and the target speed v is:

v = λ·fd/2 = λ·(△f2 − △f1)/4

where △f1 and △f2 are the frequency differences to be measured, Kr is the known frequency-modulation slope, fd is the Doppler frequency shift of the moving target, c is the speed of light, λ is the signal wavelength, and △t is the echo time difference (round-trip delay).
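A small numeric check of these two relations; all parameter values below are illustrative assumptions, not figures taken from the invention.

```python
C = 3.0e8            # speed of light, m/s
LAM = C / 77e9       # wavelength of an assumed 77 GHz carrier, m
B = 150e6            # sweep bandwidth, Hz
T = 2e-3             # full triangle period, s
KR = B / (T / 2)     # FM slope of one edge, Hz/s

df1, df2 = 95e3, 105e3           # measured beat frequencies, Hz
R = C * (df1 + df2) / (4 * KR)   # range: c*(df1 + df2)/(4*Kr) -> 100.0 m
fd = (df2 - df1) / 2             # Doppler shift, Hz
v = LAM * fd / 2                 # radial speed: lambda*fd/2 -> ~9.74 m/s

print(f"R = {R:.1f} m, v = {v:.2f} m/s")
```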

Claims (7)

Translated from Chinese
1. A method for calibrating a ship's heading based on geographic orientation while the ship is stationary in a ship lock, characterized by comprising the following steps:

Step 1: identify the ship region with a camera, extract the ship-region image, and obtain the edge of the ship's outer-contour image;

Step 2: obtain a three-dimensional point cloud of the ship by lidar, project the three-dimensional point cloud onto a two-dimensional plane, segment the point cloud with the DBSCAN clustering algorithm to form a set of ship targets, and extract the edge of each ship target;

Step 3: fuse the video data with the laser point-cloud data;

Step 4: obtain an accurate and complete ship contour, and determine the ship's heading with a rectangular bounding-box recognition method.

2. The method according to claim 1, characterized in that step 1 comprises the following steps:

S1.1: first, using the image data of the high-position camera on the lock chamber, perform semantic segmentation by deep learning to identify the ship region, the lock-chamber wall region and the water region;

S1.2: second, extract the ship-region image and obtain the edge of the ship's outer-contour image;

S1.3: finally, smooth the edge of the ship's outer-contour image.

3. The method according to claim 1, characterized in that step 2 comprises the following steps:

S2.1: first, project the three-dimensional point cloud onto a two-dimensional plane, specifically: convert the lidar coordinate system, i.e. the coordinate system whose origin is the lidar centre, into the earth coordinate system; from the pitch angle φ, roll angle θ and heading angle ψ measured when the lidar is installed, compute the rotation matrices Rx,φ, Ry,θ and Rz,ψ of the laser point cloud about the x, y and z axes; then convert the lidar coordinates into the earth coordinate system, where (x, y, z) are the point-cloud coordinates in the lidar frame and (X, Y, Z) are the corresponding coordinates after conversion into the ship-lock frame; finally project the three-dimensional laser point cloud onto the XY plane, i.e. set Z = 0;

S2.2: second, filter out noise by conditional filtering and downsampling, specifically: filter by preset conditions, checking whether each point lies within the prescribed range and discarding it otherwise; from the real-scene data of the lock waters, set thresholds on the X and Y coordinate values, keeping points within the thresholds and deleting points outside them; and segment the point cloud with the DBSCAN clustering algorithm;

S2.3: extract the edge of each ship target with a convex-hull algorithm, specifically the Graham scan:

(1) take the normal direction of lock entry as the y-axis and the direction along the gate perpendicular to the y-axis as the x-axis, and place all points of the laser point cloud in the xOy plane;

(2) for the laser point cloud of any ship, choose among all points the point H with the smallest y-coordinate as the base point; if several points share the minimum y-coordinate, choose the one with the smallest x-coordinate; points with identical coordinates are excluded;

(3) sort the remaining points p by the angle between the vector <H, p> and the x-axis; when the angles are ordered from large to small the scan is clockwise, otherwise counterclockwise; the cosine of each angle is obtained from the law of cosines; with base point H, the order of increasing angle is H, K, C, D, L, F, G, E, I, B, A, J, and the scan below proceeds counterclockwise;

(4) the segment <H, K> certainly lies on the hull; C is added next, and the segment <K, C> is provisionally taken to lie on the hull, since for the three points H, K, C alone the hull consists of exactly these points; but when D is then added, it is the segment <K, D> that lies on the hull, so <K, C> is discarded and C cannot be a hull point;

(5) that is, whenever a point is added, it must be checked whether the preceding segment still lies on the hull; starting from the base point, adjacent segments of the hull must turn in a consistent direction, opposite to the scan direction; if a newly added point reverses the turning direction between the new segment and the previous one, the previous point cannot lie on the hull; in implementation this is decided by the vector cross product: let the newly added point be pn+1, the previous point pn and the one before it pn−1; in a clockwise scan, if the cross product of {pn−1, pn} and {pn, pn+1} is positive (in a counterclockwise scan, negative), the previous point is deleted; the deletion backtracks, removing all earlier points whose cross product has the reversed sign, after which the new point is added to the hull;

when K is added, the segment <H, C> would have to rotate clockwise to reach <H, K>, so C is not on the hull, is deleted, and K is kept; when D is then added, the segment <K, D> rotates counterclockwise relative to <H, K>, so D is kept;

the scan proceeds in this way until all points of the set have been traversed, which yields the convex hull.

4. The method according to claim 1, characterized in that step 3 comprises the following steps:

S3.1: first, time alignment: align the video data with the laser point-cloud data on a 1 s time base;

S3.2: second, spatial alignment: convert both the video coordinates and the laser point-cloud coordinates into lock-chamber plane coordinates;

S3.3: third, fuse the video data with the laser data by the covariance intersection method, specifically: covariance intersection addresses the fusion of error-bearing data; when the lidar and the video measure the same ship, two different measurements A and B are obtained whose correlation is uncertain; the covariance intersection matrix CI of A and B is computed and the weight distribution that minimizes the trace of CI is found, after which the fused data is obtained by weighting the measurements A and B, with the specific formula:

P_CI^(-1) = ω1·P1^(-1) + ω2·P2^(-1)
x̂ = P_CI·(ω1·P1^(-1)·A + ω2·P2^(-1)·B)

where x̂ is the fused data, P1 is the estimation-error variance of the lidar, P2 is the estimation-error variance of the video, P_CI is the estimation-error variance after fusion, and ω1 and ω2 are weight coefficients with 0 ≤ ω1 ≤ 1, 0 ≤ ω2 ≤ 1 and ω1 + ω2 = 1, obtained by minimizing the index function J = tr(P_CI), where tr(P_CI) is the trace of the matrix P_CI.

5. The method according to claim 4, characterized in that in step 3 the video is stitched from the acquisitions of multiple cameras, the stitching method comprising the following steps: coordinate transformation, image preprocessing, image registration and image fusion;

in the coordinate transformation: the conversion between a world coordinate point (XW, YW, ZW) and a pixel coordinate point o(u0, v0) is

zc·[u, v, 1]^T = K·[R T]·[XW, YW, ZW, 1]^T = P·[XW, YW, ZW, 1]^T

where the intrinsic matrix K of the high-definition camera (containing fx, fy, u0, v0) can be obtained by measurement; [R T] are the camera extrinsic parameters and P is the projection matrix, obtained after calibration, so that the real-world position of a target can be back-computed from its pixel coordinates in the captured image; zc is a position variable related to the camera's mounting height and mounting attitude angles; fx and fy are internal camera parameters, and R is the rotation matrix from the world coordinate system to the camera coordinate system;

the image preprocessing comprises:

a: median filtering: the median of the grey values of all pixels covered by the filter template is used as the grey value of the centre pixel, so that neighbouring grey values approach the true grey value and the noise in the image is filtered out; with a template of size M*M and grey values f1, f2, f3, ..., fM*M within the template, the grey value F at the centre after median filtering is

F = med{f1, f2, f3, ..., fM*M} (10)

where f1, f2, f3, ..., fM*M are the grey values of the pixels within the template;

b: Gaussian filtering: the grey value at the centre is replaced by the weighted average of the grey values of the pixels covered by the Gaussian template; suppose a 3*3 filter template is used for image smoothing, with the template centre as the sampling origin and the surrounding pixels sampled; the template coefficients at all positions are obtained from the Gaussian function; with σ = 1.5 in the Gaussian function, the results are normalized to give the Gaussian template, and the weighted average of the neighbouring pixel values then replaces the pixel value at the template centre;

the image registration is feature-based and comprises feature extraction, feature matching and parameter calculation of the image transformation model:

1) the SIFT algorithm is used for feature extraction: extrema found in scale space serve as candidate feature points, and those invariant to image scale, position and rotation are selected as the final feature points;

2) the KNN algorithm pairs the corresponding feature points found in the input images, and the resulting matches are used to compute the image transformation model, specifically: each feature point carries a unique descriptor, and two feature points are accepted as a matching pair only if their descriptors are highly similar; first, the distances are computed from the descriptors of all feature points in the set A′={a1,a2,a3,...,an} of image I1 to the descriptors of all feature points in the set B′={b1,b2,b3,...,bn} of image I2; then, for each feature point, the K nearest points are taken as candidate matches, the two nearest points are kept as match candidates, and the ratio between the nearest and the second-nearest distance is formed; when this ratio shows the nearest match to be sufficiently distinctive, the nearest pair is selected as the final feature match;

3) the coordinate relationship between the matched feature points of the images to be registered is expressed by a mathematical model: a projective transformation model is used to solve for the homography matrix; suppose image 1 and image 2 can be projected into the same projection plane through homography matrices H1 and H2 respectively, with only rotation and translation between the two images, so that the transformation between them can be merged into a single homography matrix H3; let p be an image point with homogeneous coordinates (x, y, 1)^T that becomes p′ = (x′, y′, 1)^T after the projective transformation, namely

x′ = (a1·x + a2·y + a3) / (a7·x + a8·y + 1)
y′ = (a4·x + a5·y + a6) / (a7·x + a8·y + 1)

where x′ and y′ are the pixel coordinates after projection and a1~a8 are the homography parameters, determined by the camera intrinsics, rotation, translation and the parameters of the projection plane;

4) the optimal seam fusion algorithm finds a seam in a region of similar image structure, and the parts of the input images lying on either side of the seam form the final fusion result.

6. The method according to claim 5, characterized in that in step 3, on the basis of image stitching, a semantic-segmentation-based background information extraction algorithm is used, and the camera motion path is estimated from the temporal feature matches in the background;

the image transformation model is computed from the spatial feature matches in the background, achieving seamless stitching there;

the optimal seam fusion algorithm then finds a seam in the background region, and the images are stitched together along the seam.

7. The method according to claim 1, characterized in that in step 4 the circumscribed rectangle enclosing the ship contour is sought, the fitted rectangle is selected under the criteria of minimum rectangle area, maximum closeness to the edges and minimum squared error to the edges, and the orientation of the circumscribed rectangle characterizes the ship's heading, specifically:

all directions are traversed with a certain angular search step to find the circumscribed rectangle enclosing the ship contour; in each iteration, a rectangle oriented in that direction and containing all the points of the cloud is found, and the rectangle area, the distances from the points to the rectangle edges and the squared point-to-edge errors are then computed; the rectangle satisfying minimum area, maximum closeness to the edges (i.e. minimum point-to-edge distance) and minimum squared edge error is found, judgment criteria are set, and the correct rectangle is selected; the search angle of this rectangle is taken as the ship's heading angle, i.e. the orientation of the circumscribed rectangle characterizes the ship's heading.
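For illustration, a simplified sketch of the rectangle search described in step 4 and claim 7 follows; it scores candidate rectangles by area alone, whereas the claim additionally uses the edge-closeness and squared-error criteria, and the angular step size is an assumed value.

```python
import numpy as np

def heading_from_points(xy, step_deg=0.5):
    """Rotating search for the circumscribed rectangle of a ship contour.

    xy: (N, 2) array of fused contour points in lock-plane coordinates.
    Returns the search angle (degrees, modulo 180) whose axis-aligned
    bounding rectangle has minimal area; the orientation of that
    rectangle characterizes the ship's heading.
    """
    best_angle, best_area = 0.0, np.inf
    for deg in np.arange(0.0, 180.0, step_deg):
        t = np.deg2rad(deg)
        rot = np.array([[np.cos(t), -np.sin(t)],
                        [np.sin(t),  np.cos(t)]])
        p = xy @ rot.T                      # points in the candidate frame
        w = p[:, 0].max() - p[:, 0].min()   # rectangle width
        h = p[:, 1].max() - p[:, 1].min()   # rectangle height
        if w * h < best_area:
            best_area, best_angle = w * h, deg
    return best_angle
```

The point-to-edge distance and squared-error terms of claim 7 would enter as additional scores inside the same loop.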
CN202311242641.3A | priority 2023-09-25 | filed 2023-09-25 | Method for calibrating bow direction based on geographic azimuth under ship static condition in ship lock | Pending | publication CN117974773A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202311242641.3A (CN117974773A, en) | 2023-09-25 | 2023-09-25 | Method for calibrating bow direction based on geographic azimuth under ship static condition in ship lock

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202311242641.3A (CN117974773A, en) | 2023-09-25 | 2023-09-25 | Method for calibrating bow direction based on geographic azimuth under ship static condition in ship lock

Publications (1)

Publication Number | Publication Date
CN117974773A (en) | 2024-05-03

Family

ID=90852032

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202311242641.3A (CN117974773A, en; Pending) | Method for calibrating bow direction based on geographic azimuth under ship static condition in ship lock | 2023-09-25 | 2023-09-25

Country Status (1)

Country | Link
CN (1) | CN117974773A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN118753744A (en)* | 2024-07-23 | 2024-10-11 | 滨沅国科(秦皇岛)智能科技股份有限公司 | Ship loader anti-collision system and method for cabin detection based on three-dimensional point cloud
CN119741579A (en)* | 2025-03-04 | 2025-04-01 | 天津财经大学 | Three-dimensional measurement and simulation analysis system and method based on multi-sensor fusion
CN119919595A (en)* | 2025-04-01 | 2025-05-02 | 浙江华是科技股份有限公司 | A method and system for completing the back side of a ship point cloud
CN119919595B (en)* | 2025-04-01 | 2025-06-17 | 浙江华是科技股份有限公司 | Ship point cloud backside completion method and system


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
