
Vehicle leading device, vehicle leading system based on machine vision and interlocking signal, and vehicle leading method

Info

Publication number
CN118753347A
Authority
CN
China
Prior art keywords
vehicle
distance
track
reserved
car
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411054750.7A
Other languages
Chinese (zh)
Other versions
CN118753347B (en)
Inventor
杜怡曼
张屹
赵怡洁
董怀志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Southwest Jiaotong University Shengyang Technology Co ltd
Original Assignee
Beijing Southwest Jiaotong University Shengyang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Southwest Jiaotong University Shengyang Technology Co ltd
Priority to CN202411054750.7A
Publication of CN118753347A
Application granted
Publication of CN118753347B
Status: Active
Anticipated expiration


Abstract

The present invention relates to the field of railway yard shunting operations, and in particular to a vehicle leading device, and to a vehicle leading system and a vehicle leading method based on machine vision and interlocking signals. The vehicle leading system of the present invention comprises the vehicle leading device, a data service platform and a mobile operation terminal. By capturing the road conditions and signal status ahead of the own car in real time and performing video recognition and analysis, the invention realizes automatic identification of routes, automatic detection of foreign objects and accurate distance measurement, and transmits the real-time video stream and the analysis results to the data service platform. The data service platform further checks and confirms the video stream and the analysis results against interlocking signals, such as the route and signal status from the interlocking system, and transmits the video stream, route and signal status information to the shunting crew and the driver through the mobile operation terminal, directing the driver to operate the locomotive safely. The leading process is thus controlled automatically and accurately, which improves operating efficiency while reducing the labor intensity of the shunting personnel.

Description

A vehicle leading device, a vehicle leading system based on machine vision and interlocking signals, and a vehicle leading method

Technical Field

The present invention relates to the field of railway yard shunting operations, and in particular to a vehicle leading device, and to a vehicle leading system and a vehicle leading method based on machine vision and interlocking signals.

Background Art

In railway yard shunting, push shunting is the most basic mode of operation: the shunting locomotive is positioned behind the cars in the direction of travel and uses its power to push them forward. Because the locomotive driver cannot see the track and signals ahead clearly during a push shunting movement, under the existing operating organization and technical equipment a car leader must ride on the leading pushed car. His task is to confirm for the driver whether the signals ahead are open, to observe the road conditions, and to judge the ten-, five- and three-car distances during the push. Whatever the weather, be it wind, rain, severe cold or heat, the car leader has to hang on to the car whenever a push movement takes place, which is physically very demanding. In addition, while riding, and depending on personal factors and external conditions, the car leader may fall from the rolling stock, miss a step when getting on or off, be struck by lineside equipment such as signals, catenary masts or high platforms, or be injured when the shunted cars come into conflict with road vehicles at a level crossing, all of which can lead to personal safety accidents.

To free the car leader from riding on the cars, the shunting locomotive driver must be able to obtain automatically the environment ahead of the pushed cars, the signal status, information about foreign objects, and the distance to them. A small number of studies have proposed ideas at the technical and equipment level for freeing the car leader from this task. One idea is to bring the interlocking signals of the interlocking system on board: although interlocking signals reflect signal status and track occupancy, they cannot indicate whether a foreign object has intruded into the clearance gauge, nor can they alone give the distance between the leading pushed car and a vehicle or foreign object ahead. There are also video leading systems that transmit video of the area ahead of the push into the cab, but a camera can only replace the human eye, not the human brain: personnel still have to watch the screen, the labor intensity is high, omissions are likely, and distance judgments are inaccurate. Some researchers have tried radar ranging systems in shunting operations on mining railways to provide an effective reference for controlling locomotive speed; trials on dead-end tracks showed that the safety factor of shunting can be improved, but the lidar equipment is bulky and cannot meet the portability requirement for leading equipment, so the system is not practical to deploy. As for visual ranging, existing research and experiments concentrate on road vehicles, especially autonomous driving, and relatively little work addresses ranging for railway push shunting. Moreover, most existing visual ranging schemes use a single ranging model, namely ranging based on the retained-car target; they do not consider scenes in which the leading pushed car is very far from or very close to the retained cars, and they lack a fusion method based on multiple reference ranging models, so there is still room to improve ranging accuracy and robustness.

Summary of the Invention

Based on the above technical problems, the present invention proposes a vehicle leading device, and a vehicle leading system and a vehicle leading method based on machine vision and interlocking signals.

In order to solve the above technical problems, the specific technical solutions adopted by the present invention are summarized as follows:

A vehicle leading device comprises a video acquisition module and a machine vision intelligent analysis module. The video acquisition module comprises a main camera and an auxiliary camera; when the video-ranging vehicle leading device is working, the optical axis of the main camera is parallel to the ground, and the optical axis of the auxiliary camera is set at an adjustable angle to the ground. The machine vision intelligent analysis module comprises a unit for measuring the distance between the own car and the retained car based on track spacing, a unit for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, and a unit for measuring the distance between the coupler of the own car and the coupler of the retained car. The video acquisition module is connected to the machine vision intelligent analysis module.

Furthermore, according to how far the own car is from the retained car or a foreign object, the present invention proposes ranging methods for three different application scenarios: long range, medium range and short range. Long range means that the distance between the own car and the retained car is 100 m to 200 m, medium range means that the distance is 2 m to 100 m, and short range means that the distance is within 2 m.
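The three ranges map naturally onto a simple model selector. The sketch below only illustrates that dispatch logic under the thresholds stated above; the class and function names are hypothetical and not part of the patent.

```python
from enum import Enum

class RangeBand(Enum):
    LONG = "long"      # 100 m - 200 m: track-width model
    MEDIUM = "medium"  # 2 m - 100 m: vanishing-point + detection-frame model
    SHORT = "short"    # < 2 m: coupler-to-coupler model

def select_range_band(last_distance_m: float) -> RangeBand:
    """Pick the ranging model from the most recent distance estimate (hypothetical helper)."""
    if last_distance_m < 2.0:
        return RangeBand.SHORT
    if last_distance_m <= 100.0:
        return RangeBand.MEDIUM
    return RangeBand.LONG
```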

Furthermore, for the long-range application scenario, the present invention proposes a method for measuring the distance between the own car and the retained car based on track spacing, using the vehicle leading device. The method is applied in the unit for measuring the distance between the own car and the retained car based on track spacing, and specifically comprises the following steps:

Step 1. The machine vision intelligent analysis module receives and reads the video images captured by the main camera of the video acquisition module;

Step 2. The track anchor line detection unit of the machine vision intelligent analysis module detects the tracks in the video image, obtains the pixel coordinate pairs of all identifiable tracks in the image, and determines whether each track is the main track or a side track;

Step 3. If the track in step 2 is determined to be the main track, the distance D between the own car and the retained car is calculated from the width of the main track.

Furthermore, the distance D between the own car and the retained car is calculated from the width of the main track as

D = fx · Lline / Lpixel

where Lline is the real width of the main track, fx is the focal length of the main camera, and Lpixel is the pixel width of the main track in the image.

Furthermore, for the medium-range application scenario, the present invention proposes a method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, using the vehicle leading device. The method is applied in the unit for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, and specifically comprises the following steps:

Step 1. The machine vision intelligent analysis module receives and reads the video images captured by the main camera of the video acquisition module;

Step 2. The track anchor line detection unit of the machine vision intelligent analysis module detects and analyzes the track anchor lines in the video image and obtains the vanishing point coordinates of the track anchor lines;

Step 3. Based on the vanishing point coordinates of the track anchor lines obtained in step 2, the attitude angles of the main camera are corrected to obtain the yaw angle and the pitch angle of the main camera;

Step 4. The unit for measuring the distance between the own car and the retained car in the machine vision intelligent analysis module detects the retained car in the video image and performs ranging based on the midpoint of the bottom edge of the retained-car detection frame to obtain the distance d between the own car and the retained car;

Step 5. The track detection unit in the machine vision intelligent analysis module detects the track and determines whether it is straight. If the track is straight, the distance obtained in step 4 is corrected on the basis of the pitch angle of the main camera from step 3 to give the corrected distance d1 between the own car and the retained car, and d1 is then corrected on the basis of the yaw angle of the main camera from step 3 to give the twice-corrected distance D1 between the own car and the retained car. If the track is not straight, the yaw angle of the main camera is set to zero, and the distance from step 4 is corrected only on the basis of the pitch angle of the main camera from step 3, giving the corrected distance d1 between the own car and the retained car.

Furthermore, the vanishing point coordinates of the track anchor lines in step 2 are the intersection point of the track anchor lines detected in the video image. Each track anchor line is represented as a sequence of N points, P = {(x0, y0), (x1, y1), ..., (xN-1, yN-1)}, where the y coordinates of the anchor-line points are fixed and uniformly sampled along the vertical axis of the video image. The least-squares method is used to find the slope m and intercept c of the best-fitting anchor line, giving the fitted equation y = m·x + c, and the intersection coordinates (u, v) of the two track anchor lines are then

u = (c2 − c1) / (m1 − m2), v = m1·u + c1

where m1 and m2 are the slopes of the two track anchor lines and c1 and c2 are their intercepts.

Furthermore, in step 3 of the method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, the yaw angle γ of the main camera is determined from the vanishing point abscissa, where W is the width of the imaging plane, u1 is the abscissa of the track anchor-line vanishing point when both a pitch angle and a yaw angle are present, and fx is the equivalent focal length along the x axis of the camera coordinate system; the camera coordinate system takes the optical center of the main camera as its origin, with the x axis positive to the right, the y axis positive downward and the z axis positive forward. The pitch angle θ of the main camera is determined from the vanishing point ordinate, where H is the height of the imaging plane, v1 is the ordinate of the track vanishing point when both a pitch angle and a yaw angle are present, and fy is the equivalent focal length along the y axis of the camera coordinate system.
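The patent's exact expressions for γ and θ appear as figures that did not survive extraction. The sketch below fits the two anchor lines by least squares, intersects them to get the vanishing point, and then applies the common pinhole-model approximation γ = arctan((u − W/2)/fx), θ = arctan((v − H/2)/fy); this is an assumption consistent with the variables listed above, not a verbatim reproduction of the patent's formulas.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of y = m*x + c to a list of (x, y) anchor-line points."""
    xs, ys = np.array(points, dtype=float).T
    m, c = np.polyfit(xs, ys, deg=1)
    return m, c

def vanishing_point(left_pts, right_pts):
    """Intersection (u, v) of the two fitted rail anchor lines."""
    m1, c1 = fit_line(left_pts)
    m2, c2 = fit_line(right_pts)
    u = (c2 - c1) / (m1 - m2)
    v = m1 * u + c1
    return u, v

def camera_angles(u, v, fx, fy, W, H):
    """Approximate yaw and pitch (radians) from the vanishing point offset,
    assuming the principal point sits at the image center (W/2, H/2)."""
    yaw = np.arctan((u - W / 2.0) / fx)
    pitch = np.arctan((v - H / 2.0) / fy)
    return yaw, pitch
```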

Furthermore, the distance d between the own car and the retained car in step 4 of the method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame is

d = Hc / tan(μ)

where μ is the angle between the optical axis of the main camera and the line connecting the center of the bottom edge of the retained-car detection frame with the optical center of the main camera, and Hc is the height of the main camera above the ground.

Furthermore, in step 5 of the method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, the corrected distance d1 between the own car and the retained car is obtained from Hc, μ and θ, where d1 is the distance between the own car and the retained car corrected on the basis of the pitch angle of the main camera, Hc is the height of the main camera above the ground, μ is the angle between the optical axis of the main camera and the line connecting the center of the bottom edge of the retained-car detection frame with the optical center of the main camera, and θ is the pitch angle of the main camera.

The twice-corrected distance D1 between the own car and the retained car is obtained by further correcting d1 for the yaw angle and for the angle formed, in the vertical plane, by the optical axis and the line connecting the midpoint of the bottom edge of the retained-car detection frame with the optical center of the main camera; this angle is computed from uc, the abscissa of the center point of the bottom edge of the retained-car detection frame in the camera imaging plane, and uo, the coordinate of the origin of the image coordinate system in the pixel coordinate system.
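A minimal sketch of the medium-range model as described above, assuming the flat-ground pinhole geometry that the text implies: the raw estimate uses d = Hc/tan(μ), the pitch correction adds θ to the depression angle, and the yaw correction divides by the cosine of the combined horizontal angle. The exact correction formulas in the patent are given as figures that are not reproduced here, so treat this as one interpretation, not the patent's definitive equations.

```python
import math

def mid_range_distance(v_bottom, u_center, fx, fy, cx, cy, cam_height,
                       pitch=0.0, yaw=0.0, straight_track=True):
    """Estimate the own-car / retained-car distance from the bottom-edge midpoint
    (u_center, v_bottom) of the retained-car detection box.

    Assumptions (not verbatim from the patent): principal point at (cx, cy),
    detection-box bottom edge at rail level, flat ground ahead of the camera.
    """
    # Depression angle of the ray through the box bottom edge, relative to the optical axis.
    mu = math.atan((v_bottom - cy) / fy)
    d = cam_height / math.tan(mu)                 # uncorrected distance
    d1 = cam_height / math.tan(mu + pitch)        # pitch-corrected distance
    if not straight_track:
        return d1                                 # yaw treated as zero on curved track
    phi = math.atan((u_center - cx) / fx)         # horizontal angle to the target
    return d1 / math.cos(yaw + phi)               # yaw-corrected (twice-corrected) distance
```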

Furthermore, for the short-range application scenario, the present invention proposes a method for measuring the distance between the coupler of the own car and the coupler of the retained car, using the vehicle leading device. The method is applied in the unit for measuring the distance between the coupler of the own car and the coupler of the retained car, and specifically comprises the following steps:

Step 1. The machine vision intelligent analysis module receives and reads the video images captured by the auxiliary camera of the video acquisition module;

Step 2. The track anchor line detection unit of the machine vision intelligent analysis module detects and analyzes the track anchor lines in the video image, obtains the vanishing point coordinates of the track anchor lines, and sets a region of interest based on these vanishing point coordinates;

Step 3. The unit for measuring the distance between the coupler of the own car and the coupler of the retained car in the machine vision intelligent analysis module detects the couplers in the video image in order to identify the coupler of the own car and the coupler of the retained car;

Step 4. The unit for measuring the distance between the coupler of the own car and the coupler of the retained car tracks the movement of the detected couplers and generates a tracking id for each coupler, and determines whether each coupler lies within the region of interest set in step 2: a coupler located within the region of interest is the coupler of the retained car, and its tracking id is recorded (see the sketch after step 5);

Step 5. Based on the retained-car coupler identified in step 4, the unit for measuring the distance between the coupler of the own car and the coupler of the retained car in the machine vision intelligent analysis module analyzes and calculates the distance D2 between the coupler of the own car and the coupler of the retained car.
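The association of detections with the retained car in step 4 can be sketched as a simple point-in-region test. The region-of-interest geometry below (a band between the two rail anchor lines, below the vanishing point) and the helper names are assumptions for illustration; the patent does not spell out the exact region construction.

```python
def inside_roi(box, left_line, right_line, vanish_v):
    """Decide whether a tracked coupler detection box lies inside the ROI.

    box        : (x1, y1, x2, y2) pixel coordinates of the coupler detection box
    left_line  : (m, c) of the left rail anchor line, y = m*x + c
    right_line : (m, c) of the right rail anchor line
    vanish_v   : ordinate of the anchor-line vanishing point

    Assumed ROI: the area between the two rail anchor lines, below the vanishing
    point (i.e., on the track directly ahead of the own car).
    """
    cx = (box[0] + box[2]) / 2.0
    cy = box[3]                      # bottom edge of the coupler box
    if cy <= vanish_v:               # above the vanishing point: not on the near track
        return False
    m1, c1 = left_line
    m2, c2 = right_line
    x_left = (cy - c1) / m1          # where the left rail sits at this image row
    x_right = (cy - c2) / m2
    lo, hi = sorted((x_left, x_right))
    return lo <= cx <= hi

def pick_retained_couplers(tracked_boxes, left_line, right_line, vanish_v):
    """Return the tracking ids of couplers inside the ROI (retained-car couplers)."""
    return [tid for tid, box in tracked_boxes.items()
            if inside_roi(box, left_line, right_line, vanish_v)]
```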

Furthermore, in step 2 of the method for measuring the distance between the coupler of the own car and the coupler of the retained car, the vanishing point coordinates of the track anchor lines are the intersection point of the track anchor lines detected in the video image. Each track anchor line is represented as a sequence of N points, P = {(x0, y0), (x1, y1), ..., (xN-1, yN-1)}, where the y coordinates of the anchor-line points are fixed and uniformly sampled along the vertical axis of the video image. The least-squares method is used to find the slope m and intercept c of the best-fitting anchor line, giving the fitted equation y = m·x + c, and the intersection coordinates (u, v) of the two track anchor lines are then

u = (c2 − c1) / (m1 − m2), v = m1·u + c1

where m1 and m2 are the slopes of the two track anchor lines and c1 and c2 are their intercepts.

Furthermore, in step 5 of the method for measuring the distance between the coupler of the own car and the coupler of the retained car, the distance D2 between the two couplers is obtained from l, the straight-line distance between the auxiliary camera and the coupler of the retained car; h, the height of the retained-car coupler; w, the width of the retained-car coupler; D0, the horizontal distance between the auxiliary camera and the coupler of the own car; h0, the vertical distance between the auxiliary camera and the coupler of the own car; lOB, the straight-line distance from the upper-right corner point of the retained-car coupler detection frame in the world coordinate system to the origin of the world coordinate system; and lOC, the straight-line distance from the lower-left corner point of the retained-car coupler detection frame in the world coordinate system to the origin of the world coordinate system.

Furthermore, the present invention also proposes a vehicle leading system based on machine vision and interlocking signals, comprising the vehicle leading device, a data service platform and a mobile operation terminal; the vehicle leading device is connected to the data service platform by wireless communication, and the data service platform is connected to the mobile operation terminal by wireless communication.

Furthermore, the vehicle leading device also comprises a positioning module.

Furthermore, the data service platform comprises a data information receiving module, a data information analysis module and a data information sending module.

Furthermore, the data information receiving module is used to receive and store the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, the ranging information and foreign object recognition information produced by the machine vision intelligent analysis module, and the interlocking signals of the interlocking system.

Furthermore, the data information analysis module is used to compare, verify and correct the interlocking signals of the interlocking system against the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, and the ranging information and foreign object recognition information produced by the machine vision intelligent analysis module, so as to determine the route information and the signal status ahead of the pushed own car.

Furthermore, the data information sending module is used to send the real-time video stream, the ranging information and the foreign object recognition information to the mobile operation terminal.

Furthermore, the mobile operation terminal comprises a video receiving and display unit, a shunting signaling receiving unit, a route signal receiving unit, a shunting plan display unit, a satellite positioning unit and an early warning unit.

Furthermore, the present invention also proposes a leading method for the vehicle leading system based on machine vision and interlocking signals, which specifically comprises the following steps:

Step 1. During shunting, before the pushing movement begins, the vehicle leading device is mounted on the front face of the own car;

Step 2. During the pushing movement, the video-ranging vehicle leading device captures a real-time video stream of the area ahead of the own car through the video acquisition module, and the machine vision intelligent analysis module performs intelligent analysis on the captured real-time video stream to obtain the analysis results;

Step 3. The video-ranging vehicle leading device sends the real-time video stream, the analysis results and the positioning information collected by the positioning module to the data service platform;

Step 4. The data information receiving module of the data service platform receives and stores the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, the ranging information and foreign object recognition information produced by the machine vision intelligent analysis module, and the interlocking signals of the interlocking system;

Step 5. The data information analysis module compares, verifies and corrects the interlocking signals of the interlocking system against the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, and the ranging information and foreign object recognition information produced by the machine vision intelligent analysis module, and determines the route information and the signal status ahead of the pushed own car;

Step 6. The data information sending module sends the real-time video stream, the ranging information and the foreign object recognition information to the mobile operation terminal; if the video-ranging vehicle leading device recognizes a foreign object ahead of the pushed own car, the data service platform sends a warning message to the mobile operation terminal (a sketch of this cross-check follows the list of steps);

Step 7. The mobile operation terminal receives and displays the real-time video stream through the video receiving and display unit, receives shunting signaling through the shunting signaling receiving unit, receives the route information, the signal status and the ranging information through the route signal receiving unit, receives the shunting plan through the shunting plan display unit, and receives the warning message through the early warning unit;

Step 8. The shunting crew directs the driver to drive safely according to the information received by the mobile operation terminal in step 7.
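As an illustration of the cross-check in steps 5 and 6, the toy decision function below combines the interlocking state with the vision results and decides which warnings to push to the mobile operation terminal. The field names and the specific rule are assumptions made only to make the data flow concrete; the patent does not define this logic in code.

```python
from dataclasses import dataclass

@dataclass
class VisionResult:            # hypothetical structure of the device's analysis result
    signal_open: bool          # signal aspect recognized from video
    foreign_object: bool       # foreign object detected ahead
    distance_m: float          # measured distance to the retained car / obstacle

@dataclass
class InterlockState:          # hypothetical structure of the interlocking feed
    route_set: bool
    signal_open: bool

def check_and_warn(vision: VisionResult, interlock: InterlockState) -> list[str]:
    """Return warning messages to forward to the mobile operation terminal."""
    warnings = []
    if vision.foreign_object:
        warnings.append(f"Foreign object ahead at about {vision.distance_m:.0f} m")
    if vision.signal_open != interlock.signal_open:
        warnings.append("Video signal aspect disagrees with interlocking signal state")
    if not interlock.route_set:
        warnings.append("No route set for the pushing movement")
    return warnings
```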

Furthermore, the analysis results in step 2 of the leading method include ranging information and foreign object recognition information, and the ranging information includes the distance between the own car and the retained car, the distance between the coupler of the own car and the coupler of the retained car, and the distance between the own car and the foreign object.

Through the vehicle leading device and the vehicle leading system based on machine vision and interlocking signals, the present invention captures the road conditions and signal status in front of the own car in real time and, with the help of artificial intelligence techniques such as computer vision, performs video recognition and analysis, thereby achieving automatic route identification, automatic foreign object detection, and accurate measurement of the distance between the own car and the retained car or a foreign object. The real-time video stream and the analysis results are transmitted to the data service platform, which further checks and confirms them against interlocking signals such as the route and signal status from the interlocking system, and then transmits the video stream, the route and the signal status to the shunting crew and the driver through the mobile operation terminal, directing the driver to operate the locomotive. The leading process is thus controlled automatically and accurately, which improves operating efficiency, reduces the labor intensity of the shunting personnel, enhances the safety of shunting operations, and improves the practicality and safety of the whole leading system. By replacing traditional lidar ranging with video ranging, the invention solves the portability problem of leading equipment. Different ranging methods are proposed for the long-range, medium-range and short-range scenarios, so that the leading system has multi-scenario video ranging capability and overcomes the inability of a single video ranging model to handle targets that are too close to or too far from the retained car or a foreign object, making shunting more accurate. The method of dynamically estimating the camera attitude angles from the anchor-line vanishing point coordinates improves the accuracy and robustness of video ranging in dynamic scenes.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to set forth the above and other advantages and features of the present invention, the invention summarized above will be described more specifically below with reference to the specific embodiments shown in the accompanying drawings. It should be understood that these drawings show only typical embodiments of the present invention and should therefore not be regarded as limiting its scope; with the aid of the drawings, the invention will be described and explained more specifically and in more detail. In the drawings:

FIG. 1 is a schematic diagram of the functional modules of the vehicle leading system based on machine vision and interlocking signals of the present invention;

FIG. 2 is a flowchart of the method for measuring the distance between the own car and the retained car based on track spacing of the present invention;

FIG. 3 is a diagram of the ranging model of the method for measuring the distance between the own car and the retained car based on track spacing of the present invention;

FIG. 4 is a flowchart of the method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame of the present invention;

FIG. 5 is a diagram of the camera yaw angle calculation model of the present invention;

FIG. 6 is a diagram of the ranging model based on the position of the bottom edge of the retained-car target detection frame of the present invention;

FIG. 7 is a flowchart of the method for measuring the distance between the coupler of the own car and the coupler of the retained car of the present invention;

FIG. 8 is a diagram of the ranging model of the method for measuring the distance between the coupler of the own car and the coupler of the retained car of the present invention.

DETAILED DESCRIPTION

The following description is provided to disclose the present invention so that those skilled in the art can carry it out. The preferred embodiments described below are only examples, and those skilled in the art will be able to conceive of other obvious variations. Directional terms such as "front", "rear", "left" and "right" used in the following description are not to be interpreted as limiting the invention. The basic principles of the invention defined in the following description may be applied to other embodiments, modifications, improvements, equivalents and other technical solutions that do not depart from the spirit and scope of the invention.

The vehicle leading device, and the vehicle leading system and leading method based on machine vision and interlocking signals provided by the present invention, are described in further detail below with reference to FIGS. 1-8:

As shown in FIG. 1, the vehicle leading system based on machine vision and interlocking signals proposed by the present invention comprises a vehicle leading device, a data service platform and a mobile operation terminal; the vehicle leading device is connected to the data service platform by wireless communication, and the data service platform is connected to the mobile operation terminal by wireless communication.

Furthermore, as shown in FIG. 1, the vehicle leading device proposed by the present invention comprises a video acquisition module and a machine vision intelligent analysis module. The video acquisition module comprises a main camera and an auxiliary camera; when the video-ranging vehicle leading device is working, the optical axis of the main camera is parallel to the ground, and the optical axis of the auxiliary camera is set at an adjustable angle to the ground. The machine vision intelligent analysis module comprises a unit for measuring the distance between the own car and the retained car based on track spacing, a unit for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, and a unit for measuring the distance between the coupler of the own car and the coupler of the retained car. The video acquisition module is connected to the machine vision intelligent analysis module.

As a further preferred technical solution, the vehicle leading device is mounted magnetically on the front face of the pushed own car, captures the real-time video stream from the viewpoint of the crew member who couples the head coupler, and transmits the real-time video stream and the analysis results of the machine vision intelligent analysis module to the data service platform by wireless communication.

The main camera is mounted on the own car with its optical axis parallel to the ground and is mainly responsible for capturing the video stream of the scene directly in front of the own car during the pushing movement. The optical axis of the auxiliary camera is set at an angle to the ground, preferably 60°, and the auxiliary camera is mainly responsible for capturing the video stream showing the distance between the coupler of the own car and the coupler of the retained car and the coupling state between the two couplers during the pushing movement, so that the operating personnel can better monitor the completion of the coupling operation.

The video acquisition module has a high-performance image sensor and supports 3D noise reduction, strong-light suppression and backlight compensation.

As a further preferred technical solution, the vehicle leading device also comprises a positioning module, which has a high-precision multi-frequency BeiDou chip and a miniaturized omnidirectional helical antenna and supports access to CORS base stations.

The machine vision intelligent analysis module uses machine vision algorithms such as video enhancement, single-stage detection and target classification to perform deep multi-target detection on large images from the captured video, thereby achieving automatic route identification, automatic foreign object detection and accurate measurement of the distance between the own car and the retained car or a foreign object; it accurately identifies the route and signal status ahead of the push, including the state of tracks and switches, the type and color of signal lamps, foreign objects ahead, and the distance between the own car and the foreign objects ahead.

As a further preferred technical solution, the data service platform comprises a data information receiving module, a data information analysis module and a data information sending module.

The data information receiving module is used to receive and store the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, the ranging information and foreign object recognition information produced by the machine vision intelligent analysis module, and the interlocking signals of the interlocking system; the interlocking signals include information such as the station routes and signal status.

The data information analysis module is used to compare, verify and correct the interlocking signals of the interlocking system against the real-time video stream captured by the video acquisition module of the vehicle leading device, the positioning information collected by the positioning module, and the ranging information, route identification information and foreign object recognition information produced by the machine vision intelligent analysis module; that is, it combines data sources such as video ranging, satellite positioning and business-logic positioning and applies a multi-source fusion ranging and positioning method to achieve accurate ranging and positioning and to determine the route information and signal status ahead of the pushed own car.

The data information sending module is used to send the real-time video stream, the ranging information and the foreign object recognition information to the mobile operation terminal.

The data service platform can automatically associate and match video acquisition devices with mobile operation terminals and automatically send the video picture of the area ahead of the corresponding route to the corresponding mobile operation terminal.

The data service platform supports multi-channel forwarding of video information for multi-point remote real-time monitoring, and supports recording and querying of video and other operation processes.

As a further preferred technical solution, the mobile operation terminal comprises a video receiving and display unit, a shunting signaling receiving unit, a route signal receiving unit, a shunting plan receiving and display unit, a satellite positioning unit and an early warning unit.

The mobile operation terminal receives and displays the real-time video stream through the video receiving and display unit, receives shunting signaling through the shunting signaling receiving unit, receives the route information, the signal status and the ranging information through the route signal receiving unit, receives the shunting plan through the shunting plan display unit, and receives the warning message through the early warning unit.

The mobile operation terminal provides video display, voice intercom, shunting signaling, route signals, shunting plan display and satellite positioning. It not only displays for the shunting crew the video captured ahead of the pushing movement, but also meets the functional requirements of the existing shunting radio handsets, while additionally providing shunting plan display and safety protection warnings for the shunting crew. During the push, the shunting foreman directs the driver to operate the locomotive according to the video and the yard signals.

Furthermore, the present invention proposes three different ranging methods for the long-range, medium-range and short-range scenarios, so that the leading system has multi-scenario video ranging capability, ranges more accurately, and overcomes the inability of a single video ranging model to handle targets that are too close to or too far from the retained car or a foreign object, making shunting more efficient. Long range means that the distance between the own car and the retained car is 100 m to 200 m, medium range means that the distance is 2 m to 100 m, and short range means that the distance is within 2 m. Further details are given below with reference to the drawings:

As shown in FIG. 2, for application scenarios far from the own car, the present invention proposes a method for measuring the distance between the own car and the retained car based on track spacing. The method is applied in the unit for measuring the distance between the own car and the retained car based on track spacing, and specifically comprises the following steps:

Step 1. The machine vision intelligent analysis module receives and reads the video images captured by the main camera of the video acquisition module;

Step 2. The track anchor line detection unit of the machine vision intelligent analysis module detects the tracks in the video image, obtains the pixel coordinate pairs of all identifiable tracks in the image, and determines whether each track is the main track or a side track;

Step 3. If the track in step 2 is determined to be the main track, the distance D between the own car and the retained car is calculated from the width of the main track.

For the long-range application scenario, the track width is prior information that is easy to obtain, ranging based on it is robust under motion, and it is only weakly affected by changes in distance and in the camera attitude angles; the present invention therefore proposes a target positioning model that uses the standard track width and the track lines as references, as shown in FIG. 3.

Because the projection of the track lines into the camera follows the pinhole imaging principle, the distance between the retained car ahead and the own car can be derived geometrically from the pixel width of the detected track in the image. The real track width Lline is known, and the camera focal length fx and the pixel width Lpixel of the track ahead are obtained from the camera's intrinsic and extrinsic parameters, so the distance D between the own car and the retained car to be coupled ahead is

D = fx · Lline / Lpixel
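A one-line implementation of the track-width relation above; the function name and argument order are illustrative only.

```python
def track_width_distance(focal_px: float, track_width_m: float, track_width_px: float) -> float:
    """Long-range distance estimate D = fx * Lline / Lpixel from the pinhole model."""
    return focal_px * track_width_m / track_width_px
```

For example, assuming a hypothetical focal length of 1200 px, the standard gauge of 1.435 m and a measured rail-pair width of 12 px, the estimate is about 143.5 m.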

Furthermore, the determination of the main track in step 2 of the method for measuring the distance between the own car and the retained car based on track spacing is specifically as follows:

step 1: Initialize and clear data: the code first resets the indices and related data of the main track and side tracks found previously.

step 2: Find the starting points of all detected tracks: the array of starting points is traversed, and the classified pixel data are used to determine the category of each starting point, so that the starting points of all tracks are collected.

step 3: Find possible main tracks: all starting points are traversed and every pair of starting points is considered as a possible rail pair. For each pair, if both points lie at the bottom of the image (maximum y coordinate), the distance between them is computed. If this distance is close to the preset main-track distance range and the center of the pair is close to the center of the image, the pair is treated as a candidate main-track pair. All pairs that satisfy these conditions are stored, and the most central pair is selected from the candidates.

step 4: Determine the main track: all candidate main-track pairs are traversed, and the pair whose center is closest to the center of the image is marked as the main track.

step 5: Determine the side tracks: the remaining starting points are traversed, excluding the pair already chosen as the main track. For each pair of starting points, if both lie at the bottom of the image, their position (left or right half of the image) determines whether they form a left side track or a right side track (a sketch of this selection logic follows the list).
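A compact Python sketch of the pairing and selection logic described in steps 1-5 above. The thresholds, data layout and helper names are assumptions made for illustration; the patent describes the procedure but not its exact implementation.

```python
def find_main_track(start_points, img_w, img_h,
                    gauge_px_range=(200, 600), bottom_tol=5):
    """Pick the main rail pair from detected track starting points.

    start_points   : list of (x, y) pixel coordinates of rail starting points
    gauge_px_range : assumed admissible pixel distance between the two rails
    bottom_tol     : how close to the image bottom a starting point must be
    """
    bottom = [p for p in start_points if p[1] >= img_h - bottom_tol]
    candidates = []
    for i in range(len(bottom)):
        for j in range(i + 1, len(bottom)):
            (x1, _), (x2, _) = bottom[i], bottom[j]
            gap = abs(x1 - x2)
            if gauge_px_range[0] <= gap <= gauge_px_range[1]:
                center = (x1 + x2) / 2.0
                candidates.append((abs(center - img_w / 2.0), (bottom[i], bottom[j])))
    if not candidates:
        return None
    # The pair whose center is closest to the image center is taken as the main track.
    return min(candidates, key=lambda c: c[0])[1]
```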

As shown in FIG. 4, for application scenarios at medium distance from the own car, the present invention proposes a method for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame. The method is applied in the unit for measuring the distance between the own car and the retained car based on the anchor-line vanishing point and the retained-car detection frame, and specifically comprises the following steps:

Step 1. The machine vision intelligent analysis module receives and reads the video images captured by the main camera of the video acquisition module;

Step 2. The track anchor line detection unit of the machine vision intelligent analysis module detects and analyzes the track anchor lines in the video image and obtains the vanishing point coordinates of the track anchor lines;

Step 3. Based on the vanishing point coordinates of the track anchor lines obtained in step 2, the attitude angles of the main camera are corrected to obtain the yaw angle and the pitch angle of the main camera;

Step 4. The unit for measuring the distance between the own car and the retained car in the machine vision intelligent analysis module detects the retained car in the video image and performs ranging based on the midpoint of the bottom edge of the retained-car detection frame to obtain the distance d between the own car and the retained car;

Step 5. The track detection unit in the machine vision intelligent analysis module detects the track and determines whether it is straight. If the track is straight, the distance obtained in step 4 is corrected on the basis of the pitch angle of the main camera from step 3 to give the corrected distance d1 between the own car and the retained car, and d1 is then corrected on the basis of the yaw angle of the main camera from step 3 to give the twice-corrected distance D1 between the own car and the retained car. If the track is not straight, the yaw angle of the main camera is set to zero, and the distance from step 4 is corrected only on the basis of the pitch angle of the main camera from step 3, giving the corrected distance d1 between the own car and the retained car.

For the medium-range application scenario, the present invention applies a ranging method based on the position of the bottom edge of the retained-car target detection frame, with the camera attitude angles corrected from the anchor-line vanishing point coordinates.

在视觉测距领域,摄像机的姿态角变化会影响视觉测距的精准度。原因是在不同的姿态角下,物体的实际距离与它们在摄像机中的投影距离之间的比例关系会发生变化,从而影响测量物体距离的准确性。机车在推进过程中必然存在颠簸,这导致视觉传感器相对于地面的角度也可能动态变化,从而令上述摄像机几何模型的参数发生改变,当测距算法并没有及时更新这种参数变化时,就会发生错误的距离测算。In the field of visual ranging, changes in the camera's attitude angle will affect the accuracy of visual ranging. The reason is that at different attitude angles, the proportional relationship between the actual distance of objects and their projected distance in the camera will change, thus affecting the accuracy of measuring the distance of objects. Locomotives are bound to bump during propulsion, which causes the angle of the visual sensor relative to the ground to change dynamically, thereby changing the parameters of the above-mentioned camera geometric model. When the ranging algorithm does not update this parameter change in time, incorrect distance measurement will occur.

In the three-dimensional world the track anchor lines are parallel; in an image of the track captured by the camera, however, the anchor lines are no longer parallel and intersect. The intersection point is called the vanishing point, and its movement reflects changes in the rotation matrix. Compared with the traditional approach of line-segment detection followed by weighted voting, an end-to-end anchor-line detection method simplifies the computation and reduces the errors and complexity that intermediate steps may introduce.

本发明采用基于锚线消失点的摄像机姿态角估计方法对实际运行中的摄像机外参数进行更新,使系统更好地适应环境变化,保证测距结果的稳定性和可靠性。The present invention adopts a camera attitude angle estimation method based on the anchor line vanishing point to update the external parameters of the camera in actual operation, so that the system can better adapt to environmental changes and ensure the stability and reliability of the ranging result.

In step 2 of the method for measuring the distance between the self-vehicle and the retained vehicle based on the anchor-line vanishing point and the retained-vehicle detection box, the vanishing point of the track anchor lines is the intersection of the track anchor lines detected in the video image. Each track anchor line is represented as a sequence of N points, P = {(x0, y0), (x1, y1), …, (xN-1, yN-1)}, where the y coordinates of the anchor-line points are fixed and uniformly sampled along the vertical axis of the video image. The least-squares method is used to find the slope m and intercept c of the best-fitting anchor line, giving the fitted line equation y = mx + c. The intersection coordinates (u, v) of the two track anchor lines are then

u = (c2 - c1) / (m1 - m2),    v = m1·u + c1,

where m1 and m2 are the slopes of the two track anchor lines, and c1 and c2 are their intercepts.
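
A minimal sketch of this step, assuming the sampled (x, y) points of the two detected anchor lines are available; the function names are illustrative and not taken from the patent:

import numpy as np

def fit_line(points):
    """Least-squares fit y = m*x + c to the sampled (x, y) points of one anchor line."""
    pts = np.asarray(points, dtype=float)
    m, c = np.polyfit(pts[:, 0], pts[:, 1], 1)   # degree-1 least-squares fit
    return m, c

def vanishing_point(left_points, right_points):
    """Intersection (u, v) of the two fitted anchor lines, i.e. the vanishing point."""
    m1, c1 = fit_line(left_points)
    m2, c2 = fit_line(right_points)
    u = (c2 - c1) / (m1 - m2)    # m1*u + c1 = m2*u + c2 at the intersection
    v = m1 * u + c1
    return u, v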

When the camera fixed on the locomotive has a pitch angle, the vanishing point in the captured image is shifted vertically; when the on-board camera has a yaw angle, the road vanishing point in the captured image is shifted horizontally, as shown in FIG. 5, where W and H are the width and height of the imaging plane, β is half of the camera's horizontal field of view, and γ and θ are the yaw angle and pitch angle of the camera, respectively.

When the camera has no yaw or pitch, the road vanishing point in the imaging plane is V(u0, v0); when yaw and/or pitch are present, the vanishing point shifts to V'(u1, v1). The yaw and pitch angles of the on-board camera are then derived as

γ = arctan((u1 - W/2) / fx),    θ = arctan((v1 - H/2) / fy),

where γ is the yaw angle of the main camera, W is the width of the imaging plane, u1 is the horizontal coordinate of the track anchor-line vanishing point in the presence of pitch and yaw, and fx is the equivalent focal length along the x axis of the camera coordinate system, the camera coordinate system taking the optical center of the main camera as its origin with the x axis pointing right, the y axis pointing down and the z axis pointing forward; θ is the pitch angle of the main camera, H is the height of the imaging plane, v1 is the vertical coordinate of the track vanishing point in the presence of pitch and yaw, and fy is the equivalent focal length along the y axis of the camera coordinate system.
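
A minimal sketch of the attitude-angle estimation, under the relations reconstructed above and assuming the principal point lies at the image center; the names are illustrative:

import math

def attitude_from_vanishing_point(u1, v1, W, H, fx, fy):
    """Yaw and pitch of the main camera recovered from the shifted vanishing point (u1, v1)."""
    gamma = math.atan((u1 - W / 2.0) / fx)   # yaw: horizontal shift of the vanishing point
    theta = math.atan((v1 - H / 2.0) / fy)   # pitch: vertical shift of the vanishing point
    return gamma, theta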

In step 4 of the method for measuring the distance between the self-vehicle and the retained vehicle based on the anchor-line vanishing point and the retained-vehicle detection box, the distance d between the self-vehicle and the retained vehicle is

d = Hc / tan(μ),

where μ is the angle between the optical axis of the main camera and the line joining the center of the bottom edge of the retained-vehicle detection box to the optical center of the main camera, and Hc is the distance between the main camera and the ground.

The ranging model based on the bottom-edge position of the retained-vehicle target detection box is shown in FIG. 6, where point C corresponds to the center C(uc, vc) of the bottom edge of the retained-vehicle target detection box in the camera imaging plane. Using the pitch angle estimated from the anchor-line vanishing point, the corrected distance d1 between the propelling self-vehicle and the retained vehicle to be coupled is

d1 = Hc / tan(θ + μ),    (6)

where Hc is the distance between the main camera and the ground, μ is the angle between the optical axis of the main camera and the line joining the center of the bottom edge of the retained-vehicle detection box to the optical center of the main camera, and θ is the pitch angle of the main camera.
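
A minimal sketch of the bottom-edge ranging model with pitch correction, under the relations reconstructed above; computing μ from the pixel row of the detection-box bottom edge as arctan((vc - cy)/fy) is an assumption, since the patent defines μ geometrically without showing that formula here:

import math

def box_bottom_distance(vc, cy, fy, Hc, theta=0.0):
    """Distance to the retained vehicle from the detection-box bottom edge.

    vc: pixel row of the bottom-edge center; cy: principal-point row; fy: vertical focal
    length in pixels; Hc: camera height above the rail plane; theta: estimated pitch angle
    (theta = 0 gives the uncorrected distance d, otherwise the corrected distance d1).
    """
    mu = math.atan((vc - cy) / fy)      # angle between the optical axis and the ray to the box bottom
    return Hc / math.tan(theta + mu)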

As shown in FIG. 6, the calculation of the distance between the self-vehicle and the retained vehicle is affected not only by the pitch angle of the camera but also by its yaw angle. Correcting formula (6) accordingly, the twice-corrected distance D1 between the propelling self-vehicle and the retained vehicle to be coupled is

D1 = d1 / cos(γ + φ),

where φ is the angle, in the horizontal plane, between the optical axis and the line joining the midpoint of the bottom edge of the target-vehicle detection box to the optical center of the on-board camera, calculated as

φ = arctan((uc - uo) / fx),

where uc is the horizontal coordinate of the center of the bottom edge of the retained-vehicle detection box in the camera imaging plane, and uo is the coordinate of the image-coordinate-system origin in the pixel coordinate system.

In practice, however, when the vehicle runs on a curve the yaw angle of the on-board camera changes substantially, which seriously degrades the accuracy of the ranging model based on the position of the retained-vehicle detection box. In that case the yaw correction of this ranging model must be stopped, i.e. only the change in the camera pitch angle is considered and corrected. Whether the yaw correction of the ranging model based on the position of the retained-vehicle target detection box is set to zero depends on where the horizontal coordinate of the vanishing point lies within the image: the movement of the vanishing point is used to judge whether the current track is straight or curved, the track being treated as a curve when the offset of the vanishing point is large and as straight otherwise. The specific criterion is as follows:

where W is the pixel width of the image and uvp is the horizontal coordinate of the road vanishing point; when the condition of formula (9) is satisfied, only the pitch angle needs to be corrected, i.e. the yaw correction of the ranging model based on the retained-vehicle detection-box position is set to zero.
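
A minimal sketch of the yaw correction and the straight/curve gate, assuming the reconstructed relation D1 = d1/cos(γ + φ); the curve-test threshold stands in for the unshown condition of formula (9) and is purely an assumed parameter:

import math

def yaw_corrected_distance(d1, uc, uo, fx, gamma, u_vp, image_w, curve_ratio=0.25):
    """Return the twice-corrected distance D1, or d1 alone when the track is judged to be curved."""
    if abs(u_vp - image_w / 2.0) > curve_ratio * image_w:
        return d1                        # curve: yaw correction set to zero, pitch-only correction
    phi = math.atan((uc - uo) / fx)      # horizontal-plane angle to the box bottom-edge midpoint
    return d1 / math.cos(gamma + phi)    # straight track: apply the yaw correction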

As shown in FIG. 7, for application scenarios at a close distance to the self-vehicle, the present invention proposes a method for measuring the distance between the self-vehicle coupler and the retained-vehicle coupler. The measurement method is applied in the unit for measuring the distance between the self-vehicle coupler and the retained-vehicle coupler, and specifically comprises the following steps:

步骤1.所述机器视觉智能分析模块接收并读取所述视频采集模块中所述辅摄像头采集的视频图像;Step 1. The machine vision intelligent analysis module receives and reads the video image captured by the auxiliary camera in the video acquisition module;

步骤2.所述机器视觉智能分析模块中的轨道锚线检测单元对所述视频图像中的轨道锚线进行检测分析,获得所述轨道锚线的消失点坐标,基于所述轨道锚线的消失点坐标,设定感兴趣区域,Step 2: The track anchor line detection unit in the machine vision intelligent analysis module detects and analyzes the track anchor line in the video image to obtain the vanishing point coordinates of the track anchor line, and sets the region of interest based on the vanishing point coordinates of the track anchor line.

步骤3.所述机器视觉智能分析模块中的所述自车车钩与留存车车钩间距测量单元对所述视频图像中的自车车钩与留存车车钩进行检测,以识别所述自车车钩与留存车车钩;Step 3. The distance measurement unit between the own vehicle coupler and the reserved vehicle coupler in the machine vision intelligent analysis module detects the own vehicle coupler and the reserved vehicle coupler in the video image to identify the own vehicle coupler and the reserved vehicle coupler;

Step 4. The unit for measuring the distance between the self-vehicle coupler and the retained-vehicle coupler in the machine vision intelligent analysis module tracks the movement of the self-vehicle coupler and the retained-vehicle coupler and generates a tracking ID for each coupler; it then judges whether a detected coupler lies within the region of interest set in step 2, a coupler located within the region of interest being taken as the retained-vehicle coupler, whose tracking ID is recorded (a sketch of this selection logic is given after step 5 below);

Step 5. Based on the retained-vehicle coupler identified in step 4, the unit for measuring the distance between the self-vehicle coupler and the retained-vehicle coupler in the machine vision intelligent analysis module analyzes and calculates the distance D2 between the self-vehicle coupler and the retained-vehicle coupler.
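
A minimal sketch of the coupler-selection logic of steps 2-4 above: the retained-vehicle coupler is taken to be the tracked coupler whose detection box falls inside a region of interest centered on the anchor-line vanishing point. The box format, ROI size and tracker data structure are assumptions for illustration, not the patent's implementation:

from dataclasses import dataclass

@dataclass
class CouplerTrack:
    track_id: int
    box: tuple                # (x1, y1, x2, y2) detection box in pixels

def roi_around_vanishing_point(u, v, half_w=120, half_h=80):
    """Region of interest centered on the vanishing point (half-sizes are assumed values)."""
    return (u - half_w, v - half_h, u + half_w, v + half_h)

def find_retained_coupler(tracks, roi):
    """Return the tracking ID of the coupler whose box center lies inside the ROI, if any."""
    rx1, ry1, rx2, ry2 = roi
    for t in tracks:
        x1, y1, x2, y2 = t.box
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        if rx1 <= cx <= rx2 and ry1 <= cy <= ry2:
            return t.track_id    # the coupler appearing near the vanishing point is the retained car's
    return None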

For the close-range scenario, the forward view of the camera is completely filled by the retained vehicle, so the only available reference object is the coupler; the present invention therefore adopts a ranging method based on the coupler dimensions.

The PnP (Perspective-n-Point) problem is to determine, given a set of 3D-2D point correspondences under perspective projection and known camera intrinsic parameters, the positions of the 3D points in the camera coordinate system. PnP uses the 2D image coordinates of feature points together with the 3D coordinates of those feature points in the target coordinate system to recover the relative pose between the camera coordinate system and the target coordinate system. In the PnP algorithm, when n is 4 and the four feature points are coplanar, the unit orthogonality of the coordinate-transformation matrix can be exploited to obtain a unique analytical solution.

车钩在摄像机中的投影原理如附图8所示。所述附图8中包含四个坐标系:The projection principle of the coupler in the camera is shown in FIG8. FIG8 contains four coordinate systems:

世界坐标系OWXWYWZW、相机坐标系OCXCYCZC、Oixy图像坐标系以及Opuv像素坐标系;A、B、C、D分别表示车钩检测框的四个角点,a、b、c、d是四个角点在摄像机成像作用平面上的投影点。World coordinate system OW XW YW ZW , camera coordinate system OC XC YC ZC , Oixy image coordinate system and Opuv pixel coordinate system; A, B, C, D represent the four corner points of the coupler detection frame respectively, and a, b, c, d are the projection points of the four corner points on the camera imaging plane.

如附图8所示,对于ΔOcab和ΔOAB,根据余弦定理计算得出:As shown in FIG8 , for ΔOc ab and ΔO AB, the following is calculated based on the cosine theorem:

同理,对于其他三角形,可以得到以下公式:Similarly, for other triangles, we can get the following formula:

where h is the height of the retained-vehicle coupler (AC) and w is the width of the retained-vehicle coupler (CD). The geometric dimensions of couplers in China are standardized and essentially fixed. Since the positions of the four points a, b, c and d in the video image are known, the angles α, β, γ and θ are also known, and lOA, lOB, lOC and lOD can be calculated from formulas (10) and (11). The distance l between the auxiliary camera and the retained-vehicle coupler is therefore:

其中,lOB为世界坐标系内留存车车钩检测框右上角坐标点到世界坐标系原点的直线距离,lOC为世界坐标系内留存车车钩检测框左下角坐标点到世界坐标系原点的直线距离。Among them, lOB is the straight-line distance from the upper right corner coordinate point of the retained vehicle coupler detection frame in the world coordinate system to the origin of the world coordinate system, and lOC is the straight-line distance from the lower left corner coordinate point of the retained vehicle coupler detection frame in the world coordinate system to the origin of the world coordinate system.

Because there is a horizontal distance D0 between the auxiliary camera and the coupler at the front of the self-vehicle, this distance must be subtracted when calculating the true distance D2 between the self-vehicle coupler and the coupler of the target retained vehicle; the distance D2 between the two couplers is obtained as:

where h0 is the vertical distance between the auxiliary camera and the self-vehicle coupler.
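
A minimal sketch of the coupler-size-based ranging, solved here with OpenCV's general solvePnP instead of the closed-form cosine-law derivation above; the final combination of l, h0 and D0 is an assumed reading of the subtraction described in the text, since formula (13) is not shown here:

import math
import numpy as np
import cv2

def coupler_gap(box_px, h, w, K, dist_coeffs, h0, D0):
    """box_px: pixel corners of the coupler box in the order A, B, C, D
    (top-left, top-right, bottom-left, bottom-right); h, w: known coupler height and width."""
    # Coupler corners in the target coordinate system (planar, Z = 0), matching the pixel order.
    obj = np.array([[0, 0, 0], [w, 0, 0], [0, h, 0], [w, h, 0]], dtype=np.float64)
    img = np.asarray(box_px, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    centre_cam = R @ np.array([w / 2.0, h / 2.0, 0.0]) + tvec.ravel()   # coupler center in camera frame
    l = float(np.linalg.norm(centre_cam))               # auxiliary camera to retained-vehicle coupler
    return math.sqrt(max(l * l - h0 * h0, 0.0)) - D0    # assumed combination of l, h0 and D0 (cf. D2)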

The close-range ranging procedure comprises the following steps: first, the detection model detects the couplers and anchor lines in the image; next, since in the auxiliary-camera view of a pushing operation the coupler of the car to be coupled always appears first near the track vanishing point, the vanishing-point coordinates are calculated from the anchor-line detection result, a region of interest is set around the vanishing point, and a coupler is identified as the opposite car's coupler if it lies within that region; then, a target-tracking algorithm tracks the movement of the distant coupler and updates its position and tracking ID; finally, the actual distance to the opposite car's coupler is calculated with the coupler-size-based ranging method and the ranging result is output.

进一步的,本发明还提出了一种利用所述领车系统的领车方法,具体包括以下步骤,Furthermore, the present invention also proposes a method for leading a vehicle using the vehicle leading system, which specifically comprises the following steps:

步骤1.在调车作业过程中,推进作业之前,将所述领车器安装在自车的前方正面;Step 1. During the shunting operation, before the advancing operation, the leading device is installed on the front of the vehicle;

步骤2.在推进作业过程中,所述视频测距领车器通过所述视频采集模块采集所述自车前方的实时视频流,所述机器视觉智能分析模块对采集的所述实时视频流进行智能分析,得到分析结果;Step 2. During the pushing operation, the video ranging leader collects the real-time video stream in front of the vehicle through the video acquisition module, and the machine vision intelligent analysis module performs intelligent analysis on the collected real-time video stream to obtain the analysis result;

步骤3.所述视频测距领车器将所述实时视频流、所述分析结果和所述定位模块采集的定位信息发送给数据服务平台;Step 3. The video ranging leader sends the real-time video stream, the analysis result and the positioning information collected by the positioning module to the data service platform;

Step 4. The data information receiving module in the data service platform receives and stores the real-time video stream captured by the video acquisition module in the vehicle leader, the positioning information collected by the positioning module, the ranging information and foreign-object recognition information from the machine vision intelligent analysis module, and the interlocking signals of the interlocking system;

步骤5.所述数据信息分析模块将所述联锁系统的联锁信号与所述领车器中的所述视频采集模块采集的实时视频流、所述定位模块采集的定位信息和所述机器视觉智能分析模块分析出的测距信息和异物识别信息进行比对校验并修正,确定自车推进前方的进路信息和信号状态;Step 5. The data information analysis module compares and verifies the interlocking signal of the interlocking system with the real-time video stream collected by the video acquisition module in the vehicle leader, the positioning information collected by the positioning module, and the distance measurement information and foreign object recognition information analyzed by the machine vision intelligent analysis module, and determines the route information and signal status ahead of the vehicle;

步骤6.所述数据信息发送模块将所述实时视频流、所述测距信息和所述异物识别信息发送给移动作业终端,若所述视频测距领车器识别到所述自车推进前方的异物,所述数据服务平台将异物警告信息发送给所述移动作业终端;Step 6. The data information sending module sends the real-time video stream, the distance measurement information and the foreign object identification information to the mobile operation terminal. If the video distance measurement device identifies a foreign object in front of the vehicle, the data service platform sends a foreign object warning message to the mobile operation terminal.

步骤7.所述移动作业终端通过所述视频接收与显示单元接收并显示所述实时视频流,通过所述调车信令接收单元接收调车信令,通过所述进路信号接收单元接收进路信息和信号状态以及所述测距信息,通过所述调车计划显示单元接收调车计划,通过所述预警单元接收所述警告信息;Step 7. The mobile operation terminal receives and displays the real-time video stream through the video receiving and display unit, receives the shunting signal through the shunting signal receiving unit, receives the route information and signal status and the distance measurement information through the route signal receiving unit, receives the shunting plan through the shunting plan display unit, and receives the warning information through the early warning unit;

步骤8.调车组根据所述步骤7中所述移动作业终端接收的信息流指挥司机进行安全驾驶。Step 8. The shunting team instructs the driver to drive safely according to the information flow received by the mobile operation terminal in step 7.

进一步的,所述领车方法中所述步骤2中的所述分析结果包括测距信息和异物识别信息,所述测距信息包括所述自车与所述留存车之间的间距、所述自车车钩与所述留存车车钩之间的间距和所述自车与所述异物之间的间距。Furthermore, the analysis result in step 2 of the vehicle leading method includes distance measurement information and foreign object identification information, and the distance measurement information includes the distance between the own vehicle and the retained vehicle, the distance between the own vehicle coupler and the retained vehicle coupler, and the distance between the own vehicle and the foreign object.

进一步的,所述领车方法中所述步骤4中的所述联锁信号至少包括进路信息和信号状态信息。Furthermore, the interlock signal in step 4 of the vehicle leading method at least includes route information and signal status information.
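
Purely illustrative: a hypothetical payload that the vehicle leader might send to the data service platform in step 3, bundling the analysis results of step 2 with the positioning information; every field name below is an assumption for illustration and not the patent's data format:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RangingInfo:
    vehicle_gap_m: Optional[float] = None    # self-vehicle to retained vehicle (e.g. D1)
    coupler_gap_m: Optional[float] = None    # self-vehicle coupler to retained-vehicle coupler (D2)
    obstacle_gap_m: Optional[float] = None   # self-vehicle to a detected foreign object

@dataclass
class LeaderReport:
    timestamp: float
    position: tuple                                            # positioning-module fix, e.g. (lat, lon)
    ranging: RangingInfo = field(default_factory=RangingInfo)
    foreign_objects: List[str] = field(default_factory=list)   # labels of detected foreign objects
    video_frame_id: Optional[int] = None                       # reference into the real-time video stream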

Beneficial technical effects achieved by the present invention: through the vehicle leading system based on machine vision and interlocking signals, the vehicle leader captures the road conditions and signal status ahead of the self-vehicle in real time and, with the help of artificial-intelligence techniques such as computer vision, performs video recognition and analysis, thereby achieving automatic identification of the route, automatic detection of foreign objects, and accurate measurement of the distance between the self-vehicle and the retained vehicle or a foreign object. The real-time video stream and analysis results are transmitted to the data service platform, which further verifies and confirms them against the interlocking signals of the interlocking system, such as the route and signal status; the video stream, route and signal-status information is then transmitted through the mobile operation terminal to the shunting crew members and the driver to direct the driver in operating the locomotive, so that automatic and precise control of the vehicle leading process is achieved. This improves operating efficiency while reducing the labor intensity of the shunting personnel, enhances the safety of shunting operations, and improves the practicality and safety of the whole vehicle leading system. By replacing traditional lidar ranging with video ranging, the present invention solves the portability problem of vehicle leading equipment. Different ranging methods are proposed for the long-distance, medium-distance and close-distance scenarios, so that the vehicle leading system of the present invention provides multi-scenario video ranging, overcoming the inability of a single video ranging method to handle targets that are too close to or too far from the retained vehicle or foreign object and making shunting operations more accurate; in addition, the method of dynamically estimating the camera attitude angles from the anchor-line vanishing-point coordinates improves the accuracy and robustness of video ranging in dynamic scenes.

上述实施例为本发明的较佳实施例,并非用以限定本发明实施的范围。任何本领域的普通技术人员,在不脱离本发明的发明范围内,当可作些许的改进,即凡是依照本发明所做的同等改进,应为本发明的范围所涵盖。The above embodiments are preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any person skilled in the art may make some improvements without departing from the scope of the present invention, that is, any equivalent improvements made according to the present invention should be covered by the scope of the present invention.

Claims (16)
