



Technical Field
The present disclosure relates to the technical field of underground coal mine transportation, and in particular to an unmanned driving system and control method for mine electric locomotives.
Background
The statements in this section merely provide background information related to the present disclosure and do not necessarily constitute prior art.
The mine electric locomotive is an important means of transport in the auxiliary transportation system of underground coal mines. With continual advances in mining technology and growing demand for mineral resources, coal mines keep growing in scale, and extra-large mines with annual outputs of tens of millions of tons continue to appear. Special mines at high altitude or great depth in particular face heavy water inflow and other hazards, placing ever higher demands on the transport capacity, efficiency, and safety of mine electric locomotives. As society develops and living standards rise, labor costs account for a growing share of mining costs, and expectations for working conditions keep increasing. An unmanned driving system for mine electric locomotives therefore urgently needs to be developed and applied to underground transportation; such a system provides a reliable, safe transport solution that meets these new demands and is well suited to large, extra-large, and special mines. In terms of economic, environmental, and social benefits, unmanned driving is also the development trend for mine electric locomotive transportation, so developing an unmanned driving system for mine electric locomotives tailored to the underground coal mine environment is a major technical need.
Summary of the Invention
To solve the above problems, the present disclosure proposes an unmanned driving system for mine electric locomotives.
To achieve the above purpose, the present disclosure adopts the following technical solutions:
According to a first aspect of the embodiments of the present invention, an unmanned driving system for mine electric locomotives is provided, comprising an unmanned driving client system, an electric locomotive, and a coal mine automation monitoring center, wherein the unmanned driving client system is wirelessly connected to the electric locomotive and to the coal mine automation monitoring center.
The unmanned driving client system comprises an operating subsystem, a computing subsystem, and a control subsystem. The computing subsystem is communicatively connected to a sensor assembly, which collects data from the environment and transmits it to the computing subsystem for perception and motion computation; the computing subsystem then sends the motion plan to the control subsystem, which receives the motion-plan data and controls the electric locomotive to execute it. The operating subsystem coordinates all communication among the sensor components and allocates resources among the different real-time tasks.
According to a second aspect of the embodiments of the present invention, a control method for an unmanned driving system for mine electric locomotives is provided, comprising the following steps:
obtaining high-precision map data of the underground coal mine environment, and acquiring the attitude, heading, speed, and acceleration of the electric locomotive as well as its position;
fusing the above data and solving the spatial coordinates of the electric locomotive in real time;
perceiving the environment around the locomotive's track in real time and computing the distance between the locomotive and surrounding obstacles, so as to achieve automatic obstacle-avoidance driving and collision warning.
Compared with the prior art, the beneficial effects of the present disclosure are as follows:
The system implements core functions such as unmanned operation of underground electric locomotives, track obstacle recognition, and collision warning. It reduces the number of workers at the haulage level, including underground locomotive drivers: one person in the surface control room can supervise the operation of several locomotives. It improves the working environment, increases equipment operating time, raises production capacity, reaches an intrinsic-safety level, and truly achieves the goal of replacing workers through mechanization and reducing staffing through automation.
Brief Description of the Drawings
The accompanying drawings, which form a part of this disclosure, provide a further understanding of it; the exemplary embodiments and their descriptions explain the present disclosure and do not limit it.
Fig. 1 is a block diagram of the unmanned driving client system of Embodiment 1;
Fig. 2 is a schematic diagram of the basic working principle of the millimeter-wave radar of Embodiment 1;
Fig. 3 shows the composition of the vehicle-mounted camera core of Embodiment 1;
Fig. 4 shows the conversion relationship between the image coordinate system and the pixel coordinate system in Embodiment 1.
Detailed Description
The present disclosure is further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the present disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
Note that the terminology used here is only for describing specific embodiments and is not intended to limit the exemplary embodiments of the present disclosure. As used herein, unless the context clearly indicates otherwise, singular forms are intended to include plural forms as well; it should further be understood that the terms "comprising" and/or "including", when used in this specification, indicate the presence of features, steps, operations, devices, components, and/or combinations thereof. Note also that, absent conflict, the embodiments of this disclosure and the features within them may be combined with one another. The embodiments are described in detail below with reference to the drawings.
Embodiment 1
As shown in Figs. 1 to 3, an unmanned driving system for mine electric locomotives comprises an unmanned driving client system, an electric locomotive, and an automation monitoring center.
The unmanned driving client system builds a three-dimensional map through simulation-based learning of autonomous driving, environmental perception, and modeling. It comprises an operating subsystem, a computing subsystem, and a control subsystem. The computing subsystem is communicatively connected to a sensor assembly; the sensors collect data from the environment and transmit it to the computing platform for perception and motion computation. The computing subsystem then sends the motion plan to the control subsystem for execution, while the operating subsystem coordinates all communication among these components and allocates resources among the different real-time tasks.
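The sense, compute, and control data flow just described can be sketched as one cycle of a control loop. This is an illustrative skeleton only; all class names, field names, and the toy planning logic are hypothetical and not the actual on-board software of this disclosure.

```python
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """Raw data the sensor assembly collects from the environment."""
    obstacle_distance_m: float
    speed_mps: float


@dataclass
class MotionPlan:
    """Output of the computing subsystem, sent to the control subsystem."""
    target_speed_mps: float
    brake: bool


def compute(frame: SensorFrame) -> MotionPlan:
    """Computing subsystem: perception and motion planning (toy logic)."""
    if frame.obstacle_distance_m < 20.0:
        return MotionPlan(target_speed_mps=0.0, brake=True)
    return MotionPlan(target_speed_mps=min(frame.speed_mps + 0.5, 5.0), brake=False)


def control(plan: MotionPlan) -> str:
    """Control subsystem: turn the motion plan into an actuator command."""
    return "BRAKE" if plan.brake else f"SET_SPEED {plan.target_speed_mps:.1f}"


# One cycle: sense -> compute -> control
frame = SensorFrame(obstacle_distance_m=15.0, speed_mps=3.0)
print(control(compute(frame)))   # prints BRAKE: an obstacle 15 m ahead triggers braking
```

In a real deployment the operating subsystem would run many such cycles as concurrent real-time tasks and arbitrate their access to sensors and actuators.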
The unmanned driving client system supports remote manual and automatic dispatch of multiple locomotives, improving track utilization, with remote control performed from the dispatch room. In special situations the dispatch room can switch to a remote manual driving mode to handle emergencies. Operating parameters, running time, battery level, and other necessary data are stored during operation, facilitating status monitoring of the locomotive and later troubleshooting. The client system must provide: a) sufficient computing power to process large volumes of sensor data quickly; b) high robustness, recovering even when part of the system fails; and c) the ability to perform all computation under strict energy and resource constraints.
As the central control system for locomotive dispatch, the unmanned driving client system can be integrated into the coal mine automation management and monitoring center. It manages state information such as locomotive position and speed, sends commands to control locomotive motion, and performs monitoring, dispatch, and signaling/interlocking functions. It also connects over the network to the locomotives' on-board cameras to record and monitor the surroundings of each locomotive for viewing at the automation monitoring center.
Within the computing subsystem, an SoC architecture is proposed. In this architecture, an I/O subsystem interacts with the front-end sensors; a DSP handles the image pre-processing stream for feature extraction; a GPU performs object recognition and other deep-learning tasks; a multi-core CPU handles planning, control, and interaction tasks; and an FPGA, dynamically reconfigured and time-shared, performs sensor-data compression and upload, target tracking, and traffic prediction. These compute and I/O components communicate through shared memory. A dynamic runtime within the computing subsystem distributes the different workloads to the heterogeneous compute units via OpenCL, with a real-time execution engine scheduling tasks dynamically. On top of this runtime, the Robot Operating System (ROS) is deployed; ROS is a distributed system comprising multiple ROS nodes, each of which encapsulates one autonomous-driving task.
The sensor assembly may include a millimeter-wave radar, an IMU (inertial measurement unit), vehicle-mounted cameras, and the like.
The millimeter-wave radar integrates an MMIC chip, an antenna PCB, a transceiver module, and a signal-processing module. High-frequency circuits generate electromagnetic waves with a specific modulation (e.g., FMCW); the antenna transmits these waves and receives those reflected from targets, and the parameters of the transmitted and received waves are used to compute the parameters of each target. Range, speed, and bearing can be measured for several targets simultaneously: speed measurement relies on the Doppler effect, while bearing measurement (both horizontal and vertical angles) is achieved with an antenna array. Millimeter-wave radar operates in two main regimes, pulsed and continuous-wave; continuous-wave operation is further divided into FSK (frequency-shift keying), PSK (phase-shift keying), CW (constant-frequency continuous wave), FMCW (frequency-modulated continuous wave), and other schemes.
FMCW radar uses several modulation waveforms: a) sine-wave modulation, b) sawtooth modulation, and c) triangular modulation. The radar hardware is largely the same across waveforms, differing only in a few circuit modules, circuit parameters, and signal-processing algorithms. Sawtooth modulation suffices for measuring a single stationary object; triangular modulation is usually used for moving objects.
Ranging: time-of-flight (TOF) ranging continuously transmits light pulses toward the target, receives the energy returned from the object with a sensor, and obtains the target distance from the measured round-trip flight time of the pulse.
Speed measurement: by the Doppler effect, the target's velocity relative to the radar is obtained by computing the frequency shift of the radar wave returned to the receiving antenna; simply put, the relative speed is proportional to the frequency shift.
Bearing measurement: the target's azimuth is computed from the phase difference between the radar waves reflected by the same target and received at parallel receiving antennas.
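The three measurement principles above reduce to short standard formulas: range from round-trip time, radial speed from the two-way Doppler shift, and azimuth from the inter-antenna phase difference. The sketch below uses these textbook relations; the 77 GHz carrier and all numeric inputs are illustrative, not parameters of this disclosure.

```python
import math

C = 3.0e8  # speed of light, m/s


def tof_range(round_trip_s: float) -> float:
    """Range from round-trip time of flight: d = c * t / 2."""
    return C * round_trip_s / 2.0


def doppler_speed(doppler_shift_hz: float, wavelength_m: float) -> float:
    """Radial speed from the Doppler shift, v = lambda * f_d / 2
    (the factor 1/2 accounts for two-way propagation)."""
    return wavelength_m * doppler_shift_hz / 2.0


def azimuth(phase_diff_rad: float, antenna_spacing_m: float, wavelength_m: float) -> float:
    """Azimuth from the phase difference between two parallel receive
    antennas: sin(theta) = lambda * dphi / (2 * pi * d)."""
    return math.asin(wavelength_m * phase_diff_rad / (2.0 * math.pi * antenna_spacing_m))


wl = C / 77e9  # ~3.9 mm wavelength at 77 GHz, inside the 30-300 GHz band
print(tof_range(1e-6))                                 # 1 us round trip -> 150.0 m
print(doppler_speed(2564.0, wl))                       # ~5 m/s radial speed
print(math.degrees(azimuth(0.5, wl / 2.0, wl)))        # azimuth in degrees
```

With half-wavelength antenna spacing, as used here, the arcsin argument stays in range for any phase difference up to pi, which is why that spacing is common in radar arrays.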
Millimeter-wave radar operates in the 30-300 GHz band with wavelengths of 1-10 mm, between centimeter waves and light waves, combining properties of microwave and electro-optical guidance. The radar scans the direction of travel along the track dynamically and at high speed; the perceived information is transmitted to the locomotive's intelligent driving controller (the on-board computer) and compared against the self-learned model in the computer, so that obstacles ahead are identified quickly and accurately and autonomous control is applied promptly to prevent accidents. Functions such as adaptive cruise, automatic emergency braking, forward/rearward collision warning, blind-spot detection, and lane assist can be realized; when the locomotive encounters pedestrians, mine cars, or other obstacles during operation, it can sound its horn within a warning range and stop and avoid automatically within a closer range.
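The two-tier response just described, a horn within a warning range and an automatic stop within a closer range, amounts to a simple threshold check. The sketch below illustrates it; the 50 m and 20 m thresholds are made-up values, not figures from this disclosure, and a real system would derive them from speed and braking distance.

```python
def obstacle_response(distance_m: float,
                      warn_range_m: float = 50.0,
                      stop_range_m: float = 20.0) -> str:
    """Tiered reaction to an obstacle detected ahead on the track.

    Thresholds are illustrative placeholders; in practice they would
    depend on the locomotive's speed, braking curve, and line conditions.
    """
    if distance_m <= stop_range_m:
        return "STOP"      # automatic stop and avoidance
    if distance_m <= warn_range_m:
        return "HORN"      # sound the horn to warn within range
    return "PROCEED"


print(obstacle_response(80.0))   # PROCEED
print(obstacle_response(35.0))   # HORN
print(obstacle_response(10.0))   # STOP
```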
The vehicle-mounted camera integrates a DSP (digital signal processing) chip and a CMOS sensor. The lens forms an optical image on the image sensor, where the optical signal is converted into an electrical signal, digitized by A/D (analog-to-digital) conversion into a digital image signal, and finally processed by the DSP, which encodes the signal into images of a specific format and transmits them to the unmanned driving client system. The client system runs image-recognition algorithms to classify and identify the various obstacle targets around the locomotive and to recognize the track lines, accomplishing tasks such as lane-line detection, traffic-sign recognition, and pedestrian/vehicle recognition; the captured video can be displayed in real time in the client system.
The IMU (inertial measurement unit) consists of three single-axis accelerometers and three single-axis gyroscopes. The accelerometers measure the object's three-axis acceleration in the navigation frame, while the gyroscopes measure the carrier's angular velocity relative to the navigation frame; after signal processing, the object's attitude is obtained. Sensor information fusion is the process of automatically analyzing and synthesizing information and data from multiple sensors or sources under given criteria to support the required decisions and estimates. Multi-sensor information fusion is an information-processing technique covering multiple sensors, possibly of different kinds and at different locations. By the level of abstraction at which information is processed, fusion divides into three levels: data-level, feature-level, and decision-level fusion. By how the raw data are processed, the architecture of a multi-sensor fusion system can be centralized, distributed, or hybrid.
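As a minimal illustration of combining the two IMU sensor types, the sketch below estimates a single pitch angle with a complementary filter: the gyroscope rate is integrated (accurate short-term but drifting), and the accelerometer's gravity reading slowly corrects the drift. This is a generic textbook filter offered for illustration, not the fusion algorithm of this disclosure, and the blending factor is an arbitrary choice.

```python
import math


def complementary_pitch(prev_pitch_rad: float,
                        gyro_rate_rad_s: float,
                        accel_x: float, accel_z: float,
                        dt: float, alpha: float = 0.98) -> float:
    """One filter step: blend the integrated gyro rate with the
    accelerometer's gravity-based pitch estimate."""
    gyro_pitch = prev_pitch_rad + gyro_rate_rad_s * dt   # short-term, drifts
    accel_pitch = math.atan2(accel_x, accel_z)           # long-term reference
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch


# Stationary, level carrier: gyro reads 0, accelerometer sees gravity on Z.
# Start with a 0.1 rad attitude error; the accelerometer term removes it.
pitch = 0.1
for _ in range(200):
    pitch = complementary_pitch(pitch, 0.0, 0.0, 9.81, dt=0.01)
print(pitch)   # converges toward 0 as the drift is corrected
```

The same structure extends to roll and yaw; full six-axis fusion typically uses a Kalman filter instead, but the drift-correction idea is identical.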
By fusing data from the millimeter-wave radar, on-board camera, IMU, and other sensors described above, the present application achieves not only unmanned locomotive operation, track obstacle recognition, and collision warning, but also high-precision positioning of the locomotive.
When fusing camera and radar data, the camera data are primary and the millimeter-wave radar is auxiliary: each target point returned by the radar is projected onto the image, and a rectangular region of interest is generated around that point using prior knowledge; object detection is then performed only within this region. Through this camera-radar data fusion, the present application quickly rules out large areas that cannot contain targets, greatly increasing recognition speed.
When fusing data from multiple sensors, a single host provides each sensor with a reference time; each sensor timestamps its independently acquired data according to its calibrated clock, so that all sensor timestamps are synchronized, and the measurements from the different sensor coordinate frames are then transformed into a common coordinate system:
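The region-of-interest step can be sketched in two parts: project the radar target into the image with a pinhole model, then size a rectangle around the projected point using prior knowledge of typical target dimensions. The camera intrinsics and the ROI sizing rule below are hypothetical placeholders.

```python
def project_to_pixel(Xc: float, Yc: float, Zc: float,
                     fx: float = 800.0, fy: float = 800.0,
                     u0: float = 640.0, v0: float = 360.0):
    """Pinhole projection of a radar point (camera coordinates, Z forward)
    to pixel coordinates; fx, fy, u0, v0 are placeholder intrinsics."""
    return (fx * Xc / Zc + u0, fy * Yc / Zc + v0)


def roi_around(u: float, v: float, distance_m: float,
               obj_width_m: float = 1.0, obj_height_m: float = 2.0,
               fx: float = 800.0, fy: float = 800.0):
    """Rectangle of interest centered on the projected radar point.
    Apparent size shrinks with distance: pixels = f * meters / distance."""
    w = fx * obj_width_m / distance_m
    h = fy * obj_height_m / distance_m
    return (u - w / 2, v - h / 2, u + w / 2, v + h / 2)  # left, top, right, bottom


# Radar target 2 m left of center at 20 m range
u, v = project_to_pixel(-2.0, 0.0, 20.0)
print((u, v))                  # (560.0, 360.0)
print(roi_around(u, v, 20.0))  # detection then runs only inside this box
```

Because detection runs only inside such boxes rather than over the whole frame, most of the image is skipped, which is the speed-up the paragraph above describes.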
Pixel coordinate system: its origin is the top-left vertex of the CCD image plane, with the U and V axes parallel to the X and Y axes of the image coordinate system; coordinates are written (u, v). The image captured by the on-board camera first takes the form of a standard electrical signal and is then converted into a digital image by analog-to-digital conversion. Each image is stored as an M×N array, where the value of each element of the M-row, N-column image represents the gray level of that image point. Each such element is called a pixel, and the pixel coordinate system is the image coordinate system expressed in pixel units.
Image coordinate system: its origin is the center of the CCD image plane, with the X and Y axes parallel to two perpendicular edges of the image plane; coordinates are written (x, y). The image coordinate system expresses pixel positions in physical units (for example, millimeters).
Camera coordinate system: its origin is the camera's optical center, with the X' and Y' axes parallel to the X and Y axes of the image coordinate system and the camera's optical axis as the Z' axis; coordinates are written (Xc, Yc, Zc).
The conversion between the camera coordinate system and the image coordinate system is the perspective projection x = f·Xc/Zc, y = f·Yc/Zc, where f is the focal length of the camera.
The conversion between the image coordinate system and the pixel coordinate system (shown in Fig. 4) is u = x/dx + u0, v = y/dy + v0, where dx and dy are the physical dimensions of one pixel along the X and Y axes and (u0, v0) is the pixel coordinate of the principal point.
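The two conversions above can be chained directly in code. The focal length, pixel pitch, and principal point below are placeholder values for illustration, not calibration data from this disclosure.

```python
def camera_to_image(Xc: float, Yc: float, Zc: float, f: float = 0.004):
    """Camera coordinates (m) -> image-plane coordinates (m):
    perspective projection x = f*Xc/Zc, y = f*Yc/Zc."""
    return (f * Xc / Zc, f * Yc / Zc)


def image_to_pixel(x: float, y: float,
                   dx: float = 2e-6, dy: float = 2e-6,
                   u0: float = 640.0, v0: float = 360.0):
    """Image-plane coordinates (m) -> pixel coordinates:
    u = x/dx + u0, v = y/dy + v0, with dx, dy the pixel pitch."""
    return (x / dx + u0, y / dy + v0)


# A point 1 m right and 0.5 m below the optical axis, 10 m in front of the camera
x, y = camera_to_image(1.0, 0.5, 10.0)  # sensor-plane position in meters
u, v = image_to_pixel(x, y)
print((u, v))                            # approximately (840.0, 460.0)
```

Chaining the two functions gives the camera-to-pixel mapping used when radar points or map landmarks are projected into the image.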
Positioning and navigation: by implementation technology, high-precision positioning in automatic driving falls into three categories:
a) signal-based positioning (GNSS), i.e., global navigation satellite systems;
b) dead reckoning with an IMU (inertial measurement unit), which infers the current position and heading from the position and heading at the previous instant;
c) environmental-feature matching, i.e., radar/stereo-vision-based positioning, which matches observed features against features stored in a database to obtain the vehicle's current position and attitude.
The high-precision positioning of the unmanned vehicle in this application uses a fusion approach:
a) integrated navigation based on fusing GPS and IMU sensors;
b) matching-based positioning of radar point-cloud features against a high-precision map;
c) camera-based road-feature recognition as the primary source, assisted by GPS positioning;
d) absolute positioning (GNSS) plus relative positioning (IMU plus environmental-feature matching).
The IMU sensor provides a rough position estimate; the pre-prepared high-precision map is then matched against the radar point-cloud image and the features of the on-board camera images, i.e., they are registered in a common coordinate frame, and the vehicle position is confirmed once matching succeeds.
The specific steps are as follows: first, the on-board camera captures images of the surroundings of the unmanned vehicle and image features are detected, the main targets being lane lines and pole-like objects; next, the corresponding lane-line and pole elements are extracted from the pre-acquired 3D map; the initial positions of the locomotive, lane lines, and poles are then obtained from GPS, and using this position the computing subsystem performs a first match between the image features from the camera and the corresponding features in the map; the IMU sensor is then used to predict the locomotive's attitude, and after matching the result is output, yielding the locomotive's position relative to the map and its heading. The fused high-precision positioning of this application achieves active positioning of the locomotive with an accuracy within 30 cm, and the resulting position is reflected in real time in the unmanned driving client system as travel direction, locomotive number, position, status, and so on.
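The matching pipeline just described, a rough GPS/IMU prior refined by associating observed landmarks with map landmarks, can be condensed into a toy 2-D sketch. The nearest-neighbor matching rule, the gating distance, and all coordinates below are invented for illustration; a real system registers full radar point clouds and camera features against a 3D map.

```python
import math


def match_features(observed, map_features, gate_m=1.0):
    """Associate each observed landmark with the nearest map landmark
    within a gating distance; unmatched observations are dropped."""
    pairs = []
    for ox, oy in observed:
        best = min(map_features, key=lambda m: math.hypot(m[0] - ox, m[1] - oy))
        if math.hypot(best[0] - ox, best[1] - oy) <= gate_m:
            pairs.append(((ox, oy), best))
    return pairs


def correct_position(prior_xy, pairs):
    """Shift the rough IMU/GPS prior by the mean observed-vs-map offset."""
    if not pairs:
        return prior_xy
    dx = sum(m[0] - o[0] for o, m in pairs) / len(pairs)
    dy = sum(m[1] - o[1] for o, m in pairs) / len(pairs)
    return (prior_xy[0] + dx, prior_xy[1] + dy)


# Pole positions from the pre-acquired map vs. poles seen by the camera,
# all offset by the same 0.3 m error in the rough prior.
map_poles = [(0.0, 5.0), (10.0, 5.0), (20.0, 5.0)]
seen_poles = [(-0.3, 5.0), (9.7, 5.0), (19.7, 5.0)]
print(correct_position((100.0, 50.0), match_features(seen_poles, map_poles)))
# the consistent 0.3 m offset is recovered and applied to the prior
```

Here the residual between matched pairs directly yields the correction; production systems solve the same association-then-alignment problem with ICP or a pose-graph optimizer.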
Embodiment 2
This embodiment provides a control method for an unmanned driving system for mine electric locomotives, comprising the following steps:
acquiring the attitude, heading, speed, and acceleration of the electric locomotive as well as its position;
fusing the above data and solving the spatial coordinates of the electric locomotive in real time;
perceiving the environment around the locomotive's track in real time and computing the distance between the locomotive and surrounding obstacles, so as to achieve automatic obstacle-avoidance driving and collision warning.
Fusing the data includes: the unmanned driving client system provides each sensor with a reference time; each sensor timestamps its independently acquired data according to its calibrated clock, so that all sensor timestamps are synchronized; and the measurements from the different sensor coordinate frames are transformed into a common coordinate system.
High-precision fused positioning includes: using the IMU to make a rough position estimate, then matching a preset high-precision map against the radar point-cloud image and the on-board camera image features, i.e., registering them in a common coordinate frame, and confirming the locomotive's position once matching succeeds.
Perceiving the environment around the locomotive's track in real time includes scanning the locomotive's surroundings with the on-board camera. The camera integrates a DSP digital signal processing chip and a CMOS sensor; the lens forms an optical image on the image sensor, where the optical signal is converted into an electrical signal, digitized by A/D conversion, and processed by the DSP, which encodes the signal into images of a specific format and transmits them to the unmanned driving client system.
The unmanned driving client system receives the data transmitted by the on-board camera, implementing lane-line detection, traffic-sign recognition, and pedestrian/vehicle recognition for the locomotive; the captured video can also be displayed in real time in the client system.
The above are merely preferred embodiments of the present disclosure and are not intended to limit it; those skilled in the art may make various modifications and variations to the present disclosure. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall fall within its protection scope.
Although the specific embodiments of the present disclosure have been described above with reference to the drawings, they do not limit its protection scope. Those skilled in the art should understand that, on the basis of the technical solutions of the present disclosure, various modifications or variations that can be made without inventive effort remain within the protection scope of the present disclosure.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211121498.8A | 2022-09-15 | 2022-09-15 | A mine electric locomotive unmanned driving system and control method |
| Publication Number | Publication Date |
|---|---|
| CN115685989A | 2023-02-03 |
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN105489035A * | 2015-12-29 | 2016-04-13 | 大连楼兰科技股份有限公司 | Method for detecting traffic lights applied in active driving technology |
| CN106767853A * | 2016-12-30 | 2017-05-31 | 中国科学院合肥物质科学研究院 | High-precision positioning method for autonomous vehicles based on multi-information fusion |
| CN107024216A * | 2017-03-14 | 2017-08-08 | 重庆邮电大学 | Intelligent vehicle fusion positioning system and method incorporating a panoramic map |
| CN108445885A * | 2018-04-20 | 2018-08-24 | 鹤山东风新能源科技有限公司 | Automated driving system based on a pure electric logistics vehicle and control method thereof |
| CN109405824A * | 2018-09-05 | 2019-03-01 | 武汉契友科技股份有限公司 | Multi-source perception and positioning system suitable for intelligent connected vehicles |
| CN110409550A * | 2019-07-29 | 2019-11-05 | 湖南大学 | Fully automatic underground mining scraper |
| CN110456745A * | 2019-07-29 | 2019-11-15 | 湖南大学 | Fully automatic underground mining haulage system |
| CN113359171A * | 2021-05-17 | 2021-09-07 | 交控科技股份有限公司 | Positioning method and device based on multi-sensor fusion, and electronic equipment |
| CN113359752A * | 2021-06-24 | 2021-09-07 | 中煤科工开采研究院有限公司 | Automatic driving method for underground coal mine skip car |
| CN114413881A * | 2022-01-07 | 2022-04-29 | 中国第一汽车股份有限公司 | Method and device for constructing high-precision vector map, and storage medium |

\* Cited by examiner
| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |