CN112068574A - A control method and system for an unmanned vehicle in a dynamic complex environment - Google Patents


Info

Publication number
CN112068574A
CN112068574A (application CN202011119650.XA)
Authority
CN
China
Prior art keywords
vehicle
traffic light
unmanned vehicle
computer controller
lidar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011119650.XA
Other languages
Chinese (zh)
Inventor
高洪波
李陈畅
李智军
朱菊萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China (USTC)
Priority to CN202011119650.XA
Publication of CN112068574A
Legal status: Pending


Abstract

Translated from Chinese

The invention provides a control system and method for an unmanned vehicle in a dynamic, complex environment, comprising: transmitting point cloud data from an environment perception system to a computer controller, where obstacle detection, segmentation, and recognition are completed by deep learning; transmitting images captured by the environment perception system to the computer controller to obtain the precise traffic-light box position, and recognizing the traffic-light color from that position; the environment perception system transmits the obstacle detection and recognition data, together with the traffic-light box position and color data, to the computer controller, which computes the minimum safe distance for collision avoidance; the vehicle's target speed and angular velocity are input to the computer controller, which generates an obstacle-avoidance path from the minimum safe distance through the unmanned-vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point. The invention helps unmanned vehicles perceive and operate in dynamic, complex environments, realizing unmanned driving and intelligent transportation.

Description

A control method and system for an unmanned vehicle in a dynamic, complex environment

Technical Field

The present invention relates to vehicle engineering, and in particular to a control method and system for an unmanned vehicle in a dynamic, complex environment; more specifically, it relates to a solution for how an unmanned vehicle perceives, controls, and plans routes in such an environment.

Background Art

Advances in vehicle technology have made autonomous driving a research hotspot at home and abroad in recent years, and it has become one of the important current research directions. Autonomous driving technology is developed on the basis of robotics combined with vehicle engineering, draws on interdisciplinary fields such as artificial intelligence and computer vision, and can play a very important role in logistics and distribution, shared mobility, public transportation, sanitation, and other areas. A driverless car has three advantages over a human-driven one. First, it improves the system's reaction speed to the environment. Human perception of changes in the surroundings has limited positional accuracy: while driving, a person can only roughly estimate the position and speed of obstacles, whereas the onboard sensors of a driverless smart vehicle can greatly improve the precision of environmental information. Second, a driver's response to external changes is slow, and human reaction time is further affected by weather, age, mood, and similar factors; fatigue makes it longer still. The braking response time of a driverless smart car, by contrast, is generally a fixed value of no more than 0.3 seconds, and external factors influence it far less than they influence a human. Third, autonomous driving offers passengers a comfortable, relaxed environment: with no need to grip the steering wheel and worry about accidents, people can spend their time and energy on other things.

Patent document CN108520559A (application number 201810299122.3) discloses a method for UAV positioning and navigation based on binocular vision: left and right image views and camera parameters from the binocular camera of the UAV's onboard control system yield rectified left and right views, from which the depth of each corresponding pixel is obtained; key points are extracted from the left view and filtered; matching key-point sets are then found in the current frame by optical-flow tracking to obtain matched key-point pairs; a cost function computed from these pairs gives the final pose; finally, the input image sequence is screened for key frames, a joint cost function is built from the key frames' key-point sets and poses, and optimizing this cost function yields the updated pose.

SUMMARY OF THE INVENTION

In view of the defects of the prior art, the purpose of the present invention is to provide a control system and method for an unmanned vehicle in a dynamic, complex environment.

According to the present invention, a control system for an unmanned vehicle in a dynamic, complex environment comprises:

Module M1: transmit the point cloud data from the environment perception system to the computer controller, and complete obstacle detection, segmentation, and recognition by deep learning;

Module M2: transmit the images captured by the environment perception system to the computer controller to obtain the precise traffic-light box position, and recognize the traffic-light color from that position;

Module M3: the environment perception system transmits the obstacle detection and recognition data, together with the traffic-light box position and color data, to the computer controller, which computes the minimum safe distance for collision avoidance;

Module M4: from the computed minimum safe distance, the computer controller computes the vehicle's target angular velocity and speed;

Module M5: the vehicle's target speed and angular velocity are input to the computer controller, which generates an obstacle-avoidance path from the minimum safe distance through the unmanned-vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;

The unmanned-vehicle dynamics model covers the vehicle's drivability, braking performance, ride comfort, and stability, and describes the relationship between the vehicle's mass and applied forces and its motion.

Preferably, module M1 includes:

The environment perception system includes an ultrasonic radar 1, a millimeter-wave radar 2, a lidar 6, and cameras;

The computer controller 9 includes an inertial measurement unit 8 and a central processing unit and graphics processor 10;

The cameras include a front-facing camera 3, side-facing cameras 4, and a rear-facing camera 5;

The lidar 6 is mounted on the vehicle roof;

Module M1.1: based on the lidar 6 point cloud data, the central processing unit and graphics processor 10 learn point-cloud features with a convolutional neural network model, predict obstacle attributes, and segment obstacles according to those attributes, thereby detecting and recognizing obstacles;
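The patent realizes module M1.1 with a learned convolutional network; as a simplified, hypothetical stand-in for the segmentation step only, the sketch below groups 2D lidar points into obstacle clusters by flood-filling adjacent occupied cells of an occupancy grid (the cell size and function names are illustrative, not from the patent).

```python
from collections import deque

def segment_obstacles(points, cell=0.5):
    """Group 2D lidar points (x, y) into clusters of mutually adjacent occupied grid cells."""
    grid = {}
    for p in points:
        key = (int(p[0] // cell), int(p[1] // cell))
        grid.setdefault(key, []).append(p)

    clusters, seen = [], set()
    for start in grid:
        if start in seen:
            continue
        queue, members = deque([start]), []
        seen.add(start)
        while queue:                      # breadth-first flood fill over the 8-neighborhood
            cx, cy = queue.popleft()
            members.extend(grid[(cx, cy)])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in grid and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        clusters.append(members)          # one cluster = one candidate obstacle
    return clusters
```

Each returned cluster is a candidate obstacle whose bounding box and centroid can then be handed to the recognition stage.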

Module M1.2: based on the millimeter-wave radar 2 point cloud data, the central processing unit and graphics processor 10 process the data to detect and recognize obstacles;

Module M1.3: the central processing unit and graphics processor 10 use a lidar fusion algorithm to fuse the obstacle recognition results of the lidar 6 and the millimeter-wave radar 2, so that the unmanned vehicle detects obstacle positions accurately;

The lidar fusion algorithm mainly performs management and matching of single-sensor results and fused results, and Kalman-filter-based fusion of obstacle velocities.
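The Kalman-filter velocity fusion can be illustrated by its scalar measurement-update step: two noisy velocity estimates of the same obstacle (e.g. one from lidar tracking, one from radar) are combined through the Kalman gain, and the fused variance is smaller than either input's. This is a minimal sketch of the update equation, not the patent's full tracker.

```python
def fuse_velocity(v_lidar, var_lidar, v_radar, var_radar):
    """One Kalman measurement update fusing two noisy velocity estimates.

    Treats the lidar estimate as the prior and the radar estimate as the
    measurement; equivalent to inverse-variance weighting of the two.
    """
    k = var_lidar / (var_lidar + var_radar)   # Kalman gain
    v = v_lidar + k * (v_radar - v_lidar)     # fused velocity estimate
    var = (1.0 - k) * var_lidar               # fused variance (always <= both inputs)
    return v, var
```

With equal variances the result is the simple average; a noisier sensor is automatically down-weighted.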

Preferably, the inertial measurement unit 8 measures the angular velocity and acceleration of the unmanned vehicle; it is mounted near the vehicle's center of gravity.

Preferably, module M2 includes:

The environment perception system includes an ultrasonic radar 1, a millimeter-wave radar 2, a lidar 6, and cameras;

The cameras include a front-facing camera 3, a preset number of side-facing cameras 4, and a rear-facing camera 5;

The lidar 6 is mounted on the vehicle roof;

The camera of the environment perception system selects a region of interest outside the projection region and runs traffic-light detection to obtain the precise traffic-light box position; the light's color is recognized from that position to obtain its current state; from the single-frame states, a temporal filtering and correction algorithm further confirms the final state of the traffic light.
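A single-frame color result can flicker under occlusion or glare; a minimal sketch of the temporal filtering-and-correction idea (the window size and class name are illustrative assumptions, not from the patent) is a majority vote over the last few frames:

```python
from collections import Counter, deque

class LightStateFilter:
    """Confirm the traffic-light state from a sequence of single-frame colors.

    The reported state is the majority color over the last `window` frames,
    so a one-frame misdetection cannot flip the output.
    """
    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, frame_color):
        self.history.append(frame_color)
        return Counter(self.history).most_common(1)[0][0]
```

The output only changes after the new color dominates the window, trading a few frames of latency for stability.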

Preferably, module M3 includes: comparing the vehicle's own information with external obstacle information to compute the minimum safe distance the unmanned vehicle requires when facing obstacles in a complex environment;

The vehicle's own information includes the vehicle's dimensions and dynamics-model information; the dynamics-model information includes the vehicle's acceleration, speed, and angular velocity; the image currently captured by the camera is compared with the next frame's image to perform loop-closure detection, realizing mapping and localization of the vehicle;

The external obstacle information includes the obstacle detection and recognition results of the environment perception system.
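A hedged sketch of the minimum-safe-distance computation in module M3, for the simplest case of a stationary obstacle ahead: reaction distance plus braking distance plus a clearance margin. The braking deceleration and margin are illustrative values; only the 0.3 s brake response time is taken from the document.

```python
def min_safe_distance(v, t_react=0.3, a_brake=6.0, margin=2.0):
    """Minimum safe distance (m) to a stationary obstacle.

    v       -- current vehicle speed, m/s
    t_react -- brake response time, s (the document cites at most 0.3 s)
    a_brake -- assumed braking deceleration, m/s^2 (illustrative)
    margin  -- extra clearance, m (illustrative)
    """
    reaction = v * t_react            # distance covered before brakes act
    braking = v * v / (2.0 * a_brake) # distance covered while decelerating to 0
    return reaction + braking + margin
```

Because the braking term grows with the square of the speed, the required gap increases sharply at highway speeds.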

Preferably, the unmanned-vehicle dynamics model in module M5 is:

$$mu\left(\frac{d\beta}{dt}+w_r\right)=(k_1+k_2)\beta+\frac{1}{u}\left(ak_1-bk_2\right)w_r-k_1\delta$$

$$I_z\frac{dw_r}{dt}=\left(ak_1-bk_2\right)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta$$

where $k_1$, $k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total vehicle mass, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front-wheel steering angle, $I_z$ is the yaw moment of inertia of the vehicle body, $\beta$ is the sideslip angle at the center of mass, $w_r$ is the yaw rate, $u$ is the forward speed, $u\left(\frac{d\beta}{dt}+w_r\right)$ is the lateral acceleration, and $\frac{dw_r}{dt}$ is the yaw angular acceleration.

Preferably, module M5 further includes a behavior decision output system and a vehicle control system;

In the behavior decision output system, the electric chassis controls the unmanned vehicle's wheel speeds and steering according to the angular velocity and speed computed by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller sends a command to the chassis to swerve or stop at a steady deceleration;

The vehicle control system uses a PID control algorithm to track the target path, ensuring that the vehicle follows the given path waypoints.
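The PID path-tracking loop can be sketched as a minimal controller acting on the cross-track error each control cycle; the gains and time step below are illustrative assumptions, not values given in the patent.

```python
class PID:
    """Minimal discrete PID controller for path tracking."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        """Return the control output for the current tracking error."""
        self.integral += err * self.dt                 # accumulate error (I term)
        deriv = (err - self.prev_err) / self.dt        # error rate (D term)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

For example, feeding the vehicle's lateral offset from the target path into `update` each cycle yields a steering correction that drives the offset toward zero.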

According to the present invention, a control method for an unmanned vehicle in a dynamic, complex environment comprises:

Step M1: transmit the point cloud data from the environment perception system to the computer controller, and complete obstacle detection, segmentation, and recognition by deep learning;

Step M2: transmit the images captured by the environment perception system to the computer controller to obtain the precise traffic-light box position, and recognize the traffic-light color from that position;

Step M3: the environment perception system transmits the obstacle detection and recognition data, together with the traffic-light box position and color data, to the computer controller, which computes the minimum safe distance for collision avoidance;

Step M4: from the computed minimum safe distance, the computer controller computes the vehicle's target angular velocity and speed;

Step M5: the vehicle's target speed and angular velocity are input to the computer controller, which generates an obstacle-avoidance path from the minimum safe distance through the unmanned-vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;

The unmanned-vehicle dynamics model covers the vehicle's drivability, braking performance, ride comfort, and stability, and describes the relationship between the vehicle's mass and applied forces and its motion.

Preferably, step M1 includes:

The environment perception system includes an ultrasonic radar 1, a millimeter-wave radar 2, a lidar 6, and cameras;

The computer controller 9 includes an inertial measurement unit 8 and a central processing unit and graphics processor 10;

The cameras include a front-facing camera 3, side-facing cameras 4, and a rear-facing camera 5;

The lidar 6 is mounted on the vehicle roof;

Step M1.1: based on the lidar 6 point cloud data, the central processing unit and graphics processor 10 learn point-cloud features with a convolutional neural network model, predict obstacle attributes, and segment obstacles according to those attributes, thereby detecting and recognizing obstacles;

Step M1.2: based on the millimeter-wave radar 2 point cloud data, the central processing unit and graphics processor 10 process the data to detect and recognize obstacles;

Step M1.3: the central processing unit and graphics processor 10 use a lidar fusion algorithm to fuse the obstacle recognition results of the lidar 6 and the millimeter-wave radar 2, so that the unmanned vehicle detects obstacle positions accurately;

The lidar fusion algorithm mainly performs management and matching of single-sensor results and fused results, and Kalman-filter-based fusion of obstacle velocities;

The inertial measurement unit 8 measures the angular velocity and acceleration of the unmanned vehicle; it is mounted near the vehicle's center of gravity.

Preferably, step M2 includes:

The environment perception system includes an ultrasonic radar 1, a millimeter-wave radar 2, a lidar 6, and cameras;

The cameras include a front-facing camera 3, a preset number of side-facing cameras 4, and a rear-facing camera 5;

The lidar 6 is mounted on the vehicle roof;

The camera of the environment perception system selects a region of interest outside the projection region and runs traffic-light detection to obtain the precise traffic-light box position; the light's color is recognized from that position to obtain its current state; from the single-frame states, a temporal filtering and correction algorithm further confirms the final state of the traffic light;

Step M3 includes: comparing the vehicle's own information with external obstacle information to compute the minimum safe distance the unmanned vehicle requires when facing obstacles in a complex environment;

The vehicle's own information includes the vehicle's dimensions and dynamics-model information; the dynamics-model information includes the vehicle's acceleration, speed, and angular velocity; the image currently captured by the camera is compared with the next frame's image to perform loop-closure detection, realizing mapping and localization of the vehicle;

The external obstacle information includes the obstacle detection and recognition results of the environment perception system;

The unmanned-vehicle dynamics model in step M5 is:

$$mu\left(\frac{d\beta}{dt}+w_r\right)=(k_1+k_2)\beta+\frac{1}{u}\left(ak_1-bk_2\right)w_r-k_1\delta$$

$$I_z\frac{dw_r}{dt}=\left(ak_1-bk_2\right)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta$$

where $k_1$, $k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total vehicle mass, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front-wheel steering angle, $I_z$ is the yaw moment of inertia of the vehicle body, $\beta$ is the sideslip angle at the center of mass, $w_r$ is the yaw rate, $u$ is the forward speed, $u\left(\frac{d\beta}{dt}+w_r\right)$ is the lateral acceleration, and $\frac{dw_r}{dt}$ is the yaw angular acceleration;

Step M5 further includes a behavior decision output system and a vehicle control system;

In the behavior decision output system, the electric chassis controls the unmanned vehicle's wheel speeds and steering according to the angular velocity and speed computed by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller sends a command to the chassis to swerve or stop at a steady deceleration;

The vehicle control system uses a PID control algorithm to track the target path, ensuring that the vehicle follows the given path waypoints.

Compared with the prior art, the present invention has the following beneficial effects:

1. The present invention designs an unmanned vehicle that operates in a dynamic, complex environment. Compared with traditional unmanned vehicles based on binocular vision, it adopts a fusion scheme of lidar, millimeter-wave radar, and cameras, enabling accurate perception of the surroundings, so operation is no longer limited to clear weather and obstacle detection achieves high accuracy;

2. The unmanned vehicle designed by the present invention can detect the positions of traffic lights on the road and their flashing states, which broadens the vehicle's applicable scenarios beyond simple ones such as campus logistics;

3. The computer controller designed in the present invention is equipped with a computer processor and a GTX1080 graphics processor; the ample computing resources allow the unmanned vehicle to compute surrounding obstacles and react quickly;

4. The decision output system designed in the present invention executes the computer controller's commands and, through the dynamic characteristics of the unmanned vehicle, can stably perform acceleration, deceleration, braking, turning, and similar tasks with high robustness.

Brief Description of the Drawings

Other features, objects, and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments with reference to the accompanying drawings:

Figure 1 is a schematic diagram of the overall structure of an unmanned vehicle in a dynamic, complex environment;

Figure 2 is a schematic diagram of the two-degree-of-freedom unmanned-vehicle model;

Figure 3 is a schematic diagram of the lidar;

Figure 4 is a framework diagram of the unmanned-vehicle system;

Figure 5 is the overall framework of the unmanned-vehicle software;

Figure 6 is a diagram of the detailed architecture and relationships of the unmanned-vehicle system;

In the figures, 1 is the ultrasonic radar, 2 the millimeter-wave radar, 3 the front-facing camera, 4 the side-facing cameras, 5 the rear-facing camera, 6 the lidar, 7 the vehicle body including the electric chassis, 8 the inertial measurement unit, 9 the computer controller, and 10 the central processing unit and graphics processor.

Detailed Description of the Embodiments

The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art further understand the present invention, but do not limit it in any form. It should be noted that those of ordinary skill in the art can make several changes and improvements without departing from the inventive concept, all of which fall within the protection scope of the present invention.

The present invention provides a solution for an unmanned vehicle in a dynamic, complex environment; the vehicle integrates environment perception, vehicle control, and path-planning technologies.

Embodiment 1

According to the present invention, a control system for an unmanned vehicle in a dynamic, complex environment, as shown in Figures 1 and 4, includes an environment perception system, a computer controller, and a behavior decision output system; as shown in Figures 5-6:

Module M1: transmit the point cloud data from the environment perception system to the computer controller, and complete obstacle detection, segmentation, and recognition by deep learning;

Module M2: transmit the images captured by the environment perception system to the computer controller to obtain the precise traffic-light box position, and recognize the traffic-light color from that position;

Module M3: the environment perception system transmits the obstacle detection and recognition data, together with the traffic-light box position and color data, to the computer controller, which computes the minimum safe distance for collision avoidance;

Module M4: from the computed minimum safe distance, the computer controller computes the vehicle's target angular velocity and speed;

Module M5: the vehicle's target speed and angular velocity are input to the computer controller, which generates an obstacle-avoidance path from the minimum safe distance through the unmanned-vehicle dynamics model, so that the vehicle autonomously completes the driving task and reaches the target point;

The unmanned-vehicle dynamics model covers the vehicle's drivability, braking performance, ride comfort, and stability, and describes the relationship between the vehicle's mass and applied forces and its motion.

Specifically, module M1 includes:

The environment perception system includes an ultrasonic radar 1, a millimeter-wave radar 2, a lidar 6, cameras, and a GPS antenna;

The millimeter-wave radar and ultrasonic radar are placed at the front of the vehicle;

The computer controller 9 includes an inertial measurement unit 8 and a central processing unit and graphics processor 10; with the central processing unit and graphics processor 10, the unmanned vehicle's current environment and obstacle situation can be solved quickly and decision results computed;

The cameras include a front-facing camera 3, a preset number of side-facing cameras 4, and a rear-facing camera 5; four side-facing cameras are fixed at the four corners of the vehicle body; the front-facing camera is placed above the front windshield, and the rear-facing camera is fixed above the rear windshield;

The lidar 6 is mounted on the vehicle roof;

Module M1.1: as shown in Figure 3, based on the lidar 6 point cloud data, the central processing unit and graphics processor 10 learn point-cloud features with an offline-trained convolutional neural network model, predict obstacle attributes, and segment obstacles according to those attributes, thereby detecting and recognizing obstacles;

Module M1.2: obstacle detection and recognition based on the point cloud data of the front-mounted millimeter-wave radar 2, mainly processing the radar's raw data through the central processing unit and graphics processor 10 to obtain obstacle results;

Module M1.3: the central processing unit and graphics processor 10 use a lidar fusion algorithm to fuse the obstacle recognition results of the lidar 6 and the millimeter-wave radar 2, so that the unmanned vehicle detects obstacle positions accurately;

The lidar fusion algorithm mainly performs management and matching of single-sensor results and fused results, and Kalman-filter-based fusion of obstacle velocities.

具体地，所述惯性测量单元8包括：惯性测量单元测量无人车三轴姿态角（或角速率）以及加速度来提高可靠性；所述惯性测量单元尽量安装在靠近无人车的重心上。Specifically, the inertial measurement unit 8 includes: the inertial measurement unit measures the three-axis attitude angles (or angular rates) and acceleration of the unmanned vehicle to improve reliability; the inertial measurement unit is installed as close as possible to the center of gravity of the unmanned vehicle.

具体地,所述模块M2包括:Specifically, the module M2 includes:

所述环境感知系统包括超声波雷达1、毫米波雷达2、激光雷达6和相机;The environment perception system includes anultrasonic radar 1, a millimeter-wave radar 2, a lidar 6 and a camera;

所述相机包括前向相机3、预设个侧向相机4和后向相机5;The camera includes a front-facingcamera 3, a preset side-facing camera 4 and a rear-facing camera 5;

所述激光雷达6安装在车顶上;The lidar 6 is installed on the roof;

根据环境感知系统中相机在投影区域外选取一个较大的感兴趣区域，运行红绿灯检测来获得精确的红绿灯框位置，并根据红绿灯框位置进行红绿灯的颜色识别，得到红绿灯当前的状态；根据单帧的红绿灯状态，通过时序的滤波矫正算法进一步确认红绿灯的最终状态。The camera in the environment perception system selects a larger region of interest beyond the projected area and runs traffic light detection within it to obtain a precise traffic light box position; the color of the traffic light is then identified according to the box position, giving the current state of the traffic light. Based on the single-frame traffic light state, a temporal filtering and correction algorithm further confirms the final state of the traffic light.
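The single-frame color recognition and the temporal correction step can be sketched as follows (hedged: a crude dominant-channel heuristic and a majority vote over a sliding window, stand-ins for the patent's actual detector and filtering algorithm):

```python
from collections import Counter, deque

def classify_light(roi_pixels):
    """Classify a detected traffic-light box by its dominant color channel.
    roi_pixels: list of (r, g, b) tuples sampled inside the detected box.
    A crude stand-in for the patent's color-recognition step."""
    r = sum(p[0] for p in roi_pixels)
    g = sum(p[1] for p in roi_pixels)
    if r > 1.2 * g:
        return "red"
    if g > 1.2 * r:
        return "green"
    return "yellow"

class TemporalFilter:
    """Confirm the final light state by majority vote over recent frames,
    smoothing out single-frame misdetections (illustrative correction step)."""
    def __init__(self, window=5):
        self.states = deque(maxlen=window)
    def update(self, state):
        self.states.append(state)
        return Counter(self.states).most_common(1)[0][0]
```

A single misdetected "green" frame inside a run of "red" frames is voted away by the window, matching the stated purpose of the time-series correction.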

具体地,所述模块M3包括:通过车辆自身消息与外界障碍物信息进行比较,计算无人车在复杂环境中面对障碍物所需的最小安全距离;Specifically, the module M3 includes: calculating the minimum safe distance required for the unmanned vehicle to face obstacles in a complex environment by comparing the vehicle's own information with the external obstacle information;

所述车辆自身消息包括车辆的尺寸信息、动力学模型信息；所述动力学模型信息包括汽车加速度、速度和角速度；相机当前采集到的图片信息，与下一帧的图片信息作比较完成闭环检测，实现汽车的建图与定位；The vehicle's own information includes the vehicle's size information and dynamics model information; the dynamics model information includes the vehicle's acceleration, velocity and angular velocity. The picture currently captured by the camera is compared with the picture of the next frame to complete loop-closure detection, realizing mapping and localization of the vehicle;

所述外界障碍物信息包括环境感知系统的障碍物检测识别。The external obstacle information includes the obstacle detection and identification of the environment perception system.
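The minimum safe distance computed by module M3 can be sketched with a common kinematic formulation (an assumed formulation for illustration; the patent does not state its exact formula): the ego vehicle travels at its current speed during a reaction delay, then both the ego vehicle and the obstacle brake at their maximum decelerations.

```python
def min_safe_distance(v_ego, v_obs, a_ego, a_obs, t_react=0.5, margin=2.0):
    """Kinematic minimum safe following distance (assumed formulation).
    v_ego, v_obs: current speeds (m/s); a_ego, a_obs: maximum decelerations
    (m/s^2, positive); t_react: reaction delay (s); margin: buffer (m)."""
    d_ego = v_ego * t_react + v_ego**2 / (2.0 * a_ego)  # ego stopping distance
    d_obs = v_obs**2 / (2.0 * a_obs)                    # obstacle stopping distance
    return max(margin, d_ego - d_obs + margin)
```

For an ego vehicle at 15 m/s approaching a stationary obstacle with 6 m/s² braking, this gives 15·0.5 + 15²/12 + 2 = 28.25 m; all parameter values here are illustrative assumptions.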

具体地,如图2所示,所述模块M5中无人车动力学模型包括:Specifically, as shown in Figure 2, the unmanned vehicle dynamics model in the module M5 includes:

$$(k_1+k_2)\beta+\frac{1}{u}(ak_1-bk_2)w_r-k_1\delta=mu\left(\dot\beta+w_r\right)$$

$$(ak_1-bk_2)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta=I_z\dot w_r$$

其中，k1、k2为前后轮的侧偏刚度，m为汽车总质量，a为车辆重心到前轴的距离，b为车辆重心到后轴的距离，δ为前轮转角，Iz为汽车车体转动惯量；β表示质心侧偏角，wr表示横摆角速度，u表示前进速度，$u(\dot\beta+w_r)$表示侧向加速度，$\dot w_r$表示横摆角加速度。Among them, k1 and k2 are the cornering stiffnesses of the front and rear wheels, m is the total mass of the vehicle, a is the distance from the vehicle's center of gravity to the front axle, b is the distance from the center of gravity to the rear axle, δ is the front wheel steering angle, and Iz is the moment of inertia of the vehicle body; β denotes the sideslip angle of the center of mass, wr the yaw rate, u the forward speed, $u(\dot\beta+w_r)$ the lateral acceleration, and $\dot w_r$ the yaw angular acceleration.
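Assuming the standard linear two-degree-of-freedom (bicycle) model implied by the listed symbols, the sideslip angle β and yaw rate wr can be integrated forward in time; all parameter values below are illustrative assumptions, with k1, k2 signed negative per the common textbook convention for cornering stiffness:

```python
def step_2dof(beta, wr, delta, u, dt,
              m=1500.0, Iz=2500.0, a=1.2, b=1.5,
              k1=-80000.0, k2=-90000.0):
    """One Euler step of the assumed linear 2-DOF vehicle model:
      m*u*(dbeta/dt + wr) = (k1+k2)*beta + (a*k1-b*k2)*wr/u - k1*delta
      Iz*dwr/dt           = (a*k1-b*k2)*beta + (a^2*k1+b^2*k2)*wr/u - a*k1*delta
    Parameters are illustrative, not taken from the patent."""
    fy = (k1 + k2) * beta + (a * k1 - b * k2) * wr / u - k1 * delta
    mz = (a * k1 - b * k2) * beta + (a * a * k1 + b * b * k2) * wr / u - a * k1 * delta
    dbeta = fy / (m * u) - wr
    dwr = mz / Iz
    return beta + dbeta * dt, wr + dwr * dt

# drive at u = 15 m/s with a small constant steering angle until steady state
beta, wr = 0.0, 0.0
for _ in range(2000):
    beta, wr = step_2dof(beta, wr, delta=0.02, u=15.0, dt=0.001)
```

After about two seconds of simulated time the states settle to a steady-state yaw rate, which is the quantity the controller compares against the target angular velocity.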

具体地,所述模块M5还包括:行为决策输出系统和车辆控制系统;Specifically, the module M5 further includes: a behavior decision output system and a vehicle control system;

所述行为决策输出系统包括执行计算机控制器输出的指令，无人车纯电动线电能底盘根据计算机控制器计算得到的角速度和速度控制无人车轮子速度，转向；当环境感知系统感知到前方有障碍物时，通过计算机控制器将命令告知底盘，让底盘以稳定的减速度避让或停下；底盘对应无人车动力学特性，易于稳定的PID控制，使乘客或者运送货物有安全舒适的环境，体验；The behavior decision output system executes the instructions output by the computer controller; the unmanned vehicle's pure electric drive-by-wire chassis controls the wheel speeds and steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller. When the environment perception system detects an obstacle ahead, the computer controller informs the chassis with a command so that the chassis avoids it or stops with a stable deceleration. The chassis matches the dynamic characteristics of the unmanned vehicle and lends itself to stable PID control, giving passengers or transported goods a safe and comfortable environment and experience;

所述行为决策输出系统硬件上讲，行为层由计算机处理器和图形图像处理器处理。行为决策系统需要根据感知层输出的信息合理决策出当前车辆的行为；指导轨迹规划模块规划出合适的路径、车速等信息，发送给控制层。In terms of hardware, the behavior layer of the behavior decision output system is handled by the computer processor and the graphics/image processor. The behavior decision system must reasonably decide the current behavior of the vehicle based on the information output by the perception layer, and guide the trajectory planning module to plan an appropriate path, vehicle speed and other information, which is sent to the control layer.

底盘对应无人车动力学特性,易于稳定控制,有较好的鲁棒性;The chassis corresponds to the dynamic characteristics of the unmanned vehicle, which is easy to stabilize and control, and has good robustness;

所述车辆控制系统包括采用PID控制算法对目标路径进行跟踪,保证车辆能够按照给定的路径信息点进行跟踪行驶,最终按照要求行驶完整的路径。The vehicle control system includes using a PID control algorithm to track the target path, so as to ensure that the vehicle can track and travel according to the given path information points, and finally travel the complete path as required.
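The PID tracking of the target path can be sketched as a discrete PID controller acting on the cross-track error (an illustrative sketch; the gains, time step and toy plant are assumptions, not values from the patent):

```python
class PID:
    """Discrete PID controller on the cross-track error for path tracking.
    Gains are illustrative assumptions."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0
        self.prev = None
    def __call__(self, error):
        self.i += error * self.dt                   # integral term
        d = 0.0 if self.prev is None else (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.ki * self.i + self.kd * d

# toy plant: the steering command directly changes the lateral offset rate,
# and the reference path is the straight line y = 0
pid = PID(kp=1.5, ki=0.1, kd=0.4, dt=0.05)
y = 2.0                      # start 2 m off the path
for _ in range(1000):
    y += -pid(y) * 0.05      # error equals y since the reference is y = 0
```

The lateral offset decays toward the path, which is the "tracking according to the given path information points" behavior described above.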

根据本发明提供的一种无人车在动态复杂环境中的控制方法,包括:环境感知系统、计算机控制器和行为决策输出系统;A control method for an unmanned vehicle in a dynamic and complex environment provided according to the present invention includes: an environment perception system, a computer controller and a behavior decision output system;

步骤M1:将环境感知系统的点云数据传入计算机控制器,通过深度学习的方法完成障碍物的检测、分割和识别;Step M1: transfer the point cloud data of the environmental perception system to the computer controller, and complete the detection, segmentation and identification of obstacles by means of deep learning;

步骤M2：根据环境感知系统拍摄的图像传入计算机控制器获得精确的红绿灯框位置，并根据红绿灯框位置进行红绿灯颜色识别；Step M2: the images captured by the environment perception system are transmitted to the computer controller to obtain the precise traffic light box position, and traffic light color recognition is performed according to the box position;

步骤M3:环境感知系统将障碍物检测识别数据与红绿灯框位置与颜色识别数据传入计算机控制器,计算得到避免碰撞的最小安全距离;Step M3: The environment perception system transmits the obstacle detection identification data and the traffic light frame position and color identification data to the computer controller, and calculates the minimum safe distance to avoid collision;

步骤M4:根据计算得到避免碰撞的最小安全距离,通过计算机控制器计算得到汽车目标角速度和速度;Step M4: obtain the minimum safe distance to avoid collision according to the calculation, and obtain the target angular velocity and speed of the vehicle through calculation by the computer controller;

步骤M5:将汽车目标车速、角速度输入计算机控制器,计算机控制器由通过无人车动力学模型根据最小安全距离生成避障路径,最终使车辆自主完成行驶任务到达目标点;Step M5: input the target vehicle speed and angular velocity of the vehicle into the computer controller, and the computer controller generates an obstacle avoidance path according to the minimum safe distance through the unmanned vehicle dynamics model, and finally enables the vehicle to independently complete the driving task and reach the target point;

所述无人车动力学模型是包括汽车的动力性、制动性、平顺性和稳定性,测量车辆的质量和受力情况与车辆运动之间的关系。The unmanned vehicle dynamics model includes the dynamics, braking, ride comfort and stability of the vehicle, and measures the relationship between the mass and force of the vehicle and the motion of the vehicle.

具体地,所述步骤M1包括:Specifically, the step M1 includes:

所述环境感知系统包括超声波雷达1、毫米波雷达2、激光雷达6、相机和GPS天线;The environment perception system includesultrasonic radar 1, millimeter wave radar 2, lidar 6, camera and GPS antenna;

车头放置毫米波雷达与超声波雷达;Millimeter-wave radar and ultrasonic radar are placed on the front of the vehicle;

所述计算机控制器9包括惯性测量单元8和中央处理器与图形处理器10；通过计算机中央处理器与图形处理器10可以较快求解无人车当前环境情况，障碍情况并计算出决策结果；The computer controller 9 includes an inertial measurement unit 8 and a central processing unit and graphics processor 10; through the central processing unit and graphics processor 10, the current environment and obstacle situation of the unmanned vehicle can be solved quickly and a decision result calculated;

所述相机包括前向相机3、预设个侧向相机4和后向相机5；四个侧向相机固定在无人车车身的四个角；汽车前挡风玻璃上方安置前向相机，后挡风玻璃上方固定后向相机；The cameras include a front-facing camera 3, a preset number of side-facing cameras 4 and a rear-facing camera 5; the four side-facing cameras are fixed at the four corners of the unmanned vehicle body; the front-facing camera is mounted above the front windshield of the car, and the rear-facing camera is fixed above the rear windshield;

所述激光雷达6安装在车顶上;The lidar 6 is installed on the roof;

步骤M1.1：基于所述激光雷达6点云数据，所述中央处理器与图形处理器10通过线下训练的卷积神经网络模型学习点云特征并预测障碍物的相关属性，根据障碍物的相关属性进行障碍物分割，从而对障碍物检测识别；Step M1.1: based on the point cloud data of the lidar 6, the central processing unit and graphics processor 10 learn point cloud features through an offline-trained convolutional neural network model and predict the relevant attributes of obstacles, then segment the obstacles according to these attributes, thereby detecting and identifying them;

步骤M1.2：基于所述车头的毫米波雷达2点云数据的障碍物检测识别，主要用来对毫米波雷达原始数据通过中央处理器与图形处理器10进行处理而得到障碍物结果；Step M1.2: obstacle detection and identification based on the point cloud data of the millimeter-wave radar 2 at the front of the vehicle, mainly used to process the raw millimeter-wave radar data through the central processing unit and graphics processor 10 to obtain obstacle results;

步骤M1.3：所述中央处理器与图形处理器10通过融合激光雷达算法，将所述激光雷达6和所述毫米波雷达2的障碍物识别结果进行融合，得到无人车精确检测障碍物的位置；Step M1.3: the central processing unit and graphics processor 10 fuse the obstacle recognition results of the lidar 6 and the millimeter-wave radar 2 through a lidar fusion algorithm, so that the unmanned vehicle precisely detects the positions of obstacles;

所述融合激光雷达算法主要进行了单传感器结果和融合结果的管理、匹配以及基于卡尔曼滤波的障碍物速度融合。The lidar fusion algorithm mainly performs management and matching of single-sensor results and fused results, as well as Kalman-filter-based obstacle velocity fusion.

具体地，所述惯性测量单元8包括：惯性测量单元测量无人车三轴姿态角（或角速率）以及加速度来提高可靠性；所述惯性测量单元尽量安装在靠近无人车的重心上。Specifically, the inertial measurement unit 8 includes: the inertial measurement unit measures the three-axis attitude angles (or angular rates) and acceleration of the unmanned vehicle to improve reliability; the inertial measurement unit is installed as close as possible to the center of gravity of the unmanned vehicle.

具体地,所述步骤M2包括:Specifically, the step M2 includes:

所述环境感知系统包括超声波雷达1、毫米波雷达2、激光雷达6和相机;The environment perception system includes anultrasonic radar 1, a millimeter-wave radar 2, a lidar 6 and a camera;

所述相机包括前向相机3、预设个侧向相机4和后向相机5;The camera includes a front-facingcamera 3, a preset side-facing camera 4 and a rear-facing camera 5;

所述激光雷达6安装在车顶上;The lidar 6 is installed on the roof;

根据环境感知系统中相机在投影区域外选取一个较大的感兴趣区域，运行红绿灯检测来获得精确的红绿灯框位置，并根据红绿灯框位置进行红绿灯的颜色识别，得到红绿灯当前的状态；根据单帧的红绿灯状态，通过时序的滤波矫正算法进一步确认红绿灯的最终状态。The camera in the environment perception system selects a larger region of interest beyond the projected area and runs traffic light detection within it to obtain a precise traffic light box position; the color of the traffic light is then identified according to the box position, giving the current state of the traffic light. Based on the single-frame traffic light state, a temporal filtering and correction algorithm further confirms the final state of the traffic light.

具体地,所述步骤M3包括:通过车辆自身消息与外界障碍物信息进行比较,计算无人车在复杂环境中面对障碍物所需的最小安全距离;Specifically, the step M3 includes: calculating the minimum safe distance required for the unmanned vehicle to face obstacles in a complex environment by comparing the vehicle's own information with the external obstacle information;

所述车辆自身消息包括车辆的尺寸信息、动力学模型信息；所述动力学模型信息包括汽车加速度、速度和角速度；相机当前采集到的图片信息，与下一帧的图片信息作比较完成闭环检测，实现汽车的建图与定位；The vehicle's own information includes the vehicle's size information and dynamics model information; the dynamics model information includes the vehicle's acceleration, velocity and angular velocity. The picture currently captured by the camera is compared with the picture of the next frame to complete loop-closure detection, realizing mapping and localization of the vehicle;

所述外界障碍物信息包括环境感知系统的障碍物检测识别。The external obstacle information includes the obstacle detection and identification of the environment perception system.

具体地,所述步骤M5中无人车动力学模型包括:Specifically, in the step M5, the unmanned vehicle dynamics model includes:

$$(k_1+k_2)\beta+\frac{1}{u}(ak_1-bk_2)w_r-k_1\delta=mu\left(\dot\beta+w_r\right)$$

$$(ak_1-bk_2)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta=I_z\dot w_r$$

其中，k1、k2为前后轮的侧偏刚度，m为汽车总质量，a为车辆重心到前轴的距离，b为车辆重心到后轴的距离，δ为前轮转角，Iz为汽车车体转动惯量；β表示质心侧偏角，wr表示横摆角速度，u表示前进速度，$u(\dot\beta+w_r)$表示侧向加速度，$\dot w_r$表示横摆角加速度。Among them, k1 and k2 are the cornering stiffnesses of the front and rear wheels, m is the total mass of the vehicle, a is the distance from the vehicle's center of gravity to the front axle, b is the distance from the center of gravity to the rear axle, δ is the front wheel steering angle, and Iz is the moment of inertia of the vehicle body; β denotes the sideslip angle of the center of mass, wr the yaw rate, u the forward speed, $u(\dot\beta+w_r)$ the lateral acceleration, and $\dot w_r$ the yaw angular acceleration.

具体地,所述步骤M5还包括:行为决策输出系统和车辆控制系统;Specifically, the step M5 further includes: a behavior decision output system and a vehicle control system;

所述行为决策输出系统包括执行计算机控制器输出的指令，无人车纯电动线电能底盘根据计算机控制器计算得到的角速度和速度控制无人车轮子速度，转向；当环境感知系统感知到前方有障碍物时，通过计算机控制器将命令告知底盘，让底盘以稳定的减速度避让或停下；底盘对应无人车动力学特性，易于稳定的PID控制，使乘客或者运送货物有安全舒适的环境，体验；The behavior decision output system executes the instructions output by the computer controller; the unmanned vehicle's pure electric drive-by-wire chassis controls the wheel speeds and steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller. When the environment perception system detects an obstacle ahead, the computer controller informs the chassis with a command so that the chassis avoids it or stops with a stable deceleration. The chassis matches the dynamic characteristics of the unmanned vehicle and lends itself to stable PID control, giving passengers or transported goods a safe and comfortable environment and experience;

所述行为决策输出系统硬件上讲，行为层由计算机处理器和图形图像处理器处理。行为决策系统需要根据感知层输出的信息合理决策出当前车辆的行为；指导轨迹规划模块规划出合适的路径、车速等信息，发送给控制层。In terms of hardware, the behavior layer of the behavior decision output system is handled by the computer processor and the graphics/image processor. The behavior decision system must reasonably decide the current behavior of the vehicle based on the information output by the perception layer, and guide the trajectory planning module to plan an appropriate path, vehicle speed and other information, which is sent to the control layer.

底盘对应无人车动力学特性,易于稳定控制,有较好的鲁棒性;The chassis corresponds to the dynamic characteristics of the unmanned vehicle, which is easy to stabilize and control, and has good robustness;

所述车辆控制系统包括采用PID控制算法对目标路径进行跟踪,保证车辆能够按照给定的路径信息点进行跟踪行驶,最终按照要求行驶完整的路径。The vehicle control system includes using a PID control algorithm to track the target path, so as to ensure that the vehicle can track and travel according to the given path information points, and finally travel the complete path as required.

实施例2Example 2

实施例2是实施例1的变化例。Example 2 is a modification of Example 1.

一种在动态复杂环境中运行的无人车,包括环境感知系统、计算机控制器和行为决策输出系统。环境感知系统包括超声波雷达1、毫米波雷达2,分别装在车前和车尾。超声波和毫米波雷达用于障碍物检测识别。An unmanned vehicle operating in a dynamic and complex environment includes an environment perception system, a computer controller and a behavioral decision output system. The environmental perception system includesultrasonic radar 1 and millimeter-wave radar 2, which are installed in the front and rear of the car respectively. Ultrasonic and millimeter-wave radars are used for obstacle detection and identification.

车身7顶部固定激光雷达6，用于障碍物检测识别并学习预测障碍物的相关属性，根据这些属性进行障碍物分割。车身上的四个角落分别放置侧向相机4，以计算机视觉的方法完成坐标系的标定与矫正，并检测车身两边的障碍情况。The lidar 6 is fixed on top of the vehicle body 7 for obstacle detection and recognition; it learns to predict the relevant attributes of obstacles and segments obstacles according to these attributes. Side cameras 4 are placed at the four corners of the vehicle body to complete calibration and correction of the coordinate systems by computer vision methods and to detect obstacles on both sides of the vehicle body.

激光雷达下方，车前挡风玻璃上方固定前向相机3，用于红绿灯检测识别，行人、车道线检测等。车后挡风玻璃上方固定后向相机5，观测后方障碍物与行人情况是否会影响到行驶。Below the lidar, a front-facing camera 3 is fixed above the front windshield of the car for traffic light detection and recognition, pedestrian detection, lane line detection, etc. A rear-facing camera 5 is fixed above the rear windshield to observe whether obstacles and pedestrians behind will affect driving.

计算机控制器9位于无人车内部以免被损坏。计算机控制器包括惯性测量单元8，用于测量无人车姿态角以及加速度，并由此提高可靠性。The computer controller 9 is located inside the unmanned vehicle to protect it from damage. The computer controller includes an inertial measurement unit 8 for measuring the attitude angles and acceleration of the unmanned vehicle, thereby improving reliability.

中央处理器与图形处理器10位于车内，属于计算机控制器，是无人车的大脑。雷达数据图像数据等均由处理器执行与运算，并向决策输出系统发出控制指令。The central processing unit and graphics processor 10 are located in the vehicle, belong to the computer controller, and are the brain of the unmanned vehicle. Radar data, image data, etc. are processed and computed by the processors, which issue control commands to the decision output system.

所述的超声波雷达1、毫米波雷达2将点云数据等传入中央处理器和图形处理器10内，通过深度学习的方法完成障碍物的检测，分割，识别。The ultrasonic radar 1 and millimeter-wave radar 2 transmit point cloud data and the like to the central processing unit and graphics processor 10, which complete the detection, segmentation and identification of obstacles by means of deep learning.

所述的相机3将拍摄的图像传入图形处理器10，通过选取一个较大的感兴趣区域，在其中运行红绿灯检测来获得精确的红绿灯框位置，并根据此红绿灯框的位置进行红绿灯的颜色识别，得到红绿灯当前的状态。得到单帧的红绿灯状态后，通过时序的滤波矫正算法进一步确认红绿灯的最终状态。The camera 3 transmits the captured image to the graphics processor 10; by selecting a larger region of interest and running traffic light detection within it, a precise traffic light box position is obtained, and the color of the traffic light is identified according to this box position to obtain its current state. After the single-frame traffic light state is obtained, a temporal filtering and correction algorithm further confirms the final state of the traffic light.

所述的环境感知系统将数据传入中央处理器10，由中央处理器算得避免碰撞的最小安全距离。The environment perception system transmits its data to the central processing unit 10, which calculates the minimum safe distance to avoid collision.

所述的计算机控制器9通过无人车动力学模型计算得到车的角速度与速度等,并将指令告诉车身7,由无人车执行。The computer controller 9 calculates the angular velocity and speed of the unmanned vehicle through the dynamic model of the unmanned vehicle, and informs the vehicle body 7 of the instructions to be executed by the unmanned vehicle.

其中无人车动力学模型可写为：The unmanned vehicle dynamics model can be written as:

$$(k_1+k_2)\beta+\frac{1}{u}(ak_1-bk_2)w_r-k_1\delta=mu\left(\dot\beta+w_r\right)$$

$$(ak_1-bk_2)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta=I_z\dot w_r$$

其中，k1、k2为前后轮的侧偏刚度，m为汽车总质量，a为车辆重心到前轴的距离，b为车辆重心到后轴的距离，δ为前轮转角，Iz为汽车车体转动惯量；β表示质心侧偏角，wr表示横摆角速度，u表示前进速度，$u(\dot\beta+w_r)$表示侧向加速度，$\dot w_r$表示横摆角加速度。Among them, k1 and k2 are the cornering stiffnesses of the front and rear wheels, m is the total mass of the vehicle, a is the distance from the vehicle's center of gravity to the front axle, b is the distance from the center of gravity to the rear axle, δ is the front wheel steering angle, and Iz is the moment of inertia of the vehicle body; β denotes the sideslip angle of the center of mass, wr the yaw rate, u the forward speed, $u(\dot\beta+w_r)$ the lateral acceleration, and $\dot w_r$ the yaw angular acceleration.

在本申请的描述中，需要理解的是，术语“上”、“下”、“前”、“后”、“左”、“右”、“竖直”、“水平”、“顶”、“底”、“内”、“外”等指示的方位或位置关系为基于附图所示的方位或位置关系，仅是为了便于描述本申请和简化描述，而不是指示或暗示所指的装置或元件必须具有特定的方位、以特定的方位构造和操作，因此不能理解为对本申请的限制。In the description of this application, it should be understood that the orientations or positional relationships indicated by the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", etc. are based on the orientations or positional relationships shown in the accompanying drawings; they are used only to facilitate and simplify the description of this application, and do not indicate or imply that the indicated device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting this application.

本领域技术人员知道,除了以纯计算机可读程序代码方式实现本发明提供的系统、装置及其各个模块以外,完全可以通过将方法步骤进行逻辑编程来使得本发明提供的系统、装置及其各个模块以逻辑门、开关、专用集成电路、可编程逻辑控制器以及嵌入式微控制器等的形式来实现相同程序。所以,本发明提供的系统、装置及其各个模块可以被认为是一种硬件部件,而对其内包括的用于实现各种程序的模块也可以视为硬件部件内的结构;也可以将用于实现各种功能的模块视为既可以是实现方法的软件程序又可以是硬件部件内的结构。Those skilled in the art know that, in addition to implementing the system, device and each module provided by the present invention in the form of pure computer readable program code, the system, device and each module provided by the present invention can be completely implemented by logically programming the method steps. The same program is implemented in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, and embedded microcontrollers, among others. Therefore, the system, device and each module provided by the present invention can be regarded as a kind of hardware component, and the modules used for realizing various programs included in it can also be regarded as the structure in the hardware component; A module for realizing various functions can be regarded as either a software program for realizing a method or a structure within a hardware component.

以上对本发明的具体实施例进行了描述。需要理解的是,本发明并不局限于上述特定实施方式,本领域技术人员可以在权利要求的范围内做出各种变化或修改,这并不影响本发明的实质内容。在不冲突的情况下,本申请的实施例和实施例中的特征可以任意相互组合。Specific embodiments of the present invention have been described above. It should be understood that the present invention is not limited to the above-mentioned specific embodiments, and those skilled in the art can make various changes or modifications within the scope of the claims, which do not affect the essential content of the present invention. The embodiments of the present application and features in the embodiments may be combined with each other arbitrarily, provided that there is no conflict.

Claims (10)

1.一种无人车在动态复杂环境中的控制系统,其特征在于,包括:1. a control system of unmanned vehicle in dynamic complex environment, is characterized in that, comprises:模块M1:将环境感知系统的点云数据传入计算机控制器,通过深度学习的方法完成障碍物的检测、分割和识别;Module M1: Transfer the point cloud data of the environmental perception system to the computer controller, and complete the detection, segmentation and identification of obstacles by means of deep learning;模块M2:根据环境感知系统拍摄的图像传入计算机控制器获得精确的红绿灯框位置,并根据红绿灯框位置进行红绿灯颜色识别;Module M2: According to the image captured by the environmental perception system, it is transmitted to the computer controller to obtain the precise position of the traffic light frame, and the color of the traffic light is identified according to the position of the traffic light frame;模块M3:环境感知系统将障碍物检测识别数据与红绿灯框位置与颜色识别数据传入计算机控制器,计算得到避免碰撞的最小安全距离;Module M3: The environment perception system transmits the obstacle detection identification data and the traffic light frame position and color identification data to the computer controller, and calculates the minimum safe distance to avoid collision;模块M4:根据计算得到避免碰撞的最小安全距离,通过计算机控制器计算得到汽车目标角速度和速度;Module M4: Calculate the minimum safe distance to avoid collision according to the calculation, and calculate the target angular velocity and speed of the car through the computer controller;模块M5:将汽车目标车速、角速度输入计算机控制器,计算机控制器由通过无人车动力学模型根据最小安全距离生成避障路径,最终使车辆自主完成行驶任务到达目标点;Module M5: Input the vehicle target speed and angular velocity into the computer controller, and the computer controller generates an obstacle avoidance path according to the minimum safe distance through the unmanned vehicle dynamics model, and finally enables the vehicle to complete the driving task autonomously and reach the target point;所述无人车动力学模型是包括汽车的动力性、制动性、平顺性和稳定性,测量车辆的质量和受力情况与车辆运动之间的关系。The unmanned vehicle dynamics model includes the dynamics, braking, ride comfort and stability of the vehicle, and measures the relationship between the mass and force of the vehicle and the motion of the 
vehicle.2.根据权利要求1所述的无人车在动态复杂环境中的控制系统,其特征在于,所述模块M1包括:2. The control system of an unmanned vehicle in a dynamic complex environment according to claim 1, wherein the module M1 comprises:所述环境感知系统包括超声波雷达(1)、毫米波雷达(2)、激光雷达(6)和相机;The environment perception system includes an ultrasonic radar (1), a millimeter-wave radar (2), a lidar (6) and a camera;所述计算机控制器(9)包括惯性测量单元(8)和中央处理器与图形处理器(10);The computer controller (9) includes an inertial measurement unit (8), a central processing unit and a graphics processor (10);所述相机包括前向相机(3)、侧向相机(4)和后向相机(5);The camera includes a front-facing camera (3), a side-facing camera (4) and a rear-facing camera (5);所述激光雷达(6)安装在车顶上;The lidar (6) is installed on the roof;模块M1.1:基于所述激光雷达(6)点云数据,所述中央处理器与图形处理器(10)通过卷积神经网络模型学习点云特征并预测障碍物的相关属性,根据障碍物的相关属性进行障碍物分割,从而对障碍物检测识别;Module M1.1: Based on the point cloud data of the lidar (6), the central processing unit and the graphics processor (10) learn the point cloud features through the convolutional neural network model and predict the relevant attributes of the obstacles, according to the obstacles The related attributes are used to segment obstacles, so as to detect and identify obstacles;模块M1.2:基于所述毫米波雷达(2)点云数据,所述中央处理器与图形处理器(10)进行处理从而对障碍物检测识别;Module M1.2: Based on the point cloud data of the millimeter wave radar (2), the central processing unit and the graphics processor (10) perform processing to detect and identify obstacles;模块M1.3:所述中央处理器与图形处理器(10)通过融合激光雷达算法,将所述激光雷达(7)和所述毫米波雷达(2)的障碍物识别结果进行融合,得到无人车精确检测障碍物的位置;Module M1.3: The central processing unit and the graphics processor (10) fuse the obstacle recognition results of the lidar (7) and the millimeter-wave radar (2) by fusing the lidar algorithm, and obtain no People and vehicles can accurately detect the location of obstacles;所述融合激光雷达算法主要进行了单传感器结果和融合结果的管理、匹配以及基于卡尔曼滤波的障碍物速度融合。The fusion lidar algorithm mainly manages and matches single sensor results and fusion results, as well as obstacle velocity fusion based on Kalman 
filtering.3.根据权利要求2所述的无人车在动态复杂环境中的控制系统,其特征在于,所述惯性测量单元(8)包括:惯性测量单元测量无人车角速率以及加速度;所述惯性测量单元安装在靠近无人车的重心上。3. The control system of an unmanned vehicle in a dynamic complex environment according to claim 2, wherein the inertial measurement unit (8) comprises: the inertial measurement unit measures the angular rate and acceleration of the unmanned vehicle; the inertial measurement unit (8) comprises: The measurement unit is installed close to the center of gravity of the unmanned vehicle.4.根据权利要求1所述的无人车在动态复杂环境中的控制系统,其特征在于,所述模块M2包括:4. The control system of an unmanned vehicle in a dynamic complex environment according to claim 1, wherein the module M2 comprises:所述环境感知系统包括超声波雷达(1)、毫米波雷达(2)、激光雷达(6)和相机;The environment perception system includes an ultrasonic radar (1), a millimeter-wave radar (2), a lidar (6) and a camera;所述相机包括前向相机(3)、预设个侧向相机(4)和后向相机(5);The camera includes a front-facing camera (3), a preset side-facing camera (4) and a rear-facing camera (5);所述激光雷达(6)安装在车顶上;The lidar (6) is installed on the roof;根据环境感知系统中相机在投影区域外选取感兴趣区域,运行红绿灯检测来获得精确的红绿灯框位置,并根据红绿灯框位置进行红绿灯的颜色识别,得到红绿灯当前的状态;根据单帧的红绿灯状态,通过时序的滤波矫正算法进一步确认红绿灯的最终状态。According to the camera in the environmental perception system, select the area of interest outside the projection area, run the traffic light detection to obtain the accurate traffic light frame position, and identify the color of the traffic light according to the position of the traffic light frame to obtain the current status of the traffic light; according to the traffic light status of a single frame, The final state of the traffic light is further confirmed through the filtering and correction algorithm of the timing sequence.5.根据权利要求1所述的无人车在动态复杂环境中的控制系统,其特征在于,所述模块M3包括:通过车辆自身消息与外界障碍物信息进行比较,计算无人车在复杂环境中面对障碍物所需的最小安全距离;5. 
The control system for an unmanned vehicle in a dynamic complex environment according to claim 1, wherein the module M3 comprises: comparing the information of the vehicle itself with the external obstacle information, calculating the unmanned vehicle in a complex environment The minimum safe distance required to face the obstacle;所述车辆自身消息包括车辆的尺寸信息、动力学模型信息;所述动力学模型信息包括汽车加速度、速度和角速度;相机当前采集到的图片信息,与下一帧的图片信息作比较完成闭环检测,实现汽车的建图与定位;The vehicle's own message includes the size information and dynamic model information of the vehicle; the dynamic model information includes the acceleration, speed and angular velocity of the vehicle; the picture information currently collected by the camera is compared with the picture information of the next frame to complete the closed-loop detection. , to realize the construction and positioning of the car;所述外界障碍物信息包括环境感知系统的障碍物检测识别。The external obstacle information includes the obstacle detection and identification of the environment perception system.6.根据权利要求1所述的无人车在动态复杂环境中的控制系统,其特征在于,所述模块M5中无人车动力学模型包括:6. The control system of an unmanned vehicle in a dynamic complex environment according to claim 1, wherein the unmanned vehicle dynamics model in the module M5 comprises:
$$(k_1+k_2)\beta+\frac{1}{u}(ak_1-bk_2)w_r-k_1\delta=mu\left(\dot\beta+w_r\right)$$

$$(ak_1-bk_2)\beta+\frac{1}{u}\left(a^2k_1+b^2k_2\right)w_r-ak_1\delta=I_z\dot w_r$$

其中，k1、k2为前后轮的侧偏刚度，m为汽车总质量，a为车辆重心到前轴的距离，b为车辆重心到后轴的距离，δ为前轮转角，Iz为汽车车体转动惯量；β表示质心侧偏角，wr表示横摆角速度，u表示前进速度，$u(\dot\beta+w_r)$表示侧向加速度，$\dot w_r$表示横摆角加速度。Among them, k1 and k2 are the cornering stiffnesses of the front and rear wheels, m is the total mass of the vehicle, a is the distance from the vehicle's center of gravity to the front axle, b is the distance from the center of gravity to the rear axle, δ is the front wheel steering angle, and Iz is the moment of inertia of the vehicle body; β denotes the sideslip angle of the center of mass, wr the yaw rate, u the forward speed, $u(\dot\beta+w_r)$ the lateral acceleration, and $\dot w_r$ the yaw angular acceleration.
7.根据权利要求1所述的无人车在动态复杂环境中的控制系统,其特征在于,所述模块M5还包括:行为决策输出系统和车辆控制系统;7. The control system of an unmanned vehicle in a dynamic complex environment according to claim 1, wherein the module M5 further comprises: a behavior decision output system and a vehicle control system;所述行为决策输出系统包括电能底盘根据计算机控制器计算得到的角速度和速度控制无人车轮子速度,转向;当环境感知系统感知到前方有障碍物时,通过计算机控制器将命令告知底盘,让底盘以稳定的减速度避让或停下;The behavior decision output system includes an electric chassis to control the wheel speed and steering of the unmanned vehicle according to the angular velocity and speed calculated by the computer controller; when the environment perception system senses an obstacle ahead, the computer controller informs the chassis of the command to let the vehicle turn. The chassis avoids or stops at a stable deceleration;所述车辆控制系统包括采用PID控制算法对目标路径进行跟踪,保证车辆能够按照给定的路径信息点进行跟踪行驶。The vehicle control system includes using a PID control algorithm to track the target path, so as to ensure that the vehicle can track and travel according to a given path information point.8.一种无人车在动态复杂环境中的控制方法,其特征在于,包括:8. 
A control method for an unmanned vehicle in a dynamic complex environment, comprising:步骤M1:将环境感知系统的点云数据传入计算机控制器,通过深度学习的方法完成障碍物的检测、分割和识别;Step M1: transfer the point cloud data of the environmental perception system to the computer controller, and complete the detection, segmentation and identification of obstacles by means of deep learning;步骤M2:根据环境感知系统拍摄的图像传入计算机控制器获得精确的红绿灯框位置,并根据红绿灯框位置进行红绿灯颜色识别;Step M2: According to the image captured by the environment perception system, the computer controller is transmitted to obtain the precise traffic light frame position, and the traffic light color recognition is performed according to the traffic light frame position;步骤M3:环境感知系统将障碍物检测识别数据与红绿灯框位置与颜色识别数据传入计算机控制器,计算得到避免碰撞的最小安全距离;Step M3: The environment perception system transmits the obstacle detection identification data and the traffic light frame position and color identification data to the computer controller, and calculates the minimum safe distance to avoid collision;步骤M4:根据计算得到避免碰撞的最小安全距离,通过计算机控制器计算得到汽车目标角速度和速度;Step M4: obtain the minimum safe distance to avoid collision according to the calculation, and obtain the target angular velocity and speed of the vehicle through calculation by the computer controller;步骤M5:将汽车目标车速、角速度输入计算机控制器,计算机控制器由通过无人车动力学模型根据最小安全距离生成避障路径,最终使车辆自主完成行驶任务到达目标点;Step M5: input the target vehicle speed and angular velocity of the vehicle into the computer controller, and the computer controller generates an obstacle avoidance path according to the minimum safe distance through the unmanned vehicle dynamics model, and finally enables the vehicle to independently complete the driving task and reach the target point;所述无人车动力学模型是包括汽车的动力性、制动性、平顺性和稳定性,测量车辆的质量和受力情况与车辆运动之间的关系。The unmanned vehicle dynamics model includes the dynamics, braking, ride comfort and stability of the vehicle, and measures the relationship between the mass and force of the vehicle and the motion of the vehicle.9.根据权利要求8所述的无人车在动态复杂环境中的控制方法,其特征在于,所述步骤M1包括:9. 
The control method for an unmanned vehicle in a dynamic complex environment according to claim 8, wherein step M1 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter-wave radar (2), a lidar (6) and cameras;
the computer controller (9) comprises an inertial measurement unit (8) and a central processing unit with a graphics processor (10);
the cameras comprise a front-facing camera (3), side-facing cameras (4) and a rear-facing camera (5);
the lidar (6) is mounted on the roof of the vehicle;
step M1.1: based on the point cloud data of the lidar (6), the central processing unit and graphics processor (10) learn point cloud features through a convolutional neural network model and predict the relevant attributes of obstacles; obstacles are segmented according to these attributes, thereby detecting and recognizing them;
step M1.2: based on the point cloud data of the millimeter-wave radar (2), the central processing unit and graphics processor (10) process the data to detect and recognize obstacles;
step M1.3: the central processing unit and graphics processor (10) fuse the obstacle recognition results of the lidar (6) and the millimeter-wave radar (2) through a lidar fusion algorithm, so that the unmanned vehicle accurately detects the positions of obstacles;
the lidar fusion algorithm mainly performs the management and matching of single-sensor results and fused results, as well as obstacle velocity fusion based on Kalman filtering;
the inertial measurement unit (8) measures the angular velocity and acceleration of the unmanned vehicle, and is installed near the vehicle's center of gravity.
10. The control method for an unmanned vehicle in a dynamic complex environment according to claim 8, wherein step M2 comprises:
the environment perception system comprises an ultrasonic radar (1), a millimeter-wave radar (2), a lidar (6) and cameras;
the cameras comprise a front-facing camera (3), a preset number of side-facing cameras (4) and a rear-facing camera (5);
the lidar (6) is mounted on the roof of the vehicle;
a region of interest is selected outside the projection area in the camera images of the environment perception system, and traffic light detection is run to obtain precise traffic light bounding-box positions; the traffic light color is then recognized from the bounding-box position to obtain the current state of the light; from the single-frame states, a temporal filtering and correction algorithm further confirms the final state of the traffic light;
step M3 comprises: comparing the vehicle's own information with external obstacle information to calculate the minimum safe distance required for the unmanned vehicle to face obstacles in a complex environment;
the vehicle's own information includes the vehicle's dimensions and dynamics model information; the dynamics model information includes the vehicle's acceleration, speed and angular velocity; the image currently captured by the camera is compared with the image of the next frame to complete loop-closure detection, realizing mapping and localization of the vehicle;
the external obstacle information includes the obstacle detection and recognition results of the environment perception system;
in step M5, the unmanned vehicle dynamics model comprises:
$$(k_1+k_2)\beta+\frac{1}{u}(ak_1-bk_2)\omega_r-k_1\delta=m(\dot v+u\omega_r)$$

$$(ak_1-bk_2)\beta+\frac{1}{u}(a^2k_1+b^2k_2)\omega_r-ak_1\delta=I_z\dot\omega_r$$

where $k_1$, $k_2$ are the cornering stiffnesses of the front and rear wheels, $m$ is the total vehicle mass, $a$ is the distance from the vehicle's center of gravity to the front axle, $b$ is the distance from the center of gravity to the rear axle, $\delta$ is the front wheel steering angle, $I_z$ is the yaw moment of inertia of the vehicle body, $\beta$ is the sideslip angle of the center of mass, $\omega_r$ is the yaw rate, $u$ is the forward speed, $\dot v$ is the lateral acceleration, and $\dot\omega_r$ is the yaw angular acceleration;
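The linear two-degree-of-freedom single-track model with the variables defined above can be integrated numerically to see how the yaw rate responds to a steering input. The sketch below is illustrative only: the parameter values are assumptions, not taken from the patent, and the cornering stiffnesses are negative per the convention in which the tire lateral force opposes the slip angle.

```python
import math

# Illustrative parameters (assumed, not from the patent).
k1, k2 = -60000.0, -55000.0   # front/rear cornering stiffness, N/rad (negative)
m, Iz = 1500.0, 2500.0        # total mass (kg), yaw moment of inertia (kg*m^2)
a, b = 1.2, 1.4               # CG-to-front-axle / CG-to-rear-axle distance (m)
u = 15.0                      # forward speed, m/s

def derivatives(beta, wr, delta):
    """beta: sideslip angle (rad), wr: yaw rate (rad/s), delta: front wheel angle (rad)."""
    # a_y = v_dot + u*wr: total lateral acceleration from the first equation.
    a_y = ((k1 + k2) * beta + (a * k1 - b * k2) * wr / u - k1 * delta) / m
    beta_dot = a_y / u - wr  # beta ~ v/u, so beta_dot = v_dot/u
    wr_dot = ((a * k1 - b * k2) * beta + (a**2 * k1 + b**2 * k2) * wr / u
              - a * k1 * delta) / Iz
    return beta_dot, wr_dot

# Euler-integrate a constant 2-degree steering input for 5 s.
beta, wr, delta, dt = 0.0, 0.0, math.radians(2.0), 0.001
for _ in range(5000):
    bd, wd = derivatives(beta, wr, delta)
    beta += bd * dt
    wr += wd * dt
print(round(wr, 3))  # -> 0.187, the steady-state yaw rate (rad/s) for this setup
```

For a fixed steering angle the model settles to a steady-state yaw rate, which is the quantity a path planner would invert to choose the steering command for a desired curvature.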
Step M5 further comprises a behavior decision output system and a vehicle control system;
the behavior decision output system comprises an electrically powered chassis that controls the wheel speeds and steering of the unmanned vehicle according to the angular velocity and speed computed by the computer controller; when the environment perception system detects an obstacle ahead, the computer controller sends a command to the chassis, which decelerates steadily to avoid the obstacle or stop;
the vehicle control system uses a PID control algorithm to track the target path, ensuring that the vehicle follows the given path waypoints.
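Step M3's minimum safe distance can be illustrated with a simple kinematic stopping-distance bound. The reaction time, deceleration limit and standstill margin below are illustrative assumptions, not values from the patent.

```python
def min_safe_distance(ego_speed, obstacle_speed, max_decel=6.0,
                      reaction_time=0.3, margin=2.0):
    """Kinematic lower bound on the gap needed to avoid hitting an obstacle ahead.

    ego_speed, obstacle_speed: m/s (obstacle moving in the same direction)
    max_decel: assumed braking capability of the ego vehicle, m/s^2
    reaction_time: assumed perception-to-actuation delay, s
    margin: extra standstill clearance, m
    """
    closing = max(ego_speed - obstacle_speed, 0.0)
    reaction_dist = ego_speed * reaction_time        # distance covered before braking
    brake_dist = closing ** 2 / (2.0 * max_decel)    # distance to cancel the closing speed
    return reaction_dist + brake_dist + margin

# Ego at 15 m/s approaching a stationary obstacle:
d = min_safe_distance(15.0, 0.0)
print(round(d, 2))  # 15*0.3 + 15**2/12 + 2 = 4.5 + 18.75 + 2 = 25.25
```

The controller in step M4 would then choose the target speed so that the measured gap to the nearest obstacle stays above this bound.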
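Claim 9 names obstacle velocity fusion based on Kalman filtering within the lidar fusion algorithm. A minimal scalar sketch of that idea, fusing a radar and a lidar-derived velocity each cycle, is shown below; the noise variances, process noise and measurement values are illustrative assumptions, not parameters from the patent.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: state x with variance p,
    measurement z with noise variance r."""
    k = p / (p + r)          # Kalman gain
    x = x + k * (z - x)      # corrected estimate
    p = (1.0 - k) * p        # reduced uncertainty
    return x, p

# Assumed sensor noise: mmWave radar measures radial velocity well,
# lidar-derived velocity (differenced positions) is noisier.
R_RADAR, R_LIDAR = 0.04, 0.25
Q = 0.01  # process noise added each cycle (assumed near-constant-velocity target)

x, p = 0.0, 10.0  # weak prior on the obstacle's velocity (m/s)
measurements = [(5.1, 5.4), (5.0, 4.7), (4.9, 5.2)]  # (radar, lidar) pairs
for z_radar, z_lidar in measurements:
    p += Q                                   # predict: uncertainty grows
    x, p = kalman_update(x, p, z_radar, R_RADAR)
    x, p = kalman_update(x, p, z_lidar, R_LIDAR)
print(round(x, 2), p < R_RADAR)  # estimate near 5 m/s, tighter than either sensor alone
```

Fusing both sensors yields a posterior variance below that of the best single sensor, which is the point of the fusion step.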
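Claim 10 confirms the traffic light state from single-frame detections with a temporal filtering and correction step. One common realization is a sliding-window majority vote; the window length and vote threshold below are assumptions for illustration.

```python
from collections import Counter, deque

def make_light_filter(window=5, min_votes=3):
    """Temporal filter over per-frame traffic light colors: the published state
    only changes when one color dominates the recent window (assumed scheme)."""
    history = deque(maxlen=window)
    state = {"color": "unknown"}

    def update(frame_color):
        history.append(frame_color)
        color, votes = Counter(history).most_common(1)[0]
        if votes >= min_votes:
            state["color"] = color
        return state["color"]

    return update

f = make_light_filter()
frames = ["red", "red", "green", "red", "red", "green", "green", "green"]
out = [f(c) for c in frames]
print(out)  # a single misdetected "green" frame does not flip the red state
```

The filter suppresses one-frame misdetections while still switching within a few frames once the light genuinely changes.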
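Claims 7 and 10 specify a PID control algorithm for tracking the target path. A minimal discrete PID sketch against a toy cross-track-error plant is given below; the gains, time step and plant response are illustrative assumptions, not the patent's tuning.

```python
class PID:
    """Discrete PID controller (illustrative gains, not from the patent)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive the cross-track error (lateral offset from the path) toward zero.
pid = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.05)
cross_track_error = 0.8  # metres to one side of the path (assumed measurement)
for _ in range(100):
    steering = pid.step(cross_track_error)
    cross_track_error -= 0.05 * steering  # toy plant: steering reduces the error
print(abs(cross_track_error) < 0.1)  # the controller converges onto the path
```

In the claimed system the error signal would come from the planned path waypoints and the vehicle's localized pose, and the output would be sent to the chassis as a steering command.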
CN202011119650.XA | 2020-10-19 | 2020-10-19 | A control method and system for an unmanned vehicle in a dynamic complex environment | Pending | CN112068574A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011119650.XA | 2020-10-19 | 2020-10-19 | A control method and system for an unmanned vehicle in a dynamic complex environment


Publications (1)

Publication Number | Publication Date
CN112068574A (en) | 2020-12-11

Family

ID=73655332

Family Applications (1)

Application Number | Status | Publication | Priority Date | Title
CN202011119650.XA | Pending | CN112068574A (en) | 2020-10-19 | A control method and system for an unmanned vehicle in a dynamic complex environment

Country Status (1)

Country | Link
CN | CN112068574A (en)


Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104182991A (en) * | 2014-08-15 | 2014-12-03 | Liaoning University of Technology | Vehicle running state estimation method and vehicle running state estimation device
CN107272687A (en) * | 2017-06-29 | 2017-10-20 | Shenzhen Hailiang Technology Co., Ltd. | A driving behavior decision system for autonomous public transit vehicles
CN107817798A (en) * | 2017-10-30 | 2018-03-20 | Luoyang Zhongke Longwang Innovation Technology Co., Ltd. | A farm machinery obstacle avoidance method based on a deep learning system
CN107867290A (en) * | 2017-11-07 | 2018-04-03 | Changchun University of Technology | A hierarchical automobile emergency collision avoidance control method considering moving obstacles
CN108196535A (en) * | 2017-12-12 | 2018-06-22 | Suzhou Automotive Research Institute (Wujiang), Tsinghua University | Automated driving system based on reinforcement learning and multi-sensor fusion
CN108225364A (en) * | 2018-01-04 | 2018-06-29 | Jilin University | A driving task decision system and method for driverless automobiles
CN108519773A (en) * | 2018-03-07 | 2018-09-11 | Xi'an Jiaotong University | A path planning method for unmanned vehicles in a structured environment
CN109649390A (en) * | 2018-12-19 | 2019-04-19 | Suzhou Automotive Research Institute (Wujiang), Tsinghua University | An autonomous car-following system and method for autonomous vehicles
CN110488805A (en) * | 2018-05-15 | 2019-11-22 | Wuhan Xiaoshi Technology Co., Ltd. | An unmanned vehicle obstacle avoidance system and method based on 3D stereo vision
CN110614998A (en) * | 2019-08-21 | 2019-12-27 | Nanjing University of Aeronautics and Astronautics | An aggressive-driving-assisted curve obstacle avoidance and lane change path planning system and method


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZENG Shifeng, et al.: "ROS-based unmanned intelligent vehicle", Internet of Things Technologies *
YANG Wanfu, et al.: "Automobile Theory", 31 August 2010, Guangzhou: South China University of Technology Press *
CHEN Qingzhang, et al.: "Research on optimal control of automobile four-wheel steering", Journal of Changshu Institute of Technology (Natural Science) *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112817315A (en) * | 2020-12-31 | 2021-05-18 | Jiangsu JITRI Intelligent Manufacturing Technology Research Institute Co., Ltd. | Obstacle avoidance method and system for unmanned cleaning vehicle in dynamic environment
WO2022151147A1 (en) * | 2021-01-14 | 2022-07-21 | | Curb segmentation method and apparatus, and terminal device and readable storage medium
CN112823377A (en) * | 2021-01-14 | 2021-05-18 | Shenzhen Streamax Technology Co., Ltd. | Road edge segmentation method and device, terminal equipment and readable storage medium
CN112823377B (en) * | 2021-01-14 | 2024-02-09 | Shenzhen Streamax Technology Co., Ltd. | Road edge segmentation method and device, terminal equipment and readable storage medium
CN113341697B (en) * | 2021-06-11 | 2022-12-09 | Changzhou Vocational Institute of Engineering | A cooperative control method for split vehicle-moving robots that accurately extracts the vehicle center
CN113341697A (en) * | 2021-06-11 | 2021-09-03 | Changzhou Vocational Institute of Engineering | Separated vehicle-moving robot cooperative control method capable of accurately extracting the vehicle center
CN113635893A (en) * | 2021-07-16 | 2021-11-12 | Anhui Polytechnic University | A control method for unmanned vehicle steering based on urban smart transportation
CN113515813A (en) * | 2021-07-16 | 2021-10-19 | Chang'an University | An on-site verification method for the simulation reliability of vehicle dynamics simulation software
CN113515813B (en) * | 2021-07-16 | 2023-03-14 | Chang'an University | On-site verification method for simulation reliability of automobile dynamics simulation software
CN113619605A (en) * | 2021-09-02 | 2021-11-09 | Mengshi (Shanghai) Technology Co., Ltd. | Automatic driving method and system for underground mining articulated vehicle
CN113619605B (en) * | 2021-09-02 | 2022-10-11 | Mengshi (Shanghai) Technology Co., Ltd. | Automatic driving method and system for underground mining articulated vehicle
CN113884090A (en) * | 2021-09-28 | 2022-01-04 | Institute of Advanced Technology, University of Science and Technology of China | Intelligent platform vehicle environment perception system and its data fusion method
CN113895543A (en) * | 2021-10-09 | 2022-01-07 | Xidian University | Intelligent unmanned vehicle driving system based on park environment
CN115145272A (en) * | 2022-06-21 | 2022-10-04 | Dalian Huarui Intelligent Technology Co., Ltd. | Coke oven vehicle environment sensing system and method
CN115145272B (en) * | 2022-06-21 | 2024-03-29 | Dalian Huarui Intelligent Technology Co., Ltd. | Coke oven vehicle environment sensing system and method
CN115436918A (en) * | 2022-08-19 | 2022-12-06 | Jiushi (Suzhou) Intelligent Technology Co., Ltd. | A method and device for correcting the horizontal angle between a lidar and an unmanned vehicle

Similar Documents

Publication | Publication Date | Title
CN112068574A (en) A control method and system for an unmanned vehicle in a dynamic complex environment
CN109606354B (en)Automatic parking method and auxiliary system based on hierarchical planning
CN113460077B (en)Moving object control device, moving object control method, and storage medium
Liu et al.The role of the hercules autonomous vehicle during the covid-19 pandemic: An autonomous logistic vehicle for contactless goods transportation
US10606270B2 (en)Controlling an autonomous vehicle using cost maps
US10582137B1 (en)Multi-sensor data capture synchronizaiton
US10372130B1 (en)Communicating reasons for vehicle actions
CN114391088B (en) Trajectory Planner
Kim et al.Cooperative autonomous driving: A mirror neuron inspired intention awareness and cooperative perception approach
US12217431B2 (en)Systems and methods for panoptic segmentation of images for autonomous driving
Guidolini et al.Handling pedestrians in crosswalks using deep neural networks in the IARA autonomous car
WO2023109589A1 (en)Smart car-unmanned aerial vehicle cooperative sensing system and method
CN117452946A (en)Intelligent automobile remote driving method and system based on digital twin
JP2019158646A (en)Vehicle control device, vehicle control method, and program
CN110217231A (en)Controller of vehicle, control method for vehicle and storage medium
US20250256726A1 (en)Azimuth angle acquisition apparatus, azimuth angle acquisition system, and azimuth angle acquisition method for self-propelled vehicle
CN119968304A (en) Autonomous Vehicle Blind Spot Management
Liu et al.Hercules: An autonomous logistic vehicle for contact-less goods transportation during the COVID-19 outbreak
Mutz et al.Following the leader using a tracking system based on pre-trained deep neural networks
CN116394979A (en) A decision-making control method for automatic driving based on roadside fusion perception
CN113635893A (en) A control method for unmanned vehicle steering based on urban smart transportation
Souza et al.Vision-based waypoint following using templates and artificial neural networks
CN115027505A (en)Method, device and system for re-planning track of vehicle, vehicle and storage medium
US10332402B2 (en)Movement assistance system and movement assistance method
Tian et al.Autonomous formula racecar: Overall system design and experimental validation

Legal Events

Date | Code | Title | Description
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-12-11

