CN116540696A - A ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion - Google Patents

A ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion

Info

Publication number
CN116540696A
Authority
CN
China
Prior art keywords
obstacle
ship
obstacles
obstacle avoidance
radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310393001.6A
Other languages
Chinese (zh)
Inventor
傅安琪
冯晋晨
莫元昊
沈泊洋
滕影轩
刘乾泽
王紫宸
乔俊飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202310393001.6A
Publication of CN116540696A
Legal status: Pending (current)

Abstract

Translated from Chinese

The invention provides a ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion, characterized in that it comprises three parts: sensors, a controller, and actuators. In this system, a lidar, an industrial camera, an inertial measurement unit, and a GPS module perceive the surrounding environment and acquire attitude and position information; they are the system's sensors. An artificial-intelligence development board and a microcontroller form the main control module, which makes decisions through built-in algorithms according to the inputs; they are the system's controller. A drive controller and a drive mechanism execute the controller's decisions; they are the system's actuators. The invention uses a distributed, modular multi-sensor architecture in which the individual functions are coupled together and tasks can be executed concurrently in multiple threads, greatly reducing the required computing power and making control more flexible. Equipped with an efficient and accurate vision-radar fusion obstacle avoidance algorithm, the unmanned ship can respond flexibly to the wide variety of complex situations that arise on the water surface and is capable of autonomous cruising and autonomous obstacle avoidance.

Description

Translated from Chinese
A ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion

Technical Field

The invention relates to the field of waterborne equipment, and in particular to a ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion.

Background Art

At present, the obstacle avoidance methods used on unmanned ships are mainly single-sensor perception approaches. For example, only a lidar (1) is used to detect information about surrounding obstacles, or only a camera is used to capture images; the images or radar feedback are then processed and recognized, and obstacle avoidance is performed according to the recognition result.

The advantage of lidar is that it can detect distant obstacles and provides obstacle information over a full 360-degree horizontal range around the vessel, with a long effective detection distance and wide coverage. However, the radar only scans a single plane with almost no vertical extent, and because of its mounting height it cannot effectively detect obstacles floating on the water surface. The advantage of the camera is a recognition range of roughly 100 degrees both horizontally and vertically, which allows it to effectively identify floating obstacles on the water surface. However, its detection distance is short, so it cannot effectively identify distant obstacles.

At present, the overall control systems of unmanned ships are mostly implemented directly on embedded development boards such as the STM32 series, ESP series, and Raspberry Pi series. This control approach is limited by the computing power of the development board and cannot execute complex functions that require significant computation, such as path planning and visual recognition; it is also difficult to develop and has a long development cycle. With the rise of many new technologies and algorithms in recent years, using an embedded development board alone is no longer suitable for current unmanned ship development and applications.

The Robot Operating System (ROS) is a highly flexible robot software architecture. This software architecture has many advantages:

a) Distributed peer-to-peer computing

b) Software reuse

c) Fast visual debugging

d) Loosely coupled modular development

Our multi-sensor fusion obstacle avoidance system is developed and tested on the ROS architecture. Under this architecture, each algorithm node runs as its own process while communicating efficiently and modularly through ROS's messaging mechanisms, so that the algorithms can cooperate with one another.
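As an illustration of this node structure, below is a minimal rospy sketch of how a central obstacle avoidance node could subscribe to the sensor-processing topics and publish a velocity command; the topic names and message types are illustrative assumptions rather than values taken from this patent.

```python
#!/usr/bin/env python
# Minimal sketch of the publish/subscribe pattern described above (assumed topic names).
import rospy
from sensor_msgs.msg import LaserScan
from std_msgs.msg import Float32MultiArray
from geometry_msgs.msg import Twist


class CentralAvoidanceNode(object):
    def __init__(self):
        rospy.init_node("central_obstacle_avoidance")
        self.latest_scan = None          # filtered radar message
        self.latest_detections = None    # camera detection message
        rospy.Subscriber("/filtered_scan", LaserScan, self.on_scan)
        rospy.Subscriber("/surface_obstacles", Float32MultiArray, self.on_detections)
        self.cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def on_scan(self, msg):
        self.latest_scan = msg

    def on_detections(self, msg):
        self.latest_detections = msg

    def spin(self):
        rate = rospy.Rate(10)  # re-evaluate the avoidance decision at 10 Hz
        while not rospy.is_shutdown():
            if self.latest_scan is not None:
                cmd = Twist()
                # ... fill cmd.linear.x / cmd.angular.z from the avoidance algorithm ...
                self.cmd_pub.publish(cmd)
            rate.sleep()


if __name__ == "__main__":
    CentralAvoidanceNode().spin()
```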

Although the single-sensor perception obstacle avoidance systems of the prior art are relatively easy to apply, they cannot effectively handle the complex obstacle conditions of real waters. Two types of obstacles may be present in real waters at the same time: one type floats on the water surface with little height above the waterline, for example garbage bags, water bottles, small driftwood, and aquatic plants; the other type is large obstacles that must be detected and avoided at a relatively long distance, for example protruding islands and reefs, embankments on both sides of a river, bridge arches, and ships traveling in the opposite or the same direction. Especially when one or more subtypes of these two categories of obstacles appear in the surrounding waters at the same time, obstacle avoidance with a single sensor becomes extremely complicated and difficult, its effectiveness is greatly reduced, and practical application becomes infeasible.

Summary of the Invention

The purpose of the present invention is to propose a ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion, in order to solve the problem that a single-sensor perception obstacle avoidance system cannot effectively handle the complex obstacle conditions of real waters, which mainly include: obstacles that float on the water surface with little height above the waterline, such as garbage bags, water bottles, small driftwood, and aquatic plants; and large obstacles that must be detected and avoided at a relatively long distance, such as protruding islands and reefs, embankments on both sides of a river, bridge arches, and ships traveling in the opposite or the same direction. Especially when one or more subtypes of these two categories of obstacles appear in the surrounding waters at the same time, obstacle avoidance with a single sensor becomes extremely complicated and difficult, its effectiveness is greatly reduced, and practical application becomes infeasible.

To achieve the above object, the present invention provides the following technical solution:

A ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion, characterized in that it comprises three parts: sensors, a controller, and actuators, and that the system is developed and tested on the ROS architecture. The sensors comprise a lidar 1, an industrial camera 2, an IMU inertial measurement unit 3, and a GPS module 4, which perceive the surrounding environment and acquire attitude and position information. The controller comprises an artificial-intelligence development board 5 and a microcontroller 6, which form the main control module of the unmanned ship and make decisions through built-in algorithms according to the inputs. The actuators comprise a drive controller and a drive mechanism 7, which execute the controller's decisions. The lidar 1, the industrial camera 2, the IMU inertial measurement unit 3, and the GPS module 4 are all connected to the artificial-intelligence development board 5 by cables; the drive controller and the artificial-intelligence development board 5 are both connected to the microcontroller 6 by cables; and the drive controller and the drive mechanism 7 are connected by a cable. In use, the IMU inertial measurement unit 3 and the GPS module 4 are mounted on the connecting bridge; they use the currently common IMU+GPS data fusion algorithm to achieve relatively high-precision positioning and path calculation for the unmanned ship in outdoor environments and, combined further with the navigation package, realize fixed-point navigation, which runs concurrently with the obstacle avoidance function. The lidar 1 and the industrial camera 2 are mounted at the head of the unmanned ship's connecting bridge, aligned with its central axis, with the industrial camera 2 placed in front of the lidar 1. The radar point cloud data and the industrial camera 2 images are filtered and analyzed, respectively, to obtain filtered radar information and water-surface obstacle image information, which are packaged as "messages" in the ROS architecture and published. A central obstacle avoidance algorithm node subscribes to the above water-surface obstacle image information and filtered radar information, determines through the obstacle avoidance algorithm the direction and speed of motion the vessel needs next to reach the target point, and continuously iterates and updates according to the real-time data collected.

Realizing the positioning and path calculation functions of the unmanned ship in outdoor environments and further combining them with the navigation package in the ROS architecture specifically includes the following:

The fusion algorithm measures the current vessel's latitude and longitude, attitude heading, sailing speed, sailing acceleration, and other information. Using the currently common IMU+GPS fusion algorithm, this information is converted into odometry information, which records the ship's motion process. Through a personal computer in the control system, using the distributed framework built with the Robot Operating System (ROS) and the WIFI local area network formed by the communication router 8 as the medium, the operator sends the latitude and longitude coordinates of multiple unmanned-ship cruise points, one at a time, to the artificial-intelligence development board 5. The navigation package combines the current position and velocity information, calculates the attitude, velocity, and other information required to reach the target point, converts them into a velocity topic, and sends it to the microcontroller 6.
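As a rough illustration of this waypoint-following step, the sketch below turns the fused pose (latitude, longitude, heading) and one cruise point into a simple forward speed and turn rate of the kind that would be published on the velocity topic; the equirectangular approximation and the gains are assumptions made for the example, not values from the patent.

```python
# Hedged sketch: fused pose + one cruise waypoint -> (forward speed, turn rate).
import math

EARTH_RADIUS_M = 6371000.0

def waypoint_to_velocity(lat, lon, heading, goal_lat, goal_lon,
                         cruise_speed=1.0, turn_gain=1.5):
    """heading is the ship's yaw in radians, measured from north toward east."""
    # Local north/east offsets of the goal (small-distance approximation).
    d_north = math.radians(goal_lat - lat) * EARTH_RADIUS_M
    d_east = math.radians(goal_lon - lon) * EARTH_RADIUS_M * math.cos(math.radians(lat))
    bearing = math.atan2(d_east, d_north)          # desired course to the waypoint
    err = math.atan2(math.sin(bearing - heading),
                     math.cos(bearing - heading))  # heading error wrapped to [-pi, pi]
    return cruise_speed, turn_gain * err           # linear speed, angular rate
```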

The lidar 1 and the industrial camera 2 perform recognition at the same time; the two sensors simply cover different recognition regions and different target objects, but the recognition results are ultimately unified into a single coordinate system to facilitate the obstacle avoidance calculation. Specifically, the lidar 1 is responsible for detecting raised obstacles in the 1 m to 40 m range and over 360 degrees horizontally; the industrial camera 2 is responsible for detecting floating obstacles on the water surface within 100 degrees horizontally and 80 degrees vertically in front of the ship.

The radar point cloud data obtained directly from the lidar 1 is large in volume and subject to considerable noise on the water surface, so a simple radar filter must be established to simplify the radar data and remove interference and noise points. The filtering algorithm judges according to the degree of dispersion of the radar point cloud: regions where the points are relatively clustered are considered obstacles to be avoided, while the remaining points are considered noise.
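A minimal sketch of such a density-based filter is given below: a return is kept only if enough neighbouring beams report a similar range, otherwise it is treated as water-surface noise. The window size and thresholds are assumptions chosen only to illustrate the idea.

```python
# Density-based filtering of a 360-degree scan: isolated returns are treated as noise.
def filter_scan(ranges, max_range=40.0, window=2, range_tol=0.5, min_neighbors=2):
    """Return a copy of `ranges` with isolated returns replaced by infinity."""
    n = len(ranges)
    filtered = list(ranges)
    for i, r in enumerate(ranges):
        if not (0.0 < r < max_range):
            filtered[i] = float("inf")       # out-of-range or invalid beam
            continue
        neighbors = 0
        for k in range(i - window, i + window + 1):
            if k == i:
                continue
            rk = ranges[k % n]               # wrap around: the scan covers 360 degrees
            if 0.0 < rk < max_range and abs(rk - r) < range_tol:
                neighbors += 1
        if neighbors < min_neighbors:
            filtered[i] = float("inf")       # scattered point -> considered noise
    return filtered
```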

The images acquired directly by the industrial camera 2 must be analyzed and processed before the water-surface obstacles in them can be identified. The analysis uses the convolutional neural network YOLOv5 with a dataset trained in advance. The dataset includes a series of floating objects commonly found on the water surface, such as mineral water bottles, plastic garbage bags, floating logs, and wooden barrels, and will be continuously upgraded to add other floating objects that may appear in natural environments. The neural network reads each frame captured by the industrial camera 2, analyzes it, and identifies the water-surface obstacles in it, recording for each obstacle its multiple corner points (u, v) in the image, its area S, and its obstacle class H.
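The sketch below shows what this camera branch could look like with a pretrained YOLOv5 model loaded through torch.hub; the publicly available yolov5s weights stand in for the patent's custom water-surface dataset, which is not published, and the output format (corner points, area S, class H) mirrors the description above.

```python
# Hedged sketch of the camera detection branch using a public YOLOv5 model.
import torch

# Stand-in for the custom-trained water-surface model described in the patent.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def detect_surface_obstacles(frame_bgr):
    """frame_bgr: H x W x 3 uint8 image (e.g. read with OpenCV)."""
    results = model(frame_bgr[:, :, ::-1])   # YOLOv5 expects RGB channel order
    detections = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        detections.append({
            "corners": [(x1, y1), (x2, y1), (x2, y2), (x1, y2)],  # (u, v) corner points
            "S": (x2 - x1) * (y2 - y1),                           # area in pixels
            "H": results.names[int(cls)],                         # obstacle class
            "confidence": conf,
        })
    return detections
```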

The central obstacle avoidance algorithm mainly includes the following:

S1. Solve the image information published by the YOLOv5 node: convert the 2D image information of the water-surface obstacles obtained from the industrial camera 2, namely the multiple corner points (u, v), the area S, and the obstacle class H, into fuzzy coordinates (xm, ym) of the water-surface obstacles in the ship coordinate system, called the "water-surface obstacle coordinates".

S2. Solve the radar information published by the radar filter node: likewise convert the "angle-distance" information contained in the ranges array of the radar message into obstacle coordinates (xn, yn) in the ship coordinate system, called the "ordinary obstacle coordinates".
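A small sketch of step S2 for the radar branch is given below: each kept "angle-distance" pair of the filtered ranges array is converted into ship-frame coordinates. The frame convention (x lateral, starboard positive; y forward) is an assumption chosen to be consistent with Th2 being the squared lateral offset later in the text.

```python
# Step S2 sketch: "angle-distance" pairs -> ordinary obstacle coordinates (x_n, y_n).
import math

def ranges_to_ship_coordinates(ranges, angle_min, angle_increment, max_range=40.0):
    """angle_min / angle_increment in radians; angle 0 is assumed to point at the bow."""
    obstacles = []
    for i, r in enumerate(ranges):
        if not (0.0 < r < max_range):
            continue                              # filtered-out or invalid beam
        theta = angle_min + i * angle_increment   # beam angle relative to the bow
        x = r * math.sin(theta)                   # lateral offset from the centerline
        y = r * math.cos(theta)                   # forward distance
        obstacles.append((x, y))
    return obstacles
```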

S3. From all current water-surface obstacle coordinates and ordinary obstacle coordinates, compute the level-1 threat coefficient Th1 and the level-2 threat coefficient Th2 of each obstacle coordinate. Th1 is the distance of the obstacle coordinate from the origin of the ship coordinate system, and Th2 is the square of the obstacle's lateral coordinate, x². The specific calculation formulas are as follows:

Th1 = √(x² + y²)

Th2 = x²

The level-1 threat coefficient describes the straight-line distance between the obstacle and the ship, and the level-2 threat coefficient describes the lateral distance between the obstacle and the ship. The level-1 threat coefficient is established so that the ship can avoid obstacles according to their straight-line distance and is the most basic obstacle avoidance reference. The level-2 threat coefficient is mainly aimed at obstacles directly in the ship's forward path, which require a large change of course to avoid. The level-1 and level-2 threat coefficients act on the obstacle avoidance step simultaneously.

S4. The level-1 and level-2 threat coefficients are computed for all obstacles detected within the ship's detection range, but the ship establishes a "peripheral safety area", because some obstacles are not on the ship's path and therefore require no avoidance. The safety area is a rectangular region extending 3.5 m on each side of the ship and 30 m ahead of and behind it.
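Continuing the same assumed ship-frame convention, the sketch below combines steps S3 and S4: the two threat coefficients for an obstacle and the rectangular safety-area test that decides whether the obstacle needs to be considered at all. Only the 3.5 m and 30 m bounds come from the text; the frame convention remains an assumption.

```python
# Steps S3/S4 sketch: threat coefficients and the rectangular "peripheral safety area".
import math

def threat_coefficients(x, y):
    th1 = math.hypot(x, y)   # level-1 threat: straight-line distance to the obstacle
    th2 = x * x              # level-2 threat: squared lateral offset from the centerline
    return th1, th2

def in_safety_area(x, y, half_width=3.5, half_length=30.0):
    """True if the obstacle lies inside the safety rectangle and must be avoided."""
    return abs(x) <= half_width and abs(y) <= half_length
```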

S5. For convenience of description in the two-dimensional coordinate system, the parameters describing the ship's two-dimensional motion, the linear velocity V and the angular velocity ω, are converted into an x-axis velocity and a y-axis velocity, written (Vx, Vy), using the following formulas:

Vx = V·cos(ωΔt)

Vy = V·sin(ωΔt)

where Δt is the sampling time.

From the coordinates of all currently detected obstacles, (Vx, Vy) is computed as follows:

Vx = K1(∑left Th1 − ∑right Th1) + K2(∑left Th2 − ∑right Th2)

Vy = K3·Vx + K4(∑left Th2 − ∑right Th2)

The y-axis divides the coordinate system into two regions, a left half and a right half. Vx is the difference of the level-1 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K1, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K2. This gives the ship good obstacle avoidance flexibility both for obstacles directly ahead and for obstacles close to its sides, and allows it to handle complex multi-obstacle situations; with this formula, Vx lets the ship dynamically maintain its distance from each obstacle, and when two or more obstacles are close together it keeps the ship at an optimal safe distance from each of them. Vy is Vx multiplied by a proportional gain K3, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K4, so that Vy partly follows Vx and achieves the same dynamic speed regulation.
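Putting the pieces together, a sketch of step S5 under the same assumed frame is shown below: obstacles inside the safety area are split into left (x < 0) and right halves by the y-axis, and the differential formulas above give (Vx, Vy). The gain values K1 to K4 are placeholders for illustration; the patent does not specify them.

```python
# Step S5 sketch: (Vx, Vy) from the per-side sums of the two threat coefficients.
import math

def avoidance_velocity(obstacles, K1=0.02, K2=0.01, K3=0.5, K4=0.01):
    """obstacles: ship-frame (x, y) coordinates that lie inside the safety area."""
    sum_th1 = {"left": 0.0, "right": 0.0}
    sum_th2 = {"left": 0.0, "right": 0.0}
    for x, y in obstacles:
        side = "left" if x < 0.0 else "right"
        sum_th1[side] += math.hypot(x, y)   # level-1 threat coefficient
        sum_th2[side] += x * x              # level-2 threat coefficient
    d1 = sum_th1["left"] - sum_th1["right"]
    d2 = sum_th2["left"] - sum_th2["right"]
    vx = K1 * d1 + K2 * d2
    vy = K3 * vx + K4 * d2
    return vx, vy
```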

Compared with the prior art, the beneficial effects of the present invention are as follows:

The present invention uses a distributed, modular multi-sensor architecture in which the individual functions can be coupled together very easily and tasks can be executed concurrently in multiple threads, greatly reducing the required computing power and making control more flexible. It gives the unmanned ship the ability to cruise and avoid obstacles autonomously and, equipped with an efficient and accurate fusion obstacle avoidance algorithm, the unmanned ship can respond flexibly to the wide variety of complex situations that arise on the water surface, a major step forward in intelligence.

Brief Description of the Drawings

Figure 1 is a schematic diagram of the obstacle avoidance monitoring principle of the present invention;

Figure 2 is a photograph of the vessel of the present invention;

Figure 3 shows the rectangular safety area used in the obstacle avoidance system of the present invention;

Figure 4 is a schematic diagram of the first obstacle distribution and avoidance route in an embodiment of the present invention;

Figure 5 is a schematic diagram of the second obstacle distribution and avoidance route in an embodiment of the present invention;

Figure 6 is a schematic diagram of the third obstacle distribution and avoidance route in an embodiment of the present invention;

Figure 7 is a flowchart of the central obstacle avoidance algorithm of the present invention;

Figure 8 is a basic structural diagram of the operation of the present invention;

In the figures: 1. lidar; 2. industrial camera; 3. IMU inertial measurement unit; 4. GPS module; 5. artificial-intelligence development board; 6. microcontroller; 7. drive mechanism.

Detailed Description of the Embodiments

To clarify the technical problems, technical solutions, implementation process, and performance, the present invention is described in further detail below in conjunction with embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it. Various exemplary embodiments, features, and aspects of the present disclosure will be described in detail below with reference to the drawings. The same reference numerals in the drawings denote elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings need not be drawn to scale unless specifically indicated.

The word "exemplary" is used here to mean "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" is not necessarily to be construed as preferred or superior to other embodiments.

In addition, to better illustrate the present disclosure, numerous specific details are given in the following detailed description. Those skilled in the art will understand that the present disclosure can also be practiced without certain of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, so as to highlight the gist of the present disclosure.

Embodiment 1

As shown in Figures 1, 2, and 8, a ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion comprises three parts: sensors, a controller, and actuators, and the system is developed and tested on the ROS architecture. The sensors comprise a lidar 1, an industrial camera 2, an IMU inertial measurement unit 3, and a GPS module 4, which perceive the surrounding environment and acquire attitude and position information. The controller comprises an artificial-intelligence development board 5 and a microcontroller 6, which form the main control module of the unmanned ship and make decisions through built-in algorithms according to the inputs. The actuators comprise a drive controller and a drive mechanism 7, which execute the controller's decisions. The lidar 1, the industrial camera 2, the IMU inertial measurement unit 3, and the GPS module 4 are all connected to the artificial-intelligence development board 5 by cables; the drive controller and the artificial-intelligence development board 5 are both connected to the microcontroller 6 by cables; and the drive controller and the drive mechanism 7 are connected by a cable. In use, the IMU inertial measurement unit 3 and the GPS module 4 are mounted on the connecting bridge; they use the currently common IMU+GPS data fusion algorithm to achieve relatively high-precision positioning and path calculation for the unmanned ship in outdoor environments and, combined further with the navigation package, realize fixed-point navigation, which runs concurrently with the obstacle avoidance function. The lidar 1 and the industrial camera 2 are mounted at the head of the unmanned ship's connecting bridge, aligned with its central axis, with the industrial camera 2 placed in front of the lidar 1. The radar point cloud data and the industrial camera 2 images are filtered and analyzed, respectively, to obtain filtered radar information and water-surface obstacle image information, which are packaged as "messages" in the ROS architecture and published. A central obstacle avoidance algorithm node subscribes to the above water-surface obstacle image information and filtered radar information, deduces through the obstacle avoidance algorithm the direction and speed of motion the vessel needs next to reach the target point, and continuously iterates and updates according to the real-time data collected. The sensors' real-time messages are continuously subscribed by the central obstacle avoidance algorithm node, and according to the current situation it is judged whether to continue the avoidance action of the previous moment or to interrupt the previous avoidance process and make a new avoidance maneuver. In the obstacle avoidance system, given the respective strengths and weaknesses of the lidar 1 and the industrial camera 2, the radar is responsible for detecting raised obstacles in the 1 m to 40 m range and over 360 degrees horizontally, such as river banks, reefs, and passing vessels; the industrial camera 2 is responsible for detecting floating obstacles on the water surface within 100 degrees horizontally and 80 degrees vertically in front of the ship, such as water bottles, plastic bags, and driftwood. Making these two sensors complement each other's strengths and weaknesses is the core idea of this obstacle avoidance system.

The drive controller may be an electronic speed controller, and the drive mechanism 7 may be an underwater thruster.

Realizing the positioning and path calculation functions of the unmanned ship in outdoor environments and further combining them with the navigation package in the ROS architecture specifically includes the following:

The fusion algorithm measures the current vessel's latitude and longitude, attitude heading, sailing speed, sailing acceleration, and other information. Using the currently common IMU+GPS fusion algorithm, this information is converted into odometry information, which records the ship's motion process. Through a personal computer in the control system, using the distributed framework built with the Robot Operating System (ROS) and the WIFI local area network formed by the communication router 8 as the medium, the operator sends the latitude and longitude coordinates of multiple unmanned-ship cruise points, one at a time, to the artificial-intelligence development board 5. The navigation package combines the current position and velocity information, calculates the attitude, velocity, and other information required to reach the target point, converts them into a velocity topic, and sends it to the microcontroller 6.

The lidar 1 and the industrial camera 2 perform recognition at the same time; the two sensors simply cover different recognition regions and different target objects, but the recognition results are ultimately unified into a single coordinate system to facilitate the obstacle avoidance calculation. Specifically, the lidar 1 is responsible for detecting raised obstacles in the 1 m to 40 m range and over 360 degrees horizontally; the industrial camera 2 is responsible for detecting floating obstacles on the water surface within 100 degrees horizontally and 80 degrees vertically in front of the ship.

The radar point cloud data obtained directly from the lidar 1 is large in volume and subject to considerable noise on the water surface, so a simple radar filter must be established to simplify the radar data and remove interference and noise points. The filtering algorithm judges according to the degree of dispersion of the radar point cloud: regions where the points are relatively clustered are considered obstacles to be avoided, while the remaining points are considered noise. Thanks to the radar's performance, even if this method deviates in the data processing of certain frames, the subsequent process can successfully compensate for it.

The images acquired directly by the industrial camera 2 must be analyzed and processed before the water-surface obstacles in them can be identified. The analysis uses the convolutional neural network YOLOv5 with a dataset trained in advance. The dataset includes a series of floating objects commonly found on the water surface, such as mineral water bottles, plastic garbage bags, floating logs, and wooden barrels, and will be continuously upgraded to add other floating objects that may appear in natural environments. The neural network reads each frame captured by the industrial camera 2, analyzes it, and identifies the water-surface obstacles in it, recording for each obstacle its multiple corner points (u, v) in the image, its area S, and its obstacle class H.

As shown in Figure 7, the central obstacle avoidance algorithm mainly includes the following:

S1. Solve the image information published by the YOLOv5 node: convert the 2D image information of the water-surface obstacles obtained from the industrial camera 2, namely the multiple corner points (u, v), the area S, and the obstacle class H, into fuzzy coordinates (xm, ym) of the water-surface obstacles in the ship coordinate system, called the "water-surface obstacle coordinates".

S2. Solve the radar information published by the radar filter node: likewise convert the "angle-distance" information contained in the ranges array of the radar message into obstacle coordinates (xn, yn) in the ship coordinate system, called the "ordinary obstacle coordinates".

S3. From all current water-surface obstacle coordinates and ordinary obstacle coordinates, compute the level-1 threat coefficient Th1 and the level-2 threat coefficient Th2 of each obstacle coordinate. Th1 is the distance of the obstacle coordinate from the origin of the ship coordinate system, and Th2 is the square of the obstacle's lateral coordinate, x². The specific calculation formulas are as follows:

Th1 = √(x² + y²)

Th2 = x²

The level-1 threat coefficient describes the straight-line distance between the obstacle and the ship, and the level-2 threat coefficient describes the lateral distance between the obstacle and the ship. The level-1 threat coefficient is established so that the ship can avoid obstacles according to their straight-line distance and is the most basic obstacle avoidance reference. The level-2 threat coefficient is mainly aimed at obstacles directly in the ship's forward path, which require a large change of course to avoid. The level-1 and level-2 threat coefficients act on the obstacle avoidance step simultaneously.

S4. The level-1 and level-2 threat coefficients are computed for all obstacles detected within the ship's detection range, but the ship establishes a "peripheral safety area", because some obstacles are not on the ship's path and therefore require no avoidance. The safety area is a rectangular region extending 3.5 m on each side of the ship and 30 m ahead of and behind it, shown converted into the ship coordinate system in Figure 3. This rectangle is the optimal safety area measured through actual tests, taking into account hull size, turning radius, sailing speed, sensor performance, and other factors.

S5. For convenience of description in the two-dimensional coordinate system, the parameters describing the ship's two-dimensional motion, the linear velocity V and the angular velocity ω, are converted into an x-axis velocity and a y-axis velocity, written (Vx, Vy), using the following formulas:

Vx = V·cos(ωΔt)

Vy = V·sin(ωΔt)

where Δt is the sampling time.

From the coordinates of all currently detected obstacles, (Vx, Vy) is computed as follows:

Vx = K1(∑left Th1 − ∑right Th1) + K2(∑left Th2 − ∑right Th2)

Vy = K3·Vx + K4(∑left Th2 − ∑right Th2)

The y-axis divides the coordinate system into two regions, a left half and a right half. Vx is the difference of the level-1 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K1, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K2. This gives the ship good obstacle avoidance flexibility both for obstacles directly ahead and for obstacles close to its sides, and allows it to handle complex multi-obstacle situations; with this formula, Vx lets the ship dynamically maintain its distance from each obstacle, and when two or more obstacles are close together it keeps the ship at an optimal safe distance from each of them. Vy is Vx multiplied by a proportional gain K3, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K4, so that Vy partly follows Vx and achieves the same dynamic speed regulation.

Obstacle avoidance experiments were carried out with the above system. The following are the commands the ship executes for several obstacle distributions that occurred:

① When obstacles near the vessel are distributed as shown in Figure 4, the ship sails normally according to the navigation commands;

② When obstacles near the ship are distributed as shown in Figure 5, the ship turns right quickly and, after the threat disappears, continues along the original navigation route;

③ When obstacles near the vessel are distributed as shown in Figure 6, the ship first turns right and then turns left quickly, sailing flexibly between the two obstacles while maintaining an optimal safe distance from each.

The basic principles, main features, and advantages of the present invention have been shown and described above. Those skilled in the art should understand that the present invention is not limited by the above embodiments; the above embodiments and the description merely illustrate preferred examples of the present invention and are not intended to limit it. Without departing from the spirit and scope of the present invention, the present invention will have various changes and improvements, all of which fall within the scope of the claimed invention. The scope of protection of the present invention is defined by the appended claims and their equivalents.

Claims (6)

Translated from Chinese
1. A ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion, characterized in that it comprises three parts: sensors, a controller, and actuators, and that the system is developed and tested on the ROS architecture; the sensors comprise a lidar (1), an industrial camera (2), an IMU inertial measurement unit (3), and a GPS module (4), which perceive the surrounding environment and acquire attitude and position information; the controller comprises an artificial-intelligence development board (5) and a microcontroller (6), which form the main control module of the unmanned ship and make decisions through built-in algorithms according to the inputs; the actuators comprise a drive controller and a drive mechanism (7), which execute the controller's decisions; the lidar (1), the industrial camera (2), the IMU inertial measurement unit (3), and the GPS module (4) are all connected to the artificial-intelligence development board (5) by cables; the drive controller and the artificial-intelligence development board (5) are both connected to the microcontroller (6) by cables; and the drive controller and the drive mechanism (7) are connected by a cable; in use, the IMU inertial measurement unit (3) and the GPS module (4) are mounted on the hull connecting bridge; they use the currently common IMU+GPS data fusion algorithm to achieve relatively high-precision positioning and path calculation for the unmanned ship in outdoor environments and, combined further with the navigation package, realize fixed-point navigation, which runs concurrently with the obstacle avoidance function; the lidar (1) and the industrial camera (2) are mounted at the head of the unmanned ship's connecting bridge, aligned with its central axis, with the industrial camera (2) placed in front of the lidar (1); the radar point cloud data and the industrial camera (2) images are filtered and analyzed, respectively, to obtain filtered radar information and water-surface obstacle image information, which are packaged as "messages" in the ROS architecture and published; a central obstacle avoidance algorithm node subscribes to the above water-surface obstacle image information and filtered radar information, determines through the obstacle avoidance algorithm the direction and speed of motion the vessel needs next to reach the target point, and continuously iterates and updates according to the real-time data collected.

2. The ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion according to claim 1, characterized in that realizing the positioning and path calculation functions of the unmanned ship in outdoor environments and further combining them with the navigation package in the ROS architecture specifically includes the following: the fusion algorithm measures the current vessel's latitude and longitude, attitude heading, sailing speed, sailing acceleration, and other information; using the currently common IMU+GPS fusion algorithm, this information is converted into odometry information, which records the ship's motion process; through a personal computer in the control system, using the distributed framework built with the Robot Operating System (ROS) and the WIFI local area network formed by the communication router (8) as the medium, the operator sends the latitude and longitude coordinates of multiple unmanned-ship cruise points, one at a time, to the artificial-intelligence development board (5); the navigation package combines the current position and velocity information, calculates the attitude, velocity, and other information required to reach the target point, converts them into a velocity topic, and sends it to the microcontroller (6).

3. The ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion according to claim 1, characterized in that the lidar (1) and the industrial camera (2) perform recognition at the same time; the two sensors simply cover different recognition regions and different target objects, but the recognition results are ultimately unified into a single coordinate system to facilitate the obstacle avoidance calculation; specifically, the lidar (1) is responsible for detecting raised obstacles in the 1 m to 40 m range and over 360 degrees horizontally, and the industrial camera (2) is responsible for detecting floating obstacles on the water surface within 100 degrees horizontally and 80 degrees vertically in front of the ship.

4. The ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion according to claim 1, characterized in that the radar point cloud data obtained directly from the lidar (1) is large in volume and subject to considerable noise on the water surface, so a simple radar filter must be established to simplify the radar data and remove interference and noise points; the filtering algorithm judges according to the degree of dispersion of the radar point cloud: regions where the points are relatively clustered are considered obstacles to be avoided, while the remaining points are considered noise.

5. The ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion according to claim 1, characterized in that the images acquired directly by the industrial camera (2) must be analyzed and processed before the water-surface obstacles in them can be identified; the analysis uses the convolutional neural network YOLOv5 with a dataset trained in advance; the dataset includes a series of floating objects commonly found on the water surface, such as mineral water bottles, plastic garbage bags, floating logs, and wooden barrels, and will be continuously upgraded to add other floating objects that may appear in natural environments; the neural network reads each frame captured by the industrial camera (2), analyzes it, and identifies the water-surface obstacles in it, recording for each obstacle its multiple corner points (u, v) in the image, its area S, and its obstacle class H.

6. The ROS-based multi-modal obstacle avoidance system for unmanned ships with multi-sensor fusion according to claim 1, characterized in that the central obstacle avoidance algorithm mainly includes the following:
S1. Solve the image information published by the YOLOv5 node: convert the 2D image information of the water-surface obstacles obtained from the industrial camera (2), namely the multiple corner points (u, v), the area S, and the obstacle class H, into fuzzy coordinates (xm, ym) of the water-surface obstacles in the ship coordinate system, called the "water-surface obstacle coordinates";
S2. Solve the radar information published by the radar filter node: likewise convert the "angle-distance" information contained in the ranges array of the radar message into obstacle coordinates (xn, yn) in the ship coordinate system, called the "ordinary obstacle coordinates";
S3. From all current water-surface obstacle coordinates and ordinary obstacle coordinates, compute the level-1 threat coefficient Th1 and the level-2 threat coefficient Th2 of each obstacle coordinate; Th1 is the distance of the obstacle coordinate from the origin of the ship coordinate system, and Th2 is the square of the obstacle's lateral coordinate, x²; the specific calculation formulas are as follows:
Th1 = √(x² + y²)
Th2 = x²
the level-1 threat coefficient describes the straight-line distance between the obstacle and the ship, and the level-2 threat coefficient describes the lateral distance between the obstacle and the ship; the level-1 threat coefficient is established so that the ship can avoid obstacles according to their straight-line distance and is the most basic obstacle avoidance reference; the level-2 threat coefficient is mainly aimed at obstacles directly in the ship's forward path, which require a large change of course to avoid; the level-1 and level-2 threat coefficients act on the obstacle avoidance step simultaneously;
S4. The level-1 and level-2 threat coefficients are computed for all obstacles detected within the ship's detection range, but the ship establishes a "peripheral safety area", because some obstacles are not on the ship's path and therefore require no avoidance; the safety area is a rectangular region extending 3.5 m on each side of the ship and 30 m ahead of and behind it;
S5. For convenience of description in the two-dimensional coordinate system, the parameters describing the ship's two-dimensional motion, the linear velocity V and the angular velocity ω, are converted into an x-axis velocity and a y-axis velocity, written (Vx, Vy), using the following formulas:
Vx = V·cos(ωΔt)
Vy = V·sin(ωΔt)
where Δt is the sampling time;
from the coordinates of all currently detected obstacles, (Vx, Vy) is computed as follows:
Vx = K1(∑left Th1 − ∑right Th1) + K2(∑left Th2 − ∑right Th2)
Vy = K3·Vx + K4(∑left Th2 − ∑right Th2)
the y-axis divides the coordinate system into two regions, a left half and a right half; Vx is the difference of the level-1 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K1, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K2; this gives the ship good obstacle avoidance flexibility both for obstacles directly ahead and for obstacles close to its sides, and allows it to handle complex multi-obstacle situations; with this formula, Vx lets the ship dynamically maintain its distance from each obstacle, and when two or more obstacles are close together it keeps the ship at an optimal safe distance from each of them; Vy is Vx multiplied by a proportional gain K3, plus the difference of the level-2 coefficients of all obstacle coordinates in the left and right halves multiplied by a proportional gain K4, so that Vy partly follows Vx and achieves the same dynamic speed regulation;
the computed Vx and Vy are transmitted to the drive controller, which controls the drive mechanism (7) to output at the specified speed, ensuring that the ship can avoid obstacles according to the obstacle avoidance algorithm.
CN202310393001.6A | 2023-04-13 | 2023-04-13 | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS | Pending | CN116540696A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202310393001.6A | CN116540696A (en) | 2023-04-13 | 2023-04-13 | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202310393001.6A | CN116540696A (en) | 2023-04-13 | 2023-04-13 | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS

Publications (1)

Publication Number | Publication Date
CN116540696A (en) | 2023-08-04

Family

ID=87446042

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202310393001.6A | Pending | CN116540696A (en) | 2023-04-13 | 2023-04-13 | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS

Country Status (1)

Country | Link
CN (1) | CN116540696A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN102999050A (en) * | 2012-12-13 | 2013-03-27 | 哈尔滨工程大学 | Automatic obstacle avoidance method for intelligent underwater robots
US8825259B1 (en) * | 2013-06-21 | 2014-09-02 | Google Inc. | Detecting lane closures and lane shifts by an autonomous vehicle
CN105468337A (en) * | 2014-06-16 | 2016-04-06 | 比亚迪股份有限公司 | Method and system for seeking vehicle through mobile terminal and mobile terminal
CN105629985A (en) * | 2016-03-20 | 2016-06-01 | 北京工业大学 | Indoor four-rotor UAV 360° three-dimensional obstacle avoidance system
US20180075762A1 (en) * | 2016-09-09 | 2018-03-15 | Garmin International, Inc. | Obstacle determination and display system
CN109270233A (en) * | 2018-08-27 | 2019-01-25 | 东南大学 | It is a kind of for searching the unmanned boat system of pollution entering the water
CN109460035A (en) * | 2018-12-18 | 2019-03-12 | 国家海洋局北海海洋工程勘察研究院(青岛环海海洋工程勘察研究院) | Second level automatic obstacle avoiding system and barrier-avoiding method under a kind of unmanned boat fast state
CN109916400A (en) * | 2019-04-10 | 2019-06-21 | 上海大学 | An Obstacle Avoidance Method for Unmanned Vehicle Based on the Combination of Gradient Descent Algorithm and VO Method
CN110580044A (en) * | 2019-08-30 | 2019-12-17 | 天津大学 | Heterogeneous system for fully automatic navigation of unmanned ships based on intelligent perception
CN113985419A (en) * | 2021-10-22 | 2022-01-28 | 中国科学院合肥物质科学研究院 | Water surface robot cooperative obstacle detection and avoidance method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
乔俊飞 et al.: "Application of reinforcement learning based on neural networks in obstacle avoidance", Journal of Tsinghua University (Science and Technology), vol. 48, no. 2, 2 December 2008 (2008-12-02), pages 1747-1750 *
胡朝辉 et al.: "Active obstacle avoidance algorithm based on a lateral safety distance model", Automotive Engineering, vol. 42, no. 5, 25 May 2020 (2020-05-25), pages 581-587 *
赵飞颖: "Collision avoidance method for intelligent vehicles for laterally and longitudinally moving obstacles", China Master's Theses Full-text Database, Engineering Science and Technology II, 15 January 2023 (2023-01-15), pages 1-85 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN117690194A (en) * | 2023-12-08 | 2024-03-12 | 北京虹湾威鹏信息技术有限公司 | Multi-source AI biodiversity observation method and acquisition system
CN117690194B (en) * | 2023-12-08 | 2024-06-07 | 北京虹湾威鹏信息技术有限公司 | Multi-source AI biodiversity observation method and acquisition system

Similar Documents

Publication | Publication Date | Title
KR102566724B1 (en) | Harbor monitoring device and harbor monitoring method
KR102794429B1 (en) | Device and method for monitoring a berthing
Balasuriya et al. | Vision based autonomous underwater vehicle navigation: underwater cable tracking
CN106199625A (en) | A kind of ship berthing detecting system based on laser radar and method
CN113256697B (en) | Three-dimensional reconstruction method, system, device and storage medium of underwater scene
CN110610130A (en) | A transmission line robot navigation method and system based on multi-sensor information fusion
CN112527019A (en) | Heterogeneous unmanned system cooperative formation control system suitable for severe sea conditions and control method thereof
Akram et al. | A visual control scheme for auv underwater pipeline tracking
CN116540696A (en) | A multi-modal obstacle avoidance system based on multi-sensor fusion of unmanned ships based on ROS
CN119658721B (en) | A robot rescue emergency control system based on voice interaction
CN118672276B (en) | Unmanned ship autonomous navigation control method and system
CN120085649A (en) | Intelligent unmanned forklift system and method based on visual navigation
Chou et al. | An AI AUV enabling vision-based diver-following and obstacle avoidance with 3D-modeling dataset
CN118244755B (en) | Underwater vehicle docking control method and device based on imaging sonar
Neto et al. | Autonomous underwater vehicle to inspect hydroelectric dams
CN118466483A (en) | A local navigation planning method for unmanned boat based on beam diagram state input
CN117032257A (en) | Unmanned ship motion control method based on computer vision guidance
Singh et al. | Rope augmented path following and control of remotely operated underwater vehicle using vision for stilling basin surveillance
Lee et al. | A docking and control system for an autonomous underwater vehicle
Yi et al. | A robust visual tracking method for unmanned mobile systems
CN114594771A (en) | A method for detecting and getting out of trouble for a surface robot and its device
Hamrén et al. | Situation Awareness within Maritime Applications
Cai et al. | A Data-Driven Velocity Estimator for Autonomous Underwater Vehicles Experiencing Unmeasurable Flow and Wave Disturbance
Ruba et al. | Enhancing Underwater Tunnel Safety through Smart Inspection: An ROV-based System with AI-Powered Crack Detection using Canny Edge Detection
Lin et al. | Development of an image processing module for autonomous underwater vehicles through integration of object recognition with stereoscopic image reconstruction

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
