CN110825108A - A collaborative collision avoidance method for multiple tracking UAVs in the same airspace - Google Patents

A collaborative collision avoidance method for multiple tracking UAVs in the same airspace

Info

Publication number
CN110825108A
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
speed
collision
velocity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911092138.8A
Other languages
Chinese (zh)
Other versions
CN110825108B (en)
Inventor
唐文兵
黎瑶
陈博琛
丁佐华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sci Tech University ZSTU
Original Assignee
Zhejiang Sci Tech University ZSTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sci Tech University ZSTU
Priority to CN201911092138.8A
Publication of CN110825108A
Application granted
Publication of CN110825108B
Status: Active
Anticipated expiration

Abstract



The invention discloses a cooperative collision-avoidance method for multiple tracking drones in the same airspace, in which each drone tracks its own target. Because the routes of the drones may intersect, the invention aims to achieve collision-free tracking of the targets among the drones. For each drone, the invention first detects the moving target using OpenCV, then uses a PID controller to calculate the velocity the drone should execute from the target's position in the image obtained by the onboard camera. The velocity obstacle model is then used for collision detection, and finally a goal-oriented velocity optimization method is given. Considering the cooperative collision-avoidance problem of multiple drones with crossing paths in complex airspace, the invention uses the velocity obstacle model to transform the problem into a planning problem in velocity space and, in view of real-time requirements, provides a goal-oriented velocity optimization method.


Description

Cooperative anti-collision method for multiple tracking unmanned aerial vehicles in same airspace
Technical Field
The invention relates to the field of multi-unmanned aerial vehicle collaborative motion planning under constraint conditions, in particular to a collaborative anti-collision method for multiple tracking unmanned aerial vehicles in the same airspace.
Background
With the maturity of low-altitude Unmanned Aerial Vehicle (UAV) hovering, cruising and pan-tilt technologies, and the rise of Computer Vision and Deep Learning, target tracking on a UAV platform has become a research hotspot at home and abroad; it has been used for tracking criminal vehicles in urban counter-terrorism, tracking ground maneuvering targets in air combat, and the like. A tracking drone is a special-purpose UAV that acquires image information with sensors such as the airborne pan-tilt camera, obtains the target's position on the image through a target-detection process (or predicts it via state estimation and multi-sensor information fusion), and calculates a control command (usually a velocity vector) through a tracking algorithm to control the drone's motion, so that the target always stays near the central area of the field of view of the onboard camera. However, this process is driven only by the target's movement and does not take into account the environment the drone is in (i.e., the threat posed by other aircraft). With the popularity of civilian drones and the increasing complexity of the tasks performed, the routes of tracking drones may cross. A collision detection and resolution method between drones therefore needs to be designed to guarantee flight safety.
The cooperative collision-avoidance problem for multiple drones involves the high-dimensional joint space produced by superposing the drones' degrees of freedom, the optimization problem, and the drones' static and dynamic constraints. Control architectures for multi-drone anti-collision systems are either centralized or distributed. Centralized control can obtain efficient, globally optimized planning results, but it mainly suits static environments and copes poorly with environmental change; in a distributed system, each drone plans its own action from its own environmental information, which adapts to environmental changes but cannot obtain a globally optimal solution and may cause the Deadlock problem. At present, most multi-drone anti-collision methods extend results from single-robot motion planning, such as Sampling-based Methods, Neural Network-based Methods, Fuzzy Logic, geometric models (e.g., Velocity Obstacles and Artificial Potential Fields), mathematical optimization (planning) models, Swarm Intelligence or Soft Computing, and Reinforcement Learning.
Among the above methods, the velocity obstacle is a Local Path Planning model (i.e., an obstacle-avoidance model) commonly used in multi-robot systems, but it assumes the obstacle is stationary or moving at constant speed, and it requires solving a complex Nonlinear Optimization Problem in the Relative Velocity Space, so real-time performance is hard to guarantee. Moreover, for tracking drones, velocity optimization must consider not only potential collisions between drones but also the motion trend of the tracked target, while still ensuring real-time response.
Disclosure of Invention
In order to solve the above problems, improve airspace utilization, ensure safe and smooth drone flight, and achieve continuous target tracking, the invention provides a cooperative anti-collision method for multiple tracking unmanned aerial vehicles in the same airspace.
The invention is realized by the following technical scheme: a cooperative anti-collision method for multiple tracking unmanned aerial vehicles in the same airspace specifically comprises the following steps:
step 1, firstly, acquiring image information from a real-time video stream issued by an unmanned aerial vehicle sensor;
step 2, using a tracker in OpenCV to locate the target's center coordinate (x_o, y_o) on the image;
step 3, scaling the offsets of the center coordinate (x_o, y_o) from step 2 to get normalized offsets Δx′ and Δy′, and calculating the drone's velocity vector v for tracking the target using a PID algorithm;
step 4, using the velocity obstacle model to perform collision detection on v, i.e., checking whether the tip of v lies in the combined velocity obstacle region; when it does, adjusting the candidate velocity with the goal-oriented velocity optimization method to eliminate the collision;
and 5, repeatedly executing the processes of the steps 1-4 by the unmanned aerial vehicle until the tracking task is completed.
Further, the method for tracking the unmanned aerial vehicle specifically comprises the following steps:
step 1: collecting image information using devices such as the airborne pan-tilt camera, and returning the video data to the control end;
step 2: selecting the target to be tracked by the drone from the video data, and extracting features such as HOG and HSV;
step 3: using the HOG, HSV and other features extracted in step 2, locating the target's center coordinate (x_o, y_o) on the image with an OpenCV tracker, i.e., the center of the minimum enclosing circle or minimum enclosing rectangle;
step 4: calculating the absolute offsets Δx and Δy, in the horizontal and vertical directions, between the center coordinate (x_o, y_o) and the center (x_c, y_c) of the drone's field of view:

Δx = x_o − x_c,  Δy = y_o − y_c
step 5: setting a dead-zone range with width w and height h whose center coincides with the center of the drone's field of view; when |x_o − x_c| ≤ w/2, the drone's velocity direction needs no adjustment; when |y_o − y_c| ≤ h/2, the drone's speed is 0, i.e., it need not advance or retreat; otherwise step 6 is executed;
step 6: let the resolution of the drone's view picture be x_r × y_r; the offsets are first normalized, scaling Δx and Δy to [−1, 1], i.e., Δx′ = Δx/(x_r/2), Δy′ = Δy/(y_r/2); the speed of the drone at the next moment is then:
v = v_max · Δy′
α = α_max · Δx′
where v_max is the maximum speed of the drone, and α_max (α_max ≤ π/2) is the maximum angle through which the drone rotates about the Z axis alone;
when Δy′ < 0, the drone should retreat, otherwise advance; similarly, when Δx′ < 0, the drone rotates left, otherwise right;
at this point a new velocity vector v = (v, α) is generated, whose direction is at angle α to the current straight-ahead direction and whose magnitude is v;
and 7, step 7: and repeating the steps 3-6 until the tracking task is completed.
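A minimal sketch of steps 4-6 above (dead-zone check plus P control of speed and rotation from the pixel offsets) might look as follows; the function name, frame size and gains are illustrative assumptions, not values from the patent:

```python
# Dead-zone check plus proportional control of forward speed and rotation
# from the target's pixel offsets, following steps 4-6 above.

def track_command(xo, yo, xc, yc, w, h, xr, yr, v_max, alpha_max):
    """Return (v, alpha) for the next moment; zero inside the dead zone."""
    dx = xo - xc                                  # horizontal offset
    dy = yo - yc                                  # vertical offset
    # Dead zone of width w / height h suppresses small corrections.
    alpha = 0.0 if abs(dx) <= w / 2 else alpha_max * (dx / (xr / 2))
    v = 0.0 if abs(dy) <= h / 2 else v_max * (dy / (yr / 2))
    return v, alpha

# Example: 640x480 frame, target right of and below the view center.
v, alpha = track_command(xo=420, yo=300, xc=320, yc=240, w=40, h=30,
                         xr=640, yr=480, v_max=2.0, alpha_max=1.0)
```

Here dx/(x_r/2) and dy/(y_r/2) reproduce the normalization of step 6, so a negative offset yields a negative (retreat or left-turn) command.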
Further, step 4 comprises the following substeps:
when there are multiple tracking drones in the airspace, a collision check on v is also needed.
Step 1: screening the intruders using the collision cone model and selecting those that threaten the local drone; the specific process is as follows:
for each drone u_i within u_0's sensing range, define the relative velocity v_{0|i} = v_0 − v_i and define the ray:
λ(p,v)={p+vt|t>0}
where the state information of a drone is denoted S = (p, v), with p = (p_x, p_y) the drone's position and v its velocity information; the drones u_1, …, u_n that are also executing tasks within u_0's sensing range are defined as the "intruders" of u_0, i.e., ||p_0 − p_i||_2 ≤ ρ_SR, where ρ_SR denotes the radius of the drone's sensing range, or the radius of its early-warning range; λ(p, v) denotes a ray with origin p along the direction of v;
according to the radius of u_0, u_i is "inflated", i.e., r_i′ = r_i + r_0, yielding the position obstacle PO_{0|i}; the collision cone CC_{0|i} is defined as:

CC_{0|i} = { v_{0|i} | λ(p_0, v_{0|i}) ∩ PO_{0|i} ≠ ∅ }
any relative velocity v_{0|i} located in CC_{0|i} causes u_0 and u_i to collide; CC_{0|i} is used to screen out the drones threatening u_0, denoted u_1, …, u_m (m ≤ n).
Step 2, defining the velocity obstacle model:

VO_{0|i} = CC_{0|i} ⊕ v_i

where ⊕ denotes the Minkowski vector-sum operation, i.e., A ⊕ v = { a + v | a ∈ A };

VO_{0|i} is the set of velocities that lead u_0 and u_i to collide at some future time;
to avoid multiple threats, the individual VOs need to be combined:

VO = ∪_{i=1}^{m} VO_{0|i}

where ∪ denotes the union of the m VO geometric regions; when the tip of v_0 lies inside VO, a velocity outside VO must be selected to avoid the potential collision.
Step 3, velocity optimization: keeping the direction of the velocity vector v unchanged, the drone's next-moment velocity v(t+1) is optimized under the VO constraint; the specific steps are as follows:
assuming that the boundary curve of VO is Ω and the extension of velocity v is defined as λ (p, v), then:
P=λ(p,v)∩Ω
where P denotes the set of intersections of the candidate velocity with the VO boundary curve. Among all intersections, the point P_i closest to p is selected as the endpoint E_v of the next-moment velocity, namely:

E_v = argmin_{P_i ∈ P} ||P_i − p||_2
where P_i denotes the intersection of λ(p, v) with the i-th VO boundary curve.
The optimized speed of the unmanned aerial vehicle is as follows:
v′ = (v′, α) = (||E_v − p||_2, α).
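The optimization of step 3 reduces to intersecting the ray λ(p, v) with the VO boundary Ω and keeping the nearest intersection as E_v. A sketch under the assumption that Ω is approximated by line segments (the patent leaves the boundary representation abstract; all names are illustrative):

```python
import math

def ray_segment_intersection(p, d, a, b):
    """Intersection of the ray p + t*d (t > 0) with segment ab, or None."""
    rx, ry = d
    sx, sy = b[0] - a[0], b[1] - a[1]
    denom = rx * sy - ry * sx
    if denom == 0.0:
        return None                                # parallel, no crossing
    qx, qy = a[0] - p[0], a[1] - p[1]
    t = (qx * sy - qy * sx) / denom                # parameter along the ray
    u = (qx * ry - qy * rx) / denom                # parameter along the segment
    if t > 0.0 and 0.0 <= u <= 1.0:
        return (p[0] + t * rx, p[1] + t * ry)
    return None

def goal_oriented_speed(p, v, boundary_segments):
    """Return the optimized speed magnitude ||E_v - p||_2 with the heading
    unchanged; if the ray misses every segment, keep ||v||."""
    best_d2 = None
    for a, b in boundary_segments:
        hit = ray_segment_intersection(p, v, a, b)
        if hit is not None:
            d2 = (hit[0] - p[0]) ** 2 + (hit[1] - p[1]) ** 2
            if best_d2 is None or d2 < best_d2:
                best_d2 = d2
    return math.hypot(v[0], v[1]) if best_d2 is None else math.sqrt(best_d2)
```

For a boundary segment at x = 1 and p at the origin with v = (2, 0), the optimized magnitude is 1: the speed is shortened to stop at the VO boundary while the heading α is kept.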
compared with the prior art, the invention has the following beneficial effects:
(1) the method has a rigorous geometric basis; velocity obstacles are commonly used for obstacle- and collision-avoidance problems in multi-robot systems and suit the application scenario of the method;
(2) when a plurality of intruders exist, the collision cone model is used for screening the intruders, so that the problem scale is reduced;
(3) compared with the classical velocity obstacle model, a goal-oriented velocity optimization method is proposed in view of the real-time and directivity requirements of the tracking task (i.e., the tracking drone must keep its motion trend basically consistent with that of the tracked target); that is, a simple method for solving the optimization problem is given.
In summary, the method quickly and efficiently resolves the collisions that may occur among multiple tracking drones in flight.
Drawings
FIG. 1 is a flow chart embodying the present invention;
FIG. 2 is a diagram of an implementation of the present invention;
FIG. 3 is a diagram showing the relationship between the actual and ideal positions of the target in the field-of-view picture of the drone's airborne pan-tilt camera;
FIG. 4 is a schematic diagram of the screening of "intruding" drones based on velocity barriers in the present invention;
FIG. 5 is a schematic diagram of a target-oriented speed optimization method introduced in the present invention.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Fig. 1 shows the specific flowchart of the cooperative anti-collision method for multiple tracking drones in the same airspace, which specifically includes:
step 1, firstly, acquiring image information from a real-time video stream issued by an unmanned aerial vehicle sensor;
step 2, using a tracker in OpenCV to locate the target's center coordinate (x_o, y_o) on the image;
step 3, scaling the offsets of the center coordinate (x_o, y_o) from step 2 to get normalized offsets Δx′ and Δy′, and calculating the drone's velocity vector v for tracking the target using a PID algorithm;
Step 4, using a velocity barrier model to perform collision detection on v, namely checking whether the tail end of v is in a combined velocity barrier area, and when the tail end of v is in the combined velocity barrier area, adjusting the candidate velocity by using a target-oriented velocity optimization method to eliminate collision;
and 5, repeatedly executing the processes of the steps 1-4 by the unmanned aerial vehicle until the tracking task is completed.
Fig. 2 shows the architecture for implementing the method of the invention on a real drone platform. The bottom layer is the drone hardware platform, comprising a battery, motors, sensors, an antenna and an onboard processor, on which a Linux operating system is installed. To realize global information sharing between drones, a Robot Operating System (ROS) is deployed on top of Linux as a Meta-Operating System. In ROS, a Master Node is responsible for message communication between nodes. Each drone can therefore act as a node in one ROS: all drones register their information with the master node, and each drone broadcasts its position, velocity and other information through a Topic. Likewise, other drones can subscribe to that topic to obtain the drone's information, which enables asynchronous global communication between drones. On top of the ROS layer, the cooperative anti-collision system for tracking drones comprises two main functional modules: target tracking and cooperative collision avoidance. The target tracking module consists of two parts, OpenCV-based target detection and PID-based speed control; cooperative collision avoidance consists of velocity-obstacle-based collision detection and goal-oriented speed optimization. A detailed description of the two modules follows:
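The topic-based data flow described above (each drone publishes its state on a topic; peers subscribe to it) can be illustrated by a tiny in-process registry. This is only an analogy for the publish/subscribe pattern, not the rospy API; all names here are invented:

```python
# Minimal publish/subscribe registry illustrating the ROS topic pattern.

class TopicBus:
    def __init__(self):
        self.subscribers = {}          # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self.subscribers.get(topic, []):
            cb(message)                # delivery is asynchronous in real ROS

bus = TopicBus()
received = []
# u1 subscribes to u0's state topic, as the other drones do in the architecture.
bus.subscribe("/u0/state", received.append)
bus.publish("/u0/state", {"p": (0.0, 0.0), "v": (1.0, 0.5)})
```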
the specific implementation steps of the target tracking process are as follows:
step 1: collecting image information using devices such as the airborne pan-tilt camera, and returning the video data to the control end;
step 2: selecting the target to be tracked by the drone from the video data, and extracting features such as HOG and HSV;
step 3: using the HOG, HSV and other features extracted in step 2, locating the target's center coordinate (x_o, y_o) on the image with an OpenCV tracker such as MIL, KCF or TLD, i.e., the center of the minimum enclosing circle or minimum enclosing rectangle, such as the center of the red circle in Fig. 3;
step 4: calculating the absolute offsets Δx and Δy, in the horizontal and vertical directions, between the center coordinate (x_o, y_o) and the center (x_c, y_c) of the drone's field of view:

Δx = x_o − x_c,  Δy = y_o − y_c
and the delta x and the delta y are used for judging whether the unmanned aerial vehicle needs to carry out speed adjustment at the current moment or not and calculating a control instruction by using a PID algorithm.
step 5: in order to reduce the drone's "jitter" when the target's center coordinate is near the ideal value, a dead-zone range with width w and height h is set, shown as the small rectangle in Fig. 3; its center coincides with the center of the drone's field of view; when |x_o − x_c| ≤ w/2, the drone's velocity direction needs no adjustment, i.e., no left or right rotation; when |y_o − y_c| ≤ h/2, the drone's speed is 0, i.e., it need not advance or retreat; otherwise step 6 is executed;
step 6: reaching this step means the target's position on the image has left the dead-zone range, so the invention calculates the drone's speed with a PID controller; only the P control is given here as an example. Let the resolution of the drone's view picture be x_r × y_r; the offsets are first normalized, scaling Δx and Δy to [−1, 1], i.e., Δx′ = Δx/(x_r/2), Δy′ = Δy/(y_r/2); the speed of the drone at the next moment is then:
v = v_max · Δy′
α = α_max · Δx′
where v_max is the maximum speed of the drone, and α_max (α_max ≤ π/2) is the maximum angle through which the drone rotates about the Z axis alone;
when Δy′ < 0, the drone should retreat, otherwise advance; similarly, when Δx′ < 0, the drone rotates left, otherwise right;
at this point a new velocity vector v = (v, α) is generated, whose direction is at angle α to the current straight-ahead direction and whose magnitude is v;
and 7, step 7: and repeating the steps 3-6 until the tracking task is completed.
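Step 6 uses only the proportional term, and the description notes that P control is given merely as an example. A full PID on the normalized offsets could be sketched as follows (gains and names are illustrative assumptions):

```python
# One PID controller per axis: forward speed from the normalized vertical
# offset, yaw from the normalized horizontal offset. Illustrative sketch.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid_v = PID(kp=1.0, ki=0.1, kd=0.05)      # forward-speed controller
v_cmd = pid_v.update(error=0.25, dt=0.1)  # error = normalized offset Δy′
```

The integral term removes steady-state offset and the derivative term damps the "jitter" that the dead zone of step 5 also targets.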
However, the velocity generated by the above process cannot guarantee that no collision occurs between drones; for this reason, the cooperative anti-collision process is given. The anti-collision process of the invention is based on the velocity obstacle model, which is commonly used for obstacle- and collision-avoidance problems in multi-robot systems and suits the application scenario of the invention. The specific implementation steps of the cooperative anti-collision process are as follows:
Step 1: screening the intruders using the collision cone model and selecting those that threaten the local drone; the specific process is as follows:
for each drone u_i within u_0's sensing range, define the relative velocity v_{0|i} = v_0 − v_i and define the ray:
λ(p,v)={p+vt|t>0}
where the state information of a drone is denoted S = (p, v), with p = (p_x, p_y) the drone's position and v its velocity information; the drones u_1, …, u_n that are also executing tasks within u_0's sensing range are defined as the "intruders" of u_0, i.e., ||p_0 − p_i||_2 ≤ ρ_SR, where ρ_SR denotes the radius of the drone's sensing range, or the radius of its early-warning range; λ(p, v) denotes a ray with origin p along the direction of v;
according to the radius of u_0, u_i is "inflated", i.e., r_i′ = r_i + r_0, yielding the position obstacle PO_{0|i}; the collision cone CC_{0|i} is defined as:

CC_{0|i} = { v_{0|i} | λ(p_0, v_{0|i}) ∩ PO_{0|i} ≠ ∅ }
any relative velocity v_{0|i} located in CC_{0|i} causes u_0 and u_i to collide, such as the shaded portion on the right in Fig. 4; CC_{0|i} is used to screen out the drones threatening u_0, denoted u_1, …, u_m (m ≤ n).
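The screening of Step 1 reduces to a ray-disc test in relative-velocity space: u_i threatens u_0 when the ray from p_0 along v_{0|i} meets the inflated disc of radius r_i + r_0. A minimal sketch (the tuple-based state layout and names are assumptions for illustration):

```python
import math

def ray_hits_disc(p, d, c, r):
    """Does the ray p + t*d (t > 0) intersect the disc of center c, radius r?"""
    ex, ey = c[0] - p[0], c[1] - p[1]
    dd = d[0] * d[0] + d[1] * d[1]
    if dd == 0.0:
        return math.hypot(ex, ey) <= r          # zero relative velocity
    t = (ex * d[0] + ey * d[1]) / dd            # closest-approach parameter
    if t <= 0.0:
        return False                            # disc lies behind the ray
    qx = p[0] + t * d[0] - c[0]
    qy = p[1] + t * d[1] - c[1]
    return qx * qx + qy * qy <= r * r

def screen_threats(p0, v0, r0, others):
    """others: list of (p_i, v_i, r_i); return the indices i for which the
    relative velocity v_{0|i} lies inside the collision cone CC_{0|i}."""
    threats = []
    for i, (pi, vi, ri) in enumerate(others):
        v_rel = (v0[0] - vi[0], v0[1] - vi[1])
        if ray_hits_disc(p0, v_rel, pi, ri + r0):
            threats.append(i)
    return threats
```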
Step 2, combining potential collisions: in order to perform collision detection directly with the absolute velocity v_0, the velocity obstacle model is proposed:
VO_{0|i} = CC_{0|i} ⊕ v_i

where ⊕ denotes the Minkowski vector-sum operation, i.e.:

A ⊕ v = { a + v | a ∈ A }

VO_{0|i} is the set of velocities that lead u_0 and u_i to collide at some future time;
to avoid multiple threats, the individual VOs need to be combined; in Fig. 5, for example, the combined VO consists of two velocity obstacles:

VO = ∪_{i=1}^{m} VO_{0|i}

where ∪ denotes the union of the m VO geometric regions; when the tip of v_0 lies inside VO, a velocity outside VO must be selected to avoid the potential collision.
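Because VO_{0|i} = CC_{0|i} ⊕ v_i, the absolute velocity v_0 lies in the combined VO exactly when v_0 − v_i falls in some CC_{0|i}. A minimal membership check (names and the tuple layout are assumptions for illustration):

```python
import math

def in_collision_cone(p0, v_rel, pi, r):
    """v_rel in CC_{0|i}: the ray from p0 along v_rel meets the inflated disc."""
    ex, ey = pi[0] - p0[0], pi[1] - p0[1]
    dd = v_rel[0] ** 2 + v_rel[1] ** 2
    if dd == 0.0:
        return math.hypot(ex, ey) <= r            # zero relative velocity
    t = max((ex * v_rel[0] + ey * v_rel[1]) / dd, 0.0)
    qx = p0[0] + t * v_rel[0] - pi[0]
    qy = p0[1] + t * v_rel[1] - pi[1]
    return qx * qx + qy * qy <= r * r

def v0_in_combined_vo(p0, v0, r0, threats):
    """v0 in VO = union_i (CC_{0|i} + v_i)  <=>  v0 - v_i in CC_{0|i} for some i."""
    return any(
        in_collision_cone(p0, (v0[0] - vi[0], v0[1] - vi[1]), pi, ri + r0)
        for pi, vi, ri in threats
    )
```

A threat moving with the same velocity as u_0 gives a zero relative velocity and, if not already overlapping, no predicted collision.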
Step 3, collision handling (velocity optimization). In the classical velocity obstacle model, the robot's velocity is adjusted by solving the following optimization problem:

v′ = argmin_{v ∉ VO} ||v − v_0||_2

However, this process is time-consuming; the invention therefore proposes the goal-oriented velocity optimization method: keeping the direction of the velocity vector v unchanged to the greatest extent, the drone's next-moment velocity v(t+1) is optimized under the VO constraint, specifically as follows:
assuming that the boundary curve of VO is Ω and the extension of velocity v is defined as λ (p, v), then:
P=λ(p,v)∩Ω
where P denotes the set of intersections of the candidate velocity with the VO boundary curve. Among all intersections, the point P_i closest to p is selected as the endpoint E_v of the next-moment velocity, namely:

E_v = argmin_{P_i ∈ P} ||P_i − p||_2

where P_i denotes the intersection of λ(p, v) with the i-th VO boundary curve.
The optimized speed of the unmanned aerial vehicle is as follows:
v′ = (v′, α) = (||E_v − p||_2, α).
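For contrast with the goal-oriented method, the classical adjustment above can be approximated by searching a discretized candidate set for the admissible velocity nearest the desired v_0; here `in_vo` stands for any membership test of the combined VO region (a sketch with illustrative names, not the patent's implementation):

```python
import math

def classical_velocity_choice(v0, candidates, in_vo):
    """Pick, from a discretized candidate set, the velocity outside the
    combined VO that is closest to the desired v0 (the argmin above)."""
    free = [v for v in candidates if not in_vo(v)]
    if not free:
        return None                   # no admissible velocity in the sample
    return min(free, key=lambda v: math.hypot(v[0] - v0[0], v[1] - v0[1]))

# Toy VO: every velocity with vx > 0.5 is assumed to collide.
in_vo = lambda v: v[0] > 0.5
cands = [(x / 10.0, 0.0) for x in range(0, 11)]
best = classical_velocity_choice((1.0, 0.0), cands, in_vo)
```

The cost of this search grows with the sampling density, which illustrates why the invention prefers the single ray-boundary intersection of the goal-oriented method.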
the present invention is not limited to the above-described embodiments, and those skilled in the art can implement the present invention in other various embodiments based on the disclosure of the present invention. Therefore, the design of the invention is within the scope of protection, with simple changes or modifications, based on the design structure and thought of the invention.

Claims (3)

1. A cooperative anti-collision method for multiple tracking unmanned aerial vehicles in the same airspace is characterized by comprising the following steps:
step 1, firstly, acquiring image information from a real-time video stream issued by an unmanned aerial vehicle sensor;
step 2, using a tracker in OpenCV to locate the target's center coordinate (x_o, y_o) on the image;
step 3, scaling the offsets of the center coordinate (x_o, y_o) from step 2 to get normalized offsets Δx′ and Δy′, and calculating the drone's velocity vector v for tracking the target using a PID algorithm;
step 4, using the velocity obstacle model to perform collision detection on v, i.e., checking whether the tip of v lies in the combined velocity obstacle region; when it does, adjusting the candidate velocity with the goal-oriented velocity optimization method to eliminate the collision;
and 5, repeatedly executing the processes of the steps 1-4 by the unmanned aerial vehicle until the tracking task is completed.
2. The cooperative anti-collision method according to claim 1, wherein the method for tracking the drone is specifically:
step 1: collecting image information using devices such as the airborne pan-tilt camera, and returning the video data to the control end;
step 2: selecting the target to be tracked by the drone from the video data, and extracting features such as HOG and HSV;
step 3: using the HOG, HSV and other features extracted in step 2, locating the target's center coordinate (x_o, y_o) on the image with an OpenCV tracker, i.e., the center of the minimum enclosing circle or minimum enclosing rectangle;
step 4: calculating the absolute offsets Δx and Δy, in the horizontal and vertical directions, between the center coordinate (x_o, y_o) and the center (x_c, y_c) of the drone's field of view:

Δx = x_o − x_c,  Δy = y_o − y_c
step 5: setting a dead-zone range with width w and height h whose center coincides with the center of the drone's field of view; when |x_o − x_c| ≤ w/2, the drone's velocity direction needs no adjustment; when |y_o − y_c| ≤ h/2, the drone's speed is 0, i.e., it need not advance or retreat; otherwise step 6 is executed;
step 6: let the resolution of the drone's view picture be x_r × y_r; the offsets are first normalized, scaling Δx and Δy to [−1, 1], i.e., Δx′ = Δx/(x_r/2), Δy′ = Δy/(y_r/2); the speed of the drone at the next moment is then:
v = v_max · Δy′
α = α_max · Δx′
where v_max is the maximum speed of the drone, and α_max (α_max ≤ π/2) is the maximum angle through which the drone rotates about the Z axis alone;
when Δy′ < 0, the drone should retreat, otherwise advance; similarly, when Δx′ < 0, the drone rotates left, otherwise right;
at this point a new velocity vector v = (v, α) is generated, whose direction is at angle α to the current straight-ahead direction and whose magnitude is v;
and 7, step 7: and repeating the steps 3-6 until the tracking task is completed.
3. The cooperative anti-collision method according to claim 1, wherein step 4 comprises the following sub-steps:
when there are multiple tracking drones in the airspace, a collision check on v is also needed.
Step 1: screening the intruders using the collision cone model and selecting those that threaten the local drone; the specific process is as follows:
for each drone u_i within u_0's sensing range, define the relative velocity v_{0|i} = v_0 − v_i and define the ray:
λ(p,v)={p+vt|t>0}
where the state information of a drone is denoted S = (p, v), with p = (p_x, p_y) the drone's position and v its velocity information; the drones u_1, …, u_n that are also executing tasks within u_0's sensing range are defined as the "intruders" of u_0, i.e., ||p_0 − p_i||_2 ≤ ρ_SR, where ρ_SR denotes the radius of the drone's sensing range, or the radius of its early-warning range; λ(p, v) denotes a ray with origin p along the direction of v;
according to the radius of u_0, u_i is "inflated", i.e., r_i′ = r_i + r_0, yielding the position obstacle PO_{0|i}; the collision cone CC_{0|i} is defined as:

CC_{0|i} = { v_{0|i} | λ(p_0, v_{0|i}) ∩ PO_{0|i} ≠ ∅ }
any relative velocity v_{0|i} located in CC_{0|i} causes u_0 and u_i to collide; CC_{0|i} is used to screen out the drones threatening u_0, denoted u_1, …, u_m (m ≤ n).
Step 2, defining the velocity obstacle model:

VO_{0|i} = CC_{0|i} ⊕ v_i

where ⊕ denotes the Minkowski vector-sum operation, i.e.:

A ⊕ v = { a + v | a ∈ A }

VO_{0|i} is the set of velocities that lead u_0 and u_i to collide at some future time; to avoid multiple threats, the individual VOs need to be combined:
VO = ∪_{i=1}^{m} VO_{0|i}
where ∪ denotes the union of the m VO geometric regions; when the tip of v_0 lies inside VO, a velocity outside VO must be selected to avoid the potential collision.
Step 3, velocity optimization: keeping the direction of the velocity vector v unchanged, the drone's next-moment velocity v(t+1) is optimized under the VO constraint; the specific steps are as follows:
assuming that the boundary curve of VO is Ω and the extension of velocity v is defined as λ (p, v), then:
P=λ(p,v)∩Ω
where P denotes the set of intersections of the candidate velocity with the VO boundary curve. Among all intersections, the point P_i closest to p is selected as the endpoint E_v of the next-moment velocity, namely:

E_v = argmin_{P_i ∈ P} ||P_i − p||_2

where P_i denotes the intersection of λ(p, v) with the i-th VO boundary curve.
The optimized speed of the unmanned aerial vehicle is as follows:
v′ = (v′, α) = (||E_v − p||_2, α).
CN201911092138.8A | Filed 2019-11-11 | Cooperative anti-collision method for multiple tracking unmanned aerial vehicles in same airspace | Active | Granted as CN110825108B

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911092138.8A (granted as CN110825108B) | 2019-11-11 | 2019-11-11 | Cooperative anti-collision method for multiple tracking unmanned aerial vehicles in same airspace

Publications (2)

Publication Number | Publication Date
CN110825108A (application, en) | 2020-02-21
CN110825108B (grant, en) | 2023-03-14

Family ID: 69553838

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111982127A* | 2020-08-31 | 2020-11-24 | 华通科技有限公司 | LightWeight-3D obstacle avoidance method
CN112270250A* | 2020-10-26 | 2021-01-26 | 浙江理工大学 | Target tracking method for tracking ground moving target by unmanned aerial vehicle
CN112506194A* | 2020-12-03 | 2021-03-16 | 中山大学 | Distributed safety learning control method for mobile robot cluster
CN112925342A* | 2021-01-20 | 2021-06-08 | 北京工商大学 | Unmanned aerial vehicle dynamic obstacle avoidance method based on improved mutual velocity obstacle method
CN113885562A* | 2021-10-08 | 2022-01-04 | 北京理工大学 | A collaborative collision avoidance method for multi-UAVs under perception constraints based on speed obstacles
CN114355958A* | 2021-09-09 | 2022-04-15 | 南京航空航天大学 | Interactive task deployment method of multi-unmanned-aerial-vehicle intelligent cooperative system
CN115016492A* | 2022-06-24 | 2022-09-06 | 山东新一代信息产业技术研究院有限公司 | Multi-robot anti-collision method and system
CN117241133A* | 2023-11-13 | 2023-12-15 | 武汉益模科技股份有限公司 | Visual work reporting method and system for multi-task simultaneous operation based on non-fixed position
CN118331291A* | 2024-03-18 | 2024-07-12 | 北京中航智科技有限公司 | Unmanned aerial vehicle multi-machine dynamic conflict resolution method based on speed or course allocation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20120265380A1* | 2011-04-13 | 2012-10-18 | California Institute Of Technology | Target Trailing with Safe Navigation with COLREGS for Maritime Autonomous Surface Vehicles
CN104501816A* | 2015-01-08 | 2015-04-08 | 中国航空无线电电子研究所 | Multi-unmanned aerial vehicle coordination and collision avoidance guide planning method
CN105022270A* | 2015-03-20 | 2015-11-04 | 武汉理工大学 | Automatic ship collision avoidance method based on velocity vector coordinate system
CN108958289A* | 2018-07-28 | 2018-12-07 | 天津大学 | Collision avoidance method for swarm unmanned aerial vehicles based on relative velocity obstacles

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FIORINIP: "motion planning in dynamic environments using velocity obstacles", 《THEINTERNATIONALJOURNALOFROBOTICSRESEARCH》*
YAMIN HUANG: "Generalized velocity obstacle algorithm for preventing ship collisions at sea", 《OCEAN ENGINEERING》*
杨秀霞 等: "一种三维空间UAV自主避障算法研究", 《计算机与数字工程》*
熊勇 等: "基于速度障碍的多船自动避碰控制方法", 《中国航海》*


Similar Documents

CN110825108A: A collaborative collision avoidance method for multiple tracking UAVs in the same airspace
Ryan et al.: An overview of emerging results in cooperative UAV control
Roelofsen et al.: Reciprocal collision avoidance for quadrotors using on-board visual detection
Eresen et al.: Autonomous quadrotor flight with vision-based obstacle avoidance in virtual environment
Ma'Sum et al.: Simulation of intelligent unmanned aerial vehicle (UAV) for military surveillance
Lin et al.: A robust real-time embedded vision system on an unmanned rotorcraft for ground target following
CN104656663B: A vision-based UAV formation perception and avoidance method
Rafi et al.: Autonomous target following by unmanned aerial vehicles
Zhang et al.: Agile formation control of drone flocking enhanced with active vision-based relative localization
US11865978B2: Object tracking system including stereo camera assembly and methods of use
Xu et al.: Vision-based autonomous landing of unmanned aerial vehicle on a motional unmanned surface vessel
Pritzl et al.: Cooperative navigation and guidance of a micro-scale aerial vehicle by an accompanying UAV using 3D LiDAR relative localization
JP7044061B2: Mobiles, mobile control systems, mobile control methods, interface devices, and programs
Leong et al.: Vision-based sense and avoid with monocular vision and real-time object detection for UAVs
Lombaerts et al.: Adaptive multi-sensor fusion based object tracking for autonomous urban air mobility operations
Kim: Control laws to avoid collision with three dimensional obstacles using sensors
Ahmed et al.: An energy efficient IoD static and dynamic collision avoidance approach based on gradient optimization
Barišić et al.: Brain over brawn: Using a stereo camera to detect, track, and intercept a faster UAV by reconstructing the intruder's trajectory
Nagrare et al.: Decentralized path planning approach for crowd surveillance using drones
Choi et al.: Multi-robot avoidance control based on omni-directional visual SLAM with a fisheye lens camera
Bouzerzour et al.: Robust vision-based sliding mode control for uncooperative ground target searching and tracking by quadrotor
Marlow et al.: Local terrain mapping for obstacle avoidance using monocular vision
Lwowski et al.: A reactive bearing angle only obstacle avoidance technique for unmanned ground vehicles
Rondon et al.: Optical flow-based controller for reactive and relative navigation dedicated to a four rotor rotorcraft
Yang et al.: A new image-based visual servo control algorithm for target tracking problem of fixed-wing unmanned aerial vehicle

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
