Navigation method of inspection robot for livestock and poultry farm

Technical Field
The invention relates to the technical field of safe and efficient information-based livestock and poultry breeding, in particular to a navigation method of an inspection robot in a livestock and poultry farm.
Background
During autonomous navigation, the inspection robot must know its own position; only when its accurate position is known can it move precisely to a target point.
Existing navigation control technology for inspection robots controls the robot to run along a magnetic strip laid on the ground; control systems based on GPS or radar technology are also available. Laying magnetic strips on the ground not only places high demands on the environment of the site of use, but is also time-consuming and expensive, and is inconvenient when the scene of use changes; GPS navigation is mostly used outdoors, and its indoor performance is poor.
With the rapid development of electronic technology, navigation technologies based on laser radar, millimeter-wave radar and inertial navigation have also made significant breakthroughs and are widely applied in related fields such as automatic inspection, agricultural machinery, automatic logistics distribution and automatic driving, overcoming to a certain extent the shortcomings of magnetic, GPS and radar navigation. However, laser radar is strongly affected by ambient light, and inertial navigation combined with GPS is intended for outdoor use and accumulates error over long periods of operation, whereas in a large-scale livestock and poultry farm the inspection robot operates in a fixed plant or enclosed structural space and performs regular or irregular inspections.
Therefore, the industry needs a low-cost, high-reliability navigation control method or system for an inspection robot in a livestock and poultry farm.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a navigation method for an inspection robot in a livestock and poultry farm.
The purpose of the invention is realized by the following technical scheme:
a navigation method of an inspection robot in a livestock and poultry farm comprises the following steps:
s1, determining a starting point and a terminal point on the overall map of the livestock and poultry farm, and automatically planning an optimal path;
s2, the inspection robot moves in the livestock and poultry farm along the optimal path; during the movement, images of the path identifier and of the QR two-dimensional codes arranged on the ground are acquired in real time, the central navigation path is found by a centerline method, and the QR two-dimensional code is recognized and decoded; drive control parameters are acquired, navigation parameters are obtained from the path information and the drive control parameters, and the inspection robot corrects its motion deviation in real time according to the navigation parameters; during the movement it is also detected whether an obstacle exists, and if so, the obstacle is avoided, until the inspection robot reaches the terminal point; the drive control parameters include a lateral deviation and an average angular deviation.
Preferably, step S2 includes: shooting images of the path identifier and QR two-dimensional codes arranged on the ground and a depth image of any obstacle ahead of the direction of movement; acquiring the drive control parameters of the inspection robot in real time; and processing the path identifier image, the QR two-dimensional code image and the obstacle depth image, and fusing the processed image information with the drive control parameters to obtain the navigation parameters.
Preferably, step S2 further includes: converting the image of the identification path and the QR two-dimensional code from an RGB image into a BGR image and resizing the image to 640x480; splitting the converted image into the three channels H, S and V and traversing all pixel points; when a pixel of the image satisfies H_min <= H <= H_max, S_min <= S <= S_max and V_min <= V <= V_max, the pixel value of that point in the output image is set to 255, otherwise to 0, where H_min, H_max, S_min, S_max, V_min and V_max are the threshold values for path segmentation; after the path is segmented, scanning row by row from row 0 to row 479 and, within each row, scanning outward from the middle column 319 towards both sides, on the left from column 319 down to column 9 and on the right from column 319 up to column 630; for a given row, on the left side, when the pixel values of 8 consecutive columns to the left of a point are 0, that point is the left boundary point of the path and is recorded as left; on the right side, when the pixel values of 8 consecutive columns to the right of a point are 0, that point is the right boundary point of the path and is recorded as right; the value (left + right)/2 is the center point of the path for that row, and all center points are connected to form the centerline of the path; setting a straight-line threshold and fitting a straight line through the center of the navigation path by Hough line transformation; calculating the current lateral deviation and angular deviation of the inspection robot from the path centerline; and formulating a fuzzy rule, and determining the navigation parameters from the lateral deviation and from the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU and the theta in the QR two-dimensional code.
Preferably, automatically planning the optimal path includes: setting the ID number of the QR two-dimensional code at the starting point on the global map of the livestock and poultry farm; on the global map, each QR two-dimensional code is represented by a rectangular frame, the number inside the rectangular frame is the position ID number of the QR two-dimensional code, the numerical values between the rectangular frames are weight values, and a straight line with a bidirectional arrow represents a direction in which the inspection robot can move; each QR two-dimensional code is associated with its position ID number and the absolute pose (X, Y, theta) of the current position; and generating the optimal path by the Dijkstra algorithm. The method for generating the optimal path by the Dijkstra algorithm includes: (a) dividing all vertices of the global map of the livestock and poultry farm into two classes, used vertices and unused vertices; first taking the set starting point as a used point, and then finding, among the unused points, the point closest to the starting point; (b) using the nearest point obtained in step (a) to update the unused points, so that after the point nearest to the starting point has been found in the previous step, the next point nearest to the starting point can be found.
Preferably, the recognizing and decoding of the QR two-dimensional code includes: graying the shot QR two-dimensional code image, and smoothing the grayed image by median filtering; binarizing the smoothed picture; finding all contours in the binary image with the contour extraction function findContours; judging through the hierarchy whether a contour contains two nested sub-contours, thereby screening out the three black positioning-corner contours in the image; calculating the center coordinates of the three positioning corners; finding the point with the largest angle among the center coordinates; obtaining an affine transformation matrix through the getAffineTransform function; performing an affine transformation on the image with the warpAffine function using the transformation matrix so as to correct a distorted picture; and calling the ZBar library to recognize and decode the corrected QR two-dimensional code information.
Preferably, the detecting of the obstacle includes: first acquiring a depth map with a depth camera and converting the depth map into a point cloud map; binarizing the point cloud image with a depth threshold; performing an erosion-dilation operation on the binary image to remove noise; extracting the contours of all objects, deleting contours whose area is too small, and re-ordering the remaining contours by distance; extracting the screened obstacle contours and non-obstacle contours and marking them with rectangular frames of different colors; judging whether the middle path meets the width required for operation, and if it does not, comparing the sizes of the obstacles on the left and right sides; if the obstacle on the left is larger, the inspection robot turns right, and when the obstacle is no longer detected, the heading angle of the IMU is recorded and the inspection robot turns left by the negative of the recorded heading angle until the original path is detected, after which it continues along the original path; if the obstacle on the right is larger, the inspection robot turns left, and when the obstacle is no longer detected, the heading angle of the IMU is recorded and the inspection robot turns right by the negative of the recorded heading angle until the original path is detected, after which it continues along the original path.
Compared with the prior art, the invention has the following advantages:
according to the invention, the shot images of the path identifier and the QR two-dimensional code are processed to obtain path information and navigation parameters, and the mobile inspection platform is controlled to move according to the navigation parameters, so that it can inspect the livestock and poultry farm autonomously; at the same time, the invention detects the environmental temperature and humidity, the concentration of harmful gases and the wind speed, and wirelessly transmits video of livestock and poultry behaviour to the PC terminal in real time. The navigation control method fusing the identification path and the QR two-dimensional code information can overcome the defect of being affected by ambient light; compared with an inertial navigation control method, it can correct the accumulated errors produced by long-term operation of the inertial system and the encoder, so that the target point can be reached accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart illustrating a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 2 is a path planning diagram of a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 3 is a flow chart of the identification path recognition of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 4 is a QR two-dimensional code recognition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 5 is a navigation parameter acquisition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 6 is an obstacle avoidance flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
Referring to fig. 1, a navigation method of an inspection robot for a livestock and poultry farm includes:
s1, determining a starting point and a terminal point on the overall map of the livestock and poultry farm, and automatically planning an optimal path;
in this embodiment, the inspection robot is further configured between the step S1 and the step S2 to detect whether the electric quantity of the robot meets the threshold requirement, and if not, the robot is provided with a buzzer to alarm; if yes, proceed to step S2;
and S2, the inspection robot moves in the livestock and poultry farm along the optimal path and navigates autonomously. During the movement, images of the path identifier and of the QR two-dimensional codes arranged on the ground are acquired in real time, the central navigation path is found by a centerline method, and the QR two-dimensional code is recognized and decoded; drive control parameters are acquired, navigation parameters are obtained from the path information and the drive control parameters, and the inspection robot corrects its motion deviation in real time according to the navigation parameters; during the movement it is also detected whether an obstacle exists, and if so, the obstacle is avoided, until the inspection robot reaches the terminal point; the drive control parameters include a lateral deviation and an average angular deviation. When obtaining the navigation parameters from the path information and the drive control parameters, it is judged whether the CCD camera has detected an image of a QR two-dimensional code: when no QR two-dimensional code is detected, the average angular deviation is the average of the angular deviation obtained from image processing and the heading angle fed back by the IMU module; when a QR two-dimensional code is detected, the average angular deviation is the average of the angular deviation obtained from image processing, the heading angle fed back by the IMU module and the theta in the QR two-dimensional code;
in the present embodiment, step S2 includes: shooting an image of a path identifier and a QR two-dimensional code which are arranged on the ground and an image of the depth of an obstacle in front of the movement; the path mark is a yellow colored ribbon, a central navigation path is found out by using a centerline method according to the ground, and the inspection robot moves on the central navigation path. Acquiring a driving control parameter of the inspection robot in real time; and processing the path identifier, the image of the QR two-dimensional code and the barrier depth image, and fusing the processed image information and the drive control parameter to obtain a navigation parameter. The QR two-dimensional code is associated with an ID number of a corresponding position and an absolute pose (X, Y, theta) of the current position.
In this embodiment, a clear, refined path (the central navigation path) is obtained by processing the images of the path identifier and the QR two-dimensional code on the ground, and the motion deviation (lateral deviation and angular deviation) of the inspection robot is calculated; referring to fig. 3, this specifically includes:
converting the image of the identification path and the QR two-dimensional code from an RGB image into a BGR image, and resizing the image to 640x480;
splitting the converted image into the three channels H, S and V and traversing all pixel points; when a pixel of the image satisfies H_min <= H <= H_max, S_min <= S <= S_max and V_min <= V <= V_max, the pixel value of that point in the output image is set to 255, otherwise to 0, where H_min, H_max, S_min, S_max, V_min and V_max are the threshold values for path segmentation;
after the path is segmented, scanning row by row from row 0 to row 479 and, within each row, scanning outward from the middle column 319 towards both sides, on the left from column 319 down to column 9 and on the right from column 319 up to column 630; for a given row, on the left side, when the pixel values of 8 consecutive columns to the left of a point are 0, that point is the left boundary point of the path and is recorded as left; on the right side, when the pixel values of 8 consecutive columns to the right of a point are 0, that point is the right boundary point of the path and is recorded as right; the value (left + right)/2 is the center point of the path for that row, and all center points are connected to form the centerline of the path; setting a straight-line threshold and fitting a straight line through the center of the navigation path by Hough line transformation; calculating the current lateral deviation and angular deviation of the inspection robot from the path centerline; and formulating a fuzzy rule, and determining the navigation parameters from the lateral deviation and from the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU and the theta in the QR two-dimensional code.
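By way of illustration only, the centerline extraction and line fitting described above can be sketched in Python with OpenCV roughly as follows; the HSV threshold values, the Hough threshold and the function names are assumptions made for the sketch and are not taken from the embodiment:

```python
import cv2
import numpy as np

# Hypothetical HSV thresholds for segmenting the yellow path ribbon (values assumed, to be tuned).
H_MIN, H_MAX = 20, 35
S_MIN, S_MAX = 80, 255
V_MIN, V_MAX = 80, 255

def extract_centerline(frame_rgb):
    """Segment the path in HSV space and return the binary mask and per-row center points."""
    bgr = cv2.cvtColor(frame_rgb, cv2.COLOR_RGB2BGR)            # RGB -> BGR
    bgr = cv2.resize(bgr, (640, 480))                           # resize to 640x480
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([H_MIN, S_MIN, V_MIN], dtype=np.uint8)
    upper = np.array([H_MAX, S_MAX, V_MAX], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)                       # 255 inside the path, 0 outside

    centers = []
    for row in range(480):                                      # scan rows 0..479
        left = right = None
        for col in range(319, 9, -1):                           # left scan from column 319 downward
            if np.all(mask[row, col - 8:col] == 0):             # 8 consecutive zero pixels to the left
                left = col
                break
        for col in range(319, 630):                             # right scan from column 319 upward
            if np.all(mask[row, col + 1:col + 9] == 0):         # 8 consecutive zero pixels to the right
                right = col
                break
        if left is not None and right is not None:
            centers.append((row, (left + right) // 2))          # center point of the path in this row
    return mask, centers

def fit_center_line(mask, centers, hough_threshold=50):
    """Draw the center points and fit a straight navigation line with the Hough transform."""
    line_img = np.zeros(mask.shape, dtype=np.uint8)
    for row, col in centers:
        line_img[row, col] = 255
    return cv2.HoughLines(line_img, 1, np.pi / 180, hough_threshold)  # (rho, theta) of fitted lines, or None
```

The lateral and angular deviations would then be computed from the fitted line relative to the image center, which is left out of the sketch.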
In this embodiment, automatically planning the optimal path includes: setting the ID number of the QR two-dimensional code at the starting point on the global map of the livestock and poultry farm; on the global map, each QR two-dimensional code is represented by a rectangular frame, and the number inside the rectangular frame is the position ID number of the QR two-dimensional code; in fig. 2, 1-16 are the position ID numbers of the QR two-dimensional codes; the lower left corner is the origin of coordinates; X is the abscissa of the label in the whole map, Y is the ordinate, and theta is the set deviation angle;
the (X, Y, theta) of ID number 1 is (0, 0, 0) and is taken as the origin coordinate;
the (X, Y, theta) of ID number 2 is (0, 10, 0);
……
and so on.
The numerical values between the rectangular frames are weight values, and a straight line with a bidirectional arrow represents a direction in which the inspection robot can move; each QR two-dimensional code is associated with its position ID number and the absolute pose (X, Y, theta) of the current position; the optimal path is then generated by the Dijkstra algorithm;
the method for generating the optimal path by utilizing the Dijkstra algorithm comprises the following steps: (a) dividing all vertexes of the global map of the livestock and poultry farm into two types, wherein one type is a used vertex, and the other type is an unused vertex; firstly, taking a set starting point as a used point, and then finding out a point closest to the starting point from the unused points; x is the abscissa position of the label in the whole map, Y is the coordinate position, and theta is the set deviation angle; (b) updating the unused point by using the nearest point obtained in the step (a) so as to find out the next nearest point to the starting point after the point nearest to the starting point is found out in the last step.
In this embodiment, referring to fig. 4, the recognizing and decoding of the QR two-dimensional code includes: graying the shot QR two-dimensional code image, and smoothing the grayed image by median filtering; binarizing the smoothed picture; finding all contours in the binary image with the contour extraction function findContours; judging through the hierarchy whether a contour contains two nested sub-contours, thereby screening out the three black positioning-corner contours in the image; calculating the center coordinates of the three positioning corners; finding the point with the largest angle among the center coordinates; obtaining an affine transformation matrix through the getAffineTransform function; performing an affine transformation on the image with the warpAffine function using the transformation matrix so as to correct a distorted picture; and calling the ZBar library to recognize and decode the corrected QR two-dimensional code information.
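As an illustrative sketch only, the recognition and decoding flow above could be implemented along the following lines with OpenCV; the final decoding here uses the pyzbar binding of ZBar (an assumption, since the embodiment names only ZBar), and the finder-pattern screening and affine correction are simplified:

```python
import cv2
from pyzbar import pyzbar

def decode_qr(image_bgr):
    """Grayscale, smooth, binarize and decode a QR-code image (OpenCV 4.x signatures assumed)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)          # graying
    gray = cv2.medianBlur(gray, 5)                               # median filtering
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY | cv2.THRESH_OTSU)  # binarization

    # Find all contours with their hierarchy; a finder pattern contains two nested child contours.
    contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    finder_centers = []
    if hierarchy is not None:
        for i, cnt in enumerate(contours):
            child = hierarchy[0][i][2]
            if child != -1 and hierarchy[0][child][2] != -1:     # contour with two levels of children
                m = cv2.moments(cnt)
                if m["m00"] > 0:
                    finder_centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

    # With the three finder-pattern centers, a distortion correction could be applied via
    # cv2.getAffineTransform(...) and cv2.warpAffine(...) before decoding (omitted in this sketch).

    results = pyzbar.decode(binary)                              # ZBar decoding
    return [r.data.decode("utf-8") for r in results]
```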
In the present embodiment, referring to fig. 5 to 6, the detecting of the obstacle includes: first acquiring a depth map with a depth camera and converting the depth map into a point cloud map; binarizing the point cloud image with a depth threshold; performing an erosion-dilation operation on the binary image to remove noise; extracting the contours of all objects, deleting contours whose area is too small, and re-ordering the remaining contours by distance; extracting the screened obstacle contours and non-obstacle contours and marking them with rectangular frames of different colors; judging whether the middle path meets the width required for operation, and if it does not, comparing the sizes of the obstacles on the left and right sides; if the obstacle on the left is larger, the inspection robot turns right, and when the obstacle is no longer detected, the heading angle of the IMU is recorded and the inspection robot turns left by the negative of the recorded heading angle until the original path is detected, after which it continues along the original path; if the obstacle on the right is larger, the inspection robot turns left, and when the obstacle is no longer detected, the heading angle of the IMU is recorded and the inspection robot turns right by the negative of the recorded heading angle until the original path is detected, after which it continues along the original path.
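A minimal sketch of the depth-based obstacle check described above is given below, assuming the depth camera delivers a depth image in millimetres as a NumPy array; the depth threshold, the minimum contour area and the passable-width test are illustrative assumptions:

```python
import cv2
import numpy as np

DEPTH_THRESHOLD_MM = 1500   # objects closer than 1.5 m are treated as potential obstacles (assumed)
MIN_CONTOUR_AREA = 800      # contours smaller than this are discarded as noise (assumed)
ROBOT_WIDTH_PX = 200        # required free width of the middle path in pixels (assumed)

def detect_obstacles(depth_mm):
    """Binarize the depth image, clean it up and return obstacle contours sorted by distance."""
    near = np.where((depth_mm > 0) & (depth_mm < DEPTH_THRESHOLD_MM), 255, 0).astype(np.uint8)

    # Erosion followed by dilation (morphological opening) removes isolated noise pixels.
    kernel = np.ones((5, 5), np.uint8)
    near = cv2.morphologyEx(near, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(near, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= MIN_CONTOUR_AREA]

    # Re-order the remaining contours by the closest depth value inside each contour.
    def nearest_depth(c):
        mask = np.zeros(depth_mm.shape, np.uint8)
        cv2.drawContours(mask, [c], -1, 255, -1)
        vals = depth_mm[(mask == 255) & (depth_mm > 0)]
        return vals.min() if vals.size else np.inf
    return sorted(contours, key=nearest_depth)

def middle_path_clear(obstacle_contours, image_width=640):
    """Check whether the free gap in the middle of the image is wide enough to drive through."""
    centre = image_width // 2
    left_edge, right_edge = 0, image_width
    for c in obstacle_contours:
        x, _, w, _ = cv2.boundingRect(c)
        if x + w <= centre:
            left_edge = max(left_edge, x + w)
        elif x >= centre:
            right_edge = min(right_edge, x)
    return (right_edge - left_edge) >= ROBOT_WIDTH_PX
```

When `middle_path_clear` returns false, the larger of the left and right obstacles would be compared and the turn-and-return manoeuvre described above executed, which is not reproduced in the sketch.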
In step S2, the obstacle information obtained by the depth camera, the identification path and QR two-dimensional code information obtained by the CCD camera, and the navigation parameters obtained after image processing are transmitted, together with the drive control parameters, to the PC upper computer, and the information transmitted back by the upper computer is received at the same time.
It should be noted that, as long as the inspection robot is moving, the encoder and the IMU module both feed back real-time data. If no QR two-dimensional code is detected, a suitable navigation parameter is selected, by referring to the fuzzy rule table, from the lateral deviation calculated after image processing and the average angular deviation of the angular deviation calculated after image processing and the heading angle fed back by the IMU module; if a QR two-dimensional code is detected, a suitable navigation parameter is selected, by referring to the fuzzy rule table, from the lateral deviation calculated after image processing and the average angular deviation of the angular deviation calculated after image processing, the heading angle fed back by the IMU module and the theta in the QR two-dimensional code;
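the deviation fusion and parameter selection just described could look roughly like the following sketch; the fuzzy rule table itself is not reproduced in this document, so a simple proportional rule stands in for it as a hypothetical placeholder:

```python
def lookup_fuzzy_rule(lateral_dev, angle_dev):
    """Hypothetical stand-in for the fuzzy rule table: returns left/right wheel speed corrections."""
    correction = 0.5 * lateral_dev + 0.8 * angle_dev   # simple proportional rule in place of the table
    return -correction, correction

def average_angle_deviation(image_angle_dev, imu_heading, qr_theta=None):
    """Fuse the image-processing angle deviation, the IMU heading angle and, if a QR code
    was detected, the theta stored in the code."""
    if qr_theta is None:                                # no QR two-dimensional code detected
        return (image_angle_dev + imu_heading) / 2.0
    return (image_angle_dev + imu_heading + qr_theta) / 3.0

def navigation_parameters(lateral_dev, image_angle_dev, imu_heading, qr_theta=None):
    """Select navigation parameters from the lateral deviation and the average angular deviation."""
    avg_dev = average_angle_deviation(image_angle_dev, imu_heading, qr_theta)
    return lookup_fuzzy_rule(lateral_dev, avg_dev)
```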
the above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.