CN112363495A - Navigation method of inspection robot for livestock and poultry farm - Google Patents

Navigation method of inspection robot for livestock and poultry farm

Info

Publication number
CN112363495A
Authority
CN
China
Prior art keywords
path
image
inspection robot
point
livestock
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011038150.3A
Other languages
Chinese (zh)
Inventor
张铁民
卢锦枫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN202011038150.3A
Publication of CN112363495A
Legal status: Pending

Abstract



The invention discloses a navigation method for an inspection robot in a livestock and poultry farm, comprising: S1, determining a start point and an end point on a global map of the livestock and poultry farm and automatically planning an optimal path; S2, the inspection robot moves through the farm along the optimal path while capturing, in real time, images of the path markings and QR two-dimensional codes laid on the ground; drive control parameters are acquired, navigation parameters are derived from the path information and the drive control parameters, and the robot corrects its motion deviation in real time according to the navigation parameters; during movement it simultaneously checks for obstacles and avoids any that are found; the drive control parameters comprise a lateral deviation and an average angular deviation. By processing the captured images of the path markings and QR two-dimensional codes, the invention obtains path information and navigation parameters and uses them to control the movement of the inspection mobile platform, enabling it to patrol the livestock and poultry farm automatically.


Description

Navigation method of inspection robot for livestock and poultry farm
Technical Field
The invention relates to the technical field of safe and efficient livestock and poultry breeding information, and in particular to a navigation method for an inspection robot in a livestock and poultry farm.
Background
During autonomous navigation, the inspection robot must know its own position; only when it knows its accurate position can it move precisely to a target point.
The existing navigation control technology for inspection robots either guides the robot along magnetic strips paved on the ground or uses a control system based on GPS or radar. Laying magnetic strips places high demands on the environment of the site, is time-consuming and expensive, and makes it inconvenient to redeploy the robot to a new scene; GPS navigation is mostly used outdoors, and its indoor performance is poor.
With the rapid development of electronic technology, navigation based on laser radar, millimeter-wave radar, and inertial navigation has also made significant breakthroughs and is widely applied in automatic inspection, agricultural machinery, automated logistics distribution, autonomous driving, and related fields, overcoming to a certain extent the defects of magnetic, GPS, and radar navigation. However, laser radar is strongly affected by ambient light, and inertial navigation combined with GPS is intended for outdoor use and accumulates error over long periods of operation, whereas the inspection robot of a large-scale livestock and poultry farm operates in a fixed plant or enclosed structured space and patrols on a regular or irregular schedule.
Therefore, the industry needs to develop a navigation control method or system of the inspection robot for the livestock and poultry farm with low cost and high reliability.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a navigation method for an inspection robot in a livestock and poultry farm.
The purpose of the invention is realized by the following technical scheme:
A navigation method for an inspection robot in a livestock and poultry farm comprises the following steps:
S1, determining a start point and an end point on the global map of the livestock and poultry farm and automatically planning an optimal path;
S2, the inspection robot moves through the livestock and poultry farm along the optimal path; during movement it captures, in real time, images of the path markings and QR two-dimensional codes laid on the ground, finds the central navigation path with a centerline method, and recognizes and decodes the QR two-dimensional codes; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to the navigation parameters; it simultaneously checks for obstacles while moving and, if any are found, avoids them until it reaches the end point; wherein the drive control parameters comprise a lateral deviation and an average angular deviation.
Preferably, step S2 includes: capturing images of the path markings and QR two-dimensional codes laid on the ground as well as a depth image of any obstacle ahead; acquiring the drive control parameters of the inspection robot in real time; and processing the path-marking image, the QR two-dimensional code image, and the obstacle depth image, then fusing the processed image information with the drive control parameters to obtain the navigation parameters.
Preferably, step S2 further includes: converting the image of the path markings and QR two-dimensional code from an RGB image to a BGR image and resizing it to 640x480; splitting the converted image into the three channels H, S, V and traversing all pixels; when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding pixel of the output image is set to 255 and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation; after segmentation, scanning rows 0 to 479 from top to bottom and, within each row, scanning outward from the middle column 319: from column 319 to column 0 on the left and from column 319 to column 639 on the right; in a given row, when the 8 consecutive pixels to the left of a pixel are all 0, that pixel is the left boundary point of the path, denoted left; when the 8 consecutive pixels to its right are all 0, that pixel is the right boundary point, denoted right; (left + right)/2 then gives the center point of the path for that row, and connecting all center points forms the path centerline; setting a line threshold and fitting a straight line through the center of the navigation path with the Hough line transform; computing the inspection robot's current lateral deviation and angular deviation from the path centerline; and formulating a fuzzy rule to determine the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the theta in the QR two-dimensional code.
Preferably, automatically planning the optimal path includes: setting an ID number for the QR two-dimensional code at the starting point on the global map of the livestock and poultry farm; on the global map each QR two-dimensional code is represented by a rectangular box whose number is the code's position ID number, the values between boxes are edge weights, and lines with double-headed arrows mark the directions in which the inspection robot may move; each QR two-dimensional code is associated with its position ID number and the absolute pose (X, Y, theta) of its location; the optimal path is generated with Dijkstra's algorithm as follows: (a) divide all vertices of the global map of the livestock and poultry farm into two sets, used vertices and unused vertices; first mark the set starting point as used, then find the unused vertex closest to the starting point; (b) use the nearest vertex obtained in step (a) to update the distances of the unused vertices, so that after the vertex closest to the starting point has been found, the next-closest vertex can be found.
Preferably, the recognition and decoding of the QR two-dimensional code includes: graying the captured QR two-dimensional code image and smoothing the grayed image with a median filter; binarizing the smoothed picture; finding all contours in the binary image with the contour-extraction function findContours; using the contour hierarchy to check for contours with two nested sub-contours, thereby screening out the three black locator-corner contours in the image; computing the center coordinates of the three locator corners; finding the point subtending the largest angle among the center coordinates; obtaining an affine transformation matrix with the getAffineTransform function; applying the transformation matrix to the image with the warpAffine function to correct the distorted picture; and calling the ZBar library to recognize and decode the corrected QR two-dimensional code information.
Preferably, the obstacle detection includes: first acquiring a depth map with a depth camera and converting it into a point cloud map; binarizing the point-cloud image with a depth threshold; applying erosion and dilation to the binary image to remove noise; extracting the contours of all objects, deleting contours whose area is too small, and reordering the remaining contours by distance; marking the screened obstacle and non-obstacle contours with rectangular frames of different colors; judging whether the middle path still meets the width required for operation and, if not, comparing the sizes of the obstacles on the left and right sides; if the left obstacle is larger, the inspection robot turns right, and once no obstacle is detected it records the IMU heading angle, turns left by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path; if the right obstacle is larger, the inspection robot turns left, and once no obstacle is detected it records the IMU heading angle, turns right by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path.
Compared with the prior art, the invention has the following advantages:
According to the invention, the captured images of the path markings and QR two-dimensional codes are processed to obtain path information and navigation parameters, and the inspection mobile platform is controlled accordingly, so that it can patrol a livestock and poultry farm automatically; at the same time, the invention measures the ambient temperature and humidity, harmful-gas concentration, and wind speed, and wirelessly transmits video of livestock and poultry behavior to a PC terminal in real time. The navigation control method that fuses the marked-path and QR two-dimensional code information overcomes the influence of ambient light; compared with an inertial navigation control method, it can also correct the accumulated errors that the inertial system and encoder produce over long periods of operation, so the target point can be reached accurately.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention. In the drawings:
FIG. 1 is a flowchart illustrating a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 2 is a path planning diagram of a navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 3 is a flow chart of the identification path recognition of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 4 is a QR two-dimensional code recognition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 5 is a navigation parameter acquisition flow chart of the navigation method of the inspection robot for the livestock and poultry farm according to the present invention;
FIG. 6 is an obstacle avoidance flow chart of the navigation method of the inspection robot for the livestock and poultry farm of the present invention.
Detailed Description
The invention is further illustrated by the following figures and examples.
Referring to fig. 1, a navigation method for an inspection robot in a livestock and poultry farm includes:
S1, determining a start point and an end point on the global map of the livestock and poultry farm and automatically planning an optimal path;
In this embodiment, between step S1 and step S2 the inspection robot further checks whether its battery level meets a threshold; if not, an on-board buzzer sounds an alarm; if so, the method proceeds to step S2;
S2, the inspection robot moves through the livestock and poultry farm along the optimal path and navigates autonomously. During movement it captures, in real time, images of the path markings and QR two-dimensional codes laid on the ground, finds the central navigation path with a centerline method, and recognizes and decodes the QR two-dimensional codes; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to the navigation parameters; it simultaneously checks for obstacles while moving and, if any are found, avoids them until it reaches the end point; the drive control parameters comprise a lateral deviation and an average angular deviation. While deriving the navigation parameters, the method checks whether the CCD camera has detected a QR two-dimensional code image: when no code is detected, the average angular deviation is the mean of the angular deviation obtained from image processing and the heading angle fed back by the IMU module; when a code is detected, it is the mean of the image-processing angular deviation, the IMU heading angle, and the theta in the QR two-dimensional code;
in the present embodiment, step S2 includes: shooting an image of a path identifier and a QR two-dimensional code which are arranged on the ground and an image of the depth of an obstacle in front of the movement; the path mark is a yellow colored ribbon, a central navigation path is found out by using a centerline method according to the ground, and the inspection robot moves on the central navigation path. Acquiring a driving control parameter of the inspection robot in real time; and processing the path identifier, the image of the QR two-dimensional code and the barrier depth image, and fusing the processed image information and the drive control parameter to obtain a navigation parameter. The QR two-dimensional code is associated with an ID number of a corresponding position and an absolute pose (X, Y, theta) of the current position.
In this embodiment, a clearly refined path (the central navigation path) is obtained by processing the image of the path markings and QR two-dimensional codes on the ground, and the motion deviation of the inspection robot (lateral deviation and angular deviation) is computed; referring to fig. 3, this specifically includes:
converting the image of the path markings and QR two-dimensional code from an RGB image to a BGR image and resizing it to 640x480;
splitting the converted image into the three channels H, S, V and traversing all pixels; when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding pixel of the output image is set to 255 and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation;
after segmentation, scanning rows 0 to 479 from top to bottom and, within each row, scanning outward from the middle column 319: from column 319 to column 0 on the left and from column 319 to column 639 on the right; in a given row, when the 8 consecutive pixels to the left of a pixel are all 0, that pixel is the left boundary point of the path, denoted left; when the 8 consecutive pixels to its right are all 0, that pixel is the right boundary point, denoted right; (left + right)/2 then gives the center point of the path for that row, and connecting all center points forms the path centerline; setting a line threshold and fitting a straight line through the center of the navigation path with the Hough line transform; computing the inspection robot's current lateral deviation and angular deviation from the path centerline; and formulating a fuzzy rule to determine the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the theta in the QR two-dimensional code.
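Under the stated assumptions (a 640x480 binary mask already produced by the HSV thresholding, an 8-pixel zero run marking a boundary), the row scan and center computation above can be sketched in numpy. This is a minimal illustration, not the patent's implementation: a least-squares line fit stands in for the Hough-transform fit, and the stripe geometry in the usage example is invented.

```python
import numpy as np

def path_centerline(mask, run=8):
    """Per-row centre points of a binary path mask (values 0/255).

    Scanning outward from the middle column, a pixel is a boundary
    point when the `run` consecutive pixels beyond it are all 0.
    """
    h, w = mask.shape
    mid = w // 2 - 1                 # column 319 for a 640-wide image
    centers = []
    for row in range(h):
        line = mask[row]
        left = right = None
        for c in range(mid, run - 1, -1):        # leftward scan
            if line[c] and not line[c - run:c].any():
                left = c
                break
        for c in range(mid, w - run):            # rightward scan
            if line[c] and not line[c + 1:c + 1 + run].any():
                right = c
                break
        if left is not None and right is not None:
            centers.append((row, (left + right) // 2))
    return centers

def deviations(centers, w=640):
    """Lateral offset (px) at the bottom row and path angle (deg) from
    a least-squares line fit -- standing in for the Hough fit."""
    rows = np.array([r for r, _ in centers], float)
    cols = np.array([c for _, c in centers], float)
    slope, intercept = np.polyfit(rows, cols, 1)   # col = slope*row + b
    lateral = (slope * rows.max() + intercept) - (w // 2 - 1)
    angle = float(np.degrees(np.arctan(slope)))
    return float(lateral), angle
```

For a vertical stripe centered on the image, the lateral and angular deviations both come out zero, which matches the intent of the correction loop.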
In this embodiment, automatically planning the optimal path includes: setting an ID number for the QR two-dimensional code at the starting point on the global map of the livestock and poultry farm; on the global map each QR two-dimensional code is represented by a rectangular box whose number is the code's position ID number (in fig. 2, 1-16 are the position ID numbers of the QR two-dimensional codes); the lower-left corner is the origin of coordinates, X is the abscissa of the label on the whole map, Y is the ordinate, and theta is the set deviation angle;
the pose (X, Y, theta) of ID 1 is (0, 0, 0) and is taken as the origin of the map;
the pose (X, Y, theta) of ID 2 is (0, 10, 0);
……
and so on.
The values between the rectangular boxes are edge weights, and the lines with double-headed arrows mark the directions in which the inspection robot may move; each QR two-dimensional code is associated with its position ID number and the absolute pose (X, Y, theta) of its location; the optimal path is generated with Dijkstra's algorithm;
the method for generating the optimal path by utilizing the Dijkstra algorithm comprises the following steps: (a) dividing all vertexes of the global map of the livestock and poultry farm into two types, wherein one type is a used vertex, and the other type is an unused vertex; firstly, taking a set starting point as a used point, and then finding out a point closest to the starting point from the unused points; x is the abscissa position of the label in the whole map, Y is the coordinate position, and theta is the set deviation angle; (b) updating the unused point by using the nearest point obtained in the step (a) so as to find out the next nearest point to the starting point after the point nearest to the starting point is found out in the last step.
In this embodiment, referring to fig. 4, the recognition and decoding of the QR two-dimensional code includes: graying the captured QR two-dimensional code image and smoothing the grayed image with a median filter; binarizing the smoothed picture; finding all contours in the binary image with the contour-extraction function findContours; using the contour hierarchy to check for contours with two nested sub-contours, thereby screening out the three black locator-corner contours in the image; computing the center coordinates of the three locator corners; finding the point subtending the largest angle among the center coordinates; obtaining an affine transformation matrix with the getAffineTransform function; applying the transformation matrix to the image with the warpAffine function to correct the distorted picture; and calling the ZBar library to recognize and decode the corrected QR two-dimensional code information.
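The getAffineTransform/warpAffine step amounts to solving for the 2x3 matrix that maps the three detected locator-corner centres onto their canonical positions. A minimal numpy sketch of that solve (the point coordinates in the usage below are invented, not taken from the patent):

```python
import numpy as np

def affine_from_3pts(src, dst):
    """Solve the 2x3 affine matrix M with M @ [x, y, 1]^T = [x', y']^T
    for three point correspondences -- the computation that
    cv2.getAffineTransform performs."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((3, 1))])   # rows [x, y, 1]
    X = np.linalg.solve(A, dst)             # A @ X = dst, X is 3x2
    return X.T                              # 2x3 affine matrix

def apply_affine(M, pts):
    """Map points through the 2x3 affine matrix."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

Applying the resulting matrix to the whole image (the warpAffine step) straightens the distorted code before it is handed to the ZBar decoder.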
In the present embodiment, referring to figs. 5 and 6, the obstacle detection includes: first acquiring a depth map with a depth camera and converting it into a point cloud map; binarizing the point-cloud image with a depth threshold; applying erosion and dilation to the binary image to remove noise; extracting the contours of all objects, deleting contours whose area is too small, and reordering the remaining contours by distance; marking the screened obstacle and non-obstacle contours with rectangular frames of different colors; judging whether the middle path still meets the width required for operation and, if not, comparing the sizes of the obstacles on the left and right sides; if the left obstacle is larger, the inspection robot turns right, and once no obstacle is detected it records the IMU heading angle, turns left by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path; if the right obstacle is larger, the inspection robot turns left, and once no obstacle is detected it records the IMU heading angle, turns right by the negative of the recorded heading angle until the original path is detected again, and then continues along the original path.
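A minimal sketch of the depth thresholding, size filtering, and left/right decision, under stated assumptions: depth in metres with invented thresholds, BFS connected-component labelling standing in for the contour extraction, and the erosion/dilation denoising omitted.

```python
import numpy as np
from collections import deque

def detect_obstacles(depth, max_range=1.5, min_area=50):
    """Binarise a depth map at `max_range` metres, label connected
    regions (a stand-in for contour extraction), and drop regions whose
    area is below `min_area` pixels.  Returns (area, mean_column) pairs."""
    mask = (depth > 0) & (depth < max_range)
    labels = np.zeros(mask.shape, int)
    regions, tag = [], 1
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        q, cells = deque([(r, c)]), []
        labels[r, c] = tag
        while q:                          # BFS flood fill, 4-connectivity
            y, x = q.popleft()
            cells.append((y, x))
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, mx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= mx < mask.shape[1]
                        and mask[ny, mx] and not labels[ny, mx]):
                    labels[ny, mx] = tag
                    q.append((ny, mx))
        if len(cells) >= min_area:        # delete undersized contours
            cx = sum(x for _, x in cells) / len(cells)
            regions.append((len(cells), cx))
        tag += 1
    return regions

def avoid_turn(regions, width):
    """'right' if the largest obstacle lies in the left half, else 'left'."""
    if not regions:
        return None
    area, cx = max(regions)
    return "right" if cx < width / 2 else "left"
```

The heading-recording and return-to-path logic sits above this function in the control loop and is not reproduced here.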
In step S2, the obstacle information obtained by the depth camera, the marked-path and QR two-dimensional code information obtained by the CCD camera, and the navigation parameters derived after image processing are transmitted together, as drive control parameters, to the PC upper computer, and the information transmitted back is received.
It should be noted that as long as the inspection robot is moving, both the encoder and the IMU module feed back real-time data. If no QR two-dimensional code is detected, a suitable navigation parameter is selected from the fuzzy rule table according to the lateral deviation computed after image processing and the average of the image angular deviation and the heading angle fed back by the IMU module; if a QR two-dimensional code is detected, the selection uses the lateral deviation together with the average of the image angular deviation, the IMU heading angle, and the theta in the QR two-dimensional code.
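The averaging rule just described can be stated as a one-line function. Sign conventions and units (degrees) are assumptions here, and the fuzzy-rule lookup that consumes the result is not reproduced:

```python
def average_angle_deviation(img_dev, imu_heading, qr_theta=None):
    """Average angular deviation: the mean of the image-processing
    deviation and the IMU heading when no QR code is in view, and the
    three-way mean including the code's theta otherwise."""
    terms = [img_dev, imu_heading] + ([] if qr_theta is None else [qr_theta])
    return sum(terms) / len(terms)
```

The result, together with the lateral deviation, indexes the fuzzy rule table to pick the drive command.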
the above-mentioned embodiments are preferred embodiments of the present invention, and the present invention is not limited thereto, and any other modifications or equivalent substitutions that do not depart from the technical spirit of the present invention are included in the scope of the present invention.

Claims (6)

1. A navigation method for an inspection robot in a livestock and poultry farm, characterized by comprising the following steps:
S1, determining a start point and an end point on the global map of the livestock and poultry farm and automatically planning an optimal path;
S2, the inspection robot moves through the livestock and poultry farm along the optimal path; during movement it captures, in real time, images of the path markings and QR two-dimensional codes laid on the ground, finds the central navigation path with a centerline method, and recognizes and decodes the QR two-dimensional codes; it acquires drive control parameters, derives navigation parameters from the path information and the drive control parameters, and corrects its motion deviation in real time according to the navigation parameters; it simultaneously checks for obstacles while moving and, if any are found, avoids them until it reaches the end point; wherein the drive control parameters comprise a lateral deviation and an average angular deviation.
2. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the step S2 comprises:
capturing images of the path markings and QR two-dimensional code laid on the ground as well as a depth image of any obstacle ahead;
acquiring the drive control parameters of the inspection robot in real time;
and processing the path-marking image, the QR two-dimensional code image, and the obstacle depth image, then fusing the processed image information with the drive control parameters to obtain the navigation parameters.
3. The navigation method for the inspection robot of the livestock and poultry farm according to claim 1, wherein the step S2 further comprises:
converting the image of the path markings and QR two-dimensional code from an RGB image to a BGR image and resizing it to 640x480;
splitting the converted image into the three channels H, S, V and traversing all pixels; when a pixel satisfies H_min <= H <= H_max, S_min <= S <= S_max, and V_min <= V <= V_max, the corresponding pixel of the output image is set to 255 and otherwise to 0, where H_min, H_max, S_min, S_max, V_min, and V_max are the thresholds for path segmentation;
after segmentation, scanning rows 0 to 479 from top to bottom and, within each row, scanning outward from the middle column 319: from column 319 to column 0 on the left and from column 319 to column 639 on the right; in a given row, when the 8 consecutive pixels to the left of a pixel are all 0, that pixel is the left boundary point of the path, denoted left; when the 8 consecutive pixels to its right are all 0, that pixel is the right boundary point, denoted right; (left + right)/2 then gives the center point of the path for that row, and connecting all center points forms the path centerline;
setting a line threshold and fitting a straight line through the center of the navigation path with the Hough line transform;
computing the inspection robot's current lateral deviation and angular deviation from the path centerline;
and formulating a fuzzy rule to determine the navigation parameters from the lateral deviation and the average angular deviation obtained by fusing the angular deviation, the heading angle fed back by the IMU, and the theta in the QR two-dimensional code.
4. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein automatically planning the optimal path comprises:
setting an ID number of a QR two-dimensional code of a starting point on a global map of a livestock and poultry farm; the QR two-dimensional code is simulated by using a rectangular frame on a global map of a livestock and poultry farm, the number in the rectangular frame is the position ID number of the QR two-dimensional code, the numerical value between the rectangular frames is a weight value, and a straight line with a bidirectional arrow represents the movable direction of the inspection robot; the QR two-dimensional code is associated with a corresponding position ID number and an absolute position and pose (X, Y, theta) of the current position;
generating an optimal path by utilizing a Dijkstra algorithm;
the method for generating the optimal path by the Dijkstra algorithm comprises the following steps: (a) dividing all vertices of the global map of the livestock and poultry farm into two classes, used vertices and unused vertices; first marking the set starting point as used, and then finding, among the unused vertices, the vertex closest to the starting point;
(b) using the nearest vertex obtained in step (a) to update the distances of the unused vertices, so that after the vertex nearest to the starting point has been found in the previous step, the next-nearest vertex to the starting point can be found; this process is repeated until the end point is reached.
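Steps (a) and (b) above are the classical Dijkstra relaxation loop. A minimal sketch over a weighted graph keyed by QR-code position ID numbers (the `graph` dict format and function name are assumptions for illustration) could look like:

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path on a weighted graph of QR-code node IDs.
    graph: {node: {neighbour: weight}}; returns (cost, [node, ...])."""
    dist = {start: 0}
    prev = {}
    used = set()                         # the "used" vertex class of step (a)
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in used:
            continue
        used.add(u)                      # nearest unused vertex becomes used
        if u == goal:
            break
        for v, w in graph.get(u, {}).items():
            nd = d + w                   # step (b): relax the unused vertices
            if v not in used and nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [goal]                        # walk predecessors back to the start
    while path[-1] != start:
        path.append(prev[path[-1]])
    return dist[goal], path[::-1]
```

The edge weights correspond to the numerical values drawn between the rectangular frames on the global map, and the bidirectional arrows make each such edge appear in both adjacency dicts.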
5. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the recognizing and decoding of the QR two-dimensional code comprises:
converting the captured QR two-dimensional code image to grayscale, and smoothing the grayscale image with a median filter;
carrying out binarization processing on the smoothed picture;
using the contour extraction function findContours to find all contours in the binary image;
judging, through the contour hierarchy, whether a contour contains two nested sub-contours, and thereby screening out the three black positioning corner contours in the image;
calculating the center coordinates of the three positioning corners;
finding the point with the largest included angle among the three center coordinates;
acquiring an affine transformation matrix through the getAffineTransform function;
carrying out an affine transformation on the image through the warpAffine function, using the transformation matrix, so as to correct the distorted picture;
and calling the ZBar library to identify and decode the corrected QR two-dimensional code information.
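The "largest included angle" test used to pick the reference locator can be sketched in pure Python (the function name and input format are assumptions; an OpenCV pipeline would feed it the three contour centers). In an undistorted code the center with the largest angle is the corner locator opposite the other two.

```python
import math

def top_left_locator(centers):
    """Given the centres of the three QR locator squares, return the one
    whose included angle (formed with the other two centres) is largest."""
    best, best_angle = None, -1.0
    for i, p in enumerate(centers):
        a, b = [q for j, q in enumerate(centers) if j != i]
        va = (a[0] - p[0], a[1] - p[1])          # vector p -> a
        vb = (b[0] - p[0], b[1] - p[1])          # vector p -> b
        cos = (va[0] * vb[0] + va[1] * vb[1]) / (
            math.hypot(*va) * math.hypot(*vb))
        angle = math.acos(max(-1.0, min(1.0, cos)))
        if angle > best_angle:
            best, best_angle = p, angle
    return best
```

The returned point, together with the other two centers, fixes the source triangle handed to getAffineTransform for the distortion correction.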
6. The navigation method of the inspection robot for the livestock and poultry farm according to claim 1, wherein the detecting of the obstacle comprises:
firstly, acquiring a depth map by using a depth camera, and converting the depth map into a point cloud map;
carrying out binarization processing on the point cloud image by using a depth threshold value;
performing erosion and dilation operations on the binary image to remove noise;
extracting the contours of all objects, deleting contours whose area is too small, and reordering the remaining contours by distance;
marking the screened obstacle contours and non-obstacle contours with rectangular frames of different colors;
determining whether the middle path meets the width required for travel;
if the middle path does not meet the width required for travel, comparing the sizes of the obstacles on the left and right sides; if the obstacle on the left is larger, the inspection robot turns right; once no obstacle is detected, the heading angle of the IMU is recorded, and the inspection robot turns left through the negative of the recorded heading angle until the original path is detected, then continues to move along the original path;
and if the obstacle on the right is larger, the inspection robot turns left; once no obstacle is detected, the heading angle of the IMU is recorded, and the inspection robot turns right through the negative of the recorded heading angle until the original path is detected, then continues to move along the original path.
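The branching logic of the avoidance maneuver above can be condensed into a small decision function (the names, and the use of contour areas as the obstacle-size measure, are assumptions for illustration):

```python
def avoid(left_area, right_area, middle_width, required_width):
    """Decide the avoidance turn when the middle path is too narrow.
    Returns (first_turn, recovery_turn); the recovery turn is the opposite
    direction, executed through the negative of the IMU heading angle
    recorded when the obstacle disappears from view."""
    if middle_width >= required_width:
        return "straight", None              # middle path is wide enough
    if left_area >= right_area:
        return "right", "left"               # larger obstacle on the left
    return "left", "right"                   # larger obstacle on the right
```

The recovery turn runs until the original path marking is detected again, after which normal line following resumes.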
CN202011038150.3A | Priority/Filing date 2020-09-28 | Navigation method of inspection robot for livestock and poultry farm | Status: Pending | Publication: CN112363495A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202011038150.3A | 2020-09-28 | 2020-09-28 | Navigation method of inspection robot for livestock and poultry farm

Publications (1)

Publication Number | Publication Date
CN112363495A (en) | 2021-02-12

Family

ID: 74508055

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN202011038150.3A | Pending | CN112363495A (en) | 2020-09-28 | 2020-09-28

Country Status (1)

Country | Link
CN | CN112363495A (en)


Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN105388899A (en)* | 2015-12-17 | 2016-03-09 | 中国科学院合肥物质科学研究院 | An AGV navigation control method based on two-dimension code image tags
CN106054900A (en)* | 2016-08-08 | 2016-10-26 | 电子科技大学 | Temporary robot obstacle avoidance method based on depth camera
CN107943051A (en)* | 2017-12-14 | 2018-04-20 | 华南理工大学 | Indoor AGV navigation methods and systems based on QR-code guiding with visible light positioning
CN109460029A (en)* | 2018-11-29 | 2019-03-12 | 华南农业大学 | Livestock and poultry cultivation place inspection mobile platform and its control method
CN111046776A (en)* | 2019-12-06 | 2020-04-21 | 杭州成汤科技有限公司 | Mobile robot traveling path obstacle detection method based on depth camera


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
ZHUANG Xiaolin: "Path Recognition and Obstacle-Avoidance Navigation System Based on Machine Vision", China Master's Theses Full-text Database, Agricultural Science and Technology Series *
CAO Yong: "Design and Implementation of a Warehouse AGV Navigation and Positioning System Based on Multi-Sensor Fusion", China Master's Theses Full-text Database, Information Science and Technology Series *
LI Linhui: "Research on Composite Navigation Technology of Multi-Camera Visual Guidance and Inertial Measurement for Intelligent AGVs", China Master's Theses Full-text Database, Information Science and Technology Series *
WANG Songtao: "Research on Vision-Based AGV Path Recognition and Tracking Control", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN113239134A (en)* | 2021-05-07 | 2021-08-10 | 河南牧原智能科技有限公司 | Pig house navigation map establishing method and device, electronic equipment and storage medium
CN113239134B (en)* | 2021-05-07 | 2024-06-14 | 河南牧原智能科技有限公司 | Pig house navigation map building method and device, electronic equipment and storage medium
CN114170255A (en)* | 2021-11-02 | 2022-03-11 | 河南牧原智能科技有限公司 | Inspection and positioning method, device and computer readable storage medium
CN114019977A (en)* | 2021-11-03 | 2022-02-08 | 诺力智能装备股份有限公司 | Path control method and device for mobile robot, storage medium and electronic device
CN114019977B (en)* | 2021-11-03 | 2024-06-04 | 诺力智能装备股份有限公司 | Path control method and device for mobile robot, storage medium and electronic equipment
CN115165120A (en)* | 2022-07-27 | 2022-10-11 | 广州高新兴机器人有限公司 | Livestock and poultry temperature measuring method and system, electronic equipment and storage medium
CN117270548A (en)* | 2023-11-23 | 2023-12-22 | 安徽领云物联科技有限公司 | Intelligent inspection robot with route correction function
CN117270548B (en)* | 2023-11-23 | 2024-02-09 | 安徽领云物联科技有限公司 | Intelligent inspection robot with route correction function

Similar Documents

Publication | Title
CN112363495A (en) | Navigation method of inspection robot for livestock and poultry farm
Kalinov et al. | WareVision: CNN barcode detection-based UAV trajectory optimization for autonomous warehouse stocktaking
CN106607907B (en) | A kind of moving-vision robot and its investigating method
Duggal et al. | Plantation monitoring and yield estimation using autonomous quadcopter for precision agriculture
CN111037552B (en) | Inspection configuration and implementation method of wheel type inspection robot for power distribution room
CN112149555A (en) | A multi-warehouse AGV tracking method based on global vision
US20210190526A1 (en) | System and method of generating high-definition map based on camera
CN109828267A (en) | Intelligent mobile robot obstacle detection and distance measuring method based on instance segmentation and depth camera
CN112232139B (en) | An obstacle avoidance method based on the combination of YOLOv4 and ToF algorithm
CN113378701B (en) | A ground multi-AGV state monitoring method based on UAV
CN109143167B (en) | Obstacle information acquisition device and method
Zhang et al. | Factor graph-based high-precision visual positioning for agricultural robots with fiducial markers
Chen et al. | Global path planning in mobile robot using omnidirectional camera
CN117367425B (en) | Mobile robot positioning method and system based on multi-camera fusion
CN117671529B (en) | Farmland water level observation device and observation method based on unmanned aerial vehicle scanning measurement
CN114603561A (en) | Intelligent robot vision sensor control system and method
CN112083732B (en) | Robot navigation method and navigation system for detecting visible line laser
CN115880673B (en) | A method and system for avoiding obstacles based on computer vision
CN112540382B (en) | Laser navigation AGV auxiliary positioning method based on visual identification detection
CN118129730A (en) | Multi-sensor fusion elevation semantic map construction method
Zhang et al. | Autonomous navigation using machine vision and self-designed fiducial marker in a commercial chicken farming house
CN112330748A (en) | Tray identification and positioning method based on binocular depth camera
US20230351755A1 | Processing images for extracting information about known objects
KR102062874B1 (en) | Automated Guided Vehicle
US12427673B2 | Apparatus and method for controlling rail boarding for autonomous driving of a mobile robot

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2021-02-12
