CN116787428B - Mobile robot safety protection method and device - Google Patents

Mobile robot safety protection method and device

Info

Publication number
CN116787428B
Authority
CN
China
Prior art keywords
image
coordinate system
difference
robot
mobile robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310413879.1A
Other languages
Chinese (zh)
Other versions
CN116787428A (en)
Inventor
丁蕾
周国成
翟瑾
王瑞
祁伟建
胡从洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Yunxiang Business Machine Co ltd
Original Assignee
Hangzhou Yunxiang Business Machine Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
2023-04-12: Application filed by Hangzhou Yunxiang Business Machine Co ltd
2023-04-12: Priority to CN202310413879.1A
2023-09-22: Publication of CN116787428A
2025-07-29: Application granted; publication of CN116787428B
Legal status: Active (current)
Anticipated expiration

Abstract


The invention discloses a mobile robot safety protection method and device, addressing the difficulty that a single sensor in the prior art has in perceiving glass walls and similar surfaces. The method includes the following steps. S1: a camera captures a real-time image of the projected pattern in the detection area and compares it with a preset reference image of the pattern on a planar area to obtain the image difference region. S2: the obstacle position is determined from the image difference region, and the starting point of each line segment difference within it is calculated. S3: the position of each difference starting point in the vehicle body coordinate system is determined through the coordinate system mapping relation. S4: robot safety protection is carried out in real time according to that position information. By exploiting the structured information of the projected pattern to detect obstacles in the scene, the method effectively resolves the perception pain points of other sensors, such as glass walls, at relatively low cost, and thus meets the protection needs of mobile robots well.

Description

Mobile robot safety protection method and device
Technical Field
The invention relates to the field of robot safety, in particular to a mobile robot safety protection method and device.
Background
Scene perception is a critical link in the operation of mobile robots: it directly determines whether a mobile robot can operate stably, reliably and safely in a scene. Sensors for robot scene perception can be divided into contact and non-contact types. Contact sensors are widely used on household sweeping robots, while other types of mobile robots generally adopt non-contact sensors because of the limitations of their application scenes.
Mainstream non-contact sensors include 2D lasers, 3D lasers, depth cameras, and radar.
A 2D laser can only sense object distance information within a single plane, while objects exist throughout three-dimensional space, so its scene sensing capability is weak.
A 3D laser can sense 3D information of the whole scene, but its cost is high, and it can also miss detections because the laser passes through transparent glass.
Depth cameras perform well at short-range three-dimensional sensing and cost far less than 3D lasers, but TOF, structured-light and binocular depth cameras all struggle to perceive glass walls and similar surfaces.
Radar can effectively detect a glass wall, but it can only tell whether an obstacle exists within a certain area and cannot give its spatial position, so radar generally serves only as an auxiliary safety warning. For example, Chinese patent publication CN108527364A discloses a robot obstacle avoidance system comprising an acquisition module, a control module and an execution module: the acquisition module acquires distance information between the robot and an obstacle, the control module is in communication connection with the acquisition module and generates a control signal from the distance information, and the execution module is in communication connection with the control module and controls the movement of the robot according to the control signal. That scheme realizes active obstacle avoidance by arranging two types of sensors, a laser radar and an ultrasonic array, which enables obstacle avoidance for glass but at a high cost.
Disclosure of Invention
The invention mainly addresses the problem that a single sensor in the prior art struggles to perceive glass walls and similar surfaces. It provides a mobile robot safety protection method and device that detect obstacles in the scene by means of the structured information of a projected pattern, effectively solving the perception pain points of other sensors (such as glass walls) at relatively low cost, and thereby better meeting the protection needs of mobile robots.
The technical problems of the invention are mainly solved by the following technical solution:
a mobile robot safety protection method, comprising the following steps:
S1, a camera acquires a real-time image of the projected pattern in the detection area and compares it with a preset reference image of the pattern on a planar area to obtain an image difference region;
S2, the obstacle position is determined from the image difference region, and the starting point of each line segment difference in the region is calculated;
S3, the position of each line segment difference starting point in the vehicle body coordinate system is determined through the coordinate system mapping relation;
S4, robot safety protection is carried out in real time according to the position information in the vehicle body coordinate system.
In this scheme, a real-time image of the projected pattern is compared with a reference image; the obstacle starting point is located through image difference; its coordinates in the vehicle body coordinate system are obtained through the coordinate system mapping relation; and the robot's protection is triggered by comparing the distance against a threshold. By means of the structured information of the projected pattern, obstacles in the scene are detected, the perception pain points of other sensors, such as glass walls, are effectively solved, and the cost is relatively low, so the protection needs of mobile robots are well met.
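For illustration only, the following Python sketch shows one way the steps S1 to S4 could be strung together in software; the camera capture, robot control and mapping helpers (grab_frame, image_to_body, robot.brake) are hypothetical placeholders rather than part of this disclosure, and compute_difference and find_segment_starts are sketched further below.

```python
# Illustrative only: one possible S1-S4 processing loop (helper names are hypothetical).
import numpy as np

def protection_loop(camera, robot, ref_img, image_to_body, safety_dist=0.5):
    """ref_img: reference frame of the projected pattern on a planar area.
    image_to_body: callable mapping a pixel (u, v) to coordinates in the vehicle body frame."""
    while True:
        live = camera.grab_frame()                        # S1: real-time image of the projected pattern
        diff_mask = compute_difference(live, ref_img)     # S1: image difference region
        if not diff_mask.any():
            continue                                      # planar scene, no obstacle
        starts_px = find_segment_starts(diff_mask)        # S2: starting point of each line segment difference
        starts_body = [image_to_body(u, v)[:2] for u, v in starts_px]   # S3: map into the body frame
        nearest = min(float(np.hypot(x, y)) for x, y in starts_body)    # S4: closest obstacle start
        if nearest <= safety_dist:
            robot.brake()                                 # or steer the robot away
```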
Preferably, the reference image is preset as follows:
the pattern projector projects the pattern into the scene, and a reference frame image is captured while the pattern falls on a planar area; this frame is stored as the preset reference image.
The reference image is therefore the reference frame captured when the projected pattern lies on a planar area.
Preferably, an image difference region is obtained by differencing the real-time image against the reference image, and whether a non-planar object exists in the scene is judged by whether a difference region remains after the differencing.
Preferably, the expression of the image difference is:
D(x, y) = 1, if |I(x, y) - Iref(x, y)| > thre; otherwise D(x, y) = 0,
wherein I(x, y) is the pixel value at coordinates (x, y) in the real-time image;
Iref(x, y) is the pixel value at coordinates (x, y) in the reference image;
thre is the threshold that decides whether a change has occurred.
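A minimal sketch of this pixel-wise difference, assuming OpenCV and numpy, a grayscale comparison and an example threshold of 30 grey levels; none of these particular choices are prescribed by the method.

```python
import cv2
import numpy as np

def compute_difference(live, ref, thre=30):
    """Binary mask D(x, y): 1 where the live pattern deviates from the reference by more than thre."""
    live_g = cv2.cvtColor(live, cv2.COLOR_BGR2GRAY)
    ref_g = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(live_g, ref_g)                     # |I(x, y) - Iref(x, y)|
    _, mask = cv2.threshold(diff, thre, 1, cv2.THRESH_BINARY)
    return mask.astype(np.uint8)
```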
Preferably, when a non-planar object is determined to exist in the scene, the starting point of each line segment difference is determined: difference points are searched along each line segment of the reference image from bottom to top, and the first difference point found is the starting point of that line segment's difference.
Because of the mounting position of the pattern projector, points on a line are farther from the robot body the higher they appear in the image, so searching from bottom to top and taking the first difference point selects the point of each line segment closest to the robot as the obstacle starting point.
Preferably, the process of establishing the mapping relation from image plane points to the vehicle body coordinate system comprises the following steps:
A1, a calibration plate is fixedly placed on the ground plane;
A2, points on the image are mapped to a ground plane coordinate system;
a ground plane coordinate system is established on the calibration plate (so that the coordinates of every checkerboard corner in the ground plane coordinate system are known), the checkerboard corners are detected in the image (giving their coordinates in the image plane coordinate system), and a homography matrix is solved from the correspondence between the ground plane coordinates and the image plane coordinates;
A3, the ground plane coordinate system is mapped to the camera coordinate system;
image data in which the calibration plate is completely visible is acquired and extrinsic calibration is performed (assuming the intrinsic calibration of the camera has been completed in advance) to obtain the mapping from the ground plane coordinate system to the camera coordinate system;
A4, the camera coordinate system is mapped to the vehicle body coordinate system through hand-eye calibration.
With the mapping from image plane points to the vehicle body coordinate system established in advance, the position of the obstacle starting point in the vehicle body coordinate system can be determined through this mapping.
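As a hedged illustration of steps A1 and A2, the sketch below detects the checkerboard corners with OpenCV and solves the homography between the image plane and the ground plane; the board dimensions and square size are example values only.

```python
import cv2
import numpy as np

def solve_ground_homography(calib_img, pattern=(9, 6), square_m=0.025):
    """Return the homography mapping image pixels to ground-plane coordinates (metres),
    assuming the checkerboard lies flat on the ground plane (step A1)."""
    gray = cv2.cvtColor(calib_img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        raise RuntimeError("checkerboard corners not detected")
    # Known corner positions in the ground-plane frame defined on the board (step A2).
    gx, gy = np.meshgrid(np.arange(pattern[0]), np.arange(pattern[1]))
    ground_pts = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(np.float32) * square_m
    img_pts = corners.reshape(-1, 2).astype(np.float32)
    H, _ = cv2.findHomography(img_pts, ground_pts)        # image plane -> ground plane
    return H
```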
Preferably, the distance from the robot's current position to the obstacle starting point is calculated, and when that distance is less than or equal to a set safety threshold, protection is triggered and the robot is braked or steered away.
Robot safety protection is carried out in real time according to the position of the non-planar object's starting point in the vehicle body coordinate system.
A mobile robot safety protection device, comprising:
a camera for acquiring a real-time image including the projection image in the detection area;
a pattern projector for projecting the projection image into the detection area in the robot advancing direction;
and a calculation and analysis unit that compares the real-time image containing the projection image with the reference image, determines the obstacle starting point position through image difference, obtains the coordinates of the obstacle starting point in the vehicle body coordinate system through the coordinate system mapping relation, and triggers the robot's protection by comparing the distance against a threshold.
By means of the structural information of the projection patterns, obstacles in a scene are detected, the problem of perception pain points of other sensors such as a glass wall can be effectively solved, the cost is relatively low, and the protection requirement of a mobile robot can be better met.
Preferably, the projection image is a plurality of parallel lines, and the line direction extends along the advancing direction of the robot. This arrangement makes the lines in the image farther from the robot body from bottom to top, so the point closest to the robot can be selected as the obstacle starting point.
Preferably, the line pitches are equal. The line density can be adjusted according to the measurement accuracy requirement.
The beneficial effects of the invention are as follows:
a real-time image containing the projected pattern is compared with a reference image; the obstacle starting point is located through image difference; its coordinates in the vehicle body coordinate system are obtained through the coordinate system mapping relation; and the robot's protection is triggered by comparing the distance against a threshold. By means of the structured information of the projected pattern, obstacles in the scene are detected, the perception pain points of other sensors, such as glass walls, are effectively solved, and the cost is relatively low, so the protection needs of mobile robots are well met.
Drawings
Fig. 1 is a schematic view of a mobile robot safety device of the present invention.
Fig. 2 is a flow chart of the mobile robot safety protection method of the present invention.
Fig. 3 is a planar scene image of the present invention.
Fig. 4 is a non-planar scene image of the present invention.
Fig. 5 is a graph of the differential results of a planar scene image of the present invention.
Fig. 6 is a differential result diagram of a non-planar scene image of the present invention.
In the figures: 1, robot body; 2, pattern projector; 3, camera; 4, projected image.
Detailed Description
The technical scheme of the invention is further specifically described below through examples and with reference to the accompanying drawings.
Examples:
A mobile robot safety device of the present embodiment, as shown in fig. 1, includes a robot body 1, a pattern projector 2 and a camera 3 provided on the robot body 1.
In the present embodiment, the pattern projector 2 and the camera 3 are provided on the top of the robot body 1, facing the robot advancing direction.
The pattern projector 2 projects a projection image 4 into a detection area in the robot advancing direction, and the camera 3 acquires a real-time map including the projection image in the detection area. In the present embodiment, the camera 3 employs an RGB camera.
In this embodiment, the projected image 4 is a plurality of parallel lines, and the line direction extends along the robot advancing direction. This makes the lines in the projected image 4 farther from the robot body 1 from bottom to top, so the point nearest to the robot is selected as the obstacle starting point. In this embodiment, the line pitches in the projected image 4 are equal, and the line density can be adjusted to suit the measurement accuracy requirement.
The mobile robot safety protection device further comprises a calculation and analysis unit, which compares the real-time image containing the projected image 4 with the reference image, determines the obstacle starting point position through image difference, obtains the coordinates of the obstacle starting point in the vehicle body coordinate system through the coordinate system mapping relation, and triggers the robot's protection by comparing the distance against a threshold.
By means of the structural information of the projection pattern 4, obstacles in a scene are detected, the problem of perception pain points of other sensors such as a glass wall can be effectively solved, the cost is relatively low, and the protection requirement of a mobile robot can be better met.
The mobile robot safety protection method of the embodiment, as shown in fig. 2, includes the following steps:
S1, the camera acquires a real-time image of the projected pattern in the detection area and compares it with the preset reference image of the pattern on a planar area to obtain the image difference region.
The reference image is preset as follows:
the pattern projector 2 projects the projection image 4 into the scene, and a reference frame image is captured while the projection image falls on a planar area, as shown in fig. 3; this frame is used as the preset reference image.
The reference image is therefore the reference frame captured when the projection image 4 lies on a planar area, and it is compared with the real-time image acquired by the camera 3 to obtain the image difference region.
When the area detected by the robot is a plane, as shown in fig. 3, the straight line of the projected image 4 is still a straight line on the image plane. When there is a non-planar object in the area detected by the robot, the straight line of the projected image 4 will be deformed in the image plane as shown in fig. 4. When the laser line penetrates the glass, the straight line of the projected image 4 will be deformed due to the refraction of the light.
An image difference region is obtained by differencing the real-time image against the reference image, and whether a non-planar object exists in the scene is judged by whether a difference region remains after the differencing.
The expression of the image difference is:
D(x, y) = 1, if |I(x, y) - Iref(x, y)| > thre; otherwise D(x, y) = 0,
wherein I(x, y) is the pixel value at coordinates (x, y) in the real-time image;
Iref(x, y) is the pixel value at coordinates (x, y) in the reference image;
thre is the threshold that decides whether a change has occurred.
When no difference region remains after the image difference, the result is as shown in fig. 5, and the scene is judged to contain only planar objects.
When a difference region remains after the image difference, the result is as shown in fig. 6, and a non-planar object or glass exists in the scene.
S2, the obstacle position is determined according to the image difference region, and the starting point of each line segment difference in the image difference region is calculated.
The obstacle position is determined from the image difference region: wherever the difference is larger, a non-planar object (an obstacle) has appeared.
When a non-planar object is determined to exist in the scene, the starting point of each line segment difference is determined: difference points are searched along each line segment of the reference image from bottom to top, and the first difference point found is the starting point of that line segment's difference.
Because of the mounting position of the pattern projector, points on a line are farther from the robot body the higher they appear in the image, so searching from bottom to top and taking the first difference point selects the point of each line segment closest to the robot as the obstacle starting point.
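One possible reading of this bottom-to-top search is sketched below; it assumes the projected lines run roughly along image columns, so every column of the difference mask that contains a difference is scanned upward from the bottom of the image (largest row index first).

```python
import numpy as np

def find_segment_starts(diff_mask):
    """Return, for each image column that shows a difference, the lowest differing
    pixel (u, v), i.e. the first difference point when scanning from bottom to top,
    which is the point of that line segment closest to the robot."""
    starts = []
    for u in np.flatnonzero(diff_mask.any(axis=0)):       # columns containing any difference
        rows = np.flatnonzero(diff_mask[:, u])
        v = int(rows.max())                                # largest row index = lowest in the image
        starts.append((int(u), v))
    return starts
```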
S3, the position of the line segment difference starting point in the vehicle body coordinate system is determined through the coordinate system mapping relation.
With the mapping from image plane points to the vehicle body coordinate system established in advance, the position of the obstacle starting point in the vehicle body coordinate system can be determined through this mapping.
The process of establishing the mapping relation from image plane points to the vehicle body coordinate system comprises the following steps:
A1, a calibration plate is fixedly placed on the ground plane;
A2, points on the image are mapped to a ground plane coordinate system;
a ground plane coordinate system is established on the calibration plate (so that the coordinates of every checkerboard corner in the ground plane coordinate system are known), the checkerboard corners are detected in the image (giving their coordinates in the image plane coordinate system), and a homography matrix is solved from the correspondence between the ground plane coordinates and the image plane coordinates;
A3, the ground plane coordinate system is mapped to the camera coordinate system;
image data in which the calibration plate is completely visible is acquired and extrinsic calibration is performed (assuming the intrinsic calibration of the camera has been completed in advance) to obtain the mapping from the ground plane coordinate system to the camera coordinate system;
A4, the camera coordinate system is mapped to the vehicle body coordinate system through hand-eye calibration.
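At run time the calibrated mappings can be chained as illustrated below; the homography, extrinsics and hand-eye transform are assumed to have been obtained offline in steps A2 to A4, and the matrix conventions shown are an assumption made for illustration.

```python
import cv2
import numpy as np

def image_to_body(u, v, H_img2ground, R_ground2cam, t_ground2cam, T_cam2body):
    """Map a pixel (u, v) to a 3D point in the vehicle body frame.
    H_img2ground: 3x3 homography from step A2 (image plane -> ground plane).
    R_ground2cam, t_ground2cam: extrinsics from step A3 (ground plane -> camera).
    T_cam2body: 4x4 transform from the hand-eye calibration of step A4."""
    px = np.array([[[u, v]]], dtype=np.float32)
    gx, gy = cv2.perspectiveTransform(px, H_img2ground)[0, 0]   # pixel -> ground-plane coordinates
    p_ground = np.array([gx, gy, 0.0])                          # the starting point lies on the floor
    p_cam = R_ground2cam @ p_ground + t_ground2cam.ravel()      # ground plane -> camera frame
    p_body = T_cam2body @ np.append(p_cam, 1.0)                 # camera frame -> body frame
    return p_body[:3]
```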
S4, robot safety protection is carried out in real time according to the position information in the vehicle body coordinate system.
The distance from the robot's current position to the obstacle starting point is calculated, and when that distance is less than or equal to the set safety threshold, protection is triggered and the robot is braked or steered away.
Robot safety protection is thus carried out in real time according to the position of the non-planar object's starting point in the vehicle body coordinate system.
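A minimal sketch of this threshold check, assuming the obstacle starting points are already expressed in the vehicle body frame (whose origin is at the robot) and that robot.brake() and robot.steer_away() are hypothetical control hooks; the 0.5 m threshold is only an example.

```python
import math

def apply_protection(robot, obstacle_starts_body, safety_dist=0.5):
    """Brake or steer away when any obstacle starting point is within safety_dist (metres)."""
    for x, y in obstacle_starts_body:
        if math.hypot(x, y) <= safety_dist:
            robot.brake()          # alternatively robot.steer_away() to turn around the obstacle
            return True
    return False
```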
The scheme of this embodiment compares a real-time image containing the projected image with a reference image, locates the obstacle starting point through image difference, obtains its coordinates in the vehicle body coordinate system through the coordinate system mapping relation, and triggers the robot's protection by comparing the distance against a threshold. By means of the structured information of the projected pattern, obstacles in the scene are detected, the perception pain points of other sensors, such as glass walls, are effectively solved, and the cost is relatively low, so the protection needs of mobile robots are well met.
It should be understood that the examples are only for illustrating the present application and are not intended to limit the scope of the present application. Furthermore, it should be understood that various changes and modifications can be made by one skilled in the art after reading the teachings of the present application, and such equivalents are intended to fall within the scope of the application as defined in the appended claims.

Claims (9)

CN202310413879.1A · 2023-04-12 (priority) · 2023-04-12 (filed) · Mobile robot safety protection method and device · Active · CN116787428B (en)

Priority Applications (1)

Application Number: CN202310413879.1A (CN116787428B)
Priority Date: 2023-04-12
Filing Date: 2023-04-12
Title: Mobile robot safety protection method and device

Applications Claiming Priority (1)

Application Number: CN202310413879.1A (CN116787428B)
Priority Date: 2023-04-12
Filing Date: 2023-04-12
Title: Mobile robot safety protection method and device

Publications (2)

Publication Number | Publication Date
CN116787428A (en) | 2023-09-22
CN116787428B (en) | 2025-07-29

Family

ID=88047038

Family Applications (1)

Application Number: CN202310413879.1A (Active, CN116787428B)
Title: Mobile robot safety protection method and device
Priority Date: 2023-04-12
Filing Date: 2023-04-12

Country Status (1)

Country | Link
CN | CN116787428B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109101862A * | 2017-06-20 | 2018-12-28 | 中建空列(北京)科技有限公司 | Obstacle detection method, collision avoidance method and system for an overhead rail vehicle
CN111035321A * | 2018-10-11 | 2020-04-21 | 原相科技股份有限公司 | Cleaning robot capable of detecting two-dimensional depth information and its operation method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CA2950791C * | 2013-08-19 | 2019-04-16 | State Grid Corporation Of China | Binocular visual navigation system and method based on power robot
CN112797915B * | 2020-12-29 | 2023-09-12 | 杭州海康机器人股份有限公司 | Calibration method, calibration device and system of line structured light measurement system


Also Published As

Publication number | Publication date
CN116787428A (en) | 2023-09-22

Similar Documents

Publication | Title
US20220036574A1 | System and method for obstacle avoidance
CN109961468B | Volume measurement method and device based on binocular vision and storage medium
KR102176376B1 | Apparatus and method for measuring distance of object
JP5588812B2 | Image processing apparatus and imaging apparatus using the same
US12235113B2 | Parking support apparatus
JP6589926B2 | Object detection device
EP1394761A2 | Obstacle detection device and method therefor
JP5982298B2 | Obstacle detection device and obstacle detection method
CN113658241B | Monocular structured light depth recovery method, electronic device and storage medium
KR20200071960A | Method and Apparatus for Vehicle Detection Using Lidar Sensor and Camera Convergence
WO2022188292A1 | Target detection method and system, and control method, devices and storage medium
KR20230000585A | Method and apparatus for calibrating multiple sensors
CN113768419B | Method and device for determining sweeping direction of sweeper and sweeper
JP7348414B2 | Method and device for recognizing blooming in lidar measurement
CN111256651A | Surrounding vehicle distance measuring method and device based on monocular vehicle-mounted camera
CN116787428B | Mobile robot safety protection method and device
EP4071578B1 | Light source control method for vision machine, and vision machine
JPH10187974A | Logistics measurement equipment
US12183041B2 | Vehicle and control method thereof
CN116954228A | Human body tracking method for mobile robot
JP6561688B2 | Detecting device, detecting method, imaging device, device control system, and program
JP2023174272A | Object recognition method and object recognition apparatus
JP5580062B2 | Obstacle detection alarm device
JP7114963B2 | Imaging system, imaging device, imaging method, and mobile object equipped with imaging system
CN110598505A | Method and device for judging suspension state of obstacle and terminal

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
