CN111144406A - Self-adaptive target ROI (region of interest) positioning method of solar panel cleaning robot - Google Patents


Info

Publication number: CN111144406A
Authority: CN (China)
Prior art keywords: target, image, robot, frame, interest
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201911332440.6A
Other languages: Chinese (zh)
Other versions: CN111144406B (en)
Inventors: 杨大卫, 张文强, 张传法, 李馨蕾, 陶玮
Current assignee: Fudan University (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Fudan University
Application filed by Fudan University
Priority to CN201911332440.6A
Publication of CN111144406A
Application granted
Publication of CN111144406B
Current legal status: Active


Abstract

(Translated from Chinese)

The invention belongs to the technical field of machine vision image processing, and specifically provides a self-adaptive target ROI positioning method for a solar panel cleaning robot. Exploiting the fact that a target's position changes only slightly between two frames of images, the invention fuses the previous frame's detection result with the sensor's motion information to compensate for the target's position change, estimates the region of interest where the target may appear in the current image, and narrows the detection range. It avoids the heavy computation of scanning the full image for the target and the interference introduced by useless background regions, concentrates on the effective region, and can detect the target in real time, efficiently and accurately. The invention solves the cleaning robot's problems on the solar panel caused by the wide detection range, complex background, and continuous motion, namely heavy computation, poor real-time performance, frequent interference, and easy loss of the target, and greatly improves detection efficiency and stability, enabling the cleaning robot to complete fully automatic solar panel cleaning quickly, efficiently, and accurately.

Description

Self-adaptive target ROI (region of interest) positioning method of solar panel cleaning robot
Technical Field
The invention belongs to the technical field of machine vision image processing, and particularly relates to a method for a solar panel cleaning robot to perform self-adaptive positioning on a target ROI.
Background
In modern society, with the rapid development of science, technology, and the economy and the continuing advance of industrialization, worldwide demand for energy keeps growing, and the energy problem has become one that every country is concerned about and urgently hopes to solve. Solar energy has the advantages of high efficiency, environmental friendliness, and low cost, and in recent years it has developed rapidly and been widely applied. Because photovoltaic power generation systems are exposed outdoors for long periods, dust and foreign matter accumulated on the solar panels greatly reduce the transmittance of sunlight and thus the power generation efficiency, so regularly removing dust from and cleaning the solar panels is an important task.
Current approaches to dust removal and cleaning of solar panels fall into three categories. The first is manual cleaning. Manual cleaning requires hiring and training many professionals; combined with the constraints of the working environment and the particularity of the task, this leads to low personnel utilization, high maintenance costs, dead corners in the cleaning area, and incomplete cleaning. The second is the fixed-track cleaning robot. Such a robot needs additional guide rails installed to assist its operation, which adds high equipment and maintenance costs and offers poor flexibility. The third is the autonomous cleaning robot, which can identify the working area, plan a driving path, and complete the cleaning task.
Traditional autonomous-navigation solutions for robots rely primarily on radar sensors. Such solutions are highly reliable and technically mature, but they are expensive (lidar prices range from tens of thousands to hundreds of thousands), structurally complex, and demanding to install, and some radars are too heavy to use on solar panels made of special materials. By comparison, cameras are low-cost, low-power, small, and easy to install. However, for reasons of power consumption and cost, the processor available to a solar panel cleaning robot has limited performance, while the working area has a wide detection range, a complex background, and continuous motion, making it difficult to process large amounts of image data in real time. Realizing autonomous navigation with vision technology is therefore an urgent problem for the solar panel cleaning robot.
Disclosure of Invention
The invention aims to provide a self-adaptive target ROI (region of interest) positioning method for a solar panel cleaning robot that can detect targets efficiently and accurately in real time and realize autonomous navigation.
During operation, the vision-based solar panel cleaning robot needs to detect targets in the picture in real time (such as panel edges, signs, and two-dimensional codes), judge the travelable area, adjust its own pose, analyze and execute path planning, and complete the solar panel cleaning task through autonomous navigation. The target may appear at any position in the frame, so the robot would have to scan the full image to detect it. However, full-image scanning is computationally enormous, and the background is complex and constantly moving, so the requirements on real-time performance and precision are difficult to meet. The self-adaptive target ROI positioning method provided by the invention exploits the fact that a target's position changes only slightly between two frames of images: it fuses the previous frame's detection result with the sensor's motion information to compensate for the target's position change, estimates the region of interest where the target may appear in the current image, and reduces the detection range. This avoids the heavy computation of scanning the full image for the target and the interference introduced by useless background regions, concentrates on the effective region, and allows the target to be detected in real time, efficiently, and accurately, thereby realizing autonomous navigation.
The invention provides a self-adaptive target ROI positioning method for a solar panel cleaning robot, with the following specific steps.
Firstly, carrying out full-image detection by using a target detection algorithm, and screening a result to obtain the position of a target; the method comprises the following specific steps:
step 101: the system captures a frame of image at a fixed interval or on command; if this is not the system's first frame of image and a detection result is available, jump to step 201;
step 102: according to the target to be detected, call the corresponding algorithm to perform full-image detection and screen the results to obtain the target position. For example, a detected target B0 is denoted curt_B0 = (x0, y0, x1, y1), where (x0, y0) and (x1, y1) are the coordinates of the target's top-left and bottom-right corners. If no target is detected in this frame, return to step 101 and wait to detect the next frame of image;
step 103: after the system finishes processing the current image, it updates pre_B0 = curt_B0 and returns to step 101, waiting for the next frame of image to be detected;
secondly, estimating the region of interest of the target in the current image according to the target position in the previous frame image, specifically as follows:
step 201: the position of target B0 in the previous frame image is pre_B0 = (x0, y0, x1, y1). Expanding pre_B0 by 1.2 times gives ROI_B′0 = (x′0, y′0, x′1, y′1) = (x0 − δx, y0 − δy, x1 + δx, y1 + δy), where δx = 0.6 × (x1 − x0) and δy = 0.6 × (y1 − y0) are intermediate variables. The expanded ROI_B′0, compared with pre_B0, increases the probability that the current frame image contains B0, improving the detection success rate and speed;
thirdly, modeling the motion state of the robot, and calculating the position change of the robot in the interval time of the two frames of images, wherein the method specifically comprises the following steps:
step 301: although the target detected by the solar panel cleaning robot is stationary, the robot's own motion causes the target's position in the image to change. As shown in Fig. 2, suppose the center point P of target B0 is mapped to point p1 on image 1 at time t1; after the robot rotates by R and translates by t, it is mapped to point p2 on image 2 at time t2. From pre_B0 = (x0, y0, x1, y1), compute p1 = ((x0 + x1)/2, (y0 + y1)/2); p2, the center of target B0's position frame on image 2, is the quantity to be solved;
step 302: the position change of target B0 between the two images depends on the magnitude of the robot's rotation R and translation t within the interval [t1 → t2]. The IMU (inertial measurement unit) is sampled at a fixed interval Δt, yielding the robot's linear accelerations ax, ay, az along the three axes and angular velocities wx, wy, wz about the three directions. The invention describes the robot's motion with only these few motion parameters, whose storage space and computation time are essentially negligible. Each IMU reading is the sum of a true value ω, a drift value b, and a Gaussian error η:

ω̃(t) = ω(t) + b^g(t) + η^g(t)
ã(t) = R(t)^T (ʷa(t) − ʷg) + b^a(t) + η^a(t)

The instantaneous angular velocity ω(t) and linear acceleration ʷa(t) of the motion at time t are therefore:

ω(t) = ω̃(t) − b^g(t) − η^g(t)
ʷa(t) = R(t) (ã(t) − b^a(t) − η^a(t)) + ʷg

where the superscripts g and a denote the angular-velocity and linear-acceleration terms respectively, the left superscript w denotes the world coordinate system (the left superscript w appearing hereinafter has the same meaning), ʷg is the gravity vector in the world frame, and R denotes the rotation.
Step 303: assuming that Δ t is the sampling time interval of the IMU, since the sampling frequency of the IMU is relatively high, above 100HZ, Δ t is short, so it can be assumed that the angular velocity and the linear acceleration in Δ t time remain unchanged. Combining the formula of a physical motion model:
Figure BDA0002330018550000037
Figure BDA0002330018550000038
(superscript. denotes derivative), the rotation angle R, velocity v and position p at time t + Δ t are calculated:
R(t+Δt)=R(t)Exp(ω(t)Δt)
wv(t+Δt)=wv(t)+wa(t)Δt
wp(t+Δt)=wp(t)+wa(t)Δt+0.5×wa(t)Δt2
step 304: combining the angular-velocity and acceleration measurement formulas of step 302, the complete expressions for the rotation R, velocity v, and position p at time t + Δt are (the superscript d denotes a discrete value):

R(t+Δt) = R(t) Exp((ω̃(t) − b^g(t) − η^gd(t)) Δt)
ʷv(t+Δt) = ʷv(t) + ʷg Δt + R(t) (ã(t) − b^a(t) − η^ad(t)) Δt
ʷp(t+Δt) = ʷp(t) + ʷv(t) Δt + 0.5 ʷg Δt² + 0.5 R(t) (ã(t) − b^a(t) − η^ad(t)) Δt²
step 305: calculate the robot's motion state at the moment of the second frame of image. Since the IMU's sampling frequency is higher than the image's, IMU data at multiple instants must be accumulated. Suppose j − i IMU samples are taken between the first frame image at t1 and the second frame at t2; accumulating the j − i IMU samples from time t1 gives the state values at time t2:

R_j = R_i ∏(k=i…j−1) Exp((ω̃_k − b^g_k − η^gd_k) Δt)
ʷv_j = ʷv_i + ʷg Δt_ij + Σ(k=i…j−1) R_k (ã_k − b^a_k − η^ad_k) Δt
ʷp_j = ʷp_i + Σ(k=i…j−1) [ ʷv_k Δt + 0.5 ʷg Δt² + 0.5 R_k (ã_k − b^a_k − η^ad_k) Δt² ]

where the subscript j corresponds to t2, because both refer to the same instant;
step 306: compute the change in the robot's motion state over the interval [t1 → t2] between the two frame images. The difference between the state quantities at t1 and t2, i.e. the accumulated changes of all the IMU's rotations ΔR_ij, velocities Δv_ij, and displacements Δp_ij between i and j, is computed as:

ΔR_ij = R_i^T R_j
Δv_ij = R_i^T (ʷv_j − ʷv_i − ʷg Δt_ij)
Δp_ij = R_i^T (ʷp_j − ʷp_i − ʷv_i Δt_ij − 0.5 ʷg Δt_ij²)
fourthly, performing motion estimation and compensation on the region of interest according to the position change of the robot; the method comprises the following specific steps:
step 401: establish the transformation between the world three-dimensional coordinate system and the image coordinate system. The real world is a three-dimensional space; let a point P be expressed as (xw, yw, zw) in the world coordinate system and as (u, v) in the corresponding image coordinate system. The transformation follows "world coordinate system → camera coordinate system → image physical coordinate system → image pixel coordinate system", where the positional relationship between the camera and world coordinate systems is described by a rotation matrix R and a translation vector t, f denotes the camera focal length, dx and dy denote the unit size of a single pixel along the image rows and columns respectively, and Zc denotes the depth information. The computation is:

Zc [u, v, 1]^T = [[f/dx, 0, u0], [0, f/dy, v0], [0, 0, 1]] [R | t] [xw, yw, zw, 1]^T

where (u0, v0) is the principal point.
step 402: the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in step 305 is integrated into the camera model to estimate the target's position p2 in the second frame's image coordinate system (K is the intrinsic matrix calibrated in advance and not described here):

Zc2 [u2, v2, 1]^T = K (ΔR_ij Zc1 K⁻¹ [u1, v1, 1]^T + Δp_ij)
fifthly, correcting the region of interest and detecting the accurate position of the target, specifically as follows:
step 501: calculate the pixel offsets Δu = u2 − u1, Δv = v2 − v1, and compensate ROI_B′0 = (x′0, y′0, x′1, y′1) to obtain the new ROI prediction frame ROI_B″0 = (x″0, y″0, x″1, y″1) = (x′0 + Δu, y′0 + Δv, x′1 + Δu, y′1 + Δv);
step 502: determine whether the new region of interest ROI_B″0 exceeds the image boundary, and adjust it if so. With image size h × w, the check and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h);
step 503: execute the corresponding detection algorithm within the ROI_B″0 range of the current image to obtain the precise position of target B0, curt_B0 = (x0, y0, x1, y1); if no target is detected, clear pre_B0 and jump to step 102 for full-image detection;
step 504: after the system finishes processing the current image, it updates pre_B0 = curt_B0 and returns to step 101.
The self-adaptive dynamic ROI positioning method designed by the invention uses historical detection results fused with sensor information to compensate for the target's motion on the image. It avoids the heavy computation of full-image target scanning and the detection of useless areas, solves the cleaning robot's problems on the solar panel of slow computation and poor real-time performance caused by the wide detection range and complex background, as well as the problem of easily losing the target while driving, and greatly improves detection efficiency and stability.
Drawings
FIG. 1 is a schematic diagram of the algorithm of the present invention for locating a target ROI.
FIG. 2 is a schematic diagram of how the target's position in the image changes as the robot moves.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the following is a detailed description of the embodiments of the present invention with reference to the accompanying drawings.
Fig. 1 is a schematic diagram of an embodiment. First, sub-figure A is the image captured by the camera at time t1; a target detection algorithm performs a full-image scan and the results are screened to obtain the target position. At time t2 the second frame of image is captured, and in sub-figure B the position where the target may appear is predicted using the result from the first frame. In sub-figure C, the robot's motion state is computed, motion estimation and compensation are applied to the image, and the position from sub-figure B is corrected. Finally, the target detection algorithm is invoked within the corrected region to track the target, as shown in sub-figure D.
In the first step, a target detection algorithm detects over the whole image and the results are screened to obtain the target position. The implementation steps are as follows:
step 101: the robot system captures a frame of image at a fixed interval or on command and stamps it with a timestamp t; if this is not the first frame of image and the previous frame has a detection result, jump to step 201.
step 102: call the corresponding algorithm according to the target to be detected to perform full-image detection, and screen the results to obtain the target's position-frame information. This embodiment calls a sign detection algorithm and screens out the target sign B0 according to the preset thresholds 100 < h, w < 400 and center ∈ (100, w) ∩ (100, h), recording it as curt_B0 = (x0, y0, x1, y1), where h and w are the image height and width and (x0, y0), (x1, y1) are the coordinates of the target's top-left and bottom-right corners. If this frame yields no detection or no result survives screening, return to step 101 and wait for the next frame of image.
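As a concrete illustration, the screening in step 102 might look like the following Python sketch. The interpretation of the thresholds (box height and width between 100 and 400 pixels, box center inside (100, w) × (100, h)) and the function name are assumptions for illustration, not the patent's reference implementation.

def screen_detections(boxes, img_w, img_h):
    """Keep candidate boxes (x0, y0, x1, y1) whose size and center satisfy
    the preset thresholds of step 102 (assumed semantics)."""
    kept = []
    for x0, y0, x1, y1 in boxes:
        w, h = x1 - x0, y1 - y0
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if 100 < w < 400 and 100 < h < 400 and 100 < cx < img_w and 100 < cy < img_h:
            kept.append((x0, y0, x1, y1))
    return kept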
Step 103: after the system has processed the current image information and the related procedures (e.g., performing path planning, position shifting), the update is performed
Figure BDA0002330018550000051
And then returns to step 101 to wait for the next frame image to be detected.
Secondly, estimating the interested area of the target in the current image according to the target position in the previous frame image, and implementing the following steps:
step 201: the position of target B0 in the previous frame image is pre_B0 = (x0, y0, x1, y1). Expanding pre_B0 by 1.2 times gives ROI_B′0 = (x′0, y′0, x′1, y′1) = (x0 − δx, y0 − δy, x1 + δx, y1 + δy), where δx = 0.6 × (x1 − x0) and δy = 0.6 × (y1 − y0). The expanded ROI_B′0, compared with pre_B0, increases the probability that the current frame image contains B0, improving the detection success rate and speed.
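A minimal sketch of the ROI expansion in step 201 (Python; the function name is illustrative):

def expand_roi(pre_b0):
    """Step 201: expand pre_B0 = (x0, y0, x1, y1) by dx = 0.6*(x1-x0) and
    dy = 0.6*(y1-y0) on each side, giving ROI_B'0."""
    x0, y0, x1, y1 = pre_b0
    dx = 0.6 * (x1 - x0)
    dy = 0.6 * (y1 - y0)
    return (x0 - dx, y0 - dy, x1 + dx, y1 + dy)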
Thirdly, modeling the motion state of the robot, and calculating the position change of the robot in the interval time of the two frames of images, wherein the implementation steps are as follows:
step 301: although the target detected by the solar panel cleaning robot is stationary, the robot's own motion causes the target's position in the image to change. Motion in three-dimensional space is composed of translation along three axes and rotation about three axes, six degrees of freedom in total, so the robot's motion is described by these translations and rotations. As shown in Fig. 2, the center point P of target B0 is mapped to point p1 on image 1 at time t1; after the robot rotates by R and translates by t, it is mapped to point p2 on image 2 at time t2. From pre_B0 = (x0, y0, x1, y1), compute p1 = ((x0 + x1)/2, (y0 + y1)/2);
Step 302: object B0The position change in the two images depends on t1→t2]The rotation R and translation t of the robot in the interval are large and small. In the embodiment, the IMU inertial measurement unit is used for outputting linear acceleration a of the robot along three axial directionsx,ay,azAnd angular velocities w in three directionsx,wy,wzThe invention describes the motion of the robot by only using a few motion parameters, and the storage space and the operation time occupied by the parameters can be basically ignored, the IMU is sampled at a timing interval delta t, and the obtained data is the sum of a real value, a drift value b and a Gaussian error η:
Figure BDA0002330018550000061
the moving instantaneous angular velocity and linear acceleration are expressed as follows:
Figure BDA0002330018550000062
Figure BDA0002330018550000063
step 303: in this embodiment the IMU sampling interval is set to Δt = 10 ms. Combining the physical motion model formulas Ṙ = R ω^, ʷv̇ = ʷa, ʷṗ = ʷv, the rotation R, velocity v, and position p at time t + Δt are calculated as:

R(t+Δt) = R(t) Exp(ω(t) Δt)
ʷv(t+Δt) = ʷv(t) + ʷa(t) Δt
ʷp(t+Δt) = ʷp(t) + ʷv(t) Δt + 0.5 × ʷa(t) Δt²
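One Euler integration step of step 303 can be sketched in Python as below. The inputs omega and acc are assumed to be the bias- and noise-corrected body-frame angular velocity and linear acceleration, and the gravity sign convention is an assumption of this sketch.

import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(a) @ b = cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(phi):
    """Exp: rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)          # first-order approximation near zero
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate(R, v, p, omega, acc, dt, g):
    """One Δt step: R(t+Δt) = R Exp(ω Δt), v(t+Δt) = v + a Δt,
    p(t+Δt) = p + v Δt + 0.5 a Δt², with a the world-frame acceleration."""
    a_w = R @ acc + g                          # rotate body acceleration to world, add gravity
    R_next = R @ so3_exp(omega * dt)
    v_next = v + a_w * dt
    p_next = p + v * dt + 0.5 * a_w * dt * dt
    return R_next, v_next, p_next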
step 304: in conjunction with the angular-velocity and acceleration measurement formulas of step 302, the complete rotation R, velocity v, and position p at time t + Δt are calculated as (the superscript d denotes a discrete value):

R(t+Δt) = R(t) Exp((ω̃(t) − b^g(t) − η^gd(t)) Δt)
ʷv(t+Δt) = ʷv(t) + ʷg Δt + R(t) (ã(t) − b^a(t) − η^ad(t)) Δt
ʷp(t+Δt) = ʷp(t) + ʷv(t) Δt + 0.5 ʷg Δt² + 0.5 R(t) (ã(t) − b^a(t) − η^ad(t)) Δt²
step 305: calculate the robot's motion state at the moment of the second frame of image. Since the IMU's sampling frequency is higher than the image's, IMU data at multiple instants must be accumulated. Suppose j − i IMU samples are taken between the first frame image at t1 and the second frame at t2; accumulating the j − i IMU samples from time t1 gives the state values at time t2:

R_j = R_i ∏(k=i…j−1) Exp((ω̃_k − b^g_k − η^gd_k) Δt)
ʷv_j = ʷv_i + ʷg Δt_ij + Σ(k=i…j−1) R_k (ã_k − b^a_k − η^ad_k) Δt
ʷp_j = ʷp_i + Σ(k=i…j−1) [ ʷv_k Δt + 0.5 ʷg Δt² + 0.5 R_k (ã_k − b^a_k − η^ad_k) Δt² ]

where the subscript j corresponds to t2, because both refer to the same instant;
step 306: compute the change in the robot's motion state over the interval [t1 → t2] between the two frame images. The difference between the state quantities at t1 and t2, i.e. the accumulated changes of all the IMU's rotations ΔR_ij, velocities Δv_ij, and displacements Δp_ij between i and j, is computed as:

ΔR_ij = R_i^T R_j
Δv_ij = R_i^T (ʷv_j − ʷv_i − ʷg Δt_ij)
Δp_ij = R_i^T (ʷp_j − ʷp_i − ʷv_i Δt_ij − 0.5 ʷg Δt_ij²)
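Under the same assumptions, steps 305 and 306 amount to integrating the j − i samples and then forming the relative increments. A sketch reusing propagate from above (imu_samples is a list of bias/noise-corrected (omega_k, acc_k) pairs):

def relative_motion(R_i, v_i, p_i, imu_samples, dt, g):
    """Steps 305-306: integrate the IMU samples between the two frames,
    then return (dR_ij, dv_ij, dp_ij) relative to the first frame's pose."""
    R, v, p = R_i.copy(), v_i.copy(), p_i.copy()
    for omega_k, acc_k in imu_samples:         # j - i samples at interval dt
        R, v, p = propagate(R, v, p, omega_k, acc_k, dt, g)
    t_ij = dt * len(imu_samples)
    dR = R_i.T @ R
    dv = R_i.T @ (v - v_i - g * t_ij)
    dp = R_i.T @ (p - p_i - v_i * t_ij - 0.5 * g * t_ij ** 2)
    return dR, dv, dp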
fourthly, motion estimation and compensation are carried out on the region of interest according to the position change of the robot, and the implementation steps are as follows:
step 401: establish the transformation between the world three-dimensional coordinate system and the image coordinate system. The real world is a three-dimensional space; let a point P be expressed as (xw, yw, zw) in the world coordinate system and as (u, v) in the corresponding image coordinate system. The transformation follows "world coordinate system → camera coordinate system → image physical coordinate system → image pixel coordinate system", where the positional relationship between the camera and world coordinate systems is described by a rotation matrix R and a translation vector t, f denotes the camera focal length, dx and dy denote the unit size of a single pixel along the image rows and columns respectively, and Zc denotes the depth information. The computation is:

Zc [u, v, 1]^T = [[f/dx, 0, u0], [0, f/dy, v0], [0, 0, 1]] [R | t] [xw, yw, zw, 1]^T

where (u0, v0) is the principal point.
step 402: using the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in step 305, the target's position p2 in the second frame's image coordinate system is estimated by integrating the camera model (K is the intrinsic matrix calibrated in advance and not described here):

Zc2 [u2, v2, 1]^T = K (ΔR_ij Zc1 K⁻¹ [u1, v1, 1]^T + Δp_ij)
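As an illustration of step 402 under the pinhole model of step 401, one plausible implementation back-projects p1 with its depth, applies (ΔR_ij, Δp_ij), and re-projects. Treating the depth Zc1 as known (for example from the panel geometry) is an assumption of this sketch, not a statement of the patent.

def estimate_p2(p1, z_c1, K, dR, dp):
    """Warp the frame-1 center p1 = (u1, v1) into frame 2."""
    u1, v1 = p1
    P1 = z_c1 * (np.linalg.inv(K) @ np.array([u1, v1, 1.0]))  # 3D point, frame-1 camera coords
    P2 = dR @ P1 + dp                                          # apply relative rotation + translation
    uvw = K @ P2
    return uvw[0] / uvw[2], uvw[1] / uvw[2]                    # perspective division by Zc2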
fifthly, correcting the region of interest and detecting the accurate position of the target, wherein the implementation steps are as follows:
step 501: calculate the pixel offsets Δu = u2 − u1, Δv = v2 − v1, and compensate the region of interest ROI_B′0 described in step 201 to obtain the new region of interest ROI_B″0 = (x″0, y″0, x″1, y″1) = (x′0 + Δu, y′0 + Δv, x′1 + Δu, y′1 + Δv);
step 502: determine whether the region of interest ROI_B″0 exceeds the image boundary, and adjust it if so. With image size h × w, the check and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h);
step 503: execute the corresponding detection algorithm within the ROI_B″0 range of the current image to obtain the precise position of target B0, curt_B0 = (x0, y0, x1, y1); if no target is detected, clear pre_B0 and jump to step 102 for full-image detection;
step 504: after the system finishes processing the current image, it updates pre_B0 = curt_B0 and returns to step 101.
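Steps 501 to 503 can be sketched as follows (Python; detector is a placeholder for the sign-detection routine, and the integer rounding of the ROI is an implementation detail added here):

def compensate_and_clamp(roi, du, dv, img_w, img_h):
    """Steps 501-502: shift ROI_B'0 by (Δu, Δv), then clamp to the image."""
    x0, y0, x1, y1 = roi
    x0, y0, x1, y1 = x0 + du, y0 + dv, x1 + du, y1 + dv
    x0, y0 = max(x0, 0), max(y0, 0)          # clamp the top-left corner
    x1, y1 = min(x1, img_w), min(y1, img_h)  # clamp the bottom-right corner
    return x0, y0, x1, y1

def detect_in_roi(frame, roi, detector):
    """Step 503: run the detector inside the ROI only; return the box in
    full-image coordinates, or None so the caller falls back to step 102."""
    x0, y0, x1, y1 = (int(round(c)) for c in roi)
    dets = detector(frame[y0:y1, x0:x1])
    if not dets:
        return None
    bx0, by0, bx1, by1 = dets[0]
    return (bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)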
The test comparison results are shown in the following table; the method of the invention clearly improves both accuracy and computational efficiency.
Table 1: Comparison of recognition rate and time consumption

Category      Precision   Recall    Correct   Errors   Avg. time per frame
Before        85.00%      70.12%    12992     2292     81 ms
After         98.24%      93.32%    15000     268      23 ms
Improvement   15.58%      33.09%    15.46%    88.30%   71.60%
The self-adaptive target ROI positioning method disclosed by the invention exploits the fact that the target's position changes only slightly between two frames of images: it combines the previous frame's detection result with the sensor's motion information to compensate for the target's position change and estimates the region of interest where the target may appear in the current image. As a result, the detection region can be reduced by more than 50% (720P camera, 1280 × 720 resolution), the average speed is improved by 70%, background interference is reduced, and the detection precision is improved by about 15%. The invention effectively solves the cleaning robot's problems on the solar panel of slow computation and poor real-time performance caused by the wide detection range and complex background, and of easily losing the target while moving.

Claims (6)

(Translated from Chinese)

1. A self-adaptive target ROI positioning method for a solar panel cleaning robot, characterized in that the specific steps are:
S01: capture the first frame of image, and detect and screen the whole image to obtain the target position;
S02: capture the next frame of image, and estimate the region of interest of the target in the current image from the target position in the previous frame of image;
S03: model the motion state of the robot, and calculate the position change of the robot between the two images;
S04: perform motion estimation and compensation on the region of interest according to the position change of the robot;
S05: correct the region of interest and detect the precise position of the target.

2. The self-adaptive target ROI positioning method for a solar panel cleaning robot according to claim 1, characterized in that detecting and screening the whole image to obtain the target position comprises the following specific steps:
S11: the system captures a frame of image at a fixed interval or on command; if it is not the system's first frame of image and a detection result exists, jump to S21;
S12: call the corresponding algorithm model according to the target to be detected to perform full-image detection, and obtain the target position after screening; the detected position of target B0 is recorded as curt_B0 = (x0, y0, x1, y1), where (x0, y0) and (x1, y1) are the coordinates of the top-left and bottom-right corners of the target respectively; if no target is detected in this frame, return to S11 and wait to detect the next frame of image;
S13: after the system has processed the current image, update pre_B0 = curt_B0 and return to S11 to wait for the next frame of image, where pre_B0 denotes the position of this target in the previous frame of image.

3. The self-adaptive target ROI positioning method for a solar panel cleaning robot according to claim 1, characterized in that estimating the region of interest of the target in the current image from the target position in the previous frame of image comprises the following specific steps:
S21: the position of target B0 in the previous frame of image is pre_B0 = (x0, y0, x1, y1); enlarging pre_B0 by 1.2 times gives the region of interest ROI_B′0 = (x′0, y′0, x′1, y′1) = (x0 − δx, y0 − δy, x1 + δx, y1 + δy), where δx = 0.6 × (x1 − x0) and δy = 0.6 × (y1 − y0) are intermediate variables.

4. The self-adaptive target ROI positioning method for a solar panel cleaning robot according to claim 1, characterized in that modeling the motion state of the robot and calculating the position change of the robot between the two images comprises the following specific steps:
S31: from pre_B0 = (x0, y0, x1, y1), calculate the center of target B0's position in the previous frame of image, p1 = ((x0 + x1)/2, (y0 + y1)/2);
S32: sample the IMU (inertial measurement unit) at a fixed interval Δt to obtain the robot's linear accelerations ax, ay, az along the three axes and angular velocities wx, wy, wz about the three directions, and then solve for the rotation matrix R and translation vector t; since Δt is very short, assume the angular velocity and linear acceleration within Δt remain unchanged; combined with the physical motion model formulas Ṙ = R ω^, ʷv̇ = ʷa, ʷṗ = ʷv (here the superscript dot denotes the derivative), calculate the rotation R, velocity v, and position p at time t + Δt as follows:
R(t+Δt) = R(t) Exp((ω̃(t) − b^g(t) − η^gd(t)) Δt)
ʷv(t+Δt) = ʷv(t) + ʷg Δt + R(t) (ã(t) − b^a(t) − η^ad(t)) Δt
ʷp(t+Δt) = ʷp(t) + ʷv(t) Δt + 0.5 ʷg Δt² + 0.5 R(t) (ã(t) − b^a(t) − η^ad(t)) Δt²
where the superscript d denotes a discrete value, g denotes the gravitational acceleration, b is the drift value, and η is the Gaussian error;
S33: calculate the motion state of the robot corresponding to the second frame of image; since the sampling frequency of the IMU is higher than that of the image, accumulate the IMU data at multiple instants; supposing j − i IMU samples are taken between the first frame of image at t1 and the second frame at t2, accumulating the j − i IMU samples from time t1 gives the state values at time t2:
R_j = R_i ∏(k=i…j−1) Exp((ω̃_k − b^g_k − η^gd_k) Δt)
ʷv_j = ʷv_i + ʷg Δt_ij + Σ(k=i…j−1) R_k (ã_k − b^a_k − η^ad_k) Δt
ʷp_j = ʷp_i + Σ(k=i…j−1) [ ʷv_k Δt + 0.5 ʷg Δt² + 0.5 R_k (ã_k − b^a_k − η^ad_k) Δt² ]
where the subscript j equals t2, because both correspond to the same instant;
S34: calculate the change in the robot's motion state between the two images over the interval [t1 → t2]; solve for the difference between the state quantities at t1 and t2, i.e. accumulate the changes in all the IMU's rotations ΔR_ij, velocities Δv_ij, and displacements Δp_ij between i and j, with the formulas:
ΔR_ij = R_i^T R_j
Δv_ij = R_i^T (ʷv_j − ʷv_i − ʷg Δt_ij)
Δp_ij = R_i^T (ʷp_j − ʷp_i − ʷv_i Δt_ij − 0.5 ʷg Δt_ij²)

5. The self-adaptive target ROI positioning method for a solar panel cleaning robot according to claim 1, characterized in that performing motion estimation and compensation on the region of interest according to the position change of the robot comprises the following specific steps:
S41: establish the transformation between the world three-dimensional coordinate system and the image coordinate system; supposing a point P is expressed as (xw, yw, zw) in the world coordinate system and as (u, v) in the corresponding image coordinate system, the transformation follows "world coordinate system to camera coordinate system, to image physical coordinate system, to image pixel coordinate system", with the formula:
Zc [u, v, 1]^T = [[f/dx, 0, u0], [0, f/dy, v0], [0, 0, 1]] [R | t] [xw, yw, zw, 1]^T
where the position change between the camera and world coordinate systems is described by the rotation matrix R and translation vector t, f denotes the camera focal length, dx and dy denote the unit size of a single pixel along the image rows and columns respectively, and Zc denotes the depth information;
S42: integrate the robot pose change (ΔR_ij, Δv_ij, Δp_ij) calculated in S34 into the camera model to estimate the target's position p2 in the second frame's image coordinate system:
Zc2 [u2, v2, 1]^T = K (ΔR_ij Zc1 K⁻¹ [u1, v1, 1]^T + Δp_ij)
where K is the intrinsic matrix calibrated in advance.

6. The self-adaptive target ROI positioning method for a solar panel cleaning robot according to claim 1, characterized in that correcting the region of interest and detecting the precise position of the target comprises the following specific steps:
S51: calculate the pixel offsets Δu = u2 − u1, Δv = v2 − v1, and compensate the region of interest ROI_B′0 described in S21 to obtain the new region of interest ROI_B″0 = (x″0, y″0, x″1, y″1) = (x′0 + Δu, y′0 + Δv, x′1 + Δu, y′1 + Δv);
S52: determine whether the region of interest ROI_B″0 exceeds the image boundary, and adjust it if so; with image size h × w, the check and adjustment are: x″0 = max(x″0, 0); y″0 = max(y″0, 0); x″1 = min(x″1, w); y″1 = min(y″1, h);
S53: execute the corresponding detection algorithm within the ROI_B″0 range of the current image to obtain the precise position of target B0, curt_B0 = (x0, y0, x1, y1); if no target is detected, clear pre_B0 and jump to S12 for full-image detection;
S54: after the system has processed the current image, update pre_B0 = curt_B0 and return to S11.
CN201911332440.6A, filed 2019-12-22 (priority 2019-12-22), Active, granted as CN111144406B (en): A self-adaptive target ROI positioning method for a solar panel cleaning robot

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201911332440.6A | 2019-12-22 | 2019-12-22 | A self-adaptive target ROI positioning method for a solar panel cleaning robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201911332440.6A | 2019-12-22 | 2019-12-22 | A self-adaptive target ROI positioning method for a solar panel cleaning robot

Publications (2)

Publication Number | Publication Date
CN111144406A | 2020-05-12
CN111144406B | 2023-05-02

Family ID: 70519292

Family Applications (1)

Application Number | Status | Granted As | Title
CN201911332440.6A | Active | CN111144406B (en) | A self-adaptive target ROI positioning method for a solar panel cleaning robot

Country Status (1)

Country | Link
CN | CN111144406B (en)


Also Published As

Publication Number | Publication Date
CN111144406B (en) | 2023-05-02


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
