Unmanned system flexible collaboration method for intelligent seal-testing scenarios
Technical Field
The invention belongs to the fields of computer vision and swarm intelligence, relates to key technologies such as lens-shake elimination, binocular positioning and ranging, and precise flexible operation of a mechanical arm, and in particular provides an unmanned system flexible collaboration method for intelligent seal-testing scenarios.
Background
With the continuous improvement of industrial automation and the rapid development of artificial intelligence, unmanned systems are being applied ever more widely across many fields. In seal-testing scenarios in particular, the demand for accurate positioning, ranging and flexible operation keeps growing, while traditional seal-testing setups suffer from lens shake, inaccurate positioning and ranging, and similar problems that degrade the flexible collaboration capability and operating efficiency of the unmanned system.
Disclosure of Invention
The invention aims to provide an unmanned system flexible collaboration method for intelligent seal-testing scenarios that solves the problems of lens shake and inaccurate positioning and ranging found in traditional seal-testing scenarios, and to improve the flexible collaboration capability of the unmanned system so as to meet the demand of seal-testing scenarios for accurate operation.
In order to achieve the above purpose, the invention adopts the following technical scheme:
An unmanned system flexible collaboration method for intelligent seal-testing scenarios comprises the following steps:
Step 1, calibrating the coordinates of the three cameras in the intelligent seal-testing scenario, acquiring the image captured by each camera in real time, and preprocessing each image to obtain the feature matrix of the acquired image;
Step 2, for each camera, performing shake discrimination according to the feature matrix of the acquired image, and performing shake correction on acquired images in which shake is detected;
Step 3, calculating the shake weight of each camera at the current moment according to the shake discrimination result;
Step 4, calculating the pose weight of each camera at the current moment according to the shake-corrected acquired image;
Step 5, pairing the three cameras in sequence to form three binocular ranging systems, measuring the coordinates and the size of the target object with each binocular ranging system, and then fusing the measurement results of the three binocular ranging systems according to the shake weight and the pose weight of the main camera of each system to obtain the coordinates and the size of the target object at the current moment, thereby completing flexible collaborative control.
Further, in step 1, the preprocessing process is as follows:
For each frame of image, its gray matrix is expressed as:
$$G=\left[g_{m,n}\right]_{M\times N},$$
where $g_{m,n}$ denotes the gray value at the m-th row and n-th column of the gray matrix, $m=1,2,\dots,M$, $n=1,2,\dots,N$, $M$ denotes the number of rows of the gray matrix, and $N$ denotes the number of columns of the gray matrix;
According to the gray matrix $G$, the lateral differential matrix $D^{x}$ of the image is computed, specifically:
$$D^{x}=\left[d^{x}_{m,n}\right]_{M\times (N-1)},$$
where the element in the m-th row and n-th column is expressed as:
$$d^{x}_{m,n}=g_{m,n+1}-g_{m,n};$$
According to the gray matrix $G$, the longitudinal differential matrix $D^{y}$ of the image is computed, specifically:
$$D^{y}=\left[d^{y}_{m,n}\right]_{(M-1)\times N},$$
where the element in the m-th row and n-th column is expressed as:
$$d^{y}_{m,n}=g_{m+1,n}-g_{m,n};$$
The lateral differential matrix and the longitudinal differential matrix are combined to obtain the feature matrix $F$ corresponding to the image, specifically:
$$F=\begin{bmatrix}\begin{bmatrix}D^{x} & \mathbf{0}_{M\times 1}\end{bmatrix}\\[4pt]\begin{bmatrix}D^{y}\\ \mathbf{0}_{1\times N}\end{bmatrix}\end{bmatrix},$$
where $\mathbf{0}_{M\times 1}$ and $\mathbf{0}_{1\times N}$ denote all-zero matrices of the indicated dimensions, used to pad the two differential matrices to the common size $M\times N$ before stacking.
Further, in step 2, the moment is marked by the superscript t; for the current moment t, the shake discrimination feature $S^{t}$ is calculated, specifically:
$$S^{t}=F^{t}\oplus F^{t+1},$$
where $S^{t}$ denotes the shake discrimination feature of the image acquired by the camera at the current time t, $F^{t}$ and $F^{t+1}$ denote the feature matrices of the images acquired by the camera at time t and time t+1 respectively, and $\oplus$ denotes the element-wise exclusive-or operation on matrices;
A shake discrimination time window K is set, and shake discrimination is carried out on the camera according to the shake discrimination features; the discrimination condition is:
$$\sum_{k=t-K+1}^{t}\left\|S^{k}\right\|\ \ge\ \delta,$$
where $\delta$ denotes the shake discrimination threshold and $\left\|\cdot\right\|$ denotes the modulus operation;
If the discrimination condition is met, the camera is considered to have shaken at time t; shake correction is performed on the image, and the shake moment is recorded, in order, in the shake moment set T; otherwise, no shake is considered to have occurred.
Further, in step 2, the shake correction is based on inverse filtering:
$$\tilde{G}^{t}=\mathrm{IF}\!\left(G^{t}\right),$$
where $G^{t}$ denotes the gray matrix of the image acquired by the camera at time t, $\mathrm{IF}(\cdot)$ denotes the inverse filtering operation, $\tilde{G}^{t}$ denotes the inverse filtering result of the gray matrix $G^{t}$, and the shake-correction result $\hat{G}^{t}$ of the gray matrix is obtained from this inverse filtering result.
Further, in step 3, the shake weight $w^{t}$ of the camera at the current time t is calculated:
$$w^{t}=\begin{cases}1, & t\notin T,\\[4pt]\dfrac{t'-t}{K}, & t\in T,\end{cases}$$
where T denotes the set of shake moments, $t\notin T$ indicates that the current time t does not belong to the shake set T, $t\in T$ indicates that the current time t belongs to the shake set T, $t'$ denotes the next moment after time t at which shake occurs in the shake set T, and K denotes the shake discrimination time window.
Further, in step 4, for the image acquired at the current time t, boundary extraction is performed on the target object, and the boundary pixel set $B^{t}$ is obtained in clockwise order starting from the upper boundary; meanwhile, a square frame is used to enclose the target object, and the frame pixel set $R^{t}$ is obtained in clockwise order starting from the upper boundary. The coincidence degree of the boundary pixel set and the frame pixel set is then calculated to obtain the pose estimation vector $p^{t}$ of the target object at time t, which is built from the coincidence degree of $B^{t}$ and $R^{t}$, a two-dimensional rotation matrix, the parameter vector of the straight line fitted to the arc, the modulus operation, and the exclusive-nor (XNOR) operation on vectors;
The pose weight $\alpha^{t}$ of the current camera is then calculated from the pose estimation vector $p^{t}$.
Further, in step 5, the three cameras are numbered sequentially in the clockwise direction and paired to form three binocular ranging systems: in the 1st binocular ranging system, the 1st camera serves as the main camera and the 2nd camera as the auxiliary camera; in the 2nd binocular ranging system, the 2nd camera serves as the main camera and the 3rd camera as the auxiliary camera; in the 3rd binocular ranging system, the 3rd camera serves as the main camera and the 1st camera as the auxiliary camera. For the current time t, the three binocular ranging systems respectively measure the coordinates and the size of the target object (indexed by j = 1, 2, 3);
The measurement results of the three binocular ranging systems are fused according to the pose weight and the shake weight of each camera to obtain the coordinates and the size of the target object at the current time t. In the fusion, each system's measured coordinates and size are weighted according to the pose weight $\alpha_{j}^{t}$ and the shake weight $w_{j}^{t}$ of its main camera (the j-th camera) at time t, together with the distance between the main camera and the target object measured by the j-th binocular ranging system.
Based on the above technical solution, the invention has the following beneficial effects:
The invention provides an unmanned system flexible collaboration method for intelligent seal-testing scenarios that combines real-time image preprocessing, shake discrimination and correction, shake and pose weight calculation, and data fusion across binocular ranging systems. This ensures the consistency and accuracy of the data the cameras acquire, allows external disturbances to be detected and corrected in time, and makes the measurement data more accurate. The invention therefore improves image acquisition stability, enhances measurement precision and realizes flexible collaborative control, and can meet the demand of seal-testing scenarios for accurate operation.
Drawings
Fig. 1 is a schematic flow chart of the unmanned system flexible collaboration method for intelligent seal-testing scenarios.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples.
This embodiment provides an unmanned system flexible collaboration method for intelligent seal-testing scenarios; its flow is shown in Fig. 1, and the method specifically comprises the following steps:
Step 1, the intelligent seal-testing scenario contains three cameras; the coordinates of the three cameras are calibrated, the image captured by each camera is acquired in real time, and the images are preprocessed to obtain the feature matrix of each acquired image;
For each frame of image, its gray matrix is expressed as:
$$G=\left[g_{m,n}\right]_{M\times N},$$
where $g_{m,n}$ denotes the gray value at the m-th row and n-th column of the gray matrix, $m=1,2,\dots,M$, $n=1,2,\dots,N$, $M$ denotes the number of rows of the gray matrix, and $N$ denotes the number of columns of the gray matrix;
According to the gray matrix $G$, the lateral differential matrix $D^{x}$ of the image is computed, specifically:
$$D^{x}=\left[d^{x}_{m,n}\right]_{M\times (N-1)},$$
where the element in the m-th row and n-th column is expressed as:
$$d^{x}_{m,n}=g_{m,n+1}-g_{m,n};$$
According to the gray matrix $G$, the longitudinal differential matrix $D^{y}$ of the image is computed, specifically:
$$D^{y}=\left[d^{y}_{m,n}\right]_{(M-1)\times N},$$
where the element in the m-th row and n-th column is expressed as:
$$d^{y}_{m,n}=g_{m+1,n}-g_{m,n};$$
The lateral differential matrix and the longitudinal differential matrix are combined to obtain the feature matrix $F$ corresponding to the image, specifically:
$$F=\begin{bmatrix}\begin{bmatrix}D^{x} & \mathbf{0}_{M\times 1}\end{bmatrix}\\[4pt]\begin{bmatrix}D^{y}\\ \mathbf{0}_{1\times N}\end{bmatrix}\end{bmatrix},$$
where $\mathbf{0}_{M\times 1}$ and $\mathbf{0}_{1\times N}$ denote all-zero matrices of the indicated dimensions, used to pad the two differential matrices to the common size $M\times N$ before stacking;
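Purely as an illustration of step 1, the Python sketch below builds such a feature matrix; the forward-difference scheme, the zero-padding layout and the vertical stacking are assumptions of this sketch rather than limitations of the method.

```python
import numpy as np

def feature_matrix(gray: np.ndarray) -> np.ndarray:
    """Build the feature matrix of one M x N grayscale frame.

    Assumption of this sketch: forward differences, zero-padded so that both
    differential matrices are M x N, then stacked vertically into 2M x N.
    """
    g = gray.astype(np.float32)
    m, n = g.shape
    d_x = g[:, 1:] - g[:, :-1]                              # lateral differences, M x (N-1)
    d_y = g[1:, :] - g[:-1, :]                              # longitudinal differences, (M-1) x N
    d_x = np.hstack([d_x, np.zeros((m, 1), np.float32)])    # pad with an all-zero column
    d_y = np.vstack([d_y, np.zeros((1, n), np.float32)])    # pad with an all-zero row
    return np.vstack([d_x, d_y])                            # 2M x N feature matrix

# Example on a synthetic 4 x 5 frame
frame = np.arange(20, dtype=np.uint8).reshape(4, 5)
print(feature_matrix(frame).shape)   # (8, 5)
```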
Step 2, for each camera, shake discrimination is performed according to the feature matrix of the acquired image, and shake correction is performed on acquired images in which shake is detected;
For each camera, the moment is marked by the superscript t; for the current moment t, the shake discrimination feature $S^{t}$ is calculated, specifically:
$$S^{t}=F^{t}\oplus F^{t+1},$$
where $S^{t}$ denotes the shake discrimination feature of the image acquired by the camera at the current time t, $F^{t}$ and $F^{t+1}$ denote the feature matrices of the images acquired by the camera at time t and time t+1 respectively, and $\oplus$ denotes the element-wise exclusive-or operation on matrices;
A shake discrimination time window K is set, and shake discrimination is carried out on the camera according to the shake discrimination features; the discrimination condition is:
$$\sum_{k=t-K+1}^{t}\left\|S^{k}\right\|\ \ge\ \delta,$$
where $\delta$ denotes the shake discrimination threshold, specifically an empirical constant preset according to the application scenario, and $\left\|\cdot\right\|$ denotes the modulus operation; in this embodiment the shake discrimination time window K takes the value 4;
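As an illustration of the discrimination just described, the following Python sketch quantizes the feature matrices to integers before the exclusive-or and sums the feature magnitudes over the window; the quantization, the summed-magnitude form of the criterion and the threshold value are assumptions of this sketch.

```python
import numpy as np

K = 4            # shake discrimination time window (value used in this embodiment)
DELTA = 1e4      # empirical shake discrimination threshold (illustrative value)

def shake_feature(feat_t: np.ndarray, feat_t1: np.ndarray) -> np.ndarray:
    """Shake discrimination feature: element-wise XOR of the feature matrices at
    time t and t+1, quantized to integers first (assumption of this sketch)."""
    return np.bitwise_xor(feat_t.astype(np.int32), feat_t1.astype(np.int32))

def is_shaking(recent_features: list) -> bool:
    """Assumed window criterion: the summed magnitude (modulus) of the shake
    discrimination features over the last K moments reaches the threshold."""
    window = recent_features[-K:]
    return sum(float(np.abs(s).sum()) for s in window) >= DELTA
```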
If the discrimination condition is met, the camera is considered to have shaken at time t; shake correction is performed on the image, and the shake moment is recorded, in order, in the shake moment set T; otherwise, no shake is considered to have occurred. The shake correction is based on inverse filtering:
$$\tilde{G}^{t}=\mathrm{IF}\!\left(G^{t}\right),$$
where $G^{t}$ denotes the gray matrix of the image acquired by the camera at time t, $\mathrm{IF}(\cdot)$ denotes the inverse filtering (deconvolution) operation, and $\tilde{G}^{t}$ denotes the inverse filtering result of the gray matrix $G^{t}$, specifically the gray matrix of the de-ghosted image obtained by the inverse filtering; the shake-correction result $\hat{G}^{t}$ of the gray matrix is obtained from this inverse filtering result;
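The inverse filtering itself can be realized with standard frequency-domain deconvolution; the sketch below uses a regularized (Wiener-type) inverse filter with an assumed horizontal motion-blur point spread function, both of which are choices of this example rather than of the embodiment.

```python
import numpy as np

def inverse_filter(gray: np.ndarray, psf: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """De-ghost a gray matrix by regularized (Wiener-type) inverse filtering:
    G_tilde = IFFT( FFT(G) * conj(H) / (|H|^2 + eps) ), H = FFT of the padded PSF."""
    g = gray.astype(np.float64)
    psf_pad = np.zeros_like(g)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    H = np.fft.fft2(psf_pad)
    restored = np.fft.ifft2(np.fft.fft2(g) * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.clip(np.real(restored), 0, 255)

# Example: a 1 x 5 horizontal motion-blur kernel assumed as the ghosting model
psf = np.ones((1, 5), np.float64) / 5.0
blurred = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.float64)
deghosted = inverse_filter(blurred, psf)
```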
Step 3, the shake weight of each camera at the current moment is calculated according to the shake discrimination result;
For each camera, the shake weight of the camera at the current time t is calculated:
$$w^{t}=\begin{cases}1, & t\notin T,\\[4pt]\dfrac{t'-t}{K}, & t\in T,\end{cases}$$
where $t\notin T$ indicates that the current time t does not belong to the shake set T, i.e. the camera does not shake at the current moment; $t\in T$ indicates that the current time t belongs to the shake set T, i.e. the camera shakes at the current moment; $t'$ denotes the next moment after time t at which shake occurs in the shake set T; and K denotes the shake discrimination time window;
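Because the closed-form weight is not reproduced above, the following sketch only illustrates one plausible realization consistent with the description: full weight when the current moment is not in the shake set, otherwise a weight that shrinks as the next recorded shake moment gets closer, normalized by the window K. The exact expression should be treated as an assumption of this sketch.

```python
def shake_weight(t: int, shake_moments: set, K: int = 4) -> float:
    """Assumed form of the shake weight of one camera at time t (see text above)."""
    if t not in shake_moments:
        return 1.0                                   # no shake at the current moment
    later = [m for m in shake_moments if m > t]      # shake moments after t in the set T
    if not later:
        return 1.0 / K                               # no later shake recorded (assumption)
    t_next = min(later)                              # next shake moment t' after t
    return min(t_next - t, K) / K                    # nearer next shake -> smaller weight

# Example: shakes recorded at moments 10, 12 and 20
print(shake_weight(12, {10, 12, 20}))   # 1.0 (t' = 20, interval >= K)
print(shake_weight(10, {10, 12, 20}))   # 0.5 (t' = 12, interval 2, K = 4)
print(shake_weight(15, {10, 12, 20}))   # 1.0 (no shake at t = 15)
```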
Step 4, the pose weight of each camera at the current moment is calculated according to the shake-corrected acquired image;
For each camera, the moment is marked by the superscript t. For the acquired image (gray matrix) at the current time t, boundary extraction is performed on the target object, and the boundary pixel set $B^{t}$ is obtained in clockwise order starting from the upper boundary; it should be noted that the boundary extraction process is a well-known technique in the art and is not described again here. Meanwhile, a square frame is used to enclose the target object, and the frame pixel set $R^{t}$ is obtained in clockwise order starting from the upper boundary. The pose estimation vector $p^{t}$ of the target object at time t is obtained by calculating the coincidence degree of the boundary pixel set and the frame pixel set; it is built from the coincidence degree of $B^{t}$ and $R^{t}$, a two-dimensional rotation matrix, the parameter vector of the straight line fitted to the arc, the modulus operation, and the exclusive-nor (XNOR) operation on vectors;
The pose weight $\alpha^{t}$ of the current camera is then calculated from the pose estimation vector $p^{t}$;
Step 5, the three cameras are paired in sequence to form three binocular ranging systems; each binocular ranging system measures the coordinates and the size of the target object, and the measurement results of the three binocular ranging systems are fused according to the shake weight and the pose weight of the main camera of each system to obtain the coordinates and the size of the target object at the current moment, completing flexible collaborative control;
The three cameras are numbered sequentially in the clockwise direction and paired to form three binocular ranging systems: in the 1st binocular ranging system, the 1st camera serves as the main camera and the 2nd camera as the auxiliary camera; in the 2nd binocular ranging system, the 2nd camera serves as the main camera and the 3rd camera as the auxiliary camera; in the 3rd binocular ranging system, the 3rd camera serves as the main camera and the 1st camera as the auxiliary camera. For the current time t, the three binocular ranging systems respectively measure the coordinates and the size of the target object (indexed by j = 1, 2, 3); it should be noted that the measurement process of a binocular ranging system is a well-known technique in the art and is not described again here;
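Such binocular measurement is ordinarily realized with standard rectified stereo triangulation; a minimal sketch is given below, in which the focal length, baseline, principal point and pixel coordinates are illustrative values.

```python
def binocular_measure(u_main: float, v_main: float, u_aux: float,
                      f: float, baseline: float, cx: float, cy: float):
    """Standard rectified stereo triangulation (illustrative parameters).

    u/v are pixel coordinates of the target centre in the main and auxiliary
    cameras; returns (X, Y, Z) in the main-camera frame."""
    disparity = u_main - u_aux
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    z = f * baseline / disparity
    x = (u_main - cx) * z / f
    y = (v_main - cy) * z / f
    return x, y, z

# Example with assumed intrinsics: f = 800 px, baseline = 0.12 m
print(binocular_measure(660.0, 410.0, 620.0, f=800.0, baseline=0.12, cx=640.0, cy=400.0))
```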
The measurement results of the three binocular ranging systems are fused according to the pose weight and the shake weight of each camera to obtain the coordinates and the size of the target object at the current time t. In the fusion, each system's measured coordinates and size are weighted according to the pose weight $\alpha_{j}^{t}$ and the shake weight $w_{j}^{t}$ of its main camera (the j-th camera) at time t, together with the distance between the main camera and the target object measured by the j-th binocular ranging system;
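Because the exact fusion expressions are not reproduced above, the sketch below only shows an assumed normalized weighted average in which each binocular ranging system's measurement is weighted by the product of its main camera's pose weight and shake weight at time t.

```python
import numpy as np

def fuse_measurements(coords, sizes, pose_w, shake_w):
    """Fuse the three binocular measurements (assumed weighted-average form).

    coords:  list of three (x, y, z) tuples, one per binocular ranging system.
    sizes:   list of three scalar size measurements.
    pose_w, shake_w: pose and shake weights of each system's main camera at time t.
    """
    w = np.array([p * s for p, s in zip(pose_w, shake_w)], dtype=float)
    w = w / w.sum()                                          # normalize the combined weights
    fused_xyz = tuple(np.average(np.array(coords), axis=0, weights=w))
    fused_size = float(np.average(np.array(sizes), weights=w))
    return fused_xyz, fused_size

# Example with three hypothetical measurements
coords = [(0.51, 0.20, 1.98), (0.49, 0.21, 2.02), (0.50, 0.19, 2.05)]
sizes = [0.031, 0.030, 0.033]
print(fuse_measurements(coords, sizes, pose_w=[0.9, 0.8, 0.7], shake_w=[1.0, 0.5, 1.0]))
```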
Thereby, according to the coordinates and the size of the target object at the current time t, the flexible collaborative control of the unmanned system can be completed.
While the invention has been described in terms of specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the equivalent or similar purpose, unless expressly stated otherwise; all of the features disclosed, or all of the steps in a method or process, except for mutually exclusive features and/or steps, may be combined in any manner.