CN118797214B - Unmanned system flexible collaboration method for intelligent seal detection scene - Google Patents

Unmanned system flexible collaboration method for intelligent seal detection scene

Info

Publication number
CN118797214B
CN118797214B (application CN202411260730.5A)
Authority
CN
China
Prior art keywords
camera
jitter
time
matrix
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411260730.5A
Other languages
Chinese (zh)
Other versions
CN118797214A (en)
Inventor
解维坤
王厚军
戴志坚
杨万渝
邓可为
李荣杰
武睿
朱睿琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202411260730.5A
Publication of CN118797214A
Application granted
Publication of CN118797214B
Legal status: Active
Anticipated expiration

Abstract

The invention belongs to the field of computer vision and swarm intelligence, and provides a flexible collaboration method for an unmanned system oriented to an intelligent seal detection scene, which solves problems such as lens jitter and inaccurate positioning and ranging in traditional seal detection scenes. First, the captured image of each camera is acquired in real time and preprocessed to obtain a feature matrix. Then, for each camera, jitter discrimination is performed according to the feature matrix, jitter correction is applied to captured images in which jitter occurs, and the jitter weight and pose weight of the camera at the current time are calculated. Finally, the measurement results of three binocular ranging systems are fused according to the jitter weight and pose weight of the main camera in each binocular ranging system, yielding the coordinates and size of the target object at the current time, so as to complete flexible cooperative control. In summary, the invention improves the flexible collaboration capability of the unmanned system and thereby meets the requirement of seal detection scenes for accurate operation.

Description

Unmanned system flexible collaboration method for intelligent seal detection scene
Technical Field
The invention belongs to the fields of computer vision and swarm intelligence, relates to key technologies such as lens jitter elimination, binocular positioning and ranging, and accurate flexible operation of mechanical arms, and particularly provides a flexible collaboration method for an unmanned system oriented to intelligent seal detection scenes.
Background
With the continuous improvement of industrial automation and the rapid development of artificial intelligence technology, unmanned systems are being applied ever more widely in various fields. In seal detection scenes in particular, the demand for accurate positioning, ranging, and flexible operation keeps growing. Traditional seal detection scenes suffer from problems such as lens jitter and inaccurate positioning and ranging, which limit the flexible collaboration capability and operating efficiency of the unmanned system.
Disclosure of Invention
The invention aims to provide a flexible collaboration method for an unmanned system oriented to intelligent seal detection scenes, which solves problems such as lens jitter and inaccurate positioning and ranging in traditional seal detection scenes and improves the flexible collaboration capability of the unmanned system, so as to meet the requirement of seal detection scenes for accurate operation.
In order to achieve the above purpose, the invention adopts the following technical scheme:
A flexible collaboration method for an unmanned system oriented to intelligent seal detection scenes comprises the following steps:
Step 1, calibrate the coordinates of the three cameras in the intelligent seal detection scene, acquire the captured image of each camera in real time, and preprocess each image to obtain the feature matrix of the captured image;
Step 2, for each camera, perform jitter discrimination according to the feature matrix of the captured image, and perform jitter correction on captured images in which jitter occurs;
Step 3, for each camera, calculate the jitter weight of the camera at the current time according to the jitter discrimination result;
Step 4, for each camera, calculate the pose weight of the camera at the current time according to the jitter-corrected captured image;
Step 5, pair the three cameras in order to form three binocular ranging systems, measure the coordinates and size of the target object with each binocular ranging system, and then fuse the measurement results of the three binocular ranging systems according to the jitter weight and pose weight of the main camera in each binocular ranging system, obtaining the coordinates and size of the target object at the current time so as to complete flexible cooperative control.
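For orientation, a minimal control-flow sketch of these five steps is given below in Python. Every function body here is an explicitly labeled placeholder; the names, signatures, and data layout are illustrative assumptions and not part of the patent text. Step-by-step sketches with more detail follow in the detailed description.

```python
import numpy as np

# Clockwise camera numbering; each (main, auxiliary) pair is one binocular ranging system.
BINOCULAR_PAIRS = [(0, 1), (1, 2), (2, 0)]

def preprocess(gray):                       # step 1 -> feature matrix D
    return gray.astype(np.float32)          # placeholder

def correct_if_jittered(gray, feature, t):  # step 2 -> corrected gray matrix
    return gray                             # placeholder

def jitter_weight(t, jitter_times, K=4):    # step 3 -> jitter weight s^t
    return 1.0                              # placeholder

def pose_weight(gray):                      # step 4 -> pose weight
    return 1.0                              # placeholder

def binocular_measure(main, aux):           # step 5a -> (x, y, z), o, d
    return (0.0, 0.0, 0.0), 1.0, 1.0        # placeholder

def fuse(measures, s, g):                   # step 5b -> fused coordinates and size
    coords = np.mean([m[0] for m in measures], axis=0)
    size = float(np.mean([m[1] for m in measures]))
    return tuple(coords), size              # placeholder (unweighted mean)

def control_cycle(frames, jitter_times, t):
    """One cycle of the five-step flow over the three calibrated cameras."""
    corrected, s, g = [], [], []
    for cam, gray in enumerate(frames):
        D = preprocess(gray)                                  # step 1
        corrected.append(correct_if_jittered(gray, D, t))     # step 2
        s.append(jitter_weight(t, jitter_times[cam]))         # step 3
        g.append(pose_weight(corrected[cam]))                 # step 4
    measures = [binocular_measure(corrected[m], corrected[a]) # step 5
                for m, a in BINOCULAR_PAIRS]
    return fuse(measures, s, g)
```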
Further, in step 1, the preprocessing process is as follows:
For each frame of image, its gray matrix is expressed as I, where a_{m,n} denotes the gray value in row m, column n of the gray matrix, m = 1, 2, ..., M, n = 1, 2, ..., N, M denotes the number of rows and N the number of columns of the gray matrix;
According to the gray matrix I, the lateral differential matrix I_1 of the image is computed, whose element b_{m,n} in row m, column n is the lateral gray-level difference at that position;
According to the gray matrix I, the longitudinal differential matrix I_2 of the image is computed, whose element c_{m,n} in row m, column n is the longitudinal gray-level difference at that position;
The lateral and longitudinal differential matrices are combined to obtain the feature matrix D corresponding to the image:
D = [I_1 | 0_{M×N}] + [0_{M×N} | I_2],
where 0_{M×N} denotes an all-zero matrix of dimension M×N.
Further, in step 2, the superscript t marks the time. For the current time t, the jitter discrimination feature Sa^t is calculated as
Sa^t = D^t ⊕ D^{t+1},
where Sa^t denotes the jitter discrimination feature of the image captured by the camera at the current time t, D^t and D^{t+1} denote the feature matrices of the images captured by the camera at time t and time t+1 respectively, and ⊕ denotes the element-wise exclusive-or (XOR) operation on matrices;
A jitter discrimination time window K is set, and jitter discrimination is performed on the camera according to the jitter discrimination feature; the discrimination condition compares the magnitude of the jitter discrimination features over the window against the jitter discrimination threshold S_th, where ‖·‖ denotes the modulo (norm) operation;
If the discrimination condition is satisfied, the camera is considered to jitter at time t, jitter correction is performed on the image, and the jitter occurrence time is recorded in order in the jitter time set T; otherwise, it is determined that no jitter has occurred.
Further, in step 2, the jitter correction is expressed through inverse filtering: I^t denotes the gray matrix of the image captured by the camera at time t, Î^t denotes the jitter correction result of the gray matrix I^t, and f(·) denotes the inverse filtering operation, so that the correction result is obtained from the result of inverse filtering the gray matrix I^t.
Further, in step 3, the jitter weight s^t of the camera at the current time t is calculated from the jitter discrimination result, where T denotes the jitter time set, t ∉ T indicates that the current time t does not belong to the jitter time set T, t ∈ T indicates that the current time t belongs to the jitter time set T, t_next denotes the time at which jitter next occurs after time t in the jitter time set T, and K denotes the jitter discrimination time window.
Further, in step 4, for the image captured at the current time t, the boundary of the target object is extracted, and the boundary pixel set is obtained in clockwise order starting from the upper boundary; meanwhile, the target object is framed with a square bounding box, and the box pixel set is obtained in clockwise order starting from the upper boundary; the coincidence degree of the boundary pixel set and the box pixel set is calculated to obtain the pose estimation vector G^t of the target object,
where G^t denotes the pose estimation vector of the target object at time t, the coincidence degree is that of the boundary pixel set and the box pixel set, R denotes a rotation matrix in two-dimensional rotation form, P denotes the parameter vector of a straight line fitted to the arc, ‖·‖ denotes the modulo operation, and ⊙ denotes the element-wise exclusive-nor (XNOR) operation on vectors;
The pose weight of the current camera is then calculated from the pose estimation vector G^t.
Further, in step 5, the three cameras are numbered in clockwise order and paired to form binocular ranging systems: in the 1st binocular ranging system the 1st camera serves as the main camera and the 2nd camera as the auxiliary camera; in the 2nd binocular ranging system the 2nd camera serves as the main camera and the 3rd camera as the auxiliary camera; in the 3rd binocular ranging system the 3rd camera serves as the main camera and the 1st camera as the auxiliary camera. For the current time t, the three binocular ranging systems respectively measure the coordinates (x_j, y_j, z_j) and size o_j of the target object, j = 1, 2, 3;
The measurement results of the three binocular ranging systems are fused according to the pose weight and jitter weight of each camera to obtain the coordinates (x^t, y^t, z^t) and size o^t of the target object at the current time t, where d_j denotes the distance between the main camera of the j-th binocular ranging system and the target object, e denotes the scientific notation symbol, and the pose weight and the jitter weight of the j-th camera at time t serve as the fusion weights.
Based on the above technical scheme, the invention has the following beneficial effects:
The invention provides a flexible collaboration method for an unmanned system oriented to intelligent seal detection scenes. By combining real-time image preprocessing, jitter discrimination and correction, jitter and pose weight calculation, and data fusion across binocular ranging systems, the method ensures the consistency and accuracy of the image data acquired by the cameras and detects and corrects external disturbances in time, making the measurement data more accurate. The invention thus improves image acquisition stability, enhances measurement precision, and realizes flexible cooperative control, meeting the requirement of seal detection scenes for accurate operation.
Drawings
Fig. 1 is a schematic flow chart of the flexible collaboration method for an unmanned system oriented to an intelligent seal detection scene.
Detailed Description
In order to make the objects, technical solutions and advantageous effects of the present invention more apparent, the present invention will be described in further detail with reference to the accompanying drawings and examples.
The embodiment provides a flexible collaboration method for an unmanned system oriented to an intelligent seal detection scene, the flow of which is shown in Fig. 1; the method specifically comprises the following steps:
Step 1, the intelligent seal detection scene comprises three cameras; the coordinates of the three cameras are calibrated, the captured image of each camera is acquired in real time, and each image is preprocessed to obtain the feature matrix of the captured image;
For each frame of image, its gray matrix is expressed as I, where a_{m,n} denotes the gray value in row m, column n of the gray matrix, m = 1, 2, ..., M, n = 1, 2, ..., N, M denotes the number of rows and N the number of columns of the gray matrix;
According to the gray matrix I, the lateral differential matrix I_1 of the image is computed, whose element b_{m,n} in row m, column n is the lateral gray-level difference at that position;
According to the gray matrix I, the longitudinal differential matrix I_2 of the image is computed, whose element c_{m,n} in row m, column n is the longitudinal gray-level difference at that position;
The lateral and longitudinal differential matrices are combined to obtain the feature matrix D corresponding to the image:
D = [I_1 | 0_{M×N}] + [0_{M×N} | I_2],
where 0_{M×N} denotes an all-zero matrix of dimension M×N;
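A minimal Python sketch of this preprocessing step is given below. The text reproduces the combination formula D = [I_1 | 0] + [0 | I_2] but not the explicit expressions for b_{m,n} and c_{m,n}, so the sketch assumes simple first-order forward differences with zero padding at the border; that convention is an assumption rather than the patent's definition.

```python
import numpy as np

def feature_matrix(gray: np.ndarray) -> np.ndarray:
    """Build the feature matrix D from a gray matrix I of shape (M, N).

    Assumed differencing convention: forward differences, zero-padded
    at the last column/row (the patent's exact formula is not given here).
    """
    gray = gray.astype(np.float32)
    M, N = gray.shape

    # Lateral (column-wise) differential matrix I1.
    I1 = np.zeros_like(gray)
    I1[:, :-1] = gray[:, 1:] - gray[:, :-1]

    # Longitudinal (row-wise) differential matrix I2.
    I2 = np.zeros_like(gray)
    I2[:-1, :] = gray[1:, :] - gray[:-1, :]

    zeros = np.zeros((M, N), dtype=gray.dtype)
    # D = [I1 | 0_{MxN}] + [0_{MxN} | I2], an M x 2N matrix.
    return np.hstack([I1, zeros]) + np.hstack([zeros, I2])
```

For an 8-bit image `img`, `feature_matrix(img)` yields an M × 2N matrix whose left half carries the lateral gradients and whose right half carries the longitudinal gradients.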
Step 2, for each camera, jitter discrimination is performed according to the feature matrix of the captured image, and jitter correction is performed on captured images in which jitter occurs;
For each camera, the superscript t marks the time. For the current time t, the jitter discrimination feature Sa^t is calculated as
Sa^t = D^t ⊕ D^{t+1},
where Sa^t denotes the jitter discrimination feature of the image captured by the camera at the current time t, D^t and D^{t+1} denote the feature matrices of the images captured by the camera at time t and time t+1 respectively, and ⊕ denotes the element-wise exclusive-or (XOR) operation on matrices;
A jitter discrimination time window K is set, and jitter discrimination is performed on the camera according to the jitter discrimination feature; the discrimination condition compares the magnitude of the jitter discrimination features over the window against the jitter discrimination threshold S_th, where S_th is an empirical constant preset according to the application scene, ‖·‖ denotes the modulo (norm) operation, and the jitter discrimination time window K takes the value 4;
If the discrimination condition is satisfied, the camera is considered to jitter at time t, jitter correction is performed on the image, and the jitter occurrence time is recorded in order in the jitter time set T; otherwise, it is determined that no jitter has occurred. The jitter correction is expressed through inverse filtering (deconvolution): I^t denotes the gray matrix of the image captured by the camera at time t, Î^t denotes the jitter correction result of the gray matrix I^t, f(·) denotes the inverse filtering operation, and the correction result is the inverse-filtered gray matrix I^t, i.e., the gray matrix of the de-ghosted image obtained by inverse filtering;
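The sketch below illustrates these two operations in Python. The explicit discrimination inequality and the blur model behind the inverse filter are not reproduced in this text, so the code makes two clearly labeled assumptions: the condition sums the norms of the XOR features over the last K frames and compares the sum with S_th, and the correction applies Wiener-style inverse filtering in the frequency domain with an assumed blur kernel.

```python
import numpy as np

def jitter_feature(D_t: np.ndarray, D_t1: np.ndarray) -> np.ndarray:
    """Sa^t = D^t XOR D^{t+1}, applied element-wise on sign-quantized features.

    Quantizing to {0, 1} before the XOR is an assumption; the patent only
    states that an element-wise XOR of the feature matrices is taken.
    """
    return np.logical_xor(D_t > 0, D_t1 > 0).astype(np.float32)

def jitter_detected(features: list, S_th: float, K: int = 4) -> bool:
    """Assumed discrimination condition: windowed feature energy exceeds S_th."""
    window = features[-K:]
    return sum(np.linalg.norm(Sa) for Sa in window) > S_th

def inverse_filter(I_t: np.ndarray, kernel: np.ndarray, eps: float = 1e-2) -> np.ndarray:
    """Wiener-style inverse filtering of the gray matrix I^t (assumed blur model)."""
    H = np.fft.fft2(kernel, s=I_t.shape)             # transfer function of the blur
    G = np.fft.fft2(I_t.astype(np.float32))
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)  # regularized inversion
    return np.real(np.fft.ifft2(F_hat))
```

When experimenting with this sketch, a small averaging kernel (for example a 3×3 box filter) can stand in for the unknown motion-blur kernel.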
Step 3, for each camera, the jitter weight of the camera at the current time is calculated according to the jitter discrimination result;
For each camera, the jitter weight s^t at the current time t is calculated, where t ∉ T indicates that the current time t does not belong to the jitter time set T, i.e., the camera does not jitter at the current time; t ∈ T indicates that the current time t belongs to the jitter time set T, i.e., the camera jitters at the current time; and t_next denotes the time at which jitter next occurs after time t in the jitter time set T;
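The closed-form expression for s^t appears only as an image in the original publication and is not reproduced here. Purely as an illustrative placeholder, the sketch below assigns full weight when the current time lies outside the jitter set and discounts the weight inside a jitter episode using the distance to the next recorded jitter time and the window K; the specific discount is an assumption, not the patent's formula.

```python
def jitter_weight(t: int, T: set, K: int = 4) -> float:
    """Illustrative jitter weight s^t (the patent's exact formula is not shown here).

    t -- current time index
    T -- set of recorded jitter times
    K -- jitter discrimination time window
    """
    if t not in T:
        return 1.0                               # no jitter at the current time
    later = [u for u in T if u > t]
    t_next = min(later) if later else t + K      # next jitter time after t (assumed fallback)
    # Assumed discount: weight shrinks the closer the next jitter event is.
    return max(0.0, min(1.0, (t_next - t) / K))
```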
Step 4, for each camera, the pose weight of the camera at the current time is calculated from the jitter-corrected captured image;
For each camera, the superscript t marks the time. For the captured image (gray matrix) at the current time t, the boundary of the target object is extracted, and the boundary pixel set is obtained in clockwise order starting from the upper boundary (boundary extraction is a well-known technique in the art and is not described further here); meanwhile, the target object is framed with a square bounding box, and the box pixel set is obtained in clockwise order starting from the upper boundary; the coincidence degree of the boundary pixel set and the box pixel set is calculated to obtain the pose estimation vector G^t of the target object,
where G^t denotes the pose estimation vector of the target object at time t, the coincidence degree is that of the boundary pixel set and the box pixel set, R denotes a rotation matrix in two-dimensional rotation form, P denotes the parameter vector of a straight line fitted to the arc, ‖·‖ denotes the modulo operation, and ⊙ denotes the element-wise exclusive-nor (XNOR) operation on vectors;
The pose weight of the current camera is then calculated from the pose estimation vector G^t;
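The expressions for G^t and the pose weight are likewise not reproduced here, so the sketch below implements only the ingredient the text does describe: extracting the object boundary, framing it with a square box, and measuring the coincidence degree between the two pixel sets, which is then mapped to a weight in [0, 1] by an assumed normalization. Segmentation uses a plain threshold for brevity; R, P, and the XNOR combination are omitted because their definitions are not available in this text.

```python
import numpy as np

def boundary_pixels(mask: np.ndarray) -> set:
    """Pixels of the object mask that touch at least one background neighbor."""
    ys, xs = np.nonzero(mask)
    border = set()
    for y, x in zip(ys, xs):
        nbrs = mask[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        if nbrs.size < 9 or not nbrs.all():
            border.add((y, x))
    return border

def box_pixels(mask: np.ndarray) -> set:
    """Perimeter pixels of the square box framing the object (assumed: tight square)."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    half = max(ys.max() - ys.min(), xs.max() - xs.min()) // 2
    top, bottom, left, right = cy - half, cy + half, cx - half, cx + half
    pix = set()
    for x in range(left, right + 1):
        pix.add((top, x)); pix.add((bottom, x))
    for y in range(top, bottom + 1):
        pix.add((y, left)); pix.add((y, right))
    return pix

def pose_weight(gray: np.ndarray, threshold: float = 128.0) -> float:
    """Illustrative pose weight: coincidence degree of boundary and box pixel sets."""
    mask = gray > threshold                       # assumed segmentation of the target
    b, q = boundary_pixels(mask), box_pixels(mask)
    return len(b & q) / max(len(b | q), 1)        # assumed mapping to [0, 1]
```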
Step 5, the three cameras are paired in order to form three binocular ranging systems, the coordinates and size of the target object are measured by each binocular ranging system, and the measurement results of the three binocular ranging systems are then fused according to the jitter weight and pose weight of the main camera in each binocular ranging system to obtain the coordinates and size of the target object at the current time, so as to complete flexible cooperative control;
The three cameras are numbered in clockwise order and paired to form binocular ranging systems: in the 1st binocular ranging system the 1st camera serves as the main camera and the 2nd camera as the auxiliary camera; in the 2nd binocular ranging system the 2nd camera serves as the main camera and the 3rd camera as the auxiliary camera; in the 3rd binocular ranging system the 3rd camera serves as the main camera and the 1st camera as the auxiliary camera. For the current time t, the three binocular ranging systems respectively measure the coordinates (x_j, y_j, z_j) and size o_j of the target object, j = 1, 2, 3 (the measurement procedure of a binocular ranging system is a well-known technique in the art and is not described further here);
The measurement results of the three binocular ranging systems are fused according to the pose weight and jitter weight of each camera to obtain the coordinates (x^t, y^t, z^t) and size o^t of the target object at the current time t, where d_j denotes the distance between the main camera of the j-th binocular ranging system and the target object, e denotes the scientific notation symbol, and the pose weight and the jitter weight of the j-th camera at time t serve as the fusion weights;
Thereby, according to the coordinates (x^t, y^t, z^t) and size o^t of the target object at the current time t, the flexible cooperative control of the unmanned system can be completed.
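The exact fusion formulas are not reproduced in this text, so the sketch below shows one plausible reading of the step: each binocular measurement is weighted by the product of its main camera's pose weight and jitter weight, optionally attenuated with the distance d_j, and the weights are normalized before averaging. The attenuation term and the normalization are assumptions for illustration.

```python
import numpy as np

def fuse_measurements(coords, sizes, dists, pose_w, jitter_w):
    """Weighted fusion of the three binocular measurements (illustrative form).

    coords   -- [(x_j, y_j, z_j)] for j = 1..3
    sizes    -- [o_j]             for j = 1..3
    dists    -- [d_j] distance from main camera j to the target
    pose_w   -- pose weights of the three main cameras at time t
    jitter_w -- jitter weights of the three main cameras at time t
    """
    coords = np.asarray(coords, dtype=float)
    sizes = np.asarray(sizes, dtype=float)
    dists = np.asarray(dists, dtype=float)

    # Assumed weight: pose weight x jitter weight, attenuated by distance via exp(-d).
    w = np.asarray(pose_w) * np.asarray(jitter_w) * np.exp(-dists / max(dists.mean(), 1e-9))
    w = w / w.sum()

    xyz_t = w @ coords       # fused (x^t, y^t, z^t)
    o_t = float(w @ sizes)   # fused size o^t
    return tuple(xyz_t), o_t
```

The exponential term is included only because the symbol e appears in the patent's definitions; its exact role in the original formulas is not specified in this text.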
While the invention has been described in terms of specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the equivalent or similar purpose, unless expressly stated otherwise; all of the features disclosed, or all of the steps in a method or process, except for mutually exclusive features and/or steps, may be combined in any manner.

Claims (2)

(Translated from Chinese)
1. A flexible collaboration method for an unmanned system oriented to an intelligent seal detection scene, characterized by comprising the following steps:
Step 1. Calibrate the coordinates of the three cameras in the intelligent seal detection scene, acquire the captured image of each camera in real time, and preprocess the images to obtain the feature matrix of each captured image;
Step 2. For each camera, perform jitter discrimination according to the feature matrix of the captured image, and perform jitter correction on captured images in which jitter occurs;
The jitter discrimination is specifically: the superscript t marks the time; for the current time t, the jitter discrimination feature Sa^t is calculated as Sa^t = D^t ⊕ D^{t+1}, where Sa^t denotes the jitter discrimination feature of the image captured by the camera at the current time t, D^t and D^{t+1} denote the feature matrices of the images captured by the camera at time t and time t+1 respectively, and ⊕ denotes the element-wise XOR operation on matrices;
A jitter discrimination time window K is set, and jitter discrimination is performed on the camera according to the jitter discrimination feature; the discrimination condition compares the jitter discrimination features over the window against the jitter discrimination threshold S_th, where ‖·‖ denotes the modulo operation;
If the discrimination condition is satisfied, the camera is considered to jitter at time t, jitter correction is performed on the image, and the jitter occurrence time is recorded in order in the jitter time set T; otherwise, it is determined that no jitter occurs;
The jitter correction is expressed through inverse filtering: I^t denotes the gray matrix of the image captured by the camera at time t, Î^t denotes the jitter correction result of the gray matrix I^t, f(·) denotes the inverse filtering operation, and the correction result is the result of inverse filtering the gray matrix I^t;
Step 3. For each camera, calculate the jitter weight of the camera at the current time according to the jitter discrimination result: the jitter weight s^t of the camera at the current time t is calculated, where T denotes the jitter time set, t ∉ T indicates that the current time t does not belong to the jitter time set T, t ∈ T indicates that the current time t belongs to the jitter time set T, t_next denotes the time at which jitter next occurs after time t in the jitter time set T, and K denotes the jitter discrimination time window;
Step 4. For each camera, calculate the pose weight of the camera at the current time from the jitter-corrected captured image;
For the captured image at the current time t, extract the boundary of the target object and obtain the boundary pixel set in clockwise order starting from the upper boundary; meanwhile, frame the target object with a square bounding box and obtain the box pixel set in clockwise order starting from the upper boundary; calculate the coincidence degree of the boundary pixel set and the box pixel set to obtain the pose estimation vector G^t of the target object, where G^t denotes the pose estimation vector of the target object at time t, the coincidence degree is that of the boundary pixel set and the box pixel set, R denotes a rotation matrix in two-dimensional rotation form, P denotes the parameter vector of a straight line fitted to the arc, ‖·‖ denotes the modulo operation, and ⊙ denotes the element-wise XNOR operation on vectors;
The pose weight of the current camera is calculated from the pose estimation vector G^t;
Step 5. Pair the three cameras in order to form binocular ranging systems, measure the coordinates and size of the target object with each binocular ranging system, and then fuse the measurement results of the three binocular ranging systems according to the jitter weight and pose weight of the main camera in each binocular ranging system to obtain the coordinates and size of the target object at the current time, so as to complete flexible cooperative control;
The three cameras are numbered in clockwise order and paired to form binocular ranging systems: in the 1st binocular ranging system the 1st camera is the main camera and the 2nd camera is the auxiliary camera; in the 2nd binocular ranging system the 2nd camera is the main camera and the 3rd camera is the auxiliary camera; in the 3rd binocular ranging system the 3rd camera is the main camera and the 1st camera is the auxiliary camera. For the current time t, the three binocular ranging systems respectively measure the coordinates (x_j, y_j, z_j) and size o_j of the target object, j = 1, 2, 3;
The measurement results of the three binocular ranging systems are fused according to the pose weight and jitter weight of each camera to obtain the coordinates (x^t, y^t, z^t) and size o^t of the target object at the current time t, where d_j denotes the distance between the main camera of the j-th binocular ranging system and the target object, e denotes the scientific notation symbol, and the pose weight and the jitter weight of the j-th camera at time t are used as the fusion weights.
2. The flexible collaboration method for an unmanned system oriented to an intelligent seal detection scene according to claim 1, characterized in that in step 1 the preprocessing process is:
For each frame of image, its gray matrix is expressed as I, where a_{m,n} denotes the gray value in row m, column n of the gray matrix, m = 1, 2, ..., M, n = 1, 2, ..., N, M denotes the number of rows of the gray matrix, and N denotes the number of columns of the gray matrix;
The lateral differential matrix I_1 of the image is computed from the gray matrix I, where b_{m,n} denotes its element in row m, column n;
The longitudinal differential matrix I_2 of the image is computed from the gray matrix I, where c_{m,n} denotes its element in row m, column n;
The lateral differential matrix and the longitudinal differential matrix are combined to obtain the feature matrix D corresponding to the image:
D = [I_1 | 0_{M×N}] + [0_{M×N} | I_2],
where 0_{M×N} denotes an all-zero matrix of dimension M×N.
CN202411260730.5A · Priority date 2024-09-10 · Filing date 2024-09-10 · Unmanned system flexible collaboration method for intelligent seal detection scene · Active · Granted as CN118797214B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411260730.5A (CN118797214B) | 2024-09-10 | 2024-09-10 | Unmanned system flexible collaboration method for intelligent seal detection scene

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202411260730.5A | 2024-09-10 | 2024-09-10 | Unmanned system flexible collaboration method for intelligent seal detection scene

Publications (2)

Publication Number | Publication Date
CN118797214A (en) | 2024-10-18
CN118797214B (en) | 2024-11-22

Family

ID=93025451

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411260730.5A (Active, granted as CN118797214B (en)) | Unmanned system flexible collaboration method for intelligent seal detection scene | 2024-09-10 | 2024-09-10

Country Status (1)

Country | Link
CN (1) | CN118797214B (en)


Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN105069753B (en)* | 2015-07-30 | 2018-06-26 | 华中科技大学 | A kind of shake restoration method of blurred image facing a mobile terminal
CN106447766B (en)* | 2016-09-28 | 2019-07-09 | 成都通甲优博科技有限责任公司 | A kind of scene reconstruction method and device based on mobile device monocular camera
JP2020102786A (en)* | 2018-12-25 | 2020-07-02 | キヤノン株式会社 | Image processing device
KR20230127287A (en)* | 2020-12-31 | 2023-08-31 | 후아웨이 테크놀러지 컴퍼니 리미티드 | Pose estimation method and related device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN114648584A (en)* | 2022-05-23 | 2022-06-21 | 北京理工大学前沿技术研究院 | Robustness control method and system for multi-source fusion positioning
CN117479013A (en)* | 2023-12-26 | 2024-01-30 | 长春智冉光电科技有限公司 | Imaging jitter removing method for linear array camera under multi-axis motion platform

Also Published As

Publication number | Publication date
CN118797214A (en) | 2024-10-18

Similar Documents

Publication | Title
CN108510530B (en) | Three-dimensional point cloud matching method and system
CN110580723B (en) | Method for carrying out accurate positioning by utilizing deep learning and computer vision
CN111996883B (en) | Method for detecting width of road surface
CN114529605A (en) | Human body three-dimensional attitude estimation method based on multi-view fusion
CN114119987B (en) | Feature extraction and descriptor generation method and system based on convolutional neural network
CN110910456B (en) | Stereo camera dynamic calibration method based on Harris corner mutual information matching
CN110399866A (en) | Space debris observation method based on alternating different exposure time of CCD camera
CN109345513B (en) | A cigarette package defect detection method with cigarette package attitude calculation
CN113569647B (en) | AIS-based ship high-precision coordinate mapping method
CN111047586B (en) | A pixel equivalent measurement method based on machine vision
CN108171728B (en) | Markless moving object posture recovery method and device based on hybrid camera system
CN111507306A (en) | Temperature error compensation method based on AI face distance detection
CN114820563B (en) | A method and system for industrial component size estimation based on multi-viewpoint stereo vision
CN106991705A (en) | A kind of location parameter method of estimation based on P3P algorithms
CN112652020B (en) | Visual SLAM method based on AdaLAM algorithm
CN113670268A (en) | A distance measurement method for UAV and power tower based on binocular vision
CN118797214B (en) | Unmanned system flexible collaboration method for intelligent seal detection scene
CN114972948A (en) | Neural detection network-based identification and positioning method and system
CN112561001A (en) | Video target detection method based on space-time feature deformable convolution fusion
CN104835156B (en) | A kind of non-woven bag automatic positioning method based on computer vision
CN117611652A (en) | Grounding device mounting bolt size measurement method and system
CN114399547B (en) | Monocular SLAM robust initialization method based on multiframe
CN115082555B (en) | A high-precision real-time displacement measurement system and method for RGBD monocular camera
CN115388891B (en) | A spatial positioning method and system for moving targets with a large field of view
CN116989689 (en) | Full-field three-dimensional strain measurement method of unmarked structure integrating neural network and binocular vision

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
