CN119600048A - Carriage positioning and material detecting method, medium and equipment - Google Patents

Carriage positioning and material detecting method, medium and equipment

Info

Publication number
CN119600048A
CN119600048A
Authority
CN
China
Prior art keywords
carriage
coordinates
coordinate system
point cloud
depth map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411643532.7A
Other languages
Chinese (zh)
Inventor
梁鸿
冀春锟
田浩楠
彭思远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Shibite Robot Co Ltd
Original Assignee
Hunan Shibite Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Shibite Robot Co Ltd
Priority to CN202411643532.7A
Publication of CN119600048A
Legal status: Pending

Abstract

The invention relates to a carriage positioning and material detecting method, medium and equipment. A conversion matrix between the acquisition device coordinate system and the mechanical arm end coordinate system is obtained, and the corner coordinates and edge contours of the carriage upper surface and bottom are determined in the mechanical arm end coordinate system. Initial point cloud data are generated from the prior acquisition device intrinsics and the carriage depth map information and segmented to obtain the carriage point cloud data. The carriage point cloud data are further segmented and plane-fitted to obtain a plurality of planes and corresponding normal vectors, from which the material point clouds are obtained. The bounding box of each material point cloud and its coordinates are obtained from the carriage depth map, and the bounding box sizes and coordinates are converted into the mechanical arm end coordinate system through the conversion matrix. The problems of low adaptability, low accuracy and low efficiency of carriage positioning and size detection in the prior art are thereby solved.

Description

Carriage positioning and material detecting method, medium and equipment
Technical Field
The invention relates to the technical field of computer vision, in particular to a carriage positioning and material detecting method, medium and device.
Background
In conventional automated loading systems, obtaining the carriage size and position typically relies on manual measurement or radar detection devices. Manual measurement is inefficient, error-prone and adapts poorly when the carriage size or loading conditions change. Radar detection improves accuracy and reduces cost, but it falls short in accurately identifying the carriage corner points and dynamically adjusting loading positions, so its adaptability is limited when the carriage length, parking position or angle varies greatly.
Therefore, how to improve the insufficient adaptability, accuracy and efficiency of carriage positioning and size detection in the prior art is a technical problem to be solved in this field.
Disclosure of Invention
Based on the above, the present application aims to provide a carriage positioning and material detecting method, medium and equipment that solve at least one of the technical problems mentioned in the background art.
In a first aspect, the present application provides a method for positioning a carriage and detecting materials, including:
acquiring a conversion matrix between a coordinate system of the acquisition equipment and a coordinate system of the tail end of the mechanical arm;
Controlling the mechanical arm to move in a set direction, and collecting a carriage depth map to obtain angular point coordinates of the upper surface and the bottom of a carriage under a coordinate system of the tail end of the mechanical arm;
generating initial point cloud data according to the prior acquisition equipment internal parameters and the carriage depth map information, and dividing the initial point cloud data according to the angular points of the upper surface and the bottom of the carriage to obtain carriage point cloud data;
Dividing the carriage point cloud data, performing plane fitting to obtain a plurality of planes and corresponding normal vectors, and obtaining a material point cloud according to the normal vectors of the planes;
Acquiring a minimum circumscribed figure of each material point cloud, so as to obtain the bounding box of each material point cloud and its coordinates according to the carriage depth map;
And converting the coordinates of each bounding box into the tail end coordinate system of the mechanical arm according to the conversion matrix to obtain the size and the coordinates of each bounding box in the tail end coordinate system of the mechanical arm.
Further, the step of obtaining the conversion matrix includes:
S11, installing a calibration plate at a set position, and controlling the acquisition equipment to scan the surface of the calibration plate to obtain coordinates and numbers corresponding to each characteristic point under a coordinate system of the acquisition equipment;
S12, mounting calibration equipment at the tail end of the mechanical arm, and sequentially touching characteristic points on the surface of the calibration plate to obtain coordinates and numbers corresponding to the characteristic points under a coordinate system of the tail end of the mechanical arm;
S13, constructing a plurality of characteristic point pairs according to the numbers corresponding to the characteristic points, and constructing a linear equation set according to the characteristic point pairs;
And S14, substituting the coordinates corresponding to the characteristic points into a linear equation set, and solving the linear equation set to obtain a conversion matrix between the coordinate system of the acquisition equipment and the terminal coordinate system of the mechanical arm.
S15, setting a plurality of verification points, and acquiring coordinates and numbers of each verification point in a coordinate system of the acquisition equipment and a coordinate system of the tail end of the mechanical arm;
S16, converting all verification points into the same coordinate system, obtaining the distances among the verification points with the same number, judging whether the distances are smaller than a distance threshold value, if yes, converting the matrix to be qualified, and if not, returning to the step S11.
Further, the step of obtaining the coordinates of the corner points of the upper surface and the bottom of the carriage under the coordinate system of the tail end of the mechanical arm comprises the following steps:
respectively acquiring a carriage depth map of a first carriage area and a carriage depth map of a second carriage area;
gradient calculation is carried out on each carriage depth map, and a corresponding carriage contour image is obtained;
Obtaining a carriage edge contour according to each carriage contour image, so as to obtain carriage geometric data according to the carriage edge contour, and manufacturing a corresponding carriage template;
Carrying out template matching on the carriage depth map according to each carriage template to obtain corner coordinates of the upper surface of the carriage, and mapping the corner coordinates to the bottom of the carriage according to the carriage depth map to obtain corner coordinates of the bottom of the carriage;
and obtaining the angular point coordinates of the upper surface and the bottom of the carriage under the robot base coordinate system according to the conversion matrix between the acquisition equipment coordinate system and the mechanical arm terminal coordinate system.
Further, the step of obtaining the coordinates of the corner points of the upper surface and the bottom of the carriage under the coordinate system of the tail end of the mechanical arm further comprises the following steps:
performing de-distortion treatment on the carriage depth map to obtain corrected corner coordinates;
And filtering the carriage depth map according to the prior carriage height to obtain an optimized carriage depth map.
Further, the step of obtaining the coordinates of the corner points of the upper surface and the bottom of the carriage under the coordinate system of the tail end of the mechanical arm further comprises the following steps:
scanning the carriage surface every set distance to obtain a plurality of groups of carriage depth maps;
gradient calculation is carried out on each carriage depth map, and a corresponding carriage contour image is obtained;
Making corresponding carriage templates according to the carriage contour images, and carrying out template matching on the carriage depth map according to the carriage templates to obtain corner coordinates of the upper surface of the carriage;
And mapping the corner coordinates of the upper surface of the carriage to the bottom of the carriage according to the carriage depth map to obtain the corner coordinates of the bottom of the carriage, and obtaining the corner coordinates of the upper surface and the bottom of the carriage under the robot base coordinate system according to a conversion matrix between the acquisition equipment coordinate system and the tail end coordinate system of the mechanical arm.
Further, the step of making a corresponding carriage template according to each carriage profile image to perform template matching on the carriage depth map according to each carriage template to obtain the corner coordinates of the upper surface of the carriage comprises the following steps:
obtaining a carriage edge contour according to each carriage contour image so as to obtain carriage geometric data according to the carriage edge contour;
manufacturing corresponding carriage templates according to carriage geometric data, and carrying out template matching on a carriage depth map according to each carriage template to obtain corner coordinates of the front half part of the carriage;
and reversely deducing the corner coordinates of the tail part of the carriage according to the direction and the distance between the midpoint of the prior carriage and the corner coordinates of the front half part of the carriage, and obtaining the corner coordinates of the upper surface of the carriage.
Further, the step of obtaining the carriage point cloud data includes:
According to depth values of all pixel points in the prior acquisition equipment internal reference and carriage depth map, three-dimensional coordinates corresponding to all the pixel points are obtained, and initial point cloud data are generated;
acquiring three-dimensional coordinates of all the corner points to obtain a corner point coordinate extremum;
And constructing a filtering range according to the corner coordinate extremum, and performing filtering operation on the initial point cloud data according to the filtering range to obtain the carriage point cloud data.
Further, the step of obtaining the material point cloud comprises the following steps:
dividing the carriage point cloud data to obtain a plurality of point cloud clusters;
performing plane fitting on each point cloud cluster to obtain a plurality of planes and corresponding normal vectors;
And acquiring the included angle between each normal vector and the z axis, judging whether the included angle is larger than a set angle threshold, if not, keeping unchanged, and if so, removing all point clouds corresponding to the plane to obtain the material point cloud.
In a second aspect, the present application also provides a computer storage medium storing executable program code for executing the car positioning and material detecting method according to any one of the first aspects.
In a third aspect, the present application further provides a terminal device, including a memory and a processor, where the memory stores program codes executable by the processor, and the program codes are configured to execute the method for positioning a car and detecting a material according to any one of the first aspects.
According to the carriage positioning and material detecting method, medium and equipment, a conversion matrix between the acquisition device coordinate system and the mechanical arm end coordinate system is obtained; the mechanical arm is controlled to move in a set direction and collect carriage depth maps, from which the corner coordinates and edge contours of the carriage upper surface and bottom are obtained in the mechanical arm end coordinate system. Initial point cloud data are generated from the prior acquisition device intrinsics and the carriage depth map information and are segmented according to the corner points of the carriage upper surface and bottom to obtain the carriage point cloud data, completing accurate positioning of the carriage. The carriage point cloud data are then segmented and plane-fitted to obtain a plurality of planes and corresponding normal vectors, and the material point clouds are obtained from these normal vectors. The minimum circumscribed figure of each material point cloud is acquired, the bounding box of each material point cloud and its coordinates are obtained from the carriage depth map, and the bounding box coordinates are converted into the mechanical arm end coordinate system through the conversion matrix to obtain the size and coordinates of each bounding box in that coordinate system. Since the carriage and the materials are located from depth data acquired on site rather than from fixed prior assumptions, the method adapts to the actual carriage position and loading state and improves detection accuracy and efficiency. The problems of low adaptability, low accuracy and low efficiency of carriage positioning and size detection in the prior art are thereby solved.
Drawings
FIG. 1 is a flow chart of a method for car positioning and material detection according to an embodiment of the present invention;
fig. 2 is a schematic diagram of vehicle cabin point cloud data according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be noted that directional indications such as up, down, left, right, front and rear in the embodiments of the present invention are only used to explain the relative positional relationship and movement of components in a specific posture; if the specific posture changes, the directional indication changes accordingly. In addition, descriptions such as "first, second", "S1, S2" or "step one, step two" are for descriptive purposes only and are not to be construed as indicating or implying relative importance, the number of technical features, or the execution order of the method. It will be understood by those skilled in the art that all variations within the technical concept of the present invention, without departing from its gist, fall within its scope.
As shown in fig. 1, the present invention provides a method for positioning a carriage and detecting materials, which comprises:
S1, acquiring a conversion matrix between a coordinate system of acquisition equipment and a coordinate system of the tail end of a mechanical arm;
Specifically, but not limited to, the collection device is optionally installed at the tail end of the mechanical arm, the calibration plate is installed at the set position, coordinates of each characteristic point on the calibration plate under the coordinate system of the collection device and the coordinate system of the tail end of the mechanical arm are obtained, a plurality of characteristic point pairs are formed, a linear equation set is constructed, and the linear equation set is solved, so that a conversion matrix between the coordinate system of the collection device and the coordinate system of the tail end of the mechanical arm is obtained through calculation.
For example, a material detection system is optionally provided, including two 3D structured light cameras and a truss system with two truss arms, where the truss coordinate system takes the center point of the truss assembly as the origin, the ends of the truss arms are provided with a gripper or a suction cup, and the 3D structured light cameras are installed on the gripper or suction cup.
Preferably, the step of obtaining a transformation matrix between the coordinate system of the acquisition device and the coordinate system of the end of the mechanical arm optionally includes:
S11, installing a calibration plate at a set position, and controlling the acquisition equipment to scan the surface of the calibration plate to obtain coordinates and numbers corresponding to each characteristic point under a coordinate system of the acquisition equipment;
The calibration plate is installed at a set position, and the acquisition device on the mechanical arm scans the calibration plate surface to obtain image data of the surface, from which the coordinates of all feature points on the calibration plate surface are obtained; the numbers of the feature points are obtained according to a prior region of interest. The specific shape and size of the calibration plate may be set arbitrarily by a person skilled in the art, and the set position may be any position at which the acquisition device can clearly acquire the calibration plate surface data and the calibration device at the end of the mechanical arm can touch all feature points on the calibration plate surface. The surface image data comprise a texture image and a depth map.
For example, the acquisition device may collect a texture image and a depth map of the calibration plate surface, obtain the plane coordinates of all feature points from the texture image, and then obtain the 3D coordinates P_c_i = (x_ci, y_ci, z_ci) of all feature points from the depth map, where 0 < i ≤ n, i is the feature point number and n is the number of feature points.
S12, mounting calibration equipment at the tail end of the mechanical arm, and sequentially touching characteristic points on the surface of the calibration plate to obtain coordinates and numbers corresponding to the characteristic points under a coordinate system of the tail end of the mechanical arm;
Taking a calibration needle as an example, the calibration needle is optionally installed at the end of the mechanical arm and controlled by a person skilled in the art to touch the feature points on the calibration plate surface in turn, and the coordinates P_t_i = (x_ti, y_ti, z_ti) and numbers of the feature points are recorded, where 0 < i ≤ n, i is the feature point number and n is the number of feature points.
S13, constructing a plurality of characteristic point pairs according to the numbers corresponding to the characteristic points, and constructing a linear equation set according to the characteristic point pairs;
And S14, substituting the coordinates corresponding to the characteristic points into a linear equation set, and solving the linear equation set to obtain a conversion matrix between the coordinate system of the acquisition equipment and the terminal coordinate system of the mechanical arm.
Specifically, optionally but not exclusively, feature points with the same number are used as a feature point pair, and all feature points are traversed to obtain a plurality of feature point pairs, each of which satisfies equation 1-1:

P_t_i = Rct · P_c_i + tct    (1-1)

Expanding the equation of each point pair yields the linear equation set 1-2:

x_ti = r11·x_ci + r12·y_ci + r13·z_ci + t_x
y_ti = r21·x_ci + r22·y_ci + r23·z_ci + t_y    (1-2)
z_ti = r31·x_ci + r32·y_ci + r33·z_ci + t_z

wherein P_t_i = (x_ti, y_ti, z_ti) is the ith feature point in the mechanical arm end coordinate system, P_c_i = (x_ci, y_ci, z_ci) is the ith feature point in the acquisition device coordinate system, and tct = (t_x, t_y, t_z) and Rct = [r_kl] are respectively the translation vector and rotation matrix between the acquisition device coordinate system and the mechanical arm end coordinate system.
Preferably, the translation vector tct and the rotation matrix Rct between the acquisition device coordinate system and the mechanical arm end coordinate system together form the conversion matrix between the two.
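As an illustrative aid only (the patent solves the linear equation set directly), the rotation Rct and translation tct can be recovered from the numbered point pairs with a standard SVD-based rigid alignment. The following Python sketch uses NumPy; the function name solve_rigid_transform and the array layout are assumptions, not part of the patent.

```python
import numpy as np

def solve_rigid_transform(pts_cam, pts_tool):
    """Estimate Rct, tct so that pts_tool ≈ Rct @ pts_cam + tct (sketch).

    pts_cam, pts_tool: (n, 3) arrays of same-numbered feature points in the
    acquisition-device and arm-end coordinate systems respectively.
    """
    pts_cam = np.asarray(pts_cam, dtype=float)
    pts_tool = np.asarray(pts_tool, dtype=float)
    c_cam, c_tool = pts_cam.mean(axis=0), pts_tool.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (pts_cam - c_cam).T @ (pts_tool - c_tool)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_tool - R @ c_cam
    return R, t
```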
Preferably, since there may be factors such as process conditions and environmental impact in the calibration process, which cause deviation of the calibration result, the calibration method further needs to verify the conversion matrix obtained by calibration, and therefore, after step S14, the method further includes:
S15, setting a plurality of verification points, and acquiring coordinates and numbers of each verification point in a coordinate system of the acquisition equipment and a coordinate system of the tail end of the mechanical arm;
S16, converting all verification points into the same coordinate system, obtaining the distances among the verification points with the same number, judging whether the distances are smaller than a distance threshold value, if yes, converting the matrix to be qualified, and if not, returning to the step S11.
Specifically, but not exclusively, a person skilled in the art sets any number of points on the calibration plate as verification points and sets a distance threshold. The coordinates and numbers of all verification points in the acquisition device coordinate system and the mechanical arm end coordinate system are obtained according to steps S11 and S12 respectively. All verification points are then converted, using the conversion matrix obtained in step S14, from the acquisition device coordinate system into the mechanical arm end coordinate system or vice versa, and the distance between verification points with the same number is obtained. If the distance is below the distance threshold, the error of the conversion matrix obtained in step S14 is low enough and the subsequent steps can be carried out; otherwise the error is too large, parameters such as the calibration plate position need to be reset, and the procedure returns to step S11 for recalibration until the iteration condition is reached. The distance threshold and the iteration condition are set arbitrarily by a person skilled in the art; preferably, the iteration condition is a number of iterations.
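A minimal sketch of the verification in steps S15 and S16, assuming Rct and tct have already been estimated; the function name, argument layout and the default threshold value are hypothetical.

```python
import numpy as np

def transform_is_valid(R, t, verify_cam, verify_tool, dist_threshold=2.0):
    """Map same-numbered verification points from the acquisition-device
    frame into the arm-end frame and check the residual distances against
    the chosen threshold (steps S15-S16, sketch)."""
    mapped = (np.asarray(R, float) @ np.asarray(verify_cam, float).T).T + np.asarray(t, float)
    errors = np.linalg.norm(mapped - np.asarray(verify_tool, float), axis=1)
    return bool(np.all(errors < dist_threshold)), errors
```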
S2, controlling the mechanical arm to move in a set direction, and collecting a carriage depth map to obtain angular point coordinates of the upper surface and the bottom of the carriage under a coordinate system of the tail end of the mechanical arm;
Specifically, but not limited to, the mechanical arm is optionally controlled to move in a set direction, the upper surface of the carriage is scanned to obtain a carriage depth map of the upper surface of the carriage, so as to obtain corner points and edge contours of the upper surface of the carriage, and the corner points and the edge contours of the upper surface of the carriage are mapped to the bottom of the carriage according to the carriage depth map, so that corner coordinates and edge contours of the upper surface and the bottom of the carriage under a terminal coordinate system of the mechanical arm can be obtained.
Preferably, taking the material detection system in step S1 as an example, the specific steps of S2 optionally include:
S21, respectively acquiring a carriage depth map of a first carriage area and a carriage depth map of a second carriage area;
Specifically, a first mechanical arm is controlled to scan a first area of a carriage to obtain a carriage depth map of the first area of the carriage, and a second mechanical arm is controlled to scan a second area of the carriage to obtain a carriage depth map of the second area of the carriage, wherein the first area of the carriage and the second area of the carriage can be optionally set by a person skilled in the art.
Preferably, the first compartment area is a front half area of the compartment near the vehicle head, and the second compartment area is a rear half area of the compartment near the vehicle tail.
S22, carrying out gradient calculation on each carriage depth map to obtain a corresponding carriage contour image;
Specifically, but not exclusively, the Sobel operator is used to compute the gradient maps of the carriage depth map of the first carriage area and the carriage depth map of the second carriage area in the X and Y directions, so as to obtain the edge features of the horizontal and vertical directions respectively and display the edge information of the image more clearly, yielding the carriage contour images corresponding to the first and second carriage areas. An image contrast enhancement algorithm may also be applied, such as histogram equalization, contrast stretching, contrast-limited adaptive histogram equalization or Gamma correction.
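One possible realisation of the gradient step uses OpenCV's Sobel operator; the patent does not prescribe a specific implementation, so cv2 and the normalisation to 8-bit below are assumptions.

```python
import cv2
import numpy as np

def depth_to_contour_image(depth_map):
    """Approximate carriage contour image as the Sobel gradient magnitude
    of the depth map (sketch)."""
    d = depth_map.astype(np.float32)
    gx = cv2.Sobel(d, cv2.CV_32F, 1, 0, ksize=3)   # gradient along X
    gy = cv2.Sobel(d, cv2.CV_32F, 0, 1, ksize=3)   # gradient along Y
    mag = cv2.magnitude(gx, gy)
    # Normalise to 8-bit so contour/template routines can consume it directly
    return cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```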
S23, obtaining a carriage edge contour according to each carriage contour image, so as to obtain carriage geometric data according to the carriage edge contour, and manufacturing a corresponding carriage template;
Specifically, but not exclusively, a contour detection algorithm is used to acquire the carriage contour in each carriage contour image, and carriage geometric data are calculated from the acquired contour, so that the carriage templates of the first and second carriage areas are made from the geometric data. The contour detection algorithm optionally includes common algorithms such as Canny edge detection, the Sobel operator and edge tracking; the carriage geometric data optionally include geometric data such as the carriage width and diagonal length.
S24, carrying out template matching on the carriage depth map according to each carriage template to obtain corner coordinates of the upper surface of the carriage, and mapping the corner coordinates to the bottom of the carriage according to the carriage depth map to obtain corner coordinates of the bottom of the carriage;
And S25, obtaining the corner coordinates of the upper surface and the bottom of the carriage under the robot base coordinate system according to the conversion matrix between the acquisition equipment coordinate system and the mechanical arm terminal coordinate system.
Specifically, but not exclusively, template matching is performed on the carriage depth map with each carriage template to obtain the corner coordinates and edge contours of the upper surfaces of the first and second carriage areas. The corner coordinates and edge contours of the carriage upper surface are then mapped to the carriage bottom according to the carriage height information in the carriage depth map, giving the corner coordinates and edge contours of the carriage bottom, which are converted into the robot base coordinate system using the conversion matrix between the acquisition device coordinate system and the mechanical arm end coordinate system obtained in step S1.
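A sketch of the template-matching step with OpenCV (an assumed library choice; the patent only requires some form of template matching). The helper returns the top-left corner of the best match and its score; deriving the four carriage corner coordinates from this location is left to the surrounding pipeline.

```python
import cv2

def match_template_corner(contour_img, template):
    """Locate a carriage template in a contour image (both 8-bit, single
    channel) and return the best-match location and score (sketch)."""
    result = cv2.matchTemplate(contour_img, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc, max_val   # (x, y) pixel coordinates, normalised score
```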
More specifically, the acquisition device is affected by various factors during capture, such as stretching or compression of image points caused by a wide-angle lens, or imperfect alignment between the lens and the image sensor causing the image to tilt or shift, so the coordinates of points in the captured image may not match their actual positions. Therefore, after obtaining the depth map within the acquisition range of the acquisition device, step S21 preferably further includes:
S21', performing de-distortion treatment on the depth map to obtain corrected corner coordinates;
Specifically, but not exclusively, an image de-distortion method is used to de-distort the carriage depth map according to prior acquisition device parameters such as the focal length, principal point and distortion coefficients, so as to obtain corrected corner coordinates. The image de-distortion method optionally includes common approaches such as the OpenCV undistortion functions, remapping, deep-learning-based methods and geometric correction.
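For illustration, one of the mentioned options (the OpenCV undistortion function) could be applied as below; treating a depth map like an ordinary image during remapping is a simplification, and the variable names are assumptions.

```python
import cv2
import numpy as np

def undistort_depth(depth_map, K, dist_coeffs):
    """Remove lens distortion from the depth map using the prior intrinsics
    (sketch). K: 3x3 intrinsic matrix; dist_coeffs: OpenCV distortion vector."""
    return cv2.undistort(depth_map.astype(np.float32),
                         np.asarray(K, np.float64),
                         np.asarray(dist_coeffs, np.float64))
```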
And S22', filtering the carriage depth map according to the prior carriage height to obtain an optimized carriage depth map.
Specifically, but not exclusively, a height threshold range is set according to the prior carriage height; the depth of each pixel in the carriage depth map is obtained, and it is judged whether the pixel lies within the height threshold range. If so, the pixel is part of the carriage and is kept; if not, the pixel is noise or irrelevant background information and is removed, yielding an optimized carriage depth map with less interference and higher recognition accuracy. The prior carriage height is optionally obtained by on-site measurement, from a vehicle specification database, or by similar means, and the height threshold range is set by a person skilled in the art.
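A minimal sketch of the height-window filtering described above, assuming the depth map stores values in the same unit as the prior carriage height; the window bounds are user-supplied.

```python
import numpy as np

def filter_depth_by_height(depth_map, z_min, z_max):
    """Keep pixels whose value lies inside the prior height window; other
    pixels are zeroed and treated as noise/background (sketch)."""
    mask = (depth_map >= z_min) & (depth_map <= z_max)
    return np.where(mask, depth_map, 0), mask
```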
Preferably, due to factors such as process conditions and environmental impact, it may be caused that only a single mechanical arm can be operated when acquiring the depth map of the carriage, so step S2 further includes:
S26, scanning the surface of the carriage at intervals of a set distance to obtain a plurality of groups of carriage depth maps;
Specifically, the method comprises the steps of optionally but not limited to setting the acquisition distance, controlling the mechanical arm to move the acquisition distance each time, scanning the surface of the carriage through the acquisition equipment, and obtaining a group of carriage depth maps each time, wherein the acquisition distance is set arbitrarily by a person skilled in the art.
S27, carrying out gradient calculation on each carriage depth map to obtain a corresponding carriage profile image;
Specifically, a plurality of corresponding carriage contour images may be obtained according to step S22.
S28, manufacturing corresponding carriage templates according to the carriage contour images, and carrying out template matching on the carriage depth map according to the carriage templates to obtain corner coordinates of the upper surface of the carriage;
Specifically, a contour detection algorithm is used to acquire the carriage contour in each carriage contour image, and carriage geometric data are calculated from the acquired contour, so that a carriage template for each carriage region is made from the geometric data. The contour detection algorithm optionally includes common algorithms such as Canny edge detection, the Sobel operator and edge tracking; the carriage geometric data optionally include geometric data such as the carriage width and diagonal length.
Taking the carriage templates of the carriage head and the carriage middle as an example, obtaining the carriage edge contour from each carriage contour image to obtain the carriage geometric data and making the corresponding carriage templates optionally includes:
S281, obtaining a carriage edge contour according to each carriage contour image so as to obtain carriage geometric data according to the carriage edge contour;
specifically, the car edge profile in each car profile image is obtained as an initial car edge profile according to step S23, so that the distance between the car edge profiles in each car profile image is calculated and obtained as the car width.
S282, manufacturing corresponding carriage templates according to carriage geometric data, and carrying out template matching on a carriage depth map according to each carriage template to obtain corner coordinates of the front half part of the carriage;
And S283, reversely deducing the corner coordinates of the tail part of the carriage according to the direction and the distance between the midpoint of the prior carriage and the corner coordinates of the front half part of the carriage, and obtaining the corner coordinates of the upper surface of the carriage.
Specifically, the corner coordinates of the front half of the carriage, such as the carriage head and the carriage middle, are obtained according to step S24; the corner coordinates of the carriage tail are then deduced in reverse from the direction and distance between the prior carriage midpoint and the front-half corner coordinates, exploiting the symmetric structure of the carriage, which gives the corner coordinates of the carriage upper surface. The carriage midpoint is obtained by on-site measurement, from a vehicle specification database, or by similar means.
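Under the symmetry assumption stated above, the tail corner is simply the reflection of the matched front-half corner through the prior carriage midpoint, e.g.:

```python
import numpy as np

def mirror_corner(front_corner, carriage_midpoint):
    """Reflect a front-half corner through the prior carriage midpoint to
    estimate the corresponding tail corner (sketch, assumes a symmetric carriage)."""
    return 2.0 * np.asarray(carriage_midpoint, float) - np.asarray(front_corner, float)
```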
And S29, mapping the corner coordinates of the upper surface of the carriage to the bottom of the carriage according to the carriage depth map to obtain the corner coordinates of the bottom of the carriage, and obtaining the corner coordinates of the upper surface and the bottom of the carriage under the robot base coordinate system according to a conversion matrix between the acquisition equipment coordinate system and the tail end coordinate system of the mechanical arm.
Specifically, the coordinates of the corner points of the bottom of the carriage are obtained according to step S24, and then the coordinates of the corner points of the upper surface and the bottom of the carriage in the robot base coordinate system are obtained according to step S25.
Preferably, when only a single mechanical arm can be operated to collect the carriage depth maps, the structure of the machine may prevent all carriage depth maps from being collected directly; for example, only the depth maps of the carriage head and middle, or only those of the carriage tail and middle, can be collected. In that case, all carriage corner points can be obtained by mapping the acquired half of the carriage corner points to the other half according to the symmetric structure of the carriage.
S3, generating initial point cloud data according to the prior acquisition equipment internal parameters and the carriage depth map information, and dividing the initial point cloud data according to the angular points of the upper surface and the bottom of the carriage to obtain carriage point cloud data;
Specifically, the three-dimensional coordinates corresponding to each pixel are obtained from the prior acquisition device intrinsics and the depth information of each pixel in the carriage depth map, yielding the initial point cloud data of the carriage; a filtering range is then built from the corner points of the carriage upper surface and bottom, and a pass-through filter is applied to the initial point cloud data to remove noise and background points, giving the carriage point cloud data shown in fig. 2.
Preferably, the step of generating initial point cloud data according to the prior acquisition equipment internal parameters and the carriage depth map information and dividing the initial point cloud data according to the angular points of the upper surface and the bottom of the carriage to obtain carriage point cloud data optionally comprises the following steps:
S31, obtaining three-dimensional coordinates corresponding to each pixel point according to depth values of each pixel point in the prior acquisition equipment internal reference and the carriage depth map, and generating initial point cloud data;
For example, an internal reference matrix of the acquisition device is optionally acquired, including a focal length (fx, fy) and principal point coordinates (cx, cy), and then each pixel in the depth map of the carriage is traversed, and a depth value Z thereof is extracted, so that a three-dimensional coordinate corresponding to each pixel point is calculated according to a coordinate calculation formula, and initial point cloud data is generated.
Preferably, the coordinate calculation formula is optionally expressed as formulas 3-1 and 3-2:

Xj = (uj - cx) · Zj / fx    (3-1)
Yj = (vj - cy) · Zj / fy    (3-2)

wherein (Xj, Yj, Zj) is the three-dimensional coordinate corresponding to the jth pixel point, Zj is its depth value, 0 < j ≤ M, M is the number of pixel points, (uj, vj) is the coordinate of the jth pixel point in the depth map, (fx, fy) is the focal length in the intrinsic matrix of the acquisition device, and (cx, cy) is the principal point coordinate in the intrinsic matrix of the acquisition device.
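A vectorised sketch of formulas 3-1 and 3-2 (pinhole back-projection); NumPy and the zero-depth convention for invalid pixels are assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth_map, fx, fy, cx, cy):
    """Back-project every valid depth pixel into a 3D point using the
    pinhole model of formulas 3-1 and 3-2 (sketch)."""
    h, w = depth_map.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    Z = depth_map.astype(np.float32)
    X = (u - cx) * Z / fx          # formula 3-1
    Y = (v - cy) * Z / fy          # formula 3-2
    points = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]   # drop pixels without a depth value
```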
S32, obtaining three-dimensional coordinates of all corner points to obtain a corner point coordinate extremum;
And S33, constructing a filtering range according to the corner coordinate extremum so as to carry out filtering operation on the initial point cloud data according to the filtering range and obtain the carriage point cloud data.
Specifically, three-dimensional coordinates of all corner points on the upper surface and the bottom of the carriage are obtained, coordinate extremum of each corner point coordinate on an x axis, a y axis and a z axis is obtained, a filtering range can be constructed according to the coordinate extremum, and then point clouds of which the coordinates are not located in the filtering range in the initial point cloud data are removed, so that carriage point cloud data are obtained.
The coordinate extrema of the corner coordinates on the x, y and z axes are (x_max, x_min), (y_max, y_min) and (z_max, z_min) respectively. The coordinates of each point on the x, y and z axes are obtained, and it is judged whether they lie within the ranges [x_min, x_max], [y_min, y_max] and [z_min, z_max]. If so, the point belongs to the carriage and is kept; if not, the point is noise or background and is removed. The carriage point cloud data are thereby obtained, interference is reduced, and the processing efficiency of the subsequent steps is improved.
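A sketch of the pass-through filter built from the corner extrema; the carriage corners are assumed to be given as an (8, 3) array in the acquisition-device frame for illustration.

```python
import numpy as np

def crop_to_carriage(points, corners):
    """Keep only points inside the axis-aligned box spanned by the carriage
    corner coordinates, i.e. within [x_min, x_max], [y_min, y_max], [z_min, z_max]."""
    corners = np.asarray(corners, float)
    lo, hi = corners.min(axis=0), corners.max(axis=0)
    inside = np.all((points >= lo) & (points <= hi), axis=1)
    return points[inside]
```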
S4, dividing the carriage point cloud data, performing plane fitting to obtain a plurality of planes and corresponding normal vectors, and obtaining a material point cloud according to the normal vectors of the planes;
Specifically, optionally but not exclusively, an angle threshold is set; the carriage point cloud data are divided into a plurality of point cloud clusters according to the carriage depth map, and plane fitting is performed on each point cloud cluster to obtain a plurality of planes and corresponding normal vectors. The included angle between each normal vector and the z axis is acquired and compared with the angle threshold: if it is not larger, the cluster is kept unchanged; if it is larger, all points corresponding to that plane are removed, and the remaining points form the material point cloud.
S41, dividing the carriage point cloud data to obtain a plurality of point cloud clusters;
For example, the carriage point cloud can be divided, according to the prior carriage size, into a plurality of point cloud clusters corresponding to specific carriage structures, such as the carriage bottom and the carriage side rails.
S42, performing plane fitting on each point cloud cluster to obtain a plurality of planes and corresponding normal vectors;
Specifically, the plane model is optionally, but not exclusively, set up according to the plane features and the equation they need to satisfy, with the plane normal vector and a constant term as parameters. More specifically, the plane model parameters are optionally but not exclusively (A, B, C, D), where A, B and C are the components of the plane normal vector and D is the constant term of the plane equation, which is optionally but not exclusively expressed as Ax + By + Cz + D = 0. The equation and parameters adopted by the plane model are obviously illustrative and not limiting.
Specifically, optionally but not exclusively, three points Pa, Pb and Pc are randomly selected in each point cloud cluster and used to fit the parameters of the plane model. The plane normal vector is calculated as n = (A, B, C) = (Pb - Pa) × (Pc - Pa), where × denotes the cross product; substituting the normal vector and any one of the points into the plane equation Ax + By + Cz + D = 0 gives the equation coefficients of the plane corresponding to each point cloud cluster.
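For illustration, the three-point plane fit reads as follows in Python/NumPy; this is a sketch of a single hypothesis, whereas a robust fit would repeat it over many random triples and keep the plane with the most inliers.

```python
import numpy as np

def plane_from_points(Pa, Pb, Pc):
    """Plane parameters (A, B, C, D) through three non-collinear points:
    normal (A, B, C) = (Pb - Pa) x (Pc - Pa), then D from A*x + B*y + C*z + D = 0."""
    Pa, Pb, Pc = (np.asarray(p, float) for p in (Pa, Pb, Pc))
    n = np.cross(Pb - Pa, Pc - Pa)
    n = n / np.linalg.norm(n)          # unit normal
    D = -float(n @ Pa)                 # any of the three points lies on the plane
    return n[0], n[1], n[2], D
```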
S43, acquiring an included angle between each normal vector and the z axis, judging whether the included angle is larger than a set angle threshold, if not, keeping unchanged, and if so, removing all point clouds corresponding to the plane to obtain a material point cloud.
Specifically, but not exclusively, an angle threshold is set; the included angle between the normal vector of the plane equation corresponding to each point cloud cluster and the z axis is calculated and compared with the threshold. If the angle is not larger than the threshold, the plane corresponds to the material point cloud and is kept; if it is larger, the plane corresponds to the carriage side rail point cloud, and all points corresponding to that plane are removed, yielding the material point cloud.
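A sketch of the normal-angle filter in step S43; the 30-degree default threshold is an assumed example value, not from the patent.

```python
import numpy as np

def keep_material_clusters(clusters, normals, angle_threshold_deg=30.0):
    """Keep clusters whose plane normal is within the angle threshold of the
    z axis; clusters whose plane is closer to vertical (e.g. carriage side
    rails) are removed (sketch)."""
    z = np.array([0.0, 0.0, 1.0])
    kept = []
    for pts, n in zip(clusters, normals):
        n = np.asarray(n, float)
        cos_a = abs(n @ z) / np.linalg.norm(n)
        angle = np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0)))
        if angle <= angle_threshold_deg:   # "not larger than the threshold": keep
            kept.append(pts)
    return kept
```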
S5, acquiring a minimum external graph of each material point cloud so as to obtain bounding boxes of each material point cloud and coordinates of each bounding box according to the carriage depth graph;
Specifically, but not exclusively, the material point clouds are projected onto an image plane, and the minimum circumscribed figure of each material point cloud is obtained with a circumscribed-figure calculation method. The bounding box of each material is then obtained by combining the minimum circumscribed figure with the height information corresponding to each material point cloud in the carriage depth map, and the coordinates of each bounding box in the acquisition device coordinate system are obtained from the carriage depth map. The minimum circumscribed figure optionally takes any shape, such as a rectangle, circle, ellipse or diamond, set by a person skilled in the art according to the specific shapes of the transport device and the materials.
Preferably, the coordinates of the bounding box in the acquisition device coordinate system are optionally represented by coordinates of a center point or corner point.
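A sketch using a minimum-area rectangle as the circumscribed figure (one of the shapes mentioned above) together with the point cloud's height range; cv2.minAreaRect and its (center, size, angle) return layout follow the OpenCV convention, while the overall function is an illustrative assumption.

```python
import cv2
import numpy as np

def material_bounding_box(material_points):
    """Minimum-area rectangle of the material point cloud projected onto the
    XY plane, extruded by the cloud's height range (sketch)."""
    pts = np.asarray(material_points, np.float32)
    xy = np.ascontiguousarray(pts[:, :2])
    (cx, cy), (w, h), angle = cv2.minAreaRect(xy)   # 2D oriented rectangle
    z_min, z_max = float(pts[:, 2].min()), float(pts[:, 2].max())
    center = (cx, cy, 0.5 * (z_min + z_max))        # box centre in device frame
    size = (w, h, z_max - z_min)                    # box dimensions
    return center, size, angle
```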
And S6, converting the coordinates of each bounding box into a coordinate system of the tail end of the mechanical arm according to the conversion matrix to obtain the size and the coordinates of each bounding box in the coordinate system of the tail end of the mechanical arm.
Specifically, the coordinates of each bounding box are optionally converted into the coordinate system of the tail end of the mechanical arm according to the conversion matrix between the coordinate system of the acquisition device and the coordinate system of the tail end of the mechanical arm in the step S1, so that the size and the coordinates of each bounding box in the coordinate system of the tail end of the mechanical arm can be obtained.
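Converting the bounding-box centre with the calibrated rigid transform is then a single matrix-vector operation; box dimensions are lengths and are unchanged by a rigid transform. A sketch, with assumed argument names:

```python
import numpy as np

def box_center_to_arm_frame(center_device, Rct, tct):
    """Map a bounding-box centre from the acquisition-device coordinate
    system into the mechanical-arm end coordinate system (sketch)."""
    return np.asarray(Rct, float) @ np.asarray(center_device, float) + np.asarray(tct, float)
```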
This embodiment provides a carriage positioning and material detecting method. A conversion matrix between the acquisition device coordinate system and the mechanical arm end coordinate system is obtained; the mechanical arm is controlled to move in a set direction and collect carriage depth maps, from which the corner coordinates and edge contours of the carriage upper surface and bottom are obtained in the mechanical arm end coordinate system. Initial point cloud data are generated from the prior acquisition device intrinsics and the carriage depth map information and segmented according to the corner points of the carriage upper surface and bottom to obtain the carriage point cloud data, completing accurate positioning of the carriage. The carriage point cloud data are then segmented and plane-fitted to obtain a plurality of planes and corresponding normal vectors, the material point clouds are obtained from these normal vectors, the minimum circumscribed figure of each material point cloud is acquired, the bounding box of each material point cloud and its coordinates are obtained from the carriage depth map, and the bounding box coordinates are converted into the mechanical arm end coordinate system through the conversion matrix, giving the size and coordinates of each bounding box in that coordinate system. Because the carriage and the materials are located from depth data acquired on site rather than from fixed prior assumptions, the method adapts to the actual carriage position and loading state and improves detection accuracy and efficiency. The problems of low adaptability, low accuracy and low efficiency of carriage positioning and size detection in the prior art are thereby solved.
On the other hand, the invention also provides a computer storage medium which stores executable program codes for executing any carriage positioning and material detecting method.
On the other hand, the invention also provides a terminal device which comprises a memory and a processor, wherein the memory stores program codes which can be executed by the processor, and the program codes are used for executing any carriage positioning and material detecting method.
For example, the program code may be partitioned into one or more modules/units that are stored in the memory and executed by the processor to perform the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the program code in the terminal device.
The terminal equipment can be computing equipment such as a desktop computer, a notebook computer, a palm computer, a cloud server and the like. The terminal device may include, but is not limited to, a processor, a memory. Those skilled in the art will appreciate that the terminal devices may also include input-output devices, network access devices, buses, and the like.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be an internal storage unit of the terminal device, such as a hard disk or memory. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the terminal device. Further, the memory may include both an internal storage unit of the terminal device and an external storage device. The memory is used to store the program code and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The technical effects and advantages of the computer storage medium and the terminal device created based on the carriage positioning and material detecting method are not repeated here. The technical features of the above embodiments may be combined arbitrarily; for brevity, not all possible combinations are described, but as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the invention, which are all within the scope of the invention. Accordingly, the scope of protection of the present invention is to be determined by the appended claims.

Claims (10)

CN202411643532.7A | 2024-11-18 | Carriage positioning and material detecting method, medium and equipment | Pending | CN119600048A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411643532.7A (CN119600048A) | 2024-11-18 | 2024-11-18 | Carriage positioning and material detecting method, medium and equipment

Publications (1)

Publication Number | Publication Date
CN119600048A | 2025-03-11

Family

ID=94828200

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411643532.7A | Carriage positioning and material detecting method, medium and equipment (Pending, CN119600048A) | 2024-11-18 | 2024-11-18

Country Status (1)

Country | Link
CN (1) | CN119600048A (en)


Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
