CN114882112B - Pallet stacking method, device, computer equipment and computer readable storage medium - Google Patents


Info

Publication number
CN114882112B
CN114882112B (application CN202210517041.2A)
Authority
CN
China
Prior art keywords
reference object
positioning reference
image acquisition
acquisition device
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210517041.2A
Other languages
Chinese (zh)
Other versions
CN114882112A (en)
Inventor
王琛
李陆洋
方牧
鲁豫杰
杨秉川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionnav Robotics Shenzhen Co Ltd
Original Assignee
Visionnav Robotics Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionnav Robotics Shenzhen Co Ltd
Priority to CN202210517041.2A
Publication of CN114882112A
Application granted
Publication of CN114882112B
Status: Active
Anticipated expiration

Abstract


The present application relates to a pallet stacking method, device, computer equipment and storage medium. The method includes: when a handling device performs pallet stacking, capturing, with an image acquisition device, the image content of a first positioning reference object on the pallet to be stacked and of a second positioning reference object on the stacked pallet; determining, from the image content, the position of each positioning reference object relative to the image acquisition device; determining, from that relative position information, the pose difference between the pallet to be stacked and the stacked pallet; and controlling the handling device to perform the pallet stacking accordingly. By placing positioning reference objects on the pallets and analysing the image content, the poses of the first and second positioning reference objects relative to the image acquisition device are obtained, the pose difference between the pallet to be stacked and the stacked pallet is inferred from those relative poses, and the handling device is controlled to complete the stacking according to that pose difference. The present application can improve the accuracy of pallet stacking.

Description

Tray stacking method, apparatus, computer device, and computer-readable storage medium
Technical Field
The present application relates to the field of machine vision, and in particular, to a tray stacking method, apparatus, computer device, and computer readable storage medium.
Background
With the development of industrial technology, automatic tray stacking has gradually replaced manual stacking. Currently, a preset stacking position on the stacked tray is calculated in advance for the tray to be stacked, and a handling device stacks the tray according to that preset position. However, during stacking there may be a deviation between the actual position of the tray to be stacked and the preset stacking position, so the stacking accuracy may be low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a tray stacking method, apparatus, computer device, and computer-readable storage medium capable of improving the accuracy of tray stacking.
In a first aspect, the present application provides a tray stacking method. The method comprises the following steps:
Acquiring a target image by an image acquisition device, wherein the target image comprises first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
determining relative position information of the first positioning reference object relative to the image acquisition device according to the first image content, and determining relative position information of the second positioning reference object relative to the image acquisition device according to the second image content;
determining a pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object relative to the image acquisition device;
and controlling the handling device to perform tray stacking processing on the tray to be stacked according to the pose difference.
In some embodiments, the determining the relative position information of the first positioning reference object compared to the image capturing device according to the first image content, and the determining the relative position information of the second positioning reference object compared to the image capturing device according to the second image content includes:
Performing feature detection on the first image content and the second image content respectively to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively;
and, for each of the first positioning reference object and the second positioning reference object, obtaining the relative position information of that positioning reference object relative to the image acquisition device according to the internal parameters of the image acquisition device and the feature set corresponding to that positioning reference object.
In some embodiments, the feature detecting the first image content and the second image content to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively includes:
contour extraction is carried out on the first image content and the second image content respectively, so that a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object are obtained;
And respectively extracting the characteristics of the first image contour and the second image contour to obtain characteristic sets corresponding to the first positioning reference object and the second positioning reference object.
In some embodiments, the relative position information includes an extrinsic matrix of the positioning reference object relative to the image acquisition device, and the feature set corresponding to the positioning reference object includes a plurality of feature points of the positioning reference object;
the obtaining, for each of the first positioning reference object and the second positioning reference object, relative position information of the positioning reference object compared with the image acquisition device according to an image acquisition device internal parameter of the image acquisition device and a feature set corresponding to the positioning reference object, includes:
for each positioning reference object in the first positioning reference object and the second positioning reference object, calculating an extrinsic matrix of the positioning reference object relative to the image acquisition device according to the pixel coordinates, in the pixel coordinate system, of each feature point corresponding to the positioning reference object, the coordinates of each feature point in the three-dimensional coordinate system constructed based on the positioning reference object, and the internal parameters of the image acquisition device;
The three-dimensional coordinate system constructed based on the positioning reference object is constructed by taking a plane where the positioning reference object is located as a coordinate plane and one of a plurality of characteristic points of the positioning reference object as a coordinate origin.
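The extrinsic computation described above is a perspective-n-point (PnP) problem: given the feature points' pixel coordinates, their 3D coordinates in the marker-based coordinate system (marker plane as z = 0, one corner as the origin), and the camera intrinsics, solve for rotation and translation. As an illustrative sketch (all numeric values are assumed, not taken from the patent), the following shows the pinhole projection model that a PnP solver inverts:

```python
import numpy as np

# Camera intrinsics (illustrative values): focal lengths and principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Marker-frame 3D coordinates of the four boundary corners of a 10 cm
# square marker: the marker plane is z = 0 and one corner is the origin,
# matching the coordinate-system construction in the description.
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.1, 0.0, 0.0],
                          [0.1, 0.1, 0.0],
                          [0.0, 0.1, 0.0]])

# An assumed extrinsic (rotation R, translation t) of the marker relative
# to the camera: marker 1 m in front of the camera, no rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

def project(K, R, t, X):
    """Pinhole projection: x_pix ~ K (R X + t)."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

pixels = np.array([project(K, R, t, X) for X in object_points])
# A PnP solver performs the inverse mapping: given `pixels`,
# `object_points` and K, it recovers R and t (the extrinsic matrix).
```

In practice the solve itself would typically be delegated to a library routine such as OpenCV's `cv2.solvePnP`, which takes exactly these three inputs.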
In some embodiments, the relative position information of the first positioning reference object relative to the image acquisition device comprises a first extrinsic matrix, the relative position information of the second positioning reference object relative to the image acquisition device comprises a second extrinsic matrix, and the determining the pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object relative to the image acquisition device comprises the following steps:
Multiplying the inverse matrix of the second extrinsic matrix by the first extrinsic matrix to obtain a third extrinsic matrix of the first positioning reference object relative to the second positioning reference object;
and obtaining the pose difference according to the third extrinsic matrix.
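The two matrix operations above can be sketched directly. The following is a minimal illustration (the marker poses are assumed values, not from the patent) of composing the third extrinsic matrix from the first two, using the analytic inverse of a rigid transform:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous extrinsic matrix from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_T(T):
    """Analytic inverse of a rigid transform: [R^T | -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    return make_T(R.T, -R.T @ t)

# Illustrative extrinsics: both markers face the camera, the first marker
# (tray to be stacked) offset 5 cm laterally from the second (stacked tray).
T1 = make_T(np.eye(3), np.array([0.05, 0.0, 1.0]))   # first marker -> camera
T2 = make_T(np.eye(3), np.array([0.00, 0.0, 1.0]))   # second marker -> camera

# Third extrinsic matrix: pose of the first marker relative to the second,
# obtained by multiplying the inverse of the second extrinsic by the first.
T3 = invert_T(T2) @ T1

# The translation part of T3 expresses the pose difference between the
# trays: here a 5 cm lateral offset the handling device must correct.
offset = T3[:3, 3]
```

Because both extrinsics share the camera frame, the camera's own pose cancels out in the product, which is why the pose difference between the trays can be recovered without knowing the camera's absolute position.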
In some embodiments, the image acquisition device is fixed on the handling device, the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked, the second positioning reference object is a second two-dimensional code arranged on the stacked tray, the plurality of feature points of the first two-dimensional code comprise at least three boundary corner points of the first two-dimensional code, and the plurality of feature points of the second two-dimensional code comprise at least three boundary corner points of the second two-dimensional code.
In some embodiments, the controlling the handling device to perform tray stacking processing on the trays to be stacked according to the pose difference includes:
obtaining the relative pose of the image acquisition equipment and the stacked tray according to the pose difference;
and if the relative pose is not in the preset pose range, adjusting the current pose of the carrying equipment according to the relative pose, and returning to execute the step of acquiring the target image through the image acquisition equipment after adjusting the pose of the carrying equipment.
In a second aspect, the application further provides a tray stacking device. The device comprises:
The image acquisition module is used for acquiring a target image through the image acquisition device, wherein the target image comprises first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
an information acquisition module, configured to determine, according to the first image content, relative position information of the first positioning reference object compared with the image acquisition device, and determine, according to the second image content, relative position information of the second positioning reference object compared with the image acquisition device;
the pose calculation module is used for determining the pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object relative to the image acquisition device;
and the tray stacking module is used for controlling the carrying equipment to carry out tray stacking processing on the trays to be stacked according to the pose difference.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor implementing the steps of the above-described tray stacking method when the processor executes the computer program.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps in the above-described tray stacking method.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of the above-described tray stacking method.
With the above tray stacking method, apparatus, computer device, and computer-readable storage medium, when the handling device stacks the tray to be stacked, a target image is acquired by the image acquisition device, the target image comprising first image content of a first positioning reference object on the tray to be stacked and second image content of a second positioning reference object on the stacked tray. First relative position information between the first positioning reference object and the image acquisition device is determined from the first image content, second relative position information between the second positioning reference object and the image acquisition device is determined from the second image content, and the pose difference between the tray to be stacked and the stacked tray is determined from the first and second relative position information. By arranging positioning reference objects on the trays and collecting and analysing the image content of the first positioning reference object on the tray to be stacked and the second positioning reference object on the stacked tray, the poses of the two positioning reference objects relative to the image acquisition device can be obtained, and the deviation between the poses of the tray to be stacked and the stacked tray can be inferred from these relative poses, so that the handling device can be further controlled to accurately complete the tray stacking according to the pose difference.
Drawings
FIG. 1 is a schematic view of an application environment of a tray stacking method in some embodiments;
FIG. 2 is a flow chart of a tray stacking method in some embodiments;
FIG. 3 is a flowchart illustrating a step of calculating relative position information of a first positioning reference object and a second positioning reference object, respectively, compared to an image capturing device according to some embodiments;
FIG. 4 is a flowchart illustrating a feature detection step performed on a first image content and a second image content in some embodiments;
FIG. 5 is a flow chart of a step of calculating the pose difference between a tray to be stacked and a stacked tray in some embodiments;
FIG. 6 is a schematic flow chart of a tray stacking process step of controlling a handling device to perform tray stacking according to a pose difference in some embodiments;
FIG. 7 is a schematic diagram of a structure of a stacked tray in some embodiments;
FIG. 8 is a flow chart of a method of stacking trays in other embodiments;
FIG. 9 is a block diagram of a tray stacking apparatus in some embodiments;
FIG. 10 is an internal block diagram of a computer device in some embodiments.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The tray stacking method provided by the embodiments of the present application can be applied to the application environment shown in fig. 1. In the embodiments of the present application, a tray to be stacked (not shown in the drawings) is placed on the handling device 102, and the handling device 102 is responsible for transporting the tray to be stacked to the position of the stacked tray 104 and performing tray stacking processing on it, wherein a first positioning reference object for positioning is disposed on a surface of the tray to be stacked, and a second positioning reference object 106 for positioning is disposed on a surface of the stacked tray 104.
When the handling apparatus 102 needs to stack the tray to be stacked onto the stacked tray 104, the tray to be stacked and the stacked tray 104 may be photographed by the image capturing apparatus to form a target image including the first image content of the first positioning reference object on the tray to be stacked and the second image content of the second positioning reference object 106 on the stacked tray 104.
After the image acquisition device acquires the target image, the relative position information of the first positioning reference object relative to the image acquisition device can be determined according to the first image content, and the relative position information of the second positioning reference object 106 relative to the image acquisition device can be determined according to the second image content. The pose difference between the tray to be stacked and the stacked tray 104 is then determined according to the relative position information of the first positioning reference object and the second positioning reference object 106 relative to the image acquisition device, and the handling device 102 is controlled to perform tray stacking processing on the tray to be stacked according to the pose difference.
In some embodiments, as shown in fig. 2, the method may be applied to a handling device, and may also be applied to other computer devices that are communicatively connected to the handling device, where embodiments of the present application are not limited in particular.
Taking the carrying device in fig. 1 as an example, the method comprises the following steps:
Step 202, when the handling device performs stacking processing on the trays to be stacked, the image acquisition device acquires the target image.
The handling device referred to herein is a transport device for carrying trays to be stacked; it may be, but is not limited to, an automated guided vehicle (Automated Guided Vehicle, AGV) or a forklift. It should be noted that the handling apparatus may be responsible only for transporting the trays to be stacked, that is, after the handling apparatus has carried a tray to a specified position such as the vicinity of the stacked tray, a person performs the stacking. Alternatively, the handling equipment may both transport the tray to be stacked and perform the tray stacking itself, that is, after carrying the tray to the vicinity of the stacked tray, the handling equipment automatically performs the tray stacking processing.
A tray, also known as a pallet, is a platform for transporting goods in unit loads. The tray to be stacked is the tray on which stacking processing is to be performed. Correspondingly, the stacked tray indicates the position where the tray to be stacked needs to be placed; for example, the tray to be stacked is stacked on top of the stacked tray. In the embodiments of the present application, a positioning reference object is arranged on each tray (both the tray to be stacked and the stacked tray), and the positioning reference object may specifically be a pattern used for locating the tray. Taking the tray to be stacked and the stacked tray as an example, the present application places a first positioning reference object on the tray to be stacked and a second positioning reference object on the stacked tray, wherein the patterns of the first and second positioning reference objects may be the same or different; the present application is not particularly limited in this respect.
The image capturing device refers to a device having a photographing function, and may be, but not limited to, various cameras, mobile devices, cameras, video cameras, and scanners. In the embodiment of the application, the image acquisition device may be provided separately from the handling device, or the image acquisition device may be a component of the handling device, that is, the image acquisition device may be fixed on the handling device, and the positional relationship between the image acquisition device and the handling device is not limited.
Specifically, when the handling device needs to perform stacking processing on the tray to be stacked, the angle of the image acquisition device is adjusted so that the view range of the image acquisition device completely covers the first positioning reference object and the second positioning reference object, and the target image is then acquired. The acquired target image comprises the first image content of the first positioning reference object on the tray to be stacked and the second image content of the second positioning reference object on the stacked tray, so the relative position information of both positioning reference objects with respect to the image acquisition device can be calculated from a single target image.
Step 204, determining relative position information of the first positioning reference object compared with the image acquisition device according to the first image content, and determining relative position information of the second positioning reference object compared with the image acquisition device according to the second image content.
The relative position information comprises relative position information of the first positioning reference object compared with the image acquisition device, namely the relative relation between the position of the first positioning reference object and the position of the image acquisition device. The relative position information further comprises relative position information of the second positioning reference object compared to the image acquisition device, i.e. a relative relation between the position of the second positioning reference object and the position of the image acquisition device.
Step 206, determining the pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and the second positioning reference object relative to the image acquisition device.
It will be appreciated that during stacking confirmation, the first positioning reference object on the tray to be stacked, the second positioning reference object on the stacked tray, and the image capturing apparatus may all be fixed. Therefore, from the relative position information of the first and second positioning reference objects with respect to the image capturing apparatus, the pose of each positioning reference object relative to the image capturing apparatus can be obtained, and from these two relative poses the deviation between the poses of the tray to be stacked and the stacked tray, that is, the pose difference between them, can be inferred.
And step 208, controlling the carrying equipment to carry out tray stacking processing on the trays to be stacked according to the pose difference.
Specifically, the relative pose of the image acquisition device and the stacked tray can be calculated from the pose difference. After the relative pose is calculated, it must be determined whether it lies within a preset pose range. If not, the pose correction required of the handling device is determined from the relative pose, and after the pose of the handling device has been adjusted, the step of acquiring the target image through the image acquisition device is executed again. If the relative pose is within the preset pose range, the handling device is directly controlled to stack the tray to be stacked on the stacked tray.
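The adjust-and-reacquire loop described above can be sketched schematically. In the following simplified simulation, the pose difference is reduced to a single lateral offset, `measure_offset` and `move` are hypothetical stand-ins for the image-based measurement and the handling-device motion command, and the tolerance and gain values are assumptions for illustration only:

```python
TOLERANCE = 0.005   # preset pose range: within 5 mm, stacking may proceed
GAIN = 0.8          # fraction of the measured error corrected per step

actual_offset = 0.05  # simulated initial lateral error (metres)

def measure_offset():
    # Stands in for: acquire the target image, solve both extrinsics,
    # and compute the pose difference from them.
    return actual_offset

def move(correction):
    # Stands in for: command the handling device to adjust its pose
    # (here modelled as a direct, partial correction of the error).
    global actual_offset
    actual_offset -= correction

steps = 0
while abs(measure_offset()) > TOLERANCE:
    # Pose outside the preset range: adjust the handling device,
    # then return to the image-acquisition step and re-measure.
    move(GAIN * measure_offset())
    steps += 1

# Loop exit: relative pose within the preset range, so the handling
# device can be controlled to stack the tray directly.
```

The key design point mirrored here is that the measurement is repeated after every adjustment rather than trusting a single open-loop correction, which is what lets the method absorb actuation error.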
In the above tray stacking method, when the handling device performs stacking processing on the tray to be stacked, a target image is collected by the image collecting device, the target image comprising first image content of a first positioning reference object on the tray to be stacked and second image content of a second positioning reference object on the stacked tray. First relative position information between the first positioning reference object and the image collecting device is determined according to the first image content, second relative position information between the second positioning reference object and the image collecting device is determined according to the second image content, and the pose difference between the tray to be stacked and the stacked tray is determined according to the first and second relative position information. By arranging positioning reference objects on the trays, and collecting and analysing the image content of the first positioning reference object on the tray to be stacked and the second positioning reference object on the stacked tray, the poses of the two positioning reference objects relative to the image collecting device can be obtained, and the deviation between the poses of the tray to be stacked and the stacked tray can be inferred from these relative poses, so that the handling device can be further controlled to accurately complete the tray stacking according to the pose difference.
In some embodiments, as shown in fig. 3, step 204 specifically includes, but is not limited to, including:
and step 302, performing feature detection on the first image content and the second image content respectively to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively.
Here, the feature set corresponding to the first positioning reference object refers to the pixel coordinates, in the pixel coordinate system, of the plurality of feature points on the first positioning reference object. Likewise, the feature set corresponding to the second positioning reference object refers to the pixel coordinates of the plurality of feature points on the second positioning reference object in the pixel coordinate system.
Specifically, it is necessary to determine position information of a plurality of feature points of the first positioning reference object in the first image content, and determine position information of a plurality of feature points of the second positioning reference object in the second image content, and form feature sets corresponding to the first positioning reference object and the second positioning reference object respectively according to the above position information. The plurality of feature points of the first positioning reference object and the second positioning reference object are a plurality of positioning points set on the first positioning reference object or the second positioning reference object by a user according to the outlines and actual requirements of the first positioning reference object and the second positioning reference object.
In practical applications, the anchor point includes, but is not limited to, at least one of a center point and a boundary corner point of the first anchor reference object. Likewise, the anchor points further include, but are not limited to, at least one of a center point and a boundary corner point of the second positioning reference object.
Step 304, for each of the first positioning reference object and the second positioning reference object, obtaining the relative position information of the positioning reference object relative to the image acquisition device according to the feature set corresponding to the positioning reference object and the internal parameters of the image acquisition device.
Here, the positioning reference object refers to either the first or the second positioning reference object, and the internal parameters of the image acquisition device are the device parameters used when the image acquisition device captures images. Obtaining the relative position information of the positioning reference object with respect to the image acquisition device according to the internal parameters and the corresponding feature set means: obtaining the relative position information of the first positioning reference object from the internal parameters and the feature set corresponding to the first positioning reference object, and obtaining the relative position information of the second positioning reference object from the internal parameters and the feature set corresponding to the second positioning reference object.
Specifically, the relative relationship between the position of the first positioning reference object and the position of the image acquisition device can be determined by the feature set corresponding to the first positioning reference object and the image acquisition device internal parameter of the image acquisition device, and the relative relationship between the position of the second positioning reference object and the position of the image acquisition device can be determined by the feature set corresponding to the second positioning reference object and the image acquisition device internal parameter of the image acquisition device.
In some embodiments, the image capture device is a camera, and the internal parameters of the image capture device are camera parameters associated with image capture, such as the focal length of the camera.
In some embodiments, as shown in fig. 4, step 302 specifically includes, but is not limited to, including:
and step 402, performing contour extraction on the first image content and the second image content respectively to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object.
Specifically, the outline of the first positioning reference object displayed in the first image content is extracted to obtain a first image outline, and the outline of the second positioning reference object displayed in the second image content is extracted to obtain a second image outline.
Step 404: perform feature extraction on the first image contour and the second image contour respectively to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object respectively.
Specifically, position information of a plurality of feature points on a first positioning reference object in a pixel coordinate system is extracted from a first image contour, position information of a plurality of feature points on a second positioning reference object in the pixel coordinate system is extracted from a second image contour, and feature sets corresponding to the first positioning reference object and the second positioning reference object are formed according to the position information.
In some embodiments, prior to step 402, the method further comprises a step of preprocessing the target image, which specifically includes:
Convert the target image into a grayscale image; apply the morphological opening operation — an erosion followed by a dilation — to the grayscale image to obtain a processed grayscale image; and binarize the processed grayscale image to obtain the preprocessed target image. The erosion operation removes burrs from the image, and the dilation operation fills in missing parts of the first image content and the second image content. Preprocessing the image improves the accuracy of the subsequent feature detection on the target image, so that the calculated pose difference is more accurate, thereby improving the accuracy of tray stacking.
In some embodiments, the relative position information includes an extrinsic matrix of the positioning reference object relative to the image acquisition device, and the feature set corresponding to the positioning reference object includes a plurality of feature points of the positioning reference object.
Wherein the positioning reference object refers to the first positioning reference object and the second positioning reference object; the relative position information includes an extrinsic matrix of the first positioning reference object relative to the image acquisition device and an extrinsic matrix of the second positioning reference object relative to the image acquisition device; the feature set corresponding to the first positioning reference object includes a plurality of feature points of the first positioning reference object in a pixel coordinate system; and the feature set corresponding to the second positioning reference object includes a plurality of feature points of the second positioning reference object in the pixel coordinate system. Step 304 specifically includes, but is not limited to, the following step:
For each positioning reference object among the first positioning reference object and the second positioning reference object, calculate an extrinsic matrix of that positioning reference object relative to the image acquisition device according to the pixel coordinates of each feature point corresponding to the positioning reference object in the pixel coordinate system, the coordinates of each feature point in the three-dimensional coordinate system constructed based on that positioning reference object, and the internal parameters of the image acquisition device.
The extrinsic matrix of a positioning reference object relative to the image acquisition device describes the conversion from the three-dimensional coordinate system constructed based on the first positioning reference object to the coordinate system constructed based on the image acquisition device, or the conversion from the three-dimensional coordinate system constructed based on the second positioning reference object to the coordinate system constructed based on the image acquisition device. For convenience of description, the present application refers to the coordinate system constructed based on the image acquisition device itself as the reference coordinate system.
Specifically, according to the pixel coordinates of each feature point corresponding to the first positioning reference object in the pixel coordinate system, the coordinates of each feature point in the three-dimensional coordinate system constructed based on the first positioning reference object, and the internal parameters of the image acquisition device, an external parameter matrix of the first positioning reference object compared with the image acquisition device is calculated. And calculating to obtain an external reference matrix of the second positioning reference object compared with the image acquisition device according to the pixel coordinates of each feature point corresponding to the second positioning reference object under the pixel coordinate system, the coordinates of each feature point under the three-dimensional coordinate system constructed based on the second positioning reference object, and the internal reference of the image acquisition device.
Specifically, the extrinsic matrix includes a rotation matrix and a translation matrix, which together describe how each feature point is converted from the three-dimensional coordinate system to the reference coordinate system: the rotation matrix describes the directions of the coordinate axes of the three-dimensional coordinate system relative to the corresponding axes of the reference coordinate system, and the translation matrix describes the position of the origin of the three-dimensional coordinate system in the reference coordinate system.
The three-dimensional coordinate system constructed based on a positioning reference object is a three-dimensional coordinate system constructed by taking the plane in which the positioning reference object lies as a coordinate plane and taking one of the plurality of feature points of the positioning reference object as the coordinate origin. Taking the first positioning reference object as an example, suppose 4 feature points are arranged on it: feature point A, feature point B, feature point C and feature point D. The three-dimensional coordinate system based on the first positioning reference object is then constructed by taking the plane in which the first positioning reference object lies as a coordinate plane, and one of the feature points, say feature point A, as the coordinate origin. If the plane in which the first positioning reference object lies is vertical, that coordinate plane can be understood as the ZY plane, and feature point A can be the feature point at the left vertex of the first positioning reference object (such as a two-dimensional code); reference may be made to FIG. 1. It should be noted that the process of constructing the three-dimensional coordinate system based on the second positioning reference object is the same as that for the first positioning reference object and is not described in detail here.
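For illustration, the corner coordinates in such a code-based three-dimensional coordinate system can be written down directly. Here the code plane is taken as the coordinate plane with ZW = 0 for every corner, matching the planar-object simplification used later when solving the extrinsics; the corner ordering, side-length parameter, and axis choice are assumptions:

```python
import numpy as np

def qr_object_points(side: float) -> np.ndarray:
    """3D corner coordinates of a square code of the given side length,
    with the top-left corner A as the origin and the code plane as Z = 0.
    Corner order (assumed): A top-left, B top-right, C bottom-right,
    D bottom-left."""
    return np.array([
        [0.0,  0.0,  0.0],   # A: coordinate origin
        [side, 0.0,  0.0],   # B
        [side, side, 0.0],   # C
        [0.0,  side, 0.0],   # D
    ], dtype=np.float64)
```

These object points form the 3D half of the 3D-to-2D correspondences consumed by the pose solving algorithm.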
In some embodiments, the internal parameters of the image acquisition device include at least one of the focal length of the image acquisition device and the principal-point pixel coordinates. The principal-point pixel coordinates relate a coordinate point in the reference coordinate system of the image acquisition device to the pixel coordinate system of the target image; in the embodiments of the present application, such a coordinate point is a feature point in the first image content or a feature point in the second image content.
When calculating the extrinsic matrices corresponding to the first positioning reference object and the second positioning reference object, distortion parameters may be obtained in addition to the internal parameters of the image acquisition device, and the corresponding extrinsic matrices may be calculated in combination with the distortion parameters. It can be understood that, owing to the limits of lens manufacturing precision, images captured by the image acquisition device may exhibit different degrees of distortion; distortion parameters can be formed from these distortion conditions, so applying the distortion parameters when calculating the extrinsic matrix improves accuracy.
In practical application, a pose solving algorithm is invoked to automatically solve the extrinsic matrix of the positioning reference object relative to the image acquisition device from the pixel coordinates of each feature point corresponding to the positioning reference object in the pixel coordinate system, the coordinates of each feature point in the three-dimensional coordinate system constructed based on the positioning reference object, and the internal parameters of the image acquisition device.
As the specific pose solving algorithm, the solvePnP algorithm of OpenCV or the solvePnPRansac algorithm of OpenCV may be selected. OpenCV is a cross-platform computer vision and machine learning software library; the solvePnP algorithm is a monocular relative pose estimation algorithm, and the solvePnPRansac algorithm combines solvePnP with random sample consensus (RANSAC) sampling.
Taking the solvePnP algorithm as an example, the extrinsic matrix of a positioning reference object relative to the image acquisition device is calculated according to the following formulas (1) to (4):
Zc*[x, y, 1]^T = [[fx, 0, u0], [0, fy, v0], [0, 0, 1]] * [[R11, R12, R13, T1], [R21, R22, R23, T2], [R31, R32, R33, T3]] * [XW, YW, ZW, 1]^T (1)
Wherein Zc is the unknown depth of a feature point (a feature point in the first image content or a feature point in the second image content) in the reference coordinate system of the image acquisition device. Zc may also be understood as the scale factor (coefficient) of the homogeneous coordinates of the target point in the reference coordinate system, which defaults to 1; this is related to the pinhole imaging principle — generally, the smaller Zc is, the closer the point is to the image acquisition device and the larger it appears in the image. fx, fy, u0 and v0 are all internal parameters of the image acquisition device: fx and fy are the focal lengths of the image acquisition device, and u0 and v0 are the principal-point pixel coordinates, i.e. the pixel coordinates to which a coordinate point in the reference coordinate system of the image acquisition device is converted in the pixel coordinate system of the target image. XW, YW and ZW are the 3D (three-dimensional) coordinates of the feature point in the three-dimensional coordinate system established based on the positioning reference object; since this coordinate system serves as the world coordinate system of the pose solving algorithm, the world coordinate system here may be understood as the three-dimensional coordinate system established based on the positioning reference object.
Since the positioning reference objects in the embodiment of the present application are planar objects, ZW = 0. x and y are the 2D (two-dimensional) coordinates of the feature points corresponding to the positioning reference object in the pixel coordinate system. It should be noted that XW, YW, x and y form one set of 3D-to-2D coordinates; R11, R12, R13, R21, R22, R23, R31, R32 and R33 are the unknowns for constructing the rotation matrix, and T1, T2 and T3 are the unknowns for constructing the translation matrix, where the rotation matrix and the translation matrix are combined to form the extrinsic matrix.
Expanding formula (1) yields the following formulas (2) to (4):
Zc*x=XW*(fx*R11+u0*R31)+YW*(fx*R12+u0*R32)+ZW*(fx*R13+u0*R33)+fx*T1+u0*T3(2)
Zc*y=XW*(fy*R21+v0*R31)+YW*(fy*R22+v0*R32)+ZW*(fy*R23+v0*R33)+fy*T2+v0*T3(3)
Zc=XW*R31+YW*R32+ZW*R33+T3 (4)
The unknowns for constructing the rotation matrix and the translation matrix are obtained from formulas (2) to (4). Since the rotation matrix is an orthogonal matrix, it contributes only 3 independent unknowns; adding the 3 unknowns for constructing the translation matrix gives 6 unknowns in total. Each set of 3D-to-2D coordinates XW, YW, x and y determines two equations, so in the embodiment of the present application at least three sets of coordinates — i.e. the 3D-to-2D coordinates of at least three feature points — are needed to solve the six unknowns.
After solving the unknowns for constructing the rotation matrix and the translation matrix, the extrinsic matrix (R, T) of one of the positioning reference objects relative to the image acquisition device is obtained, that is, the rotation-translation matrix from the three-dimensional coordinate system established based on that positioning reference object to the reference coordinate system, where R is the rotation matrix and T is the translation matrix.
In some embodiments, the relative position information of the first positioning reference object relative to the image acquisition device comprises a first extrinsic matrix, and the relative position information of the second positioning reference object relative to the image acquisition device comprises a second extrinsic matrix. As shown in FIG. 5, step 206 specifically includes, but is not limited to, the following steps:
step 502, multiplying the inverse of the second extrinsic matrix by the first extrinsic matrix to obtain a third extrinsic matrix of the first positioning reference object relative to the second positioning reference object.
Specifically, denote the first extrinsic matrix as M1 and the second extrinsic matrix as M2; multiplying the inverse of M2 by M1 yields the extrinsic matrix of the three-dimensional coordinate system of the first positioning reference object relative to the three-dimensional coordinate system of the second positioning reference object, that is, the rotation-translation offset matrix M3.
Step 504: obtain the pose difference according to the third extrinsic matrix.
Specifically, the pose difference between the first positioning reference object and the second positioning reference object — including the rotation amount and the translation amount — is obtained from the matrix parameters in the third extrinsic matrix M3.
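Steps 502 and 504 amount to a single matrix product; a sketch with NumPy, assuming M1 and M2 are packed as 4×4 homogeneous extrinsic matrices as above:

```python
import numpy as np

def relative_extrinsics(m1: np.ndarray, m2: np.ndarray) -> np.ndarray:
    """Step 502: M3 = inv(M2) @ M1, the pose of the first positioning
    reference object expressed in the frame of the second one."""
    return np.linalg.inv(m2) @ m1
```

The rotation amount of step 504 is then read from M3[:3, :3] and the translation amount from M3[:3, 3].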
In some embodiments, as shown in fig. 6, step 208 specifically includes, but is not limited to, including:
Step 602: obtain the relative pose of the image acquisition device and the stacked tray according to the pose difference.
The relative pose of the image acquisition device and the stacked tray comprises a pitch angle and a translation amount of the image acquisition device relative to the stacked tray. A rotation vector and a translation vector can be obtained by the pose solving algorithm; after the rotation vector is obtained, it is converted into a rotation matrix through the Rodrigues transform and then into an Euler angle, namely the pitch angle of the image acquisition device relative to the stacked tray, while the translation vector obtained by the pose solving algorithm is the translation amount.
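A sketch of this conversion, with Rodrigues' formula written out explicitly; the ZYX Euler convention used to extract the pitch angle is an assumption, since the description does not fix a convention:

```python
import numpy as np

def rodrigues(rvec: np.ndarray) -> np.ndarray:
    """Rodrigues transform: rotation vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                           # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])           # skew-symmetric cross matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def pitch_deg(R: np.ndarray) -> float:
    """Pitch angle (rotation about Y) under a ZYX Euler decomposition."""
    return float(np.degrees(np.arcsin(-R[2, 0])))
```

For a rotation vector of 0.1 rad about the Y axis, `pitch_deg(rodrigues(...))` recovers roughly 5.73 degrees.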
Step 604, if the relative pose is not within the preset pose range, adjusting the current pose of the carrying device according to the relative pose, and after adjusting the pose of the carrying device, returning to execute the step of collecting the target image through the image collecting device.
Specifically, after the relative pose of the image acquisition device and the stacked tray is calculated, it must be determined whether the relative pose is within a preset pose range. If the relative pose is not within the preset pose range, the pose correction required of the handling device is determined according to the relative pose, and after the pose of the handling device is adjusted, the step of acquiring the target image through the image acquisition device is performed again.
If the relative pose calculated in step 602 is within the preset pose range, step 606 is performed: the handling device is directly controlled to stack the tray to be stacked onto the stacked tray.
In some embodiments, the relative pose of the image acquisition device and the stacked tray includes a pitch angle and a translation amount of the image acquisition device relative to the stacked tray, and the preset pose range includes a pitch angle range and a translation amount range. After the pitch angle and the translation amount of the image acquisition device relative to the stacked tray are calculated, it is judged whether the pitch angle and the translation amount simultaneously satisfy the pitch angle range and the translation amount range of the preset pose range. If not, the angle and distance by which the handling device needs to be adjusted are determined from the relative pose — i.e. the pitch angle and translation amount of the image acquisition device relative to the stacked tray — the current pose of the handling device is adjusted, and after the adjustment, the step of acquiring the target image through the image acquisition device is performed again.
If the pitch angle and the translation quantity simultaneously meet the pitch angle range and the translation quantity range in the preset pose range, directly stacking the trays to be stacked on the stacked trays according to the current pose of the carrying equipment without carrying out pose adjustment operation.
In some embodiments, the amount of translation includes an amount of translation in the X-axis direction and an amount of translation in the Y-axis direction. In practical application, a person skilled in the art can set the translational amount range of the X-axis direction to-1.5 cm to +1.5 cm, the translational amount range of the Y-axis direction to-1.5 cm to +1.5 cm, and the pitch angle range to-1.5 degrees to +1.5 degrees according to practical requirements.
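The threshold check of steps 604 and 606, using the example ranges above (±1.5 cm in the X and Y axis directions, ±1.5 degrees of pitch), might look like the following sketch; the function name and units are illustrative:

```python
def within_pose_range(pitch_degrees: float, tx_cm: float, ty_cm: float,
                      max_pitch: float = 1.5,
                      max_tx: float = 1.5,
                      max_ty: float = 1.5) -> bool:
    """True if pitch angle and X/Y translation amounts all fall inside
    the preset pose range, so stacking can proceed directly."""
    return (abs(pitch_degrees) <= max_pitch
            and abs(tx_cm) <= max_tx
            and abs(ty_cm) <= max_ty)
```

When this returns False, the handling device's pose is adjusted and a new target image is acquired; when it returns True, the tray is stacked directly.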
In some embodiments, the image acquisition device is fixed on the handling device, the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked, and the second positioning reference object is a second two-dimensional code arranged on the stacked tray. The plurality of feature points of the first two-dimensional code comprise at least three boundary corner points of the first two-dimensional code, and the plurality of feature points of the second two-dimensional code comprise at least three boundary corner points of the second two-dimensional code.
In some embodiments, 4 boundary corner points may be selected from the plurality of feature points of the first two-dimensional code, and 4 boundary corner points may be selected from the plurality of feature points of the second two-dimensional code. When the three-dimensional coordinate system is built based on the first two-dimensional code, one of 4 boundary corner points (for example, the boundary corner point at the upper left of the first two-dimensional code, namely, the left vertex of the first two-dimensional code) is selected as a coordinate origin, and a plane (for example, a ZY plane) where the first two-dimensional code is located is used as a coordinate plane to build the three-dimensional coordinate system based on the first two-dimensional code. It should be noted that, the process of constructing the three-dimensional coordinate system based on the second two-dimensional code is the same as the process of constructing the three-dimensional coordinate system based on the first two-dimensional code, and will not be described in detail here.
It should be noted that two-dimensional codes are disposed at the upper cover and the bottom support of the tray to be stacked, and two-dimensional codes are likewise disposed at the upper cover and the bottom support of the stacked tray. The first positioning reference object refers to the two-dimensional code disposed on the bottom support of the tray to be stacked, i.e. the first two-dimensional code, and the second positioning reference object refers to the two-dimensional code disposed on the upper cover of the stacked tray, i.e. the second two-dimensional code.
It should also be noted that the purpose of fixing the image acquisition device on the handling device is that, since the positional relationship between the two is fixed, they can be directly regarded as a whole; therefore, the pose differences between the image acquisition device and the first and second positioning reference objects, respectively, can be regarded as the pose differences between the handling device and the first and second positioning reference objects, respectively, which simplifies the calculation process.
It can be understood that using two-dimensional codes as the first positioning reference object and the second positioning reference object allows fast positioning: because two-dimensional codes share the same data units, the acquired data information requires no complex processing.
In some embodiments, two-dimensional codes are fixed on one surface, such as the front surface, of each tray (including the tray to be stacked and the stacked tray). Referring to fig. 7, for example, a tray includes an upper cover 702 and a bottom bracket 704, wherein a first two-dimensional code 706 is disposed on the front surface of the bottom bracket 704, a second two-dimensional code 708 is disposed on the front surface of the upper cover 702, and the second two-dimensional code 708 is located coaxially directly above the first two-dimensional code 706.
In addition, the width of the first two-dimensional code 706 matches the thickness of the bottom bracket 704 and the width of the second two-dimensional code 708 matches the thickness of the upper cover 702, so that positioning can be performed using the information of the two-dimensional codes. Setting the widths this way ensures that the entire content of the first two-dimensional code 706 lies on the front surface of the bottom bracket 704 and the entire content of the second two-dimensional code 708 lies on the front surface of the upper cover 702, so that the image acquisition device can capture the entire content of both codes. Accurate feature points can then be located on the basis of the complete two-dimensional code content, making the calculated pose difference more accurate and thereby improving the accuracy of tray stacking.
If the first positioning reference object and the second positioning reference object are both two-dimensional codes, the boundary corner points of the two-dimensional codes — that is, the four vertices of each code — may be adopted when performing feature detection on the first image content and the second image content. In practical application, if other positions were taken as feature points of a two-dimensional code, it would be difficult to match them to coordinates in the reference coordinate system of the image acquisition device, because the plane in which the image acquisition device lies may be inclined; taking the boundary corner points as feature points therefore improves the efficiency of pose solving.
When the image acquisition device is to be fixed on the handling device, it may be mounted on a side of the handling device — specifically, on the left side or the right side. After the preliminary installation, the angle of the image acquisition device needs to be adjusted to ensure that the two-dimensional code A of the tray to be stacked and the two-dimensional code B of the stacked tray are simultaneously within the viewing angle range of the image acquisition device.
In some embodiments, as shown in fig. 8, the tray stacking method of the present application may further include:
Two-dimensional codes are fixed at the bottom support and the upper cover of each tray, and a 2D camera is mounted on the side of the AGV, with its viewing angle adjusted to cover the two-dimensional code A on the bottom support of the tray to be stacked and the two-dimensional code B on the upper cover of the stacked tray. When the AGV forks the tray to be stacked above the stacked tray, a target image including the two-dimensional code A and the two-dimensional code B is acquired by the 2D camera. Four feature points of the two-dimensional code A and four feature points of the two-dimensional code B are then detected by a two-dimensional code detection program, and the extrinsic matrices of the two codes are solved respectively by solvePnP: the rotation-translation matrix M1 from the three-dimensional coordinate system established based on the two-dimensional code A to the reference coordinate system, and the rotation-translation matrix M2 from the three-dimensional coordinate system established based on the two-dimensional code B to the reference coordinate system. Multiplying the inverse of M2 by M1 yields the rotation-translation matrix M3 of the two-dimensional code A in the three-dimensional coordinate system established based on the two-dimensional code B. From M3, the pose of the tray to be stacked relative to the stacked tray is known, so stacking and confirmation are carried out accordingly, accurately completing the tray stacking work.
It should be understood that, although the steps in the flowcharts of the embodiments described above are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts of the above embodiments may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a tray stacking device for realizing the tray stacking method. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitation in the embodiments of the tray stacking device or devices provided below may be referred to the limitation of the tray stacking method hereinabove, and will not be repeated here.
In one embodiment, as shown in FIG. 9, a tray stacking apparatus is provided, comprising an image acquisition module 902, an information acquisition module 904, a pose calculation module 906, and a tray stacking module 908, wherein:
The image acquisition module 902 is configured to acquire a target image through the image acquisition device, where the target image includes a first image content of a first positioning reference object on a tray to be stacked and a second image content of a second positioning reference object on the tray to be stacked.
An information acquisition module 904 for determining relative position information of the first positioning reference object compared to the image capturing device according to the first image content, and determining relative position information of the second positioning reference object compared to the image capturing device according to the second image content.
And a pose calculating module 906, configured to determine a pose difference between the tray to be stacked and the tray to be stacked according to relative position information of the first positioning reference object and the second positioning reference object compared with the image capturing device, respectively.
And a tray stacking module 908, configured to control the handling device to perform tray stacking processing on the trays to be stacked according to the pose difference.
According to the tray stacking device, the positioning reference objects are arranged on the trays, the image contents of the first positioning reference object on the tray to be stacked and the second positioning reference object on the stacked tray are collected and analyzed, so that the relative postures of the first positioning reference object and the second positioning reference object respectively with the image acquisition equipment can be obtained, the deviation of the relative postures of the tray to be stacked and the stacked tray is reversely pushed according to the relative postures, and the carrying equipment can be further controlled according to the posture difference to accurately complete the tray stacking work.
In some embodiments, the information acquisition module includes a feature detection unit and a position acquisition unit. The feature detection unit is configured to perform feature detection on the first image content and the second image content respectively, to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object. The position acquisition unit is configured to, for each positioning reference object among the first positioning reference object and the second positioning reference object, obtain relative position information of that positioning reference object relative to the image acquisition device according to the internal parameters of the image acquisition device and the feature set corresponding to that positioning reference object.
In some embodiments, the feature detection unit is further configured to perform contour extraction on the first image content and the second image content, respectively, to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object, and perform feature extraction on the first image contour and the second image contour, respectively, to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object, respectively.
In some embodiments, the position acquisition unit is further configured to calculate, for each of the first positioning reference object and the second positioning reference object, the extrinsic matrix of the positioning reference object relative to the image acquisition device, according to the pixel coordinates of each feature point corresponding to the positioning reference object in a pixel coordinate system, the coordinates of each feature point in a three-dimensional coordinate system constructed based on the positioning reference object, and the intrinsic parameters of the image acquisition device. The three-dimensional coordinate system constructed based on the positioning reference object takes the plane in which the positioning reference object is located as a coordinate plane, and takes one of the feature points of the positioning reference object as the coordinate origin.
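Because the reference object is planar (z = 0 in its own frame), this kind of extrinsic recovery can be sketched as a homography decomposition; in practice a PnP solver such as OpenCV's `cv2.solvePnP` would typically be used instead. The following numpy sketch is a minimal illustration under the stated assumptions (at least four non-collinear coplanar feature points; the function name and values are illustrative):

```python
import numpy as np

def extrinsic_from_planar_points(obj_xy, pix_uv, K):
    """Recover the extrinsic [R | t] of a planar reference object.

    obj_xy: (N, 2) feature-point coordinates in the object's own frame
            (object plane is z = 0, one feature point is the origin).
    pix_uv: (N, 2) matching pixel coordinates, N >= 4.
    K:      (3, 3) camera intrinsic matrix.
    """
    # 1) Direct linear transform for the plane-to-image homography H.
    rows = []
    for (X, Y), (u, v) in zip(obj_xy, pix_uv):
        rows.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        rows.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)

    # 2) Remove the intrinsics: K^-1 H = lam * [r1 r2 t].
    B = np.linalg.solve(K, H)
    lam = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:          # keep the object in front of the camera (t_z > 0)
        lam = -lam
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]

    # 3) Re-orthonormalize the rotation via SVD.
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    u_, _, vt_ = np.linalg.svd(R)
    R = u_ @ vt_
    return R, t
```

The returned pair (R, t) is the extrinsic matrix of the positioning reference object relative to the image acquisition device.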
In some embodiments, the pose calculation module is further configured to multiply the inverse of the second extrinsic matrix by the first extrinsic matrix to obtain a third extrinsic matrix of the first positioning reference object relative to the second positioning reference object, and to obtain the pose difference according to the third extrinsic matrix.
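The third-extrinsic-matrix computation above is ordinary rigid-transform composition in homogeneous coordinates: if the first extrinsic matrix maps object-1 coordinates into camera coordinates and the second maps object-2 coordinates into camera coordinates, their combination expresses object 1 in object 2's frame. A small numpy sketch (helper names are illustrative):

```python
import numpy as np

def make_extrinsic(R, t):
    """Build a 4x4 homogeneous extrinsic matrix from rotation R and translation t."""
    E = np.eye(4)
    E[:3, :3] = R
    E[:3, 3] = t
    return E

def pose_difference(E1, E2):
    """Third extrinsic matrix: pose of object 1 expressed in object 2's frame.

    E1 maps object-1 coordinates to camera coordinates and E2 maps object-2
    coordinates to camera coordinates, so inv(E2) @ E1 maps object-1
    coordinates into object-2's frame.
    """
    return np.linalg.inv(E2) @ E1
```

For example, two markers seen by the same camera at translations (0.3, 0.1, 2.0) and (0.3, 0.1, 1.6) with identical orientation yield a third extrinsic matrix whose rotation is the identity and whose translation is (0, 0, 0.4).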
In some embodiments, the tray stacking module is further configured to obtain the relative pose of the image acquisition device and the stacked tray according to the pose difference; if the relative pose is not within a preset pose range, to adjust the current posture of the handling device according to the relative pose; and, after the posture of the handling device has been adjusted, to return to the step of capturing the target image by the image acquisition device.
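The verify-then-stack behaviour described above is a closed loop: re-image, re-estimate, and re-adjust until the relative pose falls within the preset range, then stack. The following Python sketch is illustrative only; `capture_target_image`, `estimate_relative_pose`, `adjust_posture`, and `stack_tray` are hypothetical stand-ins for the handling device's interfaces:

```python
# Illustrative closed-loop sketch of the verify-then-stack behaviour; the four
# callables are hypothetical stand-ins for the handling device's interfaces.

def stacking_loop(capture_target_image, estimate_relative_pose,
                  adjust_posture, stack_tray,
                  tolerance=0.01, max_iterations=20):
    """Re-image and re-adjust until the relative pose is within tolerance."""
    for _ in range(max_iterations):
        image = capture_target_image()
        relative_pose_error = estimate_relative_pose(image)
        if abs(relative_pose_error) <= tolerance:   # within the preset pose range
            stack_tray()                            # pose is good: stack directly
            return True
        adjust_posture(relative_pose_error)         # straighten, then re-image
    return False                                    # failed to converge
```

Bounding the number of iterations (here via `max_iterations`) keeps the handling device from oscillating indefinitely if the pose never converges.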
The above division of the modules in the tray stacking apparatus is for illustration only; in other embodiments, the tray stacking apparatus may be divided into different modules as needed to perform all or part of its functions.
The various modules in the tray stacking apparatus described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware in, or be independent of, a processor in the computer device, or may be stored as software in a memory of the computer device, so that the processor can call and execute the operations corresponding to each of the above modules.
In some embodiments, a computer device is provided, which may be the handling device of FIG. 1; its internal structure may be as shown in FIG. 10. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store image information and tray data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a tray stacking method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 10 is merely a block diagram of part of the structure associated with the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In some embodiments, a computer device is also provided, comprising a memory and a processor, the memory storing a computer program; the processor implements the steps of the method embodiments described above when executing the computer program.
In some embodiments, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium; the computer program, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to the memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, the RAM may take various forms, such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. The non-relational database may include, but is not limited to, a blockchain-based distributed database and the like. The processor referred to in the embodiments provided herein may be, but is not limited to, a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic unit, or a data processing logic unit based on quantum computing.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing examples illustrate only a few embodiments of the application and are described in relative detail, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application shall be subject to the appended claims.

Claims (10)

1. A tray stacking method, characterized in that the method comprises:
capturing a target image by an image acquisition device, wherein the target image includes first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
determining relative position information of the first positioning reference object with respect to the image acquisition device according to the first image content, and determining relative position information of the second positioning reference object with respect to the image acquisition device according to the second image content;
determining, according to the relative position information of the first positioning reference object and of the second positioning reference object with respect to the image acquisition device, the relative posture of the first positioning reference object and the image acquisition device, and the relative posture of the second positioning reference object and the image acquisition device; and determining a pose difference between the tray to be stacked and the stacked tray based on the relative posture of the first positioning reference object and the image acquisition device and the relative posture of the second positioning reference object and the image acquisition device;
calculating, according to the pose difference, a relative pose of the image acquisition device and the stacked tray, and determining whether the relative pose is within a preset pose range; if the relative pose is not within the preset pose range, determining, according to the relative pose, the posture to which a handling device needs to be adjusted and, after adjusting the posture of the handling device, returning to the step of capturing the target image by the image acquisition device; and, if the relative pose is within the preset pose range, directly controlling the handling device to stack the tray to be stacked onto the stacked tray.

2. The method according to claim 1, characterized in that determining the relative position information of the first positioning reference object with respect to the image acquisition device according to the first image content, and determining the relative position information of the second positioning reference object with respect to the image acquisition device according to the second image content, comprises:
performing feature detection on the first image content and the second image content respectively, to obtain feature sets corresponding to the first positioning reference object and the second positioning reference object;
for each of the first positioning reference object and the second positioning reference object, obtaining the relative position information of the positioning reference object with respect to the image acquisition device according to the intrinsic parameters of the image acquisition device and the feature set corresponding to the positioning reference object.

3. The method according to claim 2, characterized in that performing feature detection on the first image content and the second image content respectively, to obtain the feature sets corresponding to the first positioning reference object and the second positioning reference object, comprises:
performing contour extraction on the first image content and the second image content respectively, to obtain a first image contour corresponding to the first positioning reference object and a second image contour corresponding to the second positioning reference object;
performing feature extraction on the first image contour and the second image contour respectively, to obtain the feature sets corresponding to the first positioning reference object and the second positioning reference object.

4. The method according to claim 2, characterized in that the relative position information comprises an extrinsic matrix of the positioning reference object relative to the image acquisition device, and the feature set corresponding to the positioning reference object comprises a plurality of feature points of the positioning reference object;
obtaining, for each of the first positioning reference object and the second positioning reference object, the relative position information of the positioning reference object with respect to the image acquisition device according to the intrinsic parameters of the image acquisition device and the feature set corresponding to the positioning reference object comprises:
for each of the first positioning reference object and the second positioning reference object, calculating the extrinsic matrix of the positioning reference object relative to the image acquisition device according to the pixel coordinates, in a pixel coordinate system, of each feature point corresponding to the positioning reference object, the coordinates of each feature point in a three-dimensional coordinate system constructed based on the positioning reference object, and the intrinsic parameters of the image acquisition device;
wherein the three-dimensional coordinate system constructed based on the positioning reference object is a three-dimensional coordinate system that takes the plane in which the positioning reference object is located as a coordinate plane and takes one of the plurality of feature points of the positioning reference object as the coordinate origin.

5. The method according to claim 4, characterized in that the relative position information of the first positioning reference object with respect to the image acquisition device comprises a first extrinsic matrix, and the relative position information of the second positioning reference object with respect to the image acquisition device comprises a second extrinsic matrix; and determining the pose difference between the tray to be stacked and the stacked tray according to the relative position information of the first positioning reference object and of the second positioning reference object with respect to the image acquisition device comprises:
multiplying the inverse of the second extrinsic matrix by the first extrinsic matrix to obtain a third extrinsic matrix of the first positioning reference object relative to the second positioning reference object;
obtaining the pose difference according to the third extrinsic matrix.

6. The method according to claim 4, characterized in that the image acquisition device is fixed on the handling device; the first positioning reference object is a first two-dimensional code arranged on the tray to be stacked; the second positioning reference object is a second two-dimensional code arranged on the stacked tray; the plurality of feature points of the first two-dimensional code include at least three boundary corner points of the first two-dimensional code; and the plurality of feature points of the second two-dimensional code include at least three boundary corner points of the second two-dimensional code.

7. The method according to any one of claims 1 to 6, characterized in that controlling the handling device to perform tray stacking processing on the tray to be stacked according to the pose difference comprises:
obtaining the relative pose of the image acquisition device and the stacked tray according to the pose difference;
if the relative pose is not within the preset pose range, adjusting the current posture of the handling device according to the relative pose and, after adjusting the posture of the handling device, returning to the step of capturing the target image by the image acquisition device.

8. A tray stacking apparatus, characterized in that the apparatus comprises:
an image acquisition module, configured to capture a target image by an image acquisition device, wherein the target image includes first image content of a first positioning reference object on a tray to be stacked and second image content of a second positioning reference object on a stacked tray;
an information acquisition module, configured to determine relative position information of the first positioning reference object with respect to the image acquisition device according to the first image content, and determine relative position information of the second positioning reference object with respect to the image acquisition device according to the second image content;
a pose calculation module, configured to determine, according to the relative position information of the first positioning reference object and of the second positioning reference object with respect to the image acquisition device, the relative posture of the first positioning reference object and the image acquisition device and the relative posture of the second positioning reference object and the image acquisition device, and to determine a pose difference between the tray to be stacked and the stacked tray based on the relative posture of the first positioning reference object and the image acquisition device and the relative posture of the second positioning reference object and the image acquisition device;
a tray stacking module, configured to calculate, according to the pose difference, the relative pose of the image acquisition device and the stacked tray, and determine whether the relative pose is within a preset pose range; if the relative pose is not within the preset pose range, to determine, according to the relative pose, the posture to which the handling device needs to be adjusted and, after adjusting the posture of the handling device, to return to the step of capturing the target image by the image acquisition device; and, if the relative pose is within the preset pose range, to directly control the handling device to stack the tray to be stacked onto the stacked tray.

9. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 7 when executing the computer program.

10. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 7.
CN202210517041.2A | Priority date 2022-05-13 | Filing date 2022-05-13 | Pallet stacking method, device, computer equipment and computer readable storage medium | Status: Active | CN114882112B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202210517041.2A | 2022-05-13 | 2022-05-13 | CN114882112B (en) Pallet stacking method, device, computer equipment and computer readable storage medium


Publications (2)

Publication Number | Publication Date
CN114882112A | 2022-08-09
CN114882112B | 2025-03-25

Family

ID=82675450


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
TWI880554B * | 2023-12-27 | 2025-04-11 | 友達光電股份有限公司 | Method for measuring angle of rotation

Citations (2)

Publication number | Priority date | Publication date | Assignee | Title
CN112053444A * | 2019-06-05 | 2020-12-08 | 北京外号信息技术有限公司 | Method for superimposing virtual objects based on optical communication means and corresponding electronic device
CN112678724A * | 2019-10-18 | 2021-04-20 | 北京极智嘉科技有限公司 | Intelligent forklift and control method thereof

Family Cites Families (4)

Publication number | Priority date | Publication date | Assignee | Title
CN108960202B * | 2018-08-01 | 2022-05-10 | 京东方科技集团股份有限公司 | An intelligent shelf, system, and method for judging the stacking of commodities
CN110002367B * | 2019-03-28 | 2023-05-05 | 上海快仓智能科技有限公司 | AGV attitude adjustment system and method in AGV carrier transporting process
CN113379684A * | 2021-05-24 | 2021-09-10 | 武汉港迪智能技术有限公司 | Container corner line positioning and automatic container landing method based on video
CN114275712B * | 2021-12-30 | 2024-11-12 | 中钞长城金融设备控股有限公司 | Stacking device and stacking method




Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
