CN117114958A - Visual processing-based plug-in method, device, computer equipment and storage medium - Google Patents

Visual processing-based plug-in method, device, computer equipment and storage medium

Info

Publication number
CN117114958A
CN117114958A
Authority
CN
China
Prior art keywords
image, interest, component, coordinates, region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311149735.6A
Other languages
Chinese (zh)
Inventor
黎彰
刘志昌
胡宗群
郭琛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gree Electric Appliances Inc of Zhuhai
Original Assignee
Gree Electric Appliances Inc of Zhuhai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gree Electric Appliances Inc of Zhuhai
Priority to CN202311149735.6A
Publication of CN117114958A
Legal status: Pending

Abstract

The invention provides a visual processing-based plug-in method, a device, computer equipment and a storage medium. The method comprises: analyzing a first component image to obtain first coordinates of each first element feature on the first component image, and analyzing a second component image to obtain second coordinates of each second element feature on the second component image; converting the first coordinates and the second coordinates into third coordinates and fourth coordinates in a robot coordinate system, respectively; performing straight-line fitting on the third coordinates and the fourth coordinates in the robot coordinate system, and calculating the deviation amount between them; and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount. Because the coordinates of the first element features and the coordinates of the second element features undergo straight-line fitting, the deviation between the two sets of coordinates is calculated accurately, the inserting position of the robot is compensated, and accurate insertion of the special-shaped plug-in into the circuit board is achieved.

Description

Visual processing-based plug-in method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of vision processing technologies, and in particular to a visual processing-based plug-in method and apparatus, a computer device, and a storage medium.
Background
With the rapid development of artificial intelligence technology and the growing computing power of hardware, vision technology is widely used together with robots on automated production lines for high-precision positioning and object feature detection. This reduces the staffing a production line requires and improves its automation and efficiency. Because automated production lines demand high yield, continuous operation, and high production efficiency, a vision-based positioning or detection system is needed so that the robot can accurately acquire its target and perform production operations such as inserting, clamping, or spraying.
A traditional high-precision special-shaped plug-in system is a technology applied to PCB production. Two cameras capture and process images of the special-shaped plug-in's pins and of the PCB's solder holes; the resulting positioning information of the pins and solder holes is sent to a robot, which grasps the special-shaped plug-in and inserts it at a specific position on the PCB. Although this approach allows different special-shaped plug-ins to be inserted into the PCB, its insertion precision remains insufficient.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a visual processing-based plug-in method, apparatus, computer device, and storage medium.
An inserting method based on visual processing, comprising:
acquiring an image of a first component to obtain an image of the first component, and acquiring an image of a second component to obtain an image of the second component;
analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
In one embodiment, the step of analyzing the first component image to obtain a first coordinate of a first element feature on the first component image, and analyzing the second component image to obtain a second coordinate of a second element feature on the second component image includes:
Analyzing the first component image to obtain reference point coordinates of the first component image and rough positioning coordinates of each first element feature;
analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature;
acquiring a first template region of interest, acquiring a first region of interest of the first component image based on reference point coordinates of the first component image, comparing the first region of interest with the first template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
acquiring a second template region of interest, acquiring a second region of interest of the second component image based on reference point coordinates of the second component image, comparing the second region of interest with the second template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the step of analyzing the first component image to obtain the reference point coordinates of the first component image and the coarse positioning coordinates of each of the first element features includes:
acquiring a first template image, comparing the first component image with the first template image to obtain reference point coordinates of the first component image, and obtaining coarse positioning coordinates of each first element feature on the first component image based on the reference point coordinates of the first component image;
the step of analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature includes:
and obtaining a second template image, comparing the second component image with the second template image to obtain reference point coordinates of the second component image, and obtaining coarse positioning coordinates of each second element feature on the second component image based on the reference point coordinates of the second component image.
In one embodiment, the step of acquiring the first region of interest of the first part image based on the reference point coordinates of the first part image includes:
Aligning reference point coordinates of the first part image and reference point coordinates of the first template image, and acquiring a first region of interest of the first part image according to affine transformation based on the position of the first template region of interest on the first template image;
the step of acquiring a second region of interest of the second part image based on reference point coordinates of the second part image includes:
aligning reference point coordinates of the second part image and reference point coordinates of the second template image, and acquiring a second region of interest of the second part image according to affine transformation based on a position of the second template region of interest on the second template image.
In one embodiment, the step of comparing the first region of interest with the first template region of interest by the feature matching method based on the pattern profile, and performing fine positioning on the coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest includes:
comparing the first region of interest with the first template region of interest based on a feature matching method of pattern contours, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest according to positions of template elements in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
The step of comparing the second region of interest with the second template region of interest by the feature matching method based on the pattern profile, and performing fine positioning on the coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest includes:
and comparing the second region of interest with the second template region of interest based on a feature matching method of the pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest according to the positions of template elements in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the step of linearly fitting the third coordinate and the fourth coordinate on the robot coordinate system includes:
and performing straight-line fitting on the third coordinates and the fourth coordinates in the robot coordinate system using a least-squares method.
In one embodiment, the step of calculating the insertion compensation amount according to the deviation amount includes:
Detecting whether the deviation amount is larger than a preset deviation amount;
when the deviation amount is larger than the preset deviation amount, determining that the first component and/or the second component are waste materials;
and when the deviation amount is smaller than or equal to the preset deviation amount, calculating to obtain the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
A vision processing-based plug-in device, comprising:
the image acquisition module is used for acquiring an image of the first component to obtain an image of the first component and acquiring an image of the second component to obtain an image of the second component;
the coordinate acquisition module is used for analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
the coordinate conversion module is used for converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
the deviation amount calculation module is used for performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
And the compensation amount calculation module is used for calculating the insertion compensation amount of the robot according to the deviation amount and inserting the first component and the second component based on the insertion compensation amount.
A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor when executing the computer program performs the steps of:
acquiring an image of a first component to obtain an image of the first component, and acquiring an image of a second component to obtain an image of the second component;
analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring an image of a first component to obtain an image of the first component, and acquiring an image of a second component to obtain an image of the second component;
analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
According to the visual processing-based plug-in method, device, computer equipment and storage medium above, straight-line fitting is performed on the coordinates of the first element features on the first component and the coordinates of the second element features on the second component, so that the deviation between the two sets of coordinates can be calculated accurately, the inserting position of the robot can be compensated, and accurate insertion of the special-shaped plug-in into the circuit board is achieved.
Drawings
Fig. 1 is an application scenario diagram of an insertion method based on visual processing in one embodiment;
FIG. 2 is a flow diagram of a visual processing-based plug-in method in one embodiment;
FIG. 3 is a block diagram of a visual processing-based plug-in device in one embodiment;
FIG. 4 is an internal block diagram of a computer device in one embodiment;
FIG. 5A is a schematic diagram of the image data offloading and matching flow of a visual processing-based plug-in method in one embodiment;
FIG. 5B is a system algorithm flow diagram of a visual processing-based plug-in method in one embodiment;
FIG. 5C is a system processing logic diagram of a visual processing-based plug-in method in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Example 1
The visual processing-based plug-in method provided by the application can be applied to the scenario shown in fig. 1. The vision robot system comprises a vision controller 1, a pin detection camera 9, a solder-hole detection camera 4 and a light source 7. The vision controller implements the vision algorithms and strategy, camera control, and robot communication. The pin detection camera captures images of the plug-ins and detects the pins of different special-shaped plug-ins 8. The solder-hole detection camera captures images of the PCB (Printed Circuit Board) 3 and detects the solder holes of the PCB 3 to be populated. The light source 7 provides illumination that separates the pins and solder holes from the background so that their feature information can be extracted.
The PCB is conveyed on a production line 2 and passes under the solder-hole detection camera 4. The production line 2 comprises a master controller 10 and a conveyor belt 11: the master controller 10 controls the whole line and moves the different special-shaped plug-ins to the robot's grasping position, and the conveyor belt 11 conveys the PCB to the designated insertion position.
The robot system comprises a robot 5 and a gripper 6, which grasp each special-shaped plug-in and carry it above the pin detection camera 9 for photographing; once the positioning information has been acquired, the plug-in is inserted at the designated position on the PCB.
In this embodiment, the vision controller 1 may be implemented by using a computer in the following embodiment, where the vision controller acquires an image of a first component through a stitch detection camera to obtain an image of the first component, and acquires an image of a second component through a solder hole detection camera to obtain an image of the second component; analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image; converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively; performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate; and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
It should be understood that the visual processing-based plug-in method of the present application may be applied to any scenario in which a robot inserts one component into another. For convenience of explanation, the following embodiments take the first component to be a special-shaped plug-in and the second component to be a PCB circuit board. This example only explains the present application and does not limit it; the method applies equally to other components capable of being mutually inserted.
Example 2
In this embodiment, as shown in fig. 2, there is provided a visual processing-based plug-in method, which includes:
step 210, acquiring an image of a first component, obtaining an image of the first component, and acquiring an image of a second component, obtaining an image of the second component.
In this embodiment, the first component and the second component are components that are mutually inserted, for example, the first component is a special-shaped plug-in unit, the second component is a PCB circuit board, the special-shaped plug-in unit is provided with pins, the circuit board is provided with soldering holes, and each pin is inserted into one soldering hole.
In this embodiment, images of the first member and the second member are acquired by a photosensitive element such as a camera, and the first member image and the second member image are obtained, respectively.
Step 220, analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image.
In this embodiment, the image is parsed, so as to obtain the position of the element feature on the image, that is, the coordinates of the element feature on the image. In one embodiment, the first element features pins on the shaped insert and the second element features solder apertures on the circuit board. In this embodiment, the first component image and the second component image are respectively analyzed to obtain the position of the stitch on the first component image, that is, the first coordinate, and obtain the position of the solder hole on the second component image, that is, the second coordinate.
In one embodiment, the first component image is binarized into a black-and-white image and then analyzed to obtain the first coordinates of each first element feature on the first component image; the second component image is likewise binarized into a black-and-white image and analyzed to obtain the second coordinates of each second element feature on the second component image. Binarizing the component images makes the element features stand out clearly: for example, when the solder holes are illuminated by the light source, the light passing through the holes reaches the photosensitive element, producing an image in which the solder holes are distinct.
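As an illustrative sketch of the binarization step described above (the threshold value, the function name `binarize`, and the use of NumPy rather than a dedicated vision library are assumptions, not part of the patent):

```python
import numpy as np

def binarize(gray: np.ndarray, thresh: int = 128) -> np.ndarray:
    """Turn a grayscale component image into black and white so that
    back-lit features (pins, solder holes) stand out from the background."""
    # Pixels brighter than the threshold become white (255), the rest black (0).
    return np.where(gray > thresh, 255, 0).astype(np.uint8)
```

In practice the threshold would be tuned to the light source, or replaced by an adaptive method such as Otsu's.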
Step 230, converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system, respectively.
In this embodiment, the first coordinates are converted into third coordinates in the robot coordinate system, and the second coordinates are converted into fourth coordinates in the robot coordinate system. It should be understood that the coordinates of the element features obtained from the images in the steps above are image coordinates, while the robot coordinate system is the coordinate system established from the robot's position and motion path. The robot is able to move between the station where the first component is placed and the station where the second component is placed.
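A minimal sketch of the coordinate conversion in Step 230, assuming the camera-to-robot calibration result is available as a 2×3 affine matrix `T` (the matrix form, the helper name `pixel_to_robot`, and the use of NumPy are illustrative assumptions):

```python
import numpy as np

def pixel_to_robot(pts, T):
    """Map N x 2 image coordinates into the robot frame using a 2 x 3
    affine matrix T (assumed to come from hand-eye calibration)."""
    pts = np.asarray(pts, dtype=float)
    # Append a homogeneous 1 to each point, then apply the affine map.
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return homo @ T.T
```

The same function would be applied once with the pin camera's calibration matrix and once with the solder-hole camera's, yielding the third and fourth coordinates respectively.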
Step 240, performing straight line fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate.
It should be appreciated that during insertion the special-shaped plug-in must be aligned with the circuit board so that each pin on the plug-in is aligned with a solder hole of the circuit board. Therefore, in this embodiment, the third coordinates of the pins and the fourth coordinates of the solder holes are fitted to straight lines in the same coordinate system and superposed using the first coordinate point as the reference; the coordinate position of each pin is then compared with the coordinate position of its solder hole to calculate the deviation amount between pins and solder holes.
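The fitting and deviation computation of Step 240 might be sketched as follows; the least-squares fit via `np.polyfit` and the first-point superposition rule are one illustrative reading of the description, and the helper names are hypothetical:

```python
import numpy as np

def fit_line(pts):
    """Least-squares line y = m*x + b through the points -- a stand-in for
    the patent's straight-line fitting of pin/hole coordinates."""
    pts = np.asarray(pts, dtype=float)
    m, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
    return m, b

def deviation(pins, holes):
    """Superpose the first pin on the first hole (the reference point in
    the description above), then return the mean residual offset of the
    remaining pins from their holes."""
    pins = np.asarray(pins, dtype=float)
    holes = np.asarray(holes, dtype=float)
    shifted = pins - pins[0] + holes[0]
    return float(np.mean(np.linalg.norm(shifted - holes, axis=1)))
```

Fitting both point sets to lines also exposes any angular misalignment between the pin row and the solder-hole row, not just a translational offset.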
Step 250, calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
In this embodiment, the insertion compensation amount is the distance by which the robot's coordinates are adjusted when it grasps the special-shaped plug-in; it is a compensation applied to the robot's coordinates. The insertion compensation amount is calculated from the deviation amount, and the first component is inserted into the second component based on the robot coordinates plus the compensation. Calculating the compensation from the overall deviation between pins and solder holes allows the pins of the grasped special-shaped plug-in to be aligned accurately with the solder holes on the circuit board.
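One hedged way to realize Step 250, including the scrap check described in one embodiment, is a rigid centroid-difference offset; the patent only states that a compensation amount is derived from the deviation amount, so this concrete rule, the tolerance value, and the function name are assumptions:

```python
import numpy as np

def insertion_compensation(pins, holes, max_dev=0.5):
    """Return the XY offset to add to the robot pose so that the pins land
    on the holes, or None when the residual deviation exceeds tolerance and
    the parts should be treated as scrap."""
    pins = np.asarray(pins, dtype=float)
    holes = np.asarray(holes, dtype=float)
    # Rigid shift between the centroids of the two point sets.
    offset = holes.mean(axis=0) - pins.mean(axis=0)
    # Residual deviation that no rigid shift can remove.
    residual = float(np.mean(np.linalg.norm(pins + offset - holes, axis=1)))
    if residual > max_dev:
        return None
    return offset
```

Returning `None` for out-of-tolerance parts mirrors the embodiment in which components whose deviation exceeds the preset amount are determined to be waste.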
In the above embodiment, the coordinates of the first element feature on the first component and the coordinates of the second element feature on the second component are subjected to straight line fitting, so that the deviation of the two coordinates can be accurately calculated, the inserting position of the robot is compensated, and the accurate inserting of the special-shaped plug-in unit to the circuit board is realized.
In one embodiment, the step of analyzing the first component image to obtain a first coordinate of a first element feature on the first component image, and analyzing the second component image to obtain a second coordinate of a second element feature on the second component image includes:
Analyzing the first component image to obtain reference point coordinates of the first component image and rough positioning coordinates of each first element feature; analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature;
acquiring a first template region of interest, acquiring a first region of interest of the first component image based on reference point coordinates of the first component image, comparing the first region of interest with the first template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
acquiring a second template region of interest, acquiring a second region of interest of the second component image based on reference point coordinates of the second component image, comparing the second region of interest with the second template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In this embodiment, the component images are first coarsely positioned to obtain, for each of the first and second component images, the coordinates of a reference point and the coarse positioning coordinates of each element feature. The reference point is a point that serves as the coordinate-system reference or origin of the image, for example its center.
The first template region of interest and the second template region of interest are regions of interest preset for reference, and the template region of interest is a preselected region of interest in the template image.
After the coarse positioning coordinates are obtained, the first template region of interest and the second template region of interest are acquired, and the coarse coordinates are corrected with a pattern-contour feature-matching method according to the positions of the element features in the template regions of interest. Specifically, the element features in a template region of interest are matched one by one against the element features in the corresponding region of interest, and each feature's position in the template region is used to correct the position of the corresponding feature, yielding the finely positioned first and second coordinates. In this embodiment, the positions of the pins and solder holes are first coarsely located to obtain coarse positioning coordinates, and fine positioning within the region of interest then produces precise coordinates. In this way, the accuracy of the coordinates is effectively improved and precise element feature coordinates are obtained.
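The coarse-to-fine correction could be approximated by snapping each coarse coordinate to the nearest precisely detected feature inside the region of interest; nearest-neighbour snapping is only a stand-in for the patent's contour-based feature matching, whose exact algorithm is not disclosed:

```python
import numpy as np

def refine_coordinates(coarse, detected):
    """For each coarse feature coordinate, snap to the nearest precisely
    detected feature position inside the region of interest."""
    coarse = np.asarray(coarse, dtype=float)
    detected = np.asarray(detected, dtype=float)
    fine = []
    for p in coarse:
        dists = np.linalg.norm(detected - p, axis=1)
        fine.append(detected[np.argmin(dists)])
    return np.array(fine)
```

This captures the one-by-one matching of coarse coordinates to template-localized features, though a contour matcher would also use shape information, not just distance.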
In order to obtain coarse positioning coordinates of the element features, in one embodiment, the step of analyzing the first component image to obtain reference point coordinates of the first component image and coarse positioning coordinates of each of the first element features includes:
acquiring a first template image, comparing the first component image with the first template image to obtain reference point coordinates of the first component image, and obtaining coarse positioning coordinates of each first element feature on the first component image based on the reference point coordinates of the first component image;
the step of analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature includes:
and obtaining a second template image, comparing the second component image with the second template image to obtain reference point coordinates of the second component image, and obtaining coarse positioning coordinates of each second element feature on the second component image based on the reference point coordinates of the second component image.
In this embodiment, the first template image and the second template image are reference images acquired in advance: the first template image is a stored image of the special-shaped plug-in, the second template image is a stored image of the circuit board, and accurate coordinates are recorded on both. Comparing the first component image with the first template image determines the center-point coordinates of the first component image, and comparing the second component image with the second template image determines the center-point coordinates of the second component image. From the center-point coordinates, the positions of the pins or solder holes relative to the center can be obtained; combined with the positions of the element features on the template images, the coarse positioning coordinates of the first element features on the first component image and of the second element features on the second component image can be calculated.
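Template comparison for the reference point might look like the following exhaustive sum-of-absolute-differences search; a real system would more likely use an optimized routine such as OpenCV's `matchTemplate`, and the function name and metric here are assumptions:

```python
import numpy as np

def locate_reference(image, template):
    """Exhaustive sum-of-absolute-differences search for the template's
    best-matching top-left position (x, y) in the component image."""
    H, W = image.shape
    h, w = template.shape
    tpl = template.astype(int)
    best_score, best_pos = None, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = np.abs(image[y:y + h, x:x + w].astype(int) - tpl).sum()
            if best_score is None or score < best_score:
                best_score, best_pos = score, (x, y)
    return best_pos
```

Once the best match is found, the reference (center) point of the component image follows from the known offset of the reference point inside the template.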
To acquire a region of interest and achieve fine positioning of the coordinates, in one embodiment, the step of acquiring a first region of interest of the first part image based on reference point coordinates of the first part image includes: aligning the reference point coordinates of the first part image with the reference point coordinates of the first template image, and acquiring the first region of interest of the first part image by affine transformation based on the position of the first template region of interest on the first template image;
the step of acquiring a second region of interest of the second part image based on reference point coordinates of the second part image includes: aligning reference point coordinates of the second part image and reference point coordinates of the second template image, and acquiring a second region of interest of the second part image according to affine transformation based on a position of the second template region of interest on the second template image.
In this embodiment, the first template region of interest and the second template region of interest are regions of interest selected in advance on the first template image and the second template image; at least one element feature lies in each region of interest, and the coordinate positions of these element features are measured in advance and used as references for fine positioning. In this embodiment, the reference point coordinates of the part image are aligned with the reference point coordinates of the template image, and the position of the template region of interest on the template image is mapped onto the part image, thereby obtaining the region of interest on the part image.
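The ROI mapping above can be sketched as a rigid transform that aligns the two reference points. This is a hedged NumPy illustration: the patent does not specify the transform's exact parameterization, so the rotation-plus-translation model and all names below are assumptions.

```python
import numpy as np

def map_roi(template_ref, part_ref, angle_deg, roi_corner, roi_size):
    """Map a template ROI corner onto the part image with a rigid transform
    (rotation + translation) that aligns the two reference points."""
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    rel = np.asarray(roi_corner, float) - np.asarray(template_ref, float)
    mapped = R @ rel + np.asarray(part_ref, float)
    return (float(mapped[0]), float(mapped[1])), roi_size

# Reference point moved by (+10, +5) with no rotation: the ROI moves with it.
corner, size = map_roi(template_ref=(50, 50), part_ref=(60, 55),
                       angle_deg=0.0, roi_corner=(40, 45), roi_size=(16, 12))
```

A full affine model would also absorb scale and shear; the rigid form is the minimal case that still demonstrates "align reference points, then carry the ROI across".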
In one embodiment, the step of comparing the first region of interest with the first template region of interest by the feature matching method based on the pattern profile, and performing fine positioning on the coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest includes:
comparing the first region of interest with the first template region of interest based on a feature matching method of pattern contours, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest according to positions of template elements in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
the step of comparing the second region of interest with the second template region of interest by the feature matching method based on the pattern profile, and performing fine positioning on the coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest includes:
And comparing the second region of interest with the second template region of interest based on a feature matching method of the pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest according to the positions of template elements in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In this embodiment, the feature matching method based on pattern contours yields a sharper similarity distribution for the stitches and welding holes, which improves the positioning accuracy. Specifically, a single stitch is used as a template and the stitches in the first region of interest are matched against those in the first template region of interest, and a single welding hole is used as a template and the welding holes in the second region of interest are matched against those in the second template region of interest, so that the positions of the stitches in the first region of interest and of the welding holes in the second region of interest are finely positioned and corrected, yielding accurate first coordinates and second coordinates.
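The fine-positioning idea — search a small neighborhood around each coarse coordinate for the best single-element match — can be sketched as below. An IoU of binarized patches stands in for the patent's pattern-contour similarity score, which is not spelled out in this text; all names are illustrative.

```python
import numpy as np

def refine(image, template, coarse, radius=3):
    """Fine positioning: search a +/-radius window around the coarse pixel
    coordinate for the offset whose binarized patch best overlaps the
    single-element template (IoU as the similarity score)."""
    th, tw = template.shape
    t = template > 0.5
    best_score, best_xy = -1.0, coarse
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = coarse[0] + dx, coarse[1] + dy
            patch = image[y:y + th, x:x + tw] > 0.5
            union = np.logical_or(patch, t).sum()
            iou = np.logical_and(patch, t).sum() / union if union else 0.0
            if iou > best_score:
                best_score, best_xy = iou, (x, y)
    return best_xy, best_score

image = np.zeros((20, 20))
image[8:12, 9:13] = 1.0               # one bright "pin" blob
template = np.ones((4, 4))            # single-pin template
xy, score = refine(image, template, coarse=(10, 10))  # coarse guess off by (1, 2)
```

The search corrects the coarse guess `(10, 10)` to the blob's true corner `(9, 8)` with a perfect overlap score.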
In one embodiment, the step of fitting the third coordinate and the fourth coordinate to a straight line on the robot coordinate system includes: and performing linear fitting on the robot coordinate system on the third coordinate and the fourth coordinate by using an empirical harmonic method.
In this embodiment, the third coordinates are the stitch coordinates and the fourth coordinates are the welding hole coordinates, and the empirical harmonic method is used to fit a straight line to the stitch point set (x_i, y_i) and the welding hole point set (x_i, y_i). As shown in formula (1), each y_i is multiplied by a weight factor w_i, where ε is a small quantity; taking w_i^(-0.5)·y_i as the new dependent variable and w_i^(-0.5) and w_i^(-0.5)·x_i as the independent variables, b_0 and b_1 are obtained by the least square method with the "intercept" fixed at zero. As shown in formula (2), fitting results whose k_i is greater than k'_2(0.95) are then down-weighted as possible outliers, and a second round of weighted fitting is performed to obtain the stitch point set line and the welding hole point set line.
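The two-round weighted fit described above can be sketched in NumPy as follows. The zero-"intercept" transform matches the description (regress w_i^(-0.5)·y_i on w_i^(-0.5) and w_i^(-0.5)·x_i), but since formulas (1) and (2) are not reproduced in this text, the weight value and the outlier cutoff below are illustrative assumptions rather than the patent's exact rule.

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Regress w**-0.5 * y on [w**-0.5, w**-0.5 * x] with zero 'intercept':
    equivalent to fitting y = b0 + b1*x with per-point weights 1/w."""
    s = w ** -0.5
    A = np.column_stack([s, s * x])
    b, *_ = np.linalg.lstsq(A, s * y, rcond=None)
    return b  # b[0] = b0 (intercept), b[1] = b1 (slope)

def two_round_fit(x, y, outlier_w=10.0):
    b0, b1 = weighted_line_fit(x, y, np.ones_like(x))   # round 1: equal weights
    resid = np.abs(y - (b0 + b1 * x))
    cutoff = resid.mean() + 2.0 * resid.std()           # illustrative outlier rule
    w = np.where(resid > cutoff, outlier_w, 1.0)        # larger w => lower weight
    return weighted_line_fit(x, y, w)                   # round 2: reweighted

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[7] += 5.0                                            # inject one outlier
b0, b1 = two_round_fit(x, y)
```

With the outlier down-weighted in the second round, the recovered slope and intercept return close to the true values 2 and 1, which is the point of the reweighting.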
In one embodiment, the step of calculating the insertion compensation amount according to the deviation amount includes: detecting whether the deviation amount is larger than a preset deviation amount; when the deviation amount is larger than the preset deviation amount, determining that the first component and/or the second component is waste material; and when the deviation amount is smaller than or equal to the preset deviation amount, calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
In this embodiment, when the deviation amount is greater than the preset deviation amount, the position deviation between the pins on the special-shaped plug-in image and the welding holes on the circuit board image is too large and the plug-in cannot be inserted smoothly; at this time, the special-shaped plug-in or the circuit board is determined to be waste material, the material is discarded, and the robot does not perform the insertion. When the deviation amount is smaller than or equal to the preset deviation amount, the position deviation between the pins on the special-shaped plug-in image and the welding holes on the circuit board image is small, so the motion coordinates of the robot are compensated, allowing the pins of the special-shaped plug-in to be aligned accurately with the welding holes on the circuit board.
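The scrap-or-compensate decision can be sketched as below; the tolerance value and the use of the mean offset as the compensation amount are illustrative assumptions, not values from the patent.

```python
def insertion_decision(pin_pts, hole_pts, max_dev):
    """Gate on the preset deviation: return None (scrap) when any matched
    pin/hole pair deviates by more than max_dev, else the mean (dx, dy)
    offset to use as the robot's insertion compensation."""
    devs = [((hx - px) ** 2 + (hy - py) ** 2) ** 0.5
            for (px, py), (hx, hy) in zip(pin_pts, hole_pts)]
    if max(devs) > max_dev:
        return None                                    # discard the material
    n = len(pin_pts)
    dx = sum(h[0] - p[0] for p, h in zip(pin_pts, hole_pts)) / n
    dy = sum(h[1] - p[1] for p, h in zip(pin_pts, hole_pts)) / n
    return dx, dy

comp = insertion_decision([(0.0, 0.0), (10.0, 0.0)],
                          [(0.2, 0.1), (10.2, 0.1)], max_dev=0.5)
```

A uniform `(0.2, 0.1)` hole offset passes the gate and becomes the compensation; a pair deviating beyond `max_dev` would make the function return `None` and the part would be rejected.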
It should be understood that, although the steps in the flowchart of fig. 2 are shown in sequence as indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and these sub-steps or stages are not necessarily performed in sequence but may be performed in turn or alternately with at least a portion of the sub-steps or stages of other steps.
Example III
In this embodiment, as shown in fig. 1, the hardware components of the plugging system based on visual processing include a visual system, a production line and a robot system:
(1) The vision system comprises a vision controller 1, a pin detection camera 9, a welding hole detection camera 4 and a light source 7, wherein the vision controller is responsible for realizing vision algorithm and strategy, camera control and robot communication, the pin detection camera is applied to pin detection of different special-shaped plug-ins 8, the welding hole detection camera is applied to welding hole detection of a PCB 3 needing to be plugged, and the light source is applied to separate pins and welding holes from the background to obtain characteristic information of the pins and the welding holes;
(2) The production line comprises a master control 10 and a conveyor belt 11, wherein the master control is responsible for controlling the whole production line, moving different special-shaped plug-ins to the grabbing position of the robot, and the conveyor belt is responsible for conveying the PCB to a designated inserting position;
(3) The robot system comprises a robot 5 and a clamp 6, wherein the robot and the clamp are responsible for clamping and conveying different special-shaped plug-ins to the upper end of the pin detection camera for continuous photographing, and the special-shaped plug-ins are continuously inserted into the appointed position of the PCB after positioning information is acquired.
2. As shown in fig. 5A, the image data distribution and matching process mainly includes the following steps:
(1) The robot clamps the special-shaped plug-in 1 to a shooting teaching point, acquires the black-and-white image information provided by the stitch detection camera in real time through the visual controller, creates an image variable 1 to store the special-shaped plug-in 1 image and stores the image number;
(2) The robot clamps the special-shaped plug-in 2 to a shooting teaching point, acquires the black-and-white image information provided by the stitch detection camera in real time through the visual controller, creates an image variable 2 to store an image of the special-shaped plug-in 2 and stores the image number;
(3) The welding hole detection camera acquires PCB images and performs image segmentation, and the images of the inserting points 1 and 2 are cached by using an image variable 3 and an image variable 4;
(4) The vision controller extracts corresponding stitch detection images and welding hole detection images according to the image numbers;
(5) Positioning deviation data are obtained through visual positioning algorithm processing, realizing continuous insertion of the special-shaped plug-ins 1 and 2.
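The pairing of pin images and insertion-point images in steps (1)-(5) keys everything on a shared image number; it can be sketched as a small cache, with all class and method names hypothetical.

```python
class ImageCache:
    """Buffer pin-camera images and PCB insertion-point crops under a shared
    image number so they can be paired for positioning later."""
    def __init__(self):
        self.pin_images = {}
        self.hole_images = {}

    def store_pin_image(self, number, image):
        self.pin_images[number] = image      # e.g. image variables 1 and 2

    def store_hole_image(self, number, image):
        self.hole_images[number] = image     # e.g. image variables 3 and 4

    def pair(self, number):
        return self.pin_images.get(number), self.hole_images.get(number)

cache = ImageCache()
cache.store_pin_image(1, "plug-in-1 image")
cache.store_hole_image(1, "insertion-point-1 crop")
pin_img, hole_img = cache.pair(1)
```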
3. As shown in fig. 5B, the generalized high-precision special-shaped plug-in system algorithm overall framework mainly comprises the following processing steps:
(1) The special-shaped plug-in image and the PCB image are taken as inputs, and coarse positioning is performed by template matching to obtain the pixel coordinates of the center of the special-shaped plug-in and the pixel coordinates of the center of the PCB welding hole point set.
(2) An ROI (region of interest) containing the stitch point set and an ROI containing the welding hole point set are acquired through affine transformation; stitch-searching fine positioning and welding-hole-searching fine positioning are performed within these regions, and the pixel coordinates of each stitch point and welding hole point are acquired.
(3) A straight line is fitted to the stitch point set and the welding hole point set (x_i, y_i). As shown in formula (1), each y_i is multiplied by a weight factor w_i, where ε is a small quantity; taking w_i^(-0.5)·y_i as the new dependent variable and w_i^(-0.5) and w_i^(-0.5)·x_i as the independent variables, b_0 and b_1 are obtained by the least square method with the "intercept" fixed at zero. As shown in formula (2), fitting results whose k_i is greater than k'_2(0.95) are then down-weighted as possible outliers, and a second round of weighted fitting is performed to obtain the stitch point set line and the welding hole point set line;
(4) The stitch point set line and the welding hole point set line take the first coordinate point as the superposition reference, the overall deviation amount is calculated, and whether the welding holes can cover all stitch points is judged; if feasible, the robot insertion deviation amount is output and the insertion operation is performed, otherwise the material is discarded.
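Step (4) — overlaying the two point sets on their first points and checking coverage — can be sketched as below; the hole-radius tolerance is an illustrative assumption.

```python
def holes_cover_pins(pin_pts, hole_pts, hole_radius):
    """Overlay the point sets on their first points, then check that every
    translated pin falls within hole_radius of its matching welding hole."""
    ox = hole_pts[0][0] - pin_pts[0][0]
    oy = hole_pts[0][1] - pin_pts[0][1]
    for (px, py), (hx, hy) in zip(pin_pts, hole_pts):
        if ((px + ox - hx) ** 2 + (py + oy - hy) ** 2) ** 0.5 > hole_radius:
            return False, (ox, oy)                     # not coverable: scrap
    return True, (ox, oy)                              # feasible: insert

ok, offset = holes_cover_pins([(0, 0), (5, 0)], [(1, 1), (6.2, 1.0)],
                              hole_radius=0.3)
```

Here the second pin lands 0.2 away from its hole after the overlay, inside the 0.3 tolerance, so insertion proceeds with offset `(1, 1)` as the deviation amount.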
4. As shown in fig. 5C, a millimeter-scale vision positioning system construction flow chart is shown, and the main flow is as follows:
(1) Firstly, a robot clamps the special-shaped plug-in components to a photographing point and a plug-in mounting point for teaching, and obtains corresponding special-shaped plug-in component template images and template images of the plug-in mounting positions of the PCB;
(2) Multi-template matching is performed on the image acquired by the pin detection camera, the template center coordinates with the highest score are selected according to the matching score, and the image number of the corresponding plug-in is assigned according to the template;
(3) The welding hole detection camera acquires and segments the PCB image, and a corresponding template is selected according to the image number to perform template matching on the segmented specific-area image to acquire the pixel center coordinates of the welding hole point set;
(4) Affine transformation is carried out according to the original template center coordinates and the center coordinates obtained after matching, and the ROI area of the special-shaped plug-in pin points and the ROI area of the welding hole point set are acquired;
(5) The ROI areas are finely positioned. The feature matching method based on pattern contours makes the similarity distribution of stitches and welding holes sharper, improving the positioning accuracy; a single stitch is used as a template for template matching in the stitch ROI area, and a single welding hole is used as a template for template matching in the welding hole ROI area, acquiring the precise pixel coordinates of the center of each stitch point and each welding hole point;
(6) The pixel coordinates are converted into robot coordinates according to the teaching coordinates, and the stitch deviation, the welding hole deviation, and the robot insertion compensation amount are acquired;
(7) A straight line is fitted to the stitch point set and the welding hole point set; taking the first welding hole coordinate point as the superposition reference, the insertion work is performed if the welding hole point set can cover all stitches, otherwise the material is discarded.
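The pixel-to-robot conversion of step (6) is typically done with an affine map estimated from taught correspondences. The sketch below is a generic illustration with made-up calibration points, not the patent's taught coordinates.

```python
import numpy as np

def fit_pixel_to_robot(pixel_pts, robot_pts):
    """Estimate a 2-D affine map pixel -> robot by least squares from
    taught correspondences."""
    P = np.hstack([np.asarray(pixel_pts, float),
                   np.ones((len(pixel_pts), 1))])
    M, *_ = np.linalg.lstsq(P, np.asarray(robot_pts, float), rcond=None)
    return M  # 3x2 matrix: [x, y, 1] @ M gives (robot_x, robot_y)

def pixel_to_robot(M, pt):
    return tuple(np.array([pt[0], pt[1], 1.0]) @ M)

# Made-up calibration: 0.1 mm/pixel scale, robot-frame offset (50, 20).
pix = [(0, 0), (100, 0), (0, 100), (100, 100)]
rob = [(50.0, 20.0), (60.0, 20.0), (50.0, 30.0), (60.0, 30.0)]
M = fit_pixel_to_robot(pix, rob)
rx, ry = pixel_to_robot(M, (50, 50))   # maps to approximately (55.0, 25.0)
```

Once `M` is fixed from the teaching step, every detected stitch and welding hole pixel coordinate can be converted, and the deviation and compensation are computed in robot units.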
In this embodiment, a vision-based generalized high-precision special-shaped plug-in system is provided. The method is suitable for different types of special-shaped plug-ins and can continuously insert a plurality of special-shaped plug-ins with only two cameras, improving the production efficiency of the system and reducing the construction cost. The main process is as follows:
(1) Constructing a pin detection camera and a PCB board welding hole detection camera, and teaching the robot at the photographing point and the inserting point of each special-shaped plug-in unit;
(2) The method comprises the steps that a robot clamps a plurality of different special-shaped plug-ins to photographing points of a pin detection camera to continuously photograph, then data distribution is carried out on image information of each special-shaped plug-in, and a welding hole detection camera obtains welding hole image information to be plugged;
(3) And meanwhile, the image information and the welding hole information of each special-shaped plug-in are paired, and the plug-in positioning information deviation of each special-shaped plug-in is obtained and sent to the robot through algorithm processing such as template matching coarse positioning, stitch searching fine positioning, welding hole searching fine positioning, plug-in detection judgment and the like, so that the operation of continuous plug-in mounting of different plug-ins is realized.
Example IV
In this embodiment, as shown in fig. 3, there is provided a vision processing-based plugging device, including:
an image acquisition module 310, configured to acquire an image of a first component, obtain an image of the first component, and acquire an image of a second component, obtain an image of the second component;
the coordinate acquiring module 320 is configured to analyze the first component image to obtain first coordinates of each first element feature on the first component image, and analyze the second component image to obtain second coordinates of each second element feature on the second component image;
a coordinate conversion module 330, configured to convert the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system, respectively;
a deviation amount calculating module 340, configured to perform a straight line fitting on the robot coordinate system on the third coordinate and the fourth coordinate, and calculate a deviation amount of the third coordinate and the fourth coordinate;
and the compensation amount calculation module 350 is configured to calculate an insertion compensation amount of the robot according to the deviation amount, and insert the first component and the second component based on the insertion compensation amount.
In one embodiment, the coordinate acquisition module includes:
the first coarse positioning unit is used for analyzing the first component image to obtain reference point coordinates of the first component image and coarse positioning coordinates of each first element characteristic;
the second coarse positioning unit is used for analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element characteristic;
the first fine positioning unit is used for acquiring a first template region of interest, acquiring the first region of interest of the first component image based on the reference point coordinates of the first component image, comparing the first region of interest with the first template region of interest based on a feature matching method of a pattern profile, and carrying out fine positioning on the coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
the second fine positioning unit is used for acquiring a second template region of interest, acquiring a second region of interest of the second component image based on reference point coordinates of the second component image, comparing the second region of interest with the second template region of interest based on a feature matching method of a pattern profile, and carrying out fine positioning on coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the first coarse positioning unit is further configured to obtain a first template image, compare the first component image with the first template image, obtain reference point coordinates of the first component image, and obtain coarse positioning coordinates of each first element feature on the first component image based on the reference point coordinates of the first component image;
the second coarse positioning unit is further configured to obtain a second template image, compare the second component image with the second template image, obtain reference point coordinates of the second component image, and obtain coarse positioning coordinates of each second element feature on the second component image based on the reference point coordinates of the second component image.
In one embodiment, the first fine positioning unit is further configured to align reference point coordinates of the first component image and reference point coordinates of the first template image, obtain a first region of interest of the first component image according to an affine transformation based on a position of the first template region of interest on the first template image;
the second fine positioning unit is further configured to align reference point coordinates of the second component image with reference point coordinates of the second template image, and acquire a second region of interest of the second component image according to affine transformation based on a position of the second template region of interest on the second template image.
In one embodiment, the first fine positioning unit is further configured to compare the first region of interest with the first template region of interest based on a feature matching method of a pattern profile, and perform fine positioning on coarse positioning coordinates of each first element feature in the first template region of interest according to a position of a template element in the first template region of interest, so as to obtain the first coordinates of each first element feature in the first region of interest;
the second fine positioning unit is further configured to compare the second region of interest with the second template region of interest based on a feature matching method of the pattern profile, and perform fine positioning on coarse positioning coordinates of each second element feature in the second template region of interest according to positions of template elements in the second template region of interest, so as to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the deviation amount calculation module is further configured to fit the third coordinate and the fourth coordinate to a straight line on the robot coordinate system using an empirical harmonic method.
In one embodiment, the compensation amount calculation module includes:
A deviation amount detection unit for detecting whether the deviation amount is larger than a preset deviation amount;
the material throwing unit is used for determining that the first component and/or the second component are waste materials when the deviation is larger than the preset deviation;
and the inserting unit is used for calculating the inserting compensation quantity of the robot according to the deviation quantity when the deviation quantity is smaller than or equal to the preset deviation quantity, and inserting the first component and the second component based on the inserting compensation quantity.
For specific limitations on the vision processing-based plugging device, reference may be made to the above limitations on the vision processing-based plugging method, and details are not repeated here. The various units in the vision processing-based plugging device described above may be implemented in whole or in part by software, hardware, and combinations thereof. The units can be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each unit.
Example five
In this embodiment, a computer device is provided. Its internal structure can be shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and a database for storing template images and template regions of interest is deployed on the non-volatile storage medium. The internal memory provides an environment for the operation of the operating system and computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with other computer devices on which application software is deployed. The computer program, when executed by a processor, implements a vision processing-based plugging method. The display screen of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, keys, a track ball, or a touch pad arranged on the shell of the computer device, or an external keyboard, touch pad, or mouse.
It will be appreciated by persons skilled in the art that the architecture shown in fig. 4 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting as to the computer device to which the present inventive arrangements are applicable, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory storing a computer program and a processor that when executing the computer program performs the steps of:
acquiring an image of a first component to obtain an image of the first component, and acquiring an image of a second component to obtain an image of the second component;
analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
performing linear fitting on the third coordinate and the fourth coordinate on the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
And calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
In one embodiment, the processor when executing the computer program further performs the steps of:
analyzing the first component image to obtain reference point coordinates of the first component image and rough positioning coordinates of each first element feature;
analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature;
acquiring a first template region of interest, acquiring a first region of interest of the first component image based on reference point coordinates of the first component image, comparing the first region of interest with the first template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
acquiring a second template region of interest, acquiring a second region of interest of the second component image based on reference point coordinates of the second component image, comparing the second region of interest with the second template region of interest based on a feature matching method of a pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of:
acquiring a first template image, comparing the first component image with the first template image to obtain reference point coordinates of the first component image, and obtaining coarse positioning coordinates of each first element feature on the first component image based on the reference point coordinates of the first component image;
and obtaining a second template image, comparing the second component image with the second template image to obtain reference point coordinates of the second component image, and obtaining coarse positioning coordinates of each second element feature on the second component image based on the reference point coordinates of the second component image.
In one embodiment, the processor when executing the computer program further performs the steps of:
aligning reference point coordinates of the first part image and reference point coordinates of the first template image, and acquiring a first region of interest of the first part image according to affine transformation based on the position of the first template region of interest on the first template image;
aligning reference point coordinates of the second part image and reference point coordinates of the second template image, and acquiring a second region of interest of the second part image according to affine transformation based on a position of the second template region of interest on the second template image.
In one embodiment, the processor when executing the computer program further performs the steps of:
comparing the first region of interest with the first template region of interest based on a feature matching method of pattern contours, and precisely positioning coarse positioning coordinates of each first element feature in the first template region of interest according to positions of template elements in the first template region of interest to obtain the first coordinates of each first element feature in the first region of interest;
and comparing the second region of interest with the second template region of interest based on a feature matching method of the pattern profile, and precisely positioning coarse positioning coordinates of each second element feature in the second template region of interest according to the positions of template elements in the second template region of interest to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the processor when executing the computer program further performs the steps of:
and performing linear fitting on the robot coordinate system on the third coordinate and the fourth coordinate by using an empirical harmonic method.
In one embodiment, the processor when executing the computer program further performs the steps of:
detecting whether the deviation amount is larger than a preset deviation amount;
when the deviation amount is larger than the preset deviation amount, determining that the first component and/or the second component are waste materials;
and when the deviation amount is smaller than or equal to the preset deviation amount, calculating to obtain the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
Example six
In this embodiment, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an image of a first component to obtain an image of the first component, and acquiring an image of a second component to obtain an image of the second component;
analyzing the first component image to obtain first coordinates of each first element feature on the first component image, and analyzing the second component image to obtain second coordinates of each second element feature on the second component image;
converting the first coordinate and the second coordinate into a third coordinate and a fourth coordinate based on a robot coordinate system respectively;
performing straight-line fitting of the third coordinate and the fourth coordinate in the robot coordinate system, and calculating the deviation amount of the third coordinate and the fourth coordinate;
and calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
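The coordinate-conversion step above (first/second pixel coordinates into third/fourth robot-frame coordinates) can be sketched with a 2x3 affine matrix obtained from a prior hand-eye calibration. The patent does not specify the conversion model, so the affine form and every numeric value below, including the calibration itself, are illustrative assumptions.

```python
# Hypothetical sketch: pixel -> robot-frame conversion via a calibrated
# 2x3 affine transform [[a, b, tx], [c, d, ty]].

def pixel_to_robot(pt, affine):
    """Map pixel coordinates (u, v) into the robot coordinate system."""
    u, v = pt
    a, b, tx = affine[0]
    c, d, ty = affine[1]
    return (a * u + b * v + tx, c * u + d * v + ty)

# Example calibration: 0.05 mm per pixel, camera origin at robot (120, 80);
# the image y axis points down while the robot y axis points up.
CALIB = [[0.05, 0.0, 120.0],
         [0.0, -0.05, 80.0]]

first_coords = [(100, 200), (300, 200)]                    # element features, px
third_coords = [pixel_to_robot(p, CALIB) for p in first_coords]
print(third_coords)   # robot-frame positions in mm
```

In a real cell the affine entries would come from a calibration routine (e.g. imaging a target at known robot poses), not from hand-written constants.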
In one embodiment, the computer program when executed by the processor further performs the steps of:
analyzing the first component image to obtain reference point coordinates of the first component image and coarse positioning coordinates of each first element feature;
analyzing the second component image to obtain reference point coordinates of the second component image and coarse positioning coordinates of each second element feature;
acquiring a first template region of interest, acquiring a first region of interest of the first component image based on the reference point coordinates of the first component image, comparing the first region of interest with the first template region of interest by a pattern-contour feature matching method, and refining the coarse positioning coordinates of each first element feature to obtain the first coordinates of each first element feature in the first region of interest;
acquiring a second template region of interest, acquiring a second region of interest of the second component image based on the reference point coordinates of the second component image, comparing the second region of interest with the second template region of interest by the pattern-contour feature matching method, and refining the coarse positioning coordinates of each second element feature to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
acquiring a first template image, comparing the first component image with the first template image to obtain reference point coordinates of the first component image, and obtaining coarse positioning coordinates of each first element feature on the first component image based on the reference point coordinates of the first component image;
and obtaining a second template image, comparing the second component image with the second template image to obtain reference point coordinates of the second component image, and obtaining coarse positioning coordinates of each second element feature on the second component image based on the reference point coordinates of the second component image.
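A minimal sketch of the coarse-positioning idea: once the reference point has been located by comparing the component image with the template image, the template's known element offsets are transferred onto the component image. `TEMPLATE_ELEMENT_OFFSETS` and all pixel values below are hypothetical.

```python
# Coarse positioning sketch: element positions are stored in the template
# relative to its reference point, then re-anchored at the reference point
# found in the live component image. All coordinates are made up.

TEMPLATE_ELEMENT_OFFSETS = [(40, 10), (40, 60), (40, 110)]   # pins relative
                                                             # to the ref point

def coarse_positions(ref_point, offsets=TEMPLATE_ELEMENT_OFFSETS):
    """Coarse pixel coordinates of each element feature: the reference
    point from template comparison plus each template offset."""
    rx, ry = ref_point
    return [(rx + dx, ry + dy) for dx, dy in offsets]

# Reference point located by comparing the part image with the template image
print(coarse_positions((512, 300)))
# → [(552, 310), (552, 360), (552, 410)]
```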
In one embodiment, the computer program when executed by the processor further performs the steps of:
aligning the reference point coordinates of the first component image with the reference point coordinates of the first template image, and acquiring a first region of interest of the first component image by affine transformation based on the position of the first template region of interest on the first template image;
and aligning the reference point coordinates of the second component image with the reference point coordinates of the second template image, and acquiring a second region of interest of the second component image by affine transformation based on the position of the second template region of interest on the second template image.
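The ROI transfer above can be sketched as an affine (rotation plus translation) mapping of the template ROI corners into the component image. The helper name `map_roi` and all coordinates are assumptions; a full implementation would also estimate the rotation angle from the reference-point alignment rather than take it as a parameter.

```python
import math

# Hypothetical sketch: carry the template ROI rectangle over to the part
# image by rotating about the template reference point and translating to
# the part's reference point. Angles and coordinates are illustrative.

def map_roi(template_roi, template_ref, part_ref, angle_deg=0.0):
    """Map template ROI corners (x, y) into the part image."""
    a = math.radians(angle_deg)
    ca, sa = math.cos(a), math.sin(a)
    tx, ty = template_ref
    px, py = part_ref
    mapped = []
    for x, y in template_roi:
        dx, dy = x - tx, y - ty                       # corner relative to ref
        mapped.append((px + dx * ca - dy * sa,        # rotate, then translate
                       py + dx * sa + dy * ca))
    return mapped

roi = [(100, 100), (200, 100), (200, 180), (100, 180)]   # template ROI corners
print(map_roi(roi, template_ref=(150, 140), part_ref=(160, 150)))
```

With a zero angle this reduces to a pure translation by the reference-point difference, which is the common case when the part is held in a fixture.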
In one embodiment, the computer program when executed by the processor further performs the steps of:
comparing the first region of interest with the first template region of interest by a pattern-contour feature matching method, and refining the coarse positioning coordinates of each first element feature according to the positions of the template elements in the first template region of interest, so as to obtain the first coordinates of each first element feature in the first region of interest;
and comparing the second region of interest with the second template region of interest by the pattern-contour feature matching method, and refining the coarse positioning coordinates of each second element feature according to the positions of the template elements in the second template region of interest, so as to obtain the second coordinates of each second element feature in the second region of interest.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and performing straight-line fitting of the third coordinates and the fourth coordinates in the robot coordinate system by using an empirical harmonic method.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting whether the deviation amount is larger than a preset deviation amount;
when the deviation amount is larger than the preset deviation amount, determining that the first component and/or the second component is scrap;
and when the deviation amount is smaller than or equal to the preset deviation amount, calculating the insertion compensation amount of the robot according to the deviation amount, and inserting the first component and the second component based on the insertion compensation amount.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by a computer program stored on a non-transitory computer-readable storage medium which, when executed, may include the steps of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that involves no contradiction should be considered to be within the scope of this specification.
The above examples express only a few embodiments of the application; their description is specific and detailed, but is not to be construed as limiting the scope of the application. It should be noted that those skilled in the art may make several variations and modifications without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be determined by the appended claims.

Claims (10)

CN202311149735.6A (priority date 2023-09-06, filing date 2023-09-06): Visual processing-based plug-in method, device, computer equipment and storage medium. Status: Pending. Publication: CN117114958A (en).


Publications (1)

Publication Number: CN117114958A
Publication Date: 2023-11-24

Family ID: 88812682


Cited By (3)

* Cited by examiner, † Cited by third party
CN118817713A (en)*, priority 2024-06-28, published 2024-10-22, Gree Electric Appliances Inc of Zhuhai: Plug-in installation anomaly detection method, processor and detection camera
CN119110508A (en)*, priority 2024-09-20, published 2024-12-10, Gree Electric Appliances Inc of Zhuhai: Electronic component insertion method and device based on machine vision
CN119110508B (en)*, priority 2024-09-20, published 2025-09-30, Gree Electric Appliances Inc of Zhuhai: Electronic component insertion method and device based on machine vision


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
