CN119850516A - Device defect detection method, device, equipment and medium based on dimension characteristics - Google Patents

Device defect detection method, device, equipment and medium based on dimension characteristics
Download PDF

Info

Publication number
CN119850516A
CN119850516A
Authority
CN
China
Prior art keywords
dimension
detection
detected
color
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410953098.6A
Other languages
Chinese (zh)
Inventor
陈健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Luchen Intelligent Equipment Technology Co ltd
Original Assignee
Guangzhou Yingshi Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Yingshi Information Technology Co ltd
Priority to CN202410953098.6A
Publication of CN119850516A
Legal status: Pending

Abstract


The embodiments of the present invention disclose a device defect detection method, apparatus, equipment and medium based on dimensional features. During the automatic optical inspection process, an image to be inspected of the device to be inspected is obtained, and a corresponding standard image and inspection strategy are obtained based on the device image. The inspection strategy includes at least two inspection dimensions and a dimension inspection sequence, and each inspection dimension is configured with a pre-trained feature extraction model; the image to be inspected and the standard image are sequentially input into the feature extraction model corresponding to the dimension detection sequence for dimensional feature classification extraction and inspection; when the extraction result to be inspected in each inspection dimension matches the corresponding standard extraction result, it is confirmed that the corresponding device to be inspected has passed the inspection. According to the key features of different device types, sequential inspection is performed based on the corresponding inspection dimensions, thereby achieving accurate detection of device defects.

Description

Device defect detection method, device, equipment and medium based on dimension characteristics
Technical Field
The embodiment of the invention relates to the technical field of optical detection, in particular to a device defect detection method, device, equipment and medium based on dimension characteristics.
Background
In practical application scenarios of AOI (Automated Optical Inspection), it is generally necessary to obtain parameters such as the position, shape, and size of the pins of an electronic component (such as an integrated circuit, a chip, or a connector), so as to ensure that the component is mounted correctly, that no installation defect has occurred, and that it can be installed and used normally afterwards. Traditional AOI relies mainly on two-dimensional images: a camera captures a two-dimensional image, the image is compared in color against the image of a standard device, and the comparison determines whether an installation defect exists.
The inventor has found that, with the existing AOI detection approach, device types vary widely and image acquisition is prone to interference, so image detection based on color comparison yields results of relatively low accuracy.
Disclosure of Invention
The invention provides a device defect detection method, device, equipment and medium based on dimension characteristics, which are used to solve the technical problem that existing automatic optical inspection results have relatively low accuracy.
In a first aspect, an embodiment of the present invention provides a device defect detection method based on dimensional characteristics, where the device defect detection method based on dimensional characteristics includes:
Acquiring an image to be detected of a device to be detected, and acquiring a corresponding standard image and a detection strategy based on the device image, wherein the detection strategy comprises at least two detection dimensions and a dimension detection sequence, and each detection dimension is correspondingly provided with a pre-trained feature extraction model;
Inputting the image to be detected and the standard image into the first feature extraction model in the dimension detection sequence for dimension feature classification extraction, to obtain a corresponding extraction result to be detected and a standard extraction result; each time the obtained extraction result to be detected matches the corresponding standard extraction result, inputting the image to be detected into the next feature extraction model in the dimension detection sequence for dimension feature classification extraction, to obtain a corresponding extraction result to be detected and standard extraction result, until an extraction result to be detected does not match the corresponding standard extraction result or the feature extraction models corresponding to all of the at least two detection dimensions have performed extraction;
And when the results corresponding to all of the at least two detection dimensions match, confirming that the corresponding device to be detected passes detection.
In the automatic optical detection process, the image to be detected of the device to be detected is acquired, and the corresponding standard image and detection strategy are acquired based on the device image. The detection strategy includes at least two detection dimensions and a dimension detection sequence, and each detection dimension is configured with a pre-trained feature extraction model. The image to be detected and the standard image are sequentially input into the feature extraction models according to the dimension detection sequence for dimension feature classification extraction and detection, and the device to be detected is confirmed to pass detection when the extraction result to be detected in every detection dimension matches the corresponding standard extraction result. Sequential detection based on the detection dimensions that reflect the key features of different device types thereby achieves accurate detection of device defects.
The detection dimensions include a color dimension; the feature extraction model corresponding to the color dimension is a color classification recognition model, which is obtained by pre-training in the following manner:
acquiring color images of multiple colors as first training samples, wherein the first training samples of each color comprise multiple color images;
And performing supervised central clustering training on a preset first neural network model through a first training sample to obtain a color classification recognition model, wherein the constraint condition of central clustering training is that color data with the same color are gathered around the same color center, and color data with different colors are scattered in respective color centers.
By the above-mentioned supervised training of the color dimension, the color difference is identified as a feature, instead of identifying a specific color, so that devices of various colors can be identified without restriction.
The first neural network model maps the extracted color feature vectors into a first high-dimensional space, and the first high-dimensional space is provided with corresponding centers corresponding to different colors.
In the high-dimensional space, the distances between the features with the same color are very close, and the distances between the features with different colors are very far, so that the capability of distinguishing the different colors is provided.
The detection dimensions include a size dimension; the feature extraction model corresponding to the size dimension is a size classification recognition model, which is obtained by pre-training in the following manner:
acquiring device images of multiple sizes under the same shooting parameters as second training samples, wherein the second training samples of each size comprise a plurality of device images;
And performing supervised center clustering training on a preset second neural network model through a second training sample to obtain a size classification recognition model, wherein the constraint condition of the center clustering training is that device images with the same size are gathered around the same size center, and device images with different sizes are scattered in the respective size centers.
By the above-mentioned supervised training of the dimension, the difference in size is identified as a feature, instead of identifying a specific dimension parameter, so that devices of various sizes can be identified without restriction.
The second neural network model maps the extracted size feature vectors into a second high-dimensional space, and the second high-dimensional space has a corresponding center for each different size.
In the high-dimensional space, the distances between the features with the same size are very close, and the distances between the features with different sizes are very far, so that the capability of distinguishing the different sizes is provided.
And when the similarity between the extraction result to be detected and the standard extraction result corresponding to the size dimension is at or above a preset similarity threshold, confirming that the extraction result to be detected matches the standard extraction result.
By confirming the match based on a similarity threshold, size recognition can tolerate deviations introduced during image acquisition and recognition while still distinguishing devices whose sizes differ noticeably, which improves the robustness of device size recognition.
The detection strategy is to sequentially detect the size dimension, the color dimension and the texture dimension, so as to detect a wrong or missing component.
By sequentially detecting the size dimension, the color dimension and the texture dimension, the detection speed of the device defect can be improved.
In a second aspect, an embodiment of the present invention provides a device defect detection apparatus based on dimensional characteristics, including:
The detection preparation unit is used for acquiring an image to be detected of the device to be detected, and acquiring a corresponding standard image and a detection strategy based on the device image, wherein the detection strategy comprises at least two detection dimensions and a dimension detection sequence, and each detection dimension is correspondingly provided with a pre-trained feature extraction model;
The detection comparison unit is used for inputting the image to be detected and the standard image into the first feature extraction model in the dimension detection sequence for dimension feature classification extraction, to obtain a corresponding extraction result to be detected and a standard extraction result; each time the obtained extraction result to be detected matches the corresponding standard extraction result, inputting the image to be detected into the next feature extraction model in the dimension detection sequence for dimension feature classification extraction, to obtain a corresponding extraction result to be detected and standard extraction result, until an extraction result to be detected does not match the corresponding standard extraction result or the feature extraction models corresponding to all of the at least two detection dimensions have performed extraction;
and the result confirming unit is used for confirming that the corresponding device to be detected is qualified under the condition that the results corresponding to the at least two detection dimensions are matched.
The detection dimensions include a color dimension; the feature extraction model corresponding to the color dimension is a color classification recognition model, which is obtained by pre-training in the following manner:
acquiring color images of multiple colors as first training samples, wherein the first training samples of each color comprise multiple color images;
And performing supervised central clustering training on a preset first neural network model through a first training sample to obtain a color classification recognition model, wherein the constraint condition of central clustering training is that color data with the same color are gathered around the same color center, and color data with different colors are scattered in respective color centers.
By the above-mentioned supervised training of the color dimension, the color difference is identified as a feature, instead of identifying a specific color, so that devices of various colors can be identified without restriction.
The first neural network model maps the extracted color feature vectors into a first high-dimensional space, and the first high-dimensional space is provided with corresponding centers corresponding to different colors.
In the high-dimensional space, the distances between the features with the same color are very close, and the distances between the features with different colors are very far, so that the capability of distinguishing the different colors is provided.
The detection dimensions include a size dimension; the feature extraction model corresponding to the size dimension is a size classification recognition model, which is obtained by pre-training in the following manner:
acquiring device images of multiple sizes under the same shooting parameters as second training samples, wherein the second training samples of each size comprise a plurality of device images;
And performing supervised center clustering training on a preset second neural network model through a second training sample to obtain a size classification recognition model, wherein the constraint condition of the center clustering training is that device images with the same size are gathered around the same size center, and device images with different sizes are scattered in the respective size centers.
By the above-mentioned supervised training of the dimension, the difference in size is identified as a feature, instead of identifying a specific dimension parameter, so that devices of various sizes can be identified without restriction.
The second neural network model maps the extracted size feature vectors into a second high-dimensional space, and the second high-dimensional space has a corresponding center for each different size.
In the high-dimensional space, the distances between the features with the same size are very close, and the distances between the features with different sizes are very far, so that the capability of distinguishing the different sizes is provided.
And when the similarity between the extraction result to be detected and the standard extraction result corresponding to the size dimension is at or above a preset similarity threshold, confirming that the extraction result to be detected matches the standard extraction result.
By confirming the match based on a similarity threshold, size recognition can tolerate deviations introduced during image acquisition and recognition while still distinguishing devices whose sizes differ noticeably, which improves the robustness of device size recognition.
The detection strategy is to sequentially detect the size dimension, the color dimension and the texture dimension, so as to detect a wrong or missing component.
By sequentially detecting the size dimension, the color dimension and the texture dimension, the detection speed of the device defect can be improved.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
one or more processors;
a memory for storing one or more computer programs;
The one or more computer programs, when executed by the one or more processors, cause the electronic device to implement the dimensional feature-based device defect detection method as in the first aspect.
In a fourth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a device defect detection method based on dimensional characteristics as in the first aspect.
The electronic device of the third aspect and the computer readable storage medium of the fourth aspect may be used to perform the device defect detection method based on dimensional characteristics provided in any of the foregoing embodiments, and have corresponding functions and beneficial effects.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting device defects based on dimension characteristics according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the distribution of multiple colors in a feature space;
FIG. 3 is a schematic structural diagram of a device defect detection apparatus based on dimension characteristics according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are for purposes of illustration and not of limitation. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings.
It should be noted that the present disclosure is not limited to all the alternative embodiments, and those skilled in the art who review this disclosure will recognize that any combination of the features may be used to construct the alternative embodiments as long as the features are not mutually inconsistent.
The following describes each embodiment in detail.
Traditional AOI relies mainly on two-dimensional images: a camera captures a two-dimensional image, the image is compared in color against the image of a standard device, and the comparison determines whether an installation defect exists.
In a specific AOI inspection process, the devices on a board may carry characters or carry no characters. For character-bearing components, an optical character recognition algorithm is an effective way to recognize the component, but in real scenes differences in character size and shape can cause recognition failures or false alarms. In addition, in real scenes the proportion of non-character components (such as capacitors) is also large, and an optical character recognition algorithm is essentially useless for them. Both cases therefore generally require another algorithm, and a currently common one for such problems is the color extraction algorithm.
Defect detection based on the color extraction algorithm is not fully automatic: the reference image corresponding to the template component must be adjusted manually, or a suitable range of chromaticity and brightness must be verified as the reference standard for detection. The detection process is also strongly affected by noise, because the color extraction algorithm computes the brightness and chromaticity of every pixel; noise on a pixel changes its brightness and chromaticity, and the algorithm pays excessive attention to local per-pixel information. In addition, the color difference result depends heavily on the reference image. For example, if the color range of the reference image is relatively narrow (red), the proportion of the image to be tested (red + black) that falls within the reference color range is roughly 50%; assuming the standard range for AOI color detection is (60, 100), the image to be tested is judged NG (failed) by the AOI detection system. If the two images are swapped, i.e. the color range of the reference image (red + black) is wider, the proportion of the image to be tested (red) that falls within the reference color range can reach 100%; again assuming the standard range for AOI color detection is (60, 100), the image to be tested is now judged OK (qualified) by the AOI detection system. The judgment thus differs depending on which of the two images serves as the image to be tested and which as the reference, even though there can be only one true result for the components corresponding to the two images.
Overall, when the existing AOI detection approach is used, device types vary widely and image acquisition is prone to interference, so image detection based on color comparison yields results of relatively low accuracy.
In order to solve the above technical problems, the embodiment of the application provides a device defect detection method based on dimension characteristics. In the automatic optical detection process, an image to be detected of the device to be detected is acquired, and a corresponding standard image and detection strategy are acquired based on the device image; the detection strategy includes at least two detection dimensions and a dimension detection sequence, and each detection dimension is configured with a pre-trained feature extraction model. The image to be detected and the standard image are sequentially input into the feature extraction models according to the dimension detection sequence for dimension feature classification extraction and detection, and the device to be detected is confirmed to pass detection when the extraction result to be detected in every detection dimension matches the corresponding standard extraction result. Sequential detection based on the detection dimensions that reflect the key features of different device types thereby achieves accurate detection of device defects.
Fig. 1 is a flowchart of a method for detecting a device defect based on dimension features according to an embodiment of the present application, as shown in fig. 1, including but not limited to steps S110 to S140.
Step S110, obtaining an image to be detected of a device to be detected, and obtaining a corresponding standard image and a detection strategy based on the device image, wherein the detection strategy comprises at least two detection dimensions and a dimension detection sequence, and each detection dimension is correspondingly configured with a pre-trained feature extraction model.
The device type to be detected may be confirmed by receiving a setting operation from the user for the current detection, or by running a pre-trained device identification model to identify the device in the PCB board card image; whether the device type is confirmed directly by the user's operation or by active identification, it can be used in the subsequent detection judgment. It should be understood that, in a specific implementation, images of the PCB board card usually need to be acquired continuously, the image to be detected corresponding to each device to be detected is confirmed from them, and defect detection of the corresponding device is then performed based on that image. Common device types include chips, capacitors, resistors, and the like. The standard image is an image of another device of the same type as the device to be detected, extracted from the image of a qualified, already produced PCB board card. The image to be detected and the standard image are both acquired by an optical camera, and the acquisition parameters should be as identical as possible, such as the ambient light, the focal length, the shooting distance, and the various hardware parameters of the camera. Acquiring both images with the same optical camera parameters for comparison reduces the preprocessing needed before image comparison, directly reduces at the source the image differences that the acquisition process may introduce, and improves the accuracy of device defect detection.
During defect detection the device is mounted on a circuit board, and the acquired image is usually of the whole board; in the embodiment of the application, however, one device is detected at a time, so the image to be detected for each device must be extracted from the corresponding board image. A specific extraction method is an OpenCV template matching algorithm: first, an image of a circuit board confirmed to have no production defects is acquired as the standard circuit board image; the positions of the devices in the standard circuit board image are then confirmed based on device identification and/or manual calibration, and the device type of each device in the standard circuit board image can be confirmed at the same time; board images acquired during subsequent detection are matched against the calibration in the standard circuit board image, the image to be detected corresponding to each device to be detected is extracted, and the device type corresponding to each extracted image can be confirmed. In other words, the device type of the device to be detected may be confirmed directly, during extraction, from the device types calibrated in the standard circuit board image. The calibration of device types, i.e. confirming the standard image corresponding to each device based on device identification and/or manual calibration, can be completed by image recognition of the devices in the standard circuit board image, by manual operation on those devices, or by an initial calibration through image recognition followed by a final manual confirmation.
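As an illustration of the template-matching extraction described above, the following is a minimal sketch using OpenCV's matchTemplate; the calibrated device list, ROI coordinates and search margin are illustrative assumptions rather than values from the embodiment.

```python
import cv2

# Hypothetical calibration: device regions (x, y, w, h) and device types marked
# on the standard (defect-free) board image; names and values are illustrative.
CALIBRATED_DEVICES = [
    {"type": "capacitor", "roi": (120, 340, 60, 60)},
    {"type": "chip",      "roi": (400, 220, 180, 140)},
]

def extract_device_images(board_img, standard_board_img, search_margin=30):
    """Locate each calibrated device on the board under test via template
    matching against the standard board, and crop its image to be detected."""
    results = []
    for dev in CALIBRATED_DEVICES:
        x, y, w, h = dev["roi"]
        template = standard_board_img[y:y + h, x:x + w]
        # Search only a neighbourhood around the calibrated position to speed
        # up matching and avoid spurious matches elsewhere on the board.
        x0, y0 = max(0, x - search_margin), max(0, y - search_margin)
        search_area = board_img[y0:y + h + search_margin, x0:x + w + search_margin]
        scores = cv2.matchTemplate(search_area, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(scores)
        mx, my = x0 + max_loc[0], y0 + max_loc[1]
        results.append({
            "type": dev["type"],                       # device type from calibration
            "image_to_detect": board_img[my:my + h, mx:mx + w],
            "standard_image": template,
        })
    return results
```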
For each device type, a corresponding detection strategy is provided. The detection strategy describes which feature dimensions (i.e. detection dimensions) need to be detected for that device type and in what order (i.e. the dimension detection sequence); each detection dimension is detected using a corresponding feature extraction model, which is trained on samples collected before detection. Each feature extraction model identifies, from an image, the clustering result of the feature information in the same detection dimension, and this clustering result is the feature expression in that detection dimension. If the clustering result of the image to be detected matches the clustering result of the corresponding standard image, the device to be detected is considered qualified in that detection dimension. In the embodiment of the application, the detection dimensions may include a color dimension, a size dimension and a texture dimension, chosen according to the features commonly used to identify a device and to judge whether it is qualified.
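For illustration only, a per-device-type detection strategy could be represented as an ordered list of dimension names, as in the following sketch; the device type names and dimension orders shown are assumptions, not a prescribed configuration.

```python
# Ordered detection dimensions per device type; each name is assumed to map to
# a pre-trained feature extraction model. Entries are illustrative assumptions.
DETECTION_STRATEGIES = {
    "capacitor":          ["size", "color"],             # no polarity check required
    "resistor":           ["size", "color", "texture"],
    "integrated_circuit": ["size", "color", "texture"],
}

def strategy_for(device_type):
    """Return the dimension detection sequence for a device type."""
    return DETECTION_STRATEGIES[device_type]
```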
For the detection dimension corresponding to the color dimension, the feature extraction model is a color classification recognition model, which is pre-trained as follows: color images of multiple colors are acquired as first training samples, with the first training samples of each color comprising a plurality of color images; supervised center clustering training is performed on a preset first neural network model using the first training samples to obtain the color classification recognition model, the constraint of the center clustering training being that color data of the same color gather around the same color center while color data of different colors scatter to their respective color centers. The first neural network model maps the extracted color feature vectors into a first high-dimensional space, and the first high-dimensional space has a corresponding center for each different color.
The overall training process of the feature extraction model can follow implementations in the related art; the embodiment of the application mainly describes the detailed design made for the specific recognition targets in this application scenario. The color classification model focuses on color differences, so the first training samples used to train it are pure-color picture data with irrelevant background and component electrodes removed; picture data of the same color are then grouped into one class and picture data of different colors into separate classes. Because directly collected single-color data has little variety while real-world color data varies much more (for example, the upper half of a picture may be dark red and the lower half light red, and rotation or noise interference may occur in practice), the original data are rotated and noise is added by a designed algorithm to simulate situations that may be encountered in reality, so that the finally trained feature extraction model is resistant to interference.
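A minimal sketch of the rotation-plus-noise augmentation mentioned above, assuming OpenCV and NumPy; the angle range and noise level are illustrative assumptions.

```python
import cv2
import numpy as np

def augment_color_sample(img, max_angle=15.0, noise_sigma=8.0, rng=None):
    """Simulate real-world variation for a pure-color training patch by
    applying a random rotation and additive Gaussian noise."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    angle = rng.uniform(-max_angle, max_angle)
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(img, rot, (w, h), borderMode=cv2.BORDER_REFLECT)
    noise = rng.normal(0.0, noise_sigma, rotated.shape)
    noisy = np.clip(rotated.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return noisy
```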
The feature extraction model may use a mainstream neural network model from the related art, such as ResNet, DenseNet or other networks. For the color classification recognition model, the initial convolution kernel is set as large as practical, such as 5×5 or 7×7, which is also why the device defect detection method in the embodiment of the application is more resistant to interference than the traditional color extraction algorithm. If a pixel inside a 5×5 color patch is noisy, its color intensity changes greatly, which affects the traditional color extraction algorithm, whereas the neural network model smooths the noise out.
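The noise-smoothing effect of a large kernel can be illustrated with a toy calculation; an averaging kernel is used here as the simplest stand-in for a learned 7×7 convolution, and the numbers are made up.

```python
import numpy as np

# Toy illustration: one noisy pixel inside an otherwise uniform 7x7 red patch.
patch = np.full((7, 7), 200.0)   # red-channel intensity of a uniform patch
patch[3, 3] = 20.0               # a single noisy pixel

# A per-pixel color comparison sees the full deviation at (3, 3): |200 - 20| = 180.
per_pixel_error = abs(200.0 - patch[3, 3])

# A 7x7 convolution (here a plain averaging kernel) spreads the deviation over
# 49 pixels, so the response at the patch centre barely moves.
smoothed_center = patch.mean()
smoothed_error = abs(200.0 - smoothed_center)

print(per_pixel_error)   # 180.0
print(smoothed_error)    # ~3.7
```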
In the whole recognition process, color data are mapped to a first high-dimensional space through a neural network model, in the first high-dimensional space, each color has a corresponding center, and in the training process, the features with the same color are gathered towards the corresponding centers. The end result after such training is that the distance between features of the same color will be very close, while the distance between features of different colors will be very far, so that this feature has the ability to distinguish between different colors.
In the embodiment of the application, the color classification recognition model based on supervised center clustering allocates a center to each color, and these centers involve two constraint relations: first, data of the same color gather together; second, data of different colors scatter to their respective centers. In addition, high-dimensional data are used as the features. The high-dimensional features are learned through automatic training, carry color semantic information, and can automatically suppress noise. Compared with existing color recognition schemes, when an unknown 11th color is detected on the basis of 10 known colors, the color classification recognition model obtained by this training automatically establishes a new center in the feature space, closer to the centers of similar colors and farther from the centers of dissimilar colors. In the specific recognition process, the recognition results output for the image to be detected and the standard image are compared, and if the color clustering results are the same or close, the color-dimension features can be considered the same.
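A minimal PyTorch-style sketch of the supervised center clustering described above is given below. The network layout, embedding dimension, margin and loss form are illustrative assumptions rather than the embodiment's exact implementation; the point is the two constraints: same-color features are pulled toward a shared learnable center, and different color centers are pushed apart.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ColorEmbeddingNet(nn.Module):
    """Small CNN that maps a color patch into a high-dimensional feature space.
    A relatively large initial kernel (7x7) helps smooth out pixel-level noise."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.fc(self.backbone(x).flatten(1))

class CenterClusterLoss(nn.Module):
    """Pulls features of the same class toward a learnable class center and
    pushes different class centers apart (the two constraints in the text)."""
    def __init__(self, num_classes, embed_dim=128, margin=1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.margin = margin

    def forward(self, features, labels):
        pull = F.mse_loss(features, self.centers[labels])       # same color -> same center
        dist = torch.cdist(self.centers, self.centers)           # pairwise center distances
        off_diag = dist[~torch.eye(len(self.centers), dtype=torch.bool)]
        push = F.relu(self.margin - off_diag).mean()              # different colors -> far apart
        return pull + push
```

Because matching compares the clustering results of the image to be detected and the standard image rather than absolute color labels, an unseen color only needs to embed consistently in the two images for the comparison to work.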
A schematic distribution diagram of multiple colors in a feature space can refer to fig. 2, wherein color features corresponding to close colors are distributed more closely, and color features corresponding to distinct colors are distributed more far. It should be understood that fig. 2 is only used to characterize the overall distribution of color differences in the feature space, and not to illustrate the distribution of specific color features by specific colors therein.
By supervised training of the color dimension, color differences are identified as features, rather than specific colors, so that devices of various colors can be identified without constraint. In the first high-dimensional space, the distance between features having the same color may be very close and the distance between features of different colors may be very far, thereby providing the ability to distinguish between different colors.
For the detection dimension corresponding to the size dimension, the feature extraction model is a size classification recognition model, which is pre-trained as follows: device images of multiple sizes under the same shooting parameters are acquired as second training samples, with the second training samples of each size comprising a plurality of device images; supervised center clustering training is performed on a preset second neural network model using the second training samples to obtain the size classification recognition model, the constraint of the center clustering training being that device images of the same size gather around the same size center while device images of different sizes scatter to their respective size centers. The second neural network model maps the extracted size feature vectors into a second high-dimensional space, and the second high-dimensional space has a corresponding center for each different size.
For the "size" differences that the size dimension focuses on, irrelevant background and component electrodes are removed from the second training samples used to train the size classification recognition model. Picture data of the same "size" are then grouped into one class and picture data of different "sizes" into separate classes; the large, medium and small sizes form a pyramid structure. Directly collected data has little variety while real-world data varies more: a picture's "size", for example, may in practice be affected by rotation, noise interference and so on. The original data are therefore rotated and noise is added by a designed algorithm to simulate situations that may be encountered in reality, so that the model is resistant to interference. It should be understood that the "size" of a device in an image is related to parameters such as shooting distance; sizes can be classified and calibrated directly during training, but in the subsequent detection process the image to be detected and the standard image should be acquired from the device to be detected and the standard device with the same image acquisition parameters, where the image acquisition parameters include the lens-to-device distance, the focal length, and other parameters that affect the imaging size.
The basic neural network model of the size classification recognition model is substantially the same as that of the color classification recognition model in both structure and training process. During size recognition, the "size" data are mapped by the neural network model into a second high-dimensional space in which every "size" has a corresponding center, and during training the features with the same "size" gather toward their corresponding center. The end result of such training is that the distance between features of the same "size" is very small, while the distance between features of different "sizes" is very large, which gives the feature the ability to distinguish different "sizes".
The "size" features are center-clustered so that the trained neural network learns the distribution of "size" features in the embedding space. The design of this embedding space is critical: it brings similar "size" features close together and keeps dissimilar "size" features far apart, which helps distinguish different "sizes" during the recognition phase. In this embedding space, an unknown "size" feature falls into a region that is as far as possible from the regions of known "sizes", so unknown "sizes" can also be identified. To achieve differential recognition of unknown "sizes", a similarity threshold is set in the embedding space. During "size" difference recognition, if the similarity between a feature vector and the template "size" feature exceeds the similarity threshold (cosine similarity is used here to measure the similarity between the two), the feature can be confirmed as consistent with, i.e. matching, the template "size" feature; otherwise it is inconsistent. In other words, when the similarity between the extraction result to be detected and the standard extraction result corresponding to the size dimension is at or above the preset similarity threshold, the extraction result to be detected is confirmed to match the standard extraction result.
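A minimal sketch of the cosine-similarity match just described; the threshold value is an illustrative assumption that would be tuned for the deployment.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # illustrative value, not from the embodiment

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def size_features_match(feature_to_detect, standard_feature,
                        threshold=SIMILARITY_THRESHOLD):
    """A size extraction result matches the standard result when the cosine
    similarity of the two embedding vectors is at or above the threshold."""
    return cosine_similarity(feature_to_detect, standard_feature) >= threshold
```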
By supervised training of the size dimension, the size difference is identified as a feature, rather than a specific size parameter, so that devices of various sizes can be identified without restriction. In the high-dimensional space, the distances between features of the same size are very small and the distances between features of different sizes are very large, which provides the ability to distinguish different sizes. In a specific embodiment, to ensure that what is identified is an image corresponding to a complete device, the second training samples used for training, the standard image used as reference, and the image to be detected of the device to be detected are all larger than the coverage area of the complete device; for example, the coverage area of the complete device is expanded outwards so that every image involved in detection is about 1.4 times the coverage area of the corresponding complete device, for example 1.3 times, 1.4 times or 1.5 times.
The training process described above is based on center clustering, which achieves the classification capability after training through the constraint of clustering features around centers. In a concrete implementation, triplet loss or contrastive loss may be used instead, and other constraints in the feature dimension can be applied, but the overall goal is the same: pull the feature distance between samples with the same attribute very close, and push the feature distance between samples with different attributes very far apart.
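As a hedged illustration of the triplet-loss alternative mentioned above (the margin value is an assumption):

```python
import torch.nn.functional as F

def triplet_embedding_loss(anchor, positive, negative, margin=0.5):
    """Pull samples with the same attribute (anchor/positive) together and push
    samples with a different attribute (anchor/negative) at least `margin`
    farther apart in the embedding space."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch's built-in nn.TripletMarginLoss implements the same idea and could be used directly.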
The device type includes a capacitor, and the detection strategy is to detect the size dimension and the color dimension in sequence. This is a targeted detection strategy designed for the specific situation that a capacitor generally has no polarity-dependent installation requirement: only the size dimension and the color dimension are detected to confirm the intended capacitor type and installation result, which improves both the detection speed and accuracy for capacitors. Of course, in a specific implementation, detection strategies can likewise be designed specifically for devices with various installation characteristics, such as resistors, diodes and integrated circuits, according to their device and installation characteristics, and the detection accuracy is improved by the embodiment of the application.
In an alternative implementation, the detection strategy is to detect the size dimension, the color dimension and the texture dimension in sequence, so as to detect a wrong or missing component. An electronic device is subject to stringent requirements, and differences in its size, color and the like may indicate that it is incorrectly mounted and that the corresponding circuit board is directly unusable. In this implementation, the features that can be extracted and checked quickly are used first, so a flawed electronic device is confirmed quickly and a board defect does not have to wait for a long, comprehensive feature extraction to be confirmed, which effectively improves detection efficiency.
Step S120, inputting the image to be detected and the standard image into the first feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and a standard extraction result; each time the obtained extraction result to be detected matches the corresponding standard extraction result, inputting the image to be detected into the next feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and standard extraction result, until an extraction result to be detected does not match the corresponding standard extraction result or the feature extraction models corresponding to all of the at least two detection dimensions have performed extraction.
In the embodiment of the application, based on the detection strategy, the detection process of a device to be detected is divided into several small detection links, each corresponding to one detection dimension, and the specific detection content of each link is confirmed according to the dimension detection sequence. For example, when the device type of the device to be detected is a capacitor, the detection strategy detects the size dimension and then the color dimension: in the specific detection process, detection is first performed with the feature extraction model corresponding to the size dimension (i.e. the size classification recognition model), and after the size-dimension detection is passed, detection is performed with the feature extraction model corresponding to the color dimension (i.e. the color classification recognition model); the detection performed by each individual feature extraction model may follow the related detection schemes. A minimal sketch of this sequential flow is shown below.
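The sketch assumes the extractor and matcher callables have already been built; all names are illustrative.

```python
def detect_device(image_to_detect, standard_image, strategy, extractors, matchers):
    """Run the dimensions in the order given by the detection strategy and stop
    at the first mismatch. `extractors` maps a dimension name to its pre-trained
    feature extraction model, `matchers` to its matching rule (e.g. cosine
    similarity for the size dimension); all names are illustrative."""
    for dimension in strategy:                         # e.g. ["size", "color"] for a capacitor
        extract = extractors[dimension]
        result_to_detect = extract(image_to_detect)
        standard_result = extract(standard_image)
        if not matchers[dimension](result_to_detect, standard_result):
            return False, dimension                    # defect found in this dimension
    return True, None                                  # all configured dimensions matched
```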
As described above, each training sample is an image with the background removed, and the standard image used during detection is likewise a device image with the background removed, which eliminates the influence of graphic elements irrelevant to the electronic device on the detection result and ensures its accuracy. Specifically, the background can be removed during positioning with the OpenCV template matching algorithm. Of course, in a specific implementation the image to be detected may also be extracted directly with its background, and the corresponding detection can still be performed.
And step S130, under the condition that the results corresponding to at least two detection dimensions are matched, confirming that the corresponding device to be detected is qualified in detection.
The "at least two detection dimensions" is not a limit on the number of dimensions: detection is not considered qualified merely because two dimensions match, but only when all detection dimensions included in the detection strategy corresponding to the device to be detected match. It should also be understood that, in the embodiment of the application, confirming qualification refers only to qualification with respect to the detection targets defined in this embodiment, and does not represent a confirmation of the overall design and production quality of the PCB board card.
In the device defect detection method based on dimension characteristics described above, the image to be detected of the device to be detected is acquired, and the corresponding standard image and detection strategy are acquired based on the device image; the detection strategy includes at least two detection dimensions and a dimension detection sequence, and each detection dimension is configured with a pre-trained feature extraction model. The image to be detected and the standard image are input into the first feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and a standard extraction result; each time the obtained extraction result to be detected matches the corresponding standard extraction result, the image to be detected is input into the next feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and standard extraction result, until an extraction result to be detected does not match the corresponding standard extraction result or the feature extraction models corresponding to all of the at least two detection dimensions have performed extraction; and when the results corresponding to all of the at least two detection dimensions match, the corresponding device to be detected is confirmed to pass detection. In the automatic optical detection process, sequential detection based on the detection dimensions that reflect the key features of different device types achieves accurate detection of device defects.
Fig. 3 is a schematic structural diagram of a device defect detection apparatus based on dimension characteristics according to an embodiment of the present application. As shown in fig. 3, the apparatus includes a detection preparation unit 210, a sequential identification unit 220, and a qualification confirmation unit 230.
The detection preparation unit 210 is configured to acquire an image to be detected of the device to be detected and to acquire a corresponding standard image and detection strategy based on the device image, the detection strategy including at least two detection dimensions and a dimension detection sequence, with each detection dimension configured with a pre-trained feature extraction model. The sequential identification unit 220 is configured to input the image to be detected and the standard image into the first feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and a standard extraction result, and, each time the obtained extraction result to be detected matches the corresponding standard extraction result, to input the image to be detected into the next feature extraction model in the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and standard extraction result, until an extraction result to be detected does not match the corresponding standard extraction result or the feature extraction models corresponding to all of the at least two detection dimensions have performed extraction. The qualification confirmation unit 230 is configured to confirm that the corresponding device to be detected passes detection when the results corresponding to all of the at least two detection dimensions match.
On the basis of the embodiment, the detection dimension includes a color dimension, the feature extraction model corresponding to the color dimension is a color classification recognition model, and the color classification recognition model is pre-trained by the following method:
acquiring color images of multiple colors as first training samples, wherein the first training samples of each color comprise multiple color images;
And performing supervised central clustering training on a preset first neural network model through a first training sample to obtain a color classification recognition model, wherein the constraint condition of central clustering training is that color data with the same color are gathered around the same color center, and color data with different colors are scattered in respective color centers.
On the basis of the above embodiment, the first neural network model maps the extracted color feature vector into a first high-dimensional space, where there is a corresponding center for the corresponding different color.
On the basis of the above embodiment, the detection dimensions include a size dimension; the feature extraction model corresponding to the size dimension is a size classification recognition model, which is pre-trained in the following manner:
acquiring device images of multiple sizes under the same shooting parameters as second training samples, wherein the second training samples of each size comprise a plurality of device images;
And performing supervised center clustering training on a preset second neural network model through a second training sample to obtain a size classification recognition model, wherein the constraint condition of the center clustering training is that device images with the same size are gathered around the same size center, and device images with different sizes are scattered in the respective size centers.
Based on the above embodiment, the second neural network model maps the extracted size feature vectors into a second high-dimensional space, where the second high-dimensional space has a corresponding center for each different size.
On the basis of the above embodiment, when the similarity between the extraction result to be detected and the standard extraction result corresponding to the size dimension is at or above a preset similarity threshold, the extraction result to be detected is confirmed to match the standard extraction result.
On the basis of the above embodiment, the detection strategy is to sequentially detect the size dimension, the color dimension and the texture dimension, so as to detect a wrong or missing component.
The device defect detection device based on the dimension characteristics provided by the embodiment of the application is contained in the electronic equipment, can be used for executing the corresponding device defect detection method based on the dimension characteristics provided by the embodiment, and has corresponding functions and beneficial effects.
It should be noted that, in the embodiment of the device defect detection apparatus based on dimension characteristics, each unit and module included are only divided according to the functional logic, but not limited to the above division, as long as the corresponding functions can be implemented, and the specific names of the functional units are only for convenience of distinguishing each other, and are not used for limiting the protection scope of the present invention.
Fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 4, the electronic device includes a processor 310 and a memory 320, and may further include an input device 330, an output device 340, and a communication device 350. The number of processors 310 in the electronic device may be one or more; one processor 310 is taken as an example in fig. 4. The processor 310, the memory 320, the input device 330, the output device 340, and the communication device 350 in the electronic device may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 4.
The memory 320 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and a module, such as program instructions/modules corresponding to the dimension feature-based device defect detection method in the embodiment of the present application. The processor 310 executes various functional applications of the electronic device and data processing, i.e., implements the above-described dimension feature-based device defect detection method, by running software programs, instructions, and modules stored in the memory 320.
The memory 320 may mainly include a storage program area that may store an operating system, application programs required for at least one function, and a storage data area that may store data created according to the use of the electronic device, etc. In addition, memory 320 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 320 may further include memory located remotely from processor 310, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 330 may be used to receive network configuration information. The output device 340 may include a display device such as a display screen.
The electronic equipment can be used for executing any device defect detection method based on dimensional characteristics, and has corresponding functions and beneficial effects.
The embodiments of the present application also provide a storage medium containing computer executable instructions, which when executed by a computer processor, are configured to perform the relevant operations in the dimension feature-based device defect detection method provided in any embodiment of the present application, and have corresponding functions and beneficial effects.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product.
Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, etc., such as Read Only Memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. Computer-readable media, as defined herein, do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (10)

inputting the images to be detected and the standard images into a first feature extraction model corresponding to the dimension detection sequence for dimension feature classification extraction to obtain a corresponding extraction result to be detected and a standard extraction result, and inputting the images to be detected and the standard images into a next feature extraction model corresponding to the dimension detection sequence for dimension feature classification extraction under the condition that the extraction result to be detected is matched with the corresponding standard extraction result each time to obtain a corresponding extraction result to be detected and a standard extraction result until the extraction result to be detected is not matched with the corresponding standard extraction result or the feature extraction models corresponding to the at least two detection dimensions are extracted;
The detection comparison unit is used for inputting the image to be detected and the standard image into a first feature extraction model corresponding to the dimension detection sequence to perform dimension feature classification extraction to obtain a corresponding extraction result to be detected and a standard extraction result, and inputting the image to be detected into a next feature extraction model corresponding to the dimension detection sequence to perform dimension feature classification extraction under the condition that the extraction result to be detected is matched with the corresponding standard extraction result each time to obtain a corresponding extraction result to be detected and a standard extraction result until the extraction result to be detected is not matched with the corresponding standard extraction result or the feature extraction models corresponding to the at least two detection dimensions are extracted;