CN111814850A - Defect detection model training method, defect detection method and related device - Google Patents

Defect detection model training method, defect detection method and related device

Info

Publication number
CN111814850A
Authority
CN
China
Prior art keywords
detection
defect
frame
value
sample image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010573557.XA
Other languages
Chinese (zh)
Other versions
CN111814850B (en)
Inventor
崔婵婕
任宇鹏
卢维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010573557.XA
Publication of CN111814850A
Application granted
Publication of CN111814850B
Status: Active
Anticipated expiration

Abstract

The application discloses a defect detection model training method, a defect detection method and a related device. The training method comprises the following steps: acquiring at least one training sample image, wherein the training sample image is marked with at least one first true value frame respectively corresponding to at least one type of defect; detecting the training sample image by using a target detection network to obtain a first detection result, wherein the first detection result comprises a first detection frame corresponding to the defect; selecting at least part of the first detection frames as target detection frames by using the distance between the first true value frame and the first detection frame; and determining a network loss value based on the target detection frame and the first true value frame, and updating the parameters of the target detection network by using the network loss value to obtain a final defect detection model. The technical solution provided by the present application can quickly and accurately train a defect detection model that detects multiple types of defects simultaneously.

Description

Defect detection model training method, defect detection method and related device
Technical Field
The present application relates to the field of defect detection, and in particular, to a defect detection model training method, a defect detection method, and a related apparatus.
Background
With the rapid development of the intelligent manufacturing industry, computer vision and image processing technologies have become one of the main means of post-process product inspection. For example, in the manufacturing of bottle caps, problems such as cap breakage, scratches, slippage, abnormal code spraying and abnormal labeling can occur; if defective caps are not screened out in time, at best the appearance of the finished cap is affected, and at worst the preservation of the product sealed by the cap is compromised. For the bottle cap inspection problem, most existing methods rely on manual identification, with workers visually identifying and locating the bottles; some lines screen caps using traditional image processing methods. However, these methods have obvious disadvantages: workers who perform visual recognition for long periods become fatigued and their attention declines, which affects the accuracy of the detection results and reduces the identification rate of defective caps; moreover, manual visual identification is slow. A technical solution that addresses these problems is therefore needed.
Disclosure of Invention
The technical problem mainly solved by the present application is to provide a defect detection model training method, a defect detection method and a related device, which can quickly train a defect detection model capable of detecting multiple types of defects.
In order to solve the technical problem, the application adopts a technical scheme that: a defect detection model training method is provided, and the method comprises the following steps:
acquiring at least one training sample image, wherein the training sample image is marked with at least one first true value frame respectively corresponding to at least one type of defect;
detecting the training sample image by using a target detection network to obtain a first detection result, wherein the first detection result comprises a first detection frame corresponding to the defect;
selecting at least part of the first detection frame as a target detection frame by using the distance between the first truth value frame and the first detection frame;
and determining a network loss value based on the target detection box and the first true value box, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a defect detection method, the method comprising:
acquiring an image to be detected obtained by shooting an object to be detected;
utilizing a defect detection model to carry out defect detection on the image to be detected to obtain a defect detection result;
wherein the object to be detected is a bottle cap, and/or the defect detection model is a model obtained by training according to any one of the above methods.
In order to solve the above technical problem, another technical solution adopted by the present application is: providing a defect detection model training apparatus, the apparatus comprising a memory and a processor coupled, wherein the memory comprises a local storage and stores a computer program;
the processor is configured to run the computer program to perform the defect detection model training method as described above.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a defect detection apparatus comprising a memory and a processor coupled, wherein,
the memory includes local storage and stores a computer program;
the processor is configured to run the computer program to perform the defect detection method as described above.
In order to solve the above technical problem, the present application adopts another technical solution: there is provided a computer storage medium storing a computer program executable by a processor for implementing a defect detection model training method or a defect detection method as described above.
Compared with prior-art schemes, in the technical solution provided by the present application, at least one training sample image is acquired, and a target detection network is used to detect the training sample image to obtain a first detection result comprising a first detection frame corresponding to a defect; at least part of the first detection frames are then selected as target detection frames by using the distance between the first true value frame and the first detection frame; a network loss value is determined based on the target detection frame and the first true value frame, and the parameters of the target detection network are updated by using the network loss value to obtain the final defect detection model. Because only a distance-selected part of the detection frames participates in the loss computation, training is both fast and accurate.
Drawings
FIG. 1 is a schematic flow chart illustrating an embodiment of a defect detection model training method according to the present application;
FIG. 2 is a schematic flow chart illustrating another embodiment of a defect detection model training method according to the present application;
FIG. 3 is a schematic flow chart illustrating a method for training a defect inspection model according to another embodiment of the present disclosure;
FIG. 4 is a schematic flow chart illustrating a method for training a defect inspection model according to another embodiment of the present disclosure;
FIG. 5 is a schematic flow chart illustrating a method for training a defect inspection model according to yet another embodiment of the present disclosure;
FIG. 6 is a schematic flow chart illustrating a method for training a defect detection model according to another embodiment of the present disclosure;
FIG. 7 is a schematic flowchart illustrating a method for training a defect inspection model according to another embodiment of the present disclosure;
FIG. 8 is a flowchart illustrating a defect detection method according to an embodiment of the present disclosure;
FIG. 9 is a schematic structural diagram of an embodiment of a training apparatus for defect detection models according to the present application;
FIG. 10 is a schematic structural diagram illustrating an embodiment of a defect detection apparatus according to the present application;
fig. 11 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a defect detection model training method according to the present application. In the current embodiment, the method provided by the present application includes:
S110: At least one training sample image is acquired.
Wherein, the training sample image is marked with at least one first true value frame respectively corresponding to at least one type of defects.
When training a defect detection model capable of detecting defects of a certain type of target, a sample image including the target to be detected is acquired, wherein the target included in the sample image correspondingly includes at least one type of defect. The sample image including the target may be obtained by an external shooting device and transmitted to the defect detection model training apparatus, or may be captured by an image acquisition unit of the training apparatus itself; this is not limited herein and depends on the specific configuration of the defect detection model training apparatus.
In the present embodiment, the sample image includes at least an original RGB image; it is understood that the sample image type is not limited to original RGB images. When the sample image is an original RGB image, the training sample image is obtained after at least one type of defect in the original RGB image is labeled. Specifically, at least one first true value frame, each corresponding to at least one type of defect, may be manually marked in advance in the training sample image. In the current embodiment, each type of defect may correspond to one first true value frame. In another embodiment, a plurality of first true value frames may be labeled in the training sample image for the same type of defect.
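As a minimal sketch of how such labeled training data might be represented, the following record pairs an image with its first true value frames; the field names and file path are illustrative assumptions, not part of the patent, and the defect names are taken from the background section.

```python
# One annotated training sample image; field names are hypothetical.
sample_annotation = {
    "image_path": "cap_0001.png",
    # Each first true value frame as (x_min, y_min, x_max, y_max), in pixels.
    "boxes": [(34, 50, 120, 140), (200, 80, 260, 130)],
    # One defect type per frame; the same type may be labeled several times.
    "labels": ["scratch", "breakage"],
}

# Every true value frame must carry exactly one defect type label.
assert len(sample_annotation["boxes"]) == len(sample_annotation["labels"])
```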
Further, in another embodiment, in order to train a defect detection model capable of simultaneously detecting a plurality of different types of defects for a certain type of target, a plurality of sample images including the target to be inspected need to be acquired first. In this embodiment, the plurality of sample images together cover the plurality of defect types to be detected; specifically, the plurality of sample images include sample images that each contain only one type of defect as well as sample images that contain a plurality of different types of defects.
Further, please refer to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of a defect detection model training method according to the present application. In the present embodiment, the step of acquiring at least one training sample image in step S110 is emphasized. In the embodiment corresponding to fig. 2, step S110 further includes steps S201 to S202.
S201: An original training sample image is acquired. The original training sample image is marked with at least one first true value frame respectively corresponding to at least one type of defect.
The original training sample image is the acquired sample image that is marked with at least one first true value frame respectively corresponding to at least one type of defect and has not yet undergone data enhancement processing. In one embodiment, the original training sample image is obtained as follows: an image acquisition unit photographs, from multiple directions, a target to be inspected that contains the defects to be detected, so as to acquire original images; the user then manually marks, on the defect detection model training apparatus, the defects contained in the target in the acquired original images to obtain original training sample images, after which the following step S202 is executed. When marking a defect contained in the target in an original image, the user at least marks the defect with a frame and labels the defect type corresponding to that frame, thereby obtaining a first true value frame.
In another embodiment, the process of obtaining the original training sample image is: acquiring an original image obtained by shooting a target to be detected including a defect to be detected in a multi-direction mode through external shooting equipment, manually marking the acquired original image by a user to acquire a first true value frame, further acquiring at least one original training sample image, sending the acquired original training sample image to a defect detection model training device, enabling the defect detection model training device to acquire the original training sample image, and then executing the following step S202.
S202: Data enhancement processing is performed on the original training sample image to obtain a new training sample image.
After the original training sample image is acquired, data enhancement processing is further performed on it to obtain new training sample images. In the current embodiment, a large number of new training sample images can be obtained by performing data enhancement processing on the acquired original training sample images; the new training sample images are then merged with the original training sample images and output as the training set for training the defect detection model. In the current embodiment, each new training sample image is also marked with the first true value frame and the corresponding defect type, so data enhancement of the original training sample images effectively enlarges the total number of training sample images. The data enhancement processing modes at least include one of: flipping, rotation, color transformation, noise addition, radial transformation, translation, random cropping, and style transfer networks.
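A minimal sketch of a few of the enhancement modes listed above (flipping, rotation, noise addition) using NumPy; note that in practice the first true value frames must be transformed together with the image, which is omitted here for brevity.

```python
import numpy as np

def augment(image, rng):
    """Return augmented copies of `image` (H x W x 3, uint8).

    Only flipping, rotation, and noise addition are sketched; color
    transforms, cropping, and style transfer would be added similarly.
    """
    out = [
        np.fliplr(image),   # horizontal flip
        np.flipud(image),   # vertical flip
        np.rot90(image),    # 90-degree rotation
    ]
    # Additive noise, clipped back into the valid uint8 range.
    noisy = image.astype(np.int16) + rng.integers(-10, 10, image.shape)
    out.append(np.clip(noisy, 0, 255).astype(np.uint8))
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (32, 32, 3), dtype=np.uint8)
augmented = augment(img, rng)
```

Each original image thus yields several new training samples, enlarging the training set as the text describes.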
Further, step S202 of performing data enhancement processing on the original training sample image to obtain a new training sample image further includes: acquiring a defect-free image, and transferring the defect features in the original training sample image onto the defect-free image by using a style transfer network to obtain a new training sample image. The order of acquiring the defect-free image relative to acquiring the original training sample image in step S201 is not limited. The defect-free image may be an image marked and confirmed by the user; in order to train a defect detection model with high accuracy, defect-free images of the object to be inspected from a plurality of different angles are acquired. In the current embodiment, using the style transfer network for data enhancement increases the number of training sample images, better improves the utilization of the data set, and reduces the user's manual labeling effort.
Specifically, since the defect feature in the original training sample image may be in another form in different samples, during the process of transferring the defect feature in the original training sample image to another non-defective image by using the style transfer network, the position of the defect, the size of the defect, and the relative fusion effect between the defect and the surrounding image all have adaptive changes, so as to obtain a new training sample image including the defect and having a different defect form, and output the original training sample image and the new training sample image as the training sample image, so as to execute step S120.
S120: The training sample image is detected by using a target detection network to obtain a first detection result.
After the at least one training sample image is obtained, the training sample image is further detected by using a target detection network to obtain a first detection result. Specifically, the acquired training sample image is input into the target detection network, so that the target detection network performs defect detection on the training sample image and obtains a first detection result. The first detection result at least comprises a first detection frame corresponding to the defect.
Further, when the target detection network is used to detect the training sample image, the network also judges the type of each detected defect, and the defect type corresponding to each first detection frame is marked at that first detection frame.
Further, the defect type is labeled according to a preset defect identification code. In the current embodiment, defect identification codes are set for the defect types in advance, and when a first detection frame is obtained by detecting a training sample image with the target detection network, the defect corresponding to the first detection frame is labeled with its defect identification code. Each defect identification code is unique; that is, in the technical solution provided by the present application, each defect identification code is used to mark only one type of defect.
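A minimal sketch of such a unique defect-identification-code table; the numeric codes are assumptions, and the defect names are taken from the bottle cap examples in the background section.

```python
# Hypothetical defect identification codes; each code is unique and maps
# to exactly one defect type, as the text requires.
DEFECT_CODES = {
    0: "background",
    1: "breakage",
    2: "scratch",
    3: "slippage",
    4: "abnormal code spraying",
    5: "abnormal labeling",
}

def label_detection(box, class_id):
    """Attach the defect identification code and type name to a detection frame."""
    return {"box": box, "code": class_id, "type": DEFECT_CODES[class_id]}

det = label_detection((10, 10, 50, 50), 2)
```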
Further, when training is performed for the first time, the target detection network used in step S120 may be an initial target detection network whose parameters are all initial parameters. It should be noted that, in the process of obtaining the defect detection model through training, the parameters of the target detection network are further adjusted according to the network loss value obtained between the target detection frame and the first true value frame, so as to train and optimize the target detection network and thereby obtain a defect detection model with more accurate detection. For the selection of the target detection frame, refer to the corresponding explanation of step S130 below.
Further, the target detection network includes an SSD network; correspondingly, the first detection frames are the detection frames obtained from the convolutional layers of the target detection network. The SSD network comprises a base network and a pyramid network. The base network consists of the first 4 layers of VGG-16, and the pyramid network is a simple convolutional network (which may be defined as convolutional layers in other embodiments) composed of 5 parts whose feature maps shrink progressively. A 3 × 3 convolution is applied to each part of the pyramid structure for prediction: a first detection frame is obtained at each position of each feature map, and each first detection frame corresponds to a set number of defect classification scores and 4 position offsets relative to the true value frame. The defect classification scores are used to determine the type of defect corresponding to the first detection frame.
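To make the head output concrete, the following sketch counts the detection frames produced over an SSD-style pyramid. The listed feature map resolutions and the anchor count of 4 are assumptions; the patent only specifies 3 × 3 convolution heads, a set number of defect classification scores per frame, and 4 position offsets.

```python
def head_output_shapes(feature_maps, num_anchors, num_classes):
    """Count the first detection frames produced by 3x3 conv heads over
    pyramid levels of size (H, W), and the per-frame prediction size:
    num_classes defect classification scores plus 4 position offsets."""
    per_level = [h * w * num_anchors for h, w in feature_maps]
    per_frame = num_classes + 4
    return sum(per_level), per_frame

# Illustrative SSD-style pyramid resolutions (assumptions, not from the patent).
total, per_frame = head_output_shapes(
    [(38, 38), (19, 19), (10, 10), (5, 5), (3, 3), (1, 1)],
    num_anchors=4, num_classes=6)
```

This is why a follow-up selection step matters: thousands of first detection frames are produced per image, and only a small subset should drive the loss.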
S130: At least part of the first detection frames are selected as target detection frames by using the distance between the first true value frame and the first detection frame.
After the first detection result comprising the first detection frames is obtained, the distance between the first true value frame and each first detection frame is calculated, and, using these calculated distances, a subset of the first detection frames is selected as target detection frames. The target detection frames are the part of the first detection frames used for adjusting the target detection network parameters.
Further, in another embodiment, after the first detection frame is acquired, the loss distance between the first true value frame and the first detection frame is calculated, that is, the loss value between the first true value frame and the first detection frame is calculated, and according to the calculated loss value, a part of the first detection frame is selected from the first detection frame as the target detection frame.
Further, in another embodiment, after calculating the loss value between the first true value box and the first detection box, the obtained loss value is further processed to determine the target detection box according to the obtained processed loss value.
S140: A network loss value is determined based on the target detection frame and the first true value frame, and the parameters of the target detection network are updated by using the network loss value to obtain a final defect detection model.
After the target detection frame is selected from the first detection frame, a network loss value between the target detection frame and the first true value frame is further calculated, and then parameters of the target detection network are updated by using the obtained network loss value to obtain a final defect detection model.
Further, when there are multiple target detection frames, the network loss value between each target detection frame and its corresponding first true value frame is calculated respectively; these loss values are then weighted and summed to obtain a total network loss value, and the parameters of the target detection network are updated according to the total loss value to obtain the final defect detection model. The final defect detection model is the defect detection model output after training optimization stops.
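A minimal sketch of the weighted summation described above; uniform weights are an assumption, since the patent does not fix the weighting.

```python
def total_network_loss(box_losses, weights=None):
    """Weighted sum of the network loss values between each target
    detection frame and its corresponding first true value frame.
    Defaults to uniform weights (an assumption, not fixed by the patent)."""
    if weights is None:
        weights = [1.0] * len(box_losses)
    return sum(w * l for w, l in zip(weights, box_losses))
```

The resulting scalar is what a gradient-based optimizer would backpropagate to update the target detection network parameters.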
In the embodiment corresponding to fig. 1 of the present application, at least one training sample image is acquired, and the training sample image is detected by using the target detection network to obtain a first detection result including a first detection frame corresponding to a defect; at least part of the first detection frames are then selected as target detection frames by using the distance between the first true value frame and the first detection frame; a network loss value is determined based on the target detection frame and the first true value frame, and the parameters of the target detection network are updated by using the network loss value to obtain the final defect detection model.
Specifically, compared with the prior art, in which the intersection ratio between every detection frame and the truth value frame must be calculated at great computational cost, the embodiment corresponding to fig. 1 of the present application selects a part of the detected first detection frames as target detection frames based on the distance between the first detection frame and the first true value frame. This reduces the amount of calculation and thus improves the training speed of the defect detection model.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a defect detection model training method according to another embodiment of the present application. In the present embodiment, the method provided by the present application highlights the step of selecting at least a part of the first detection frame as the target detection frame by using the distance between the first true value frame and the first detection frame in the step S130. In the embodiment corresponding to fig. 3, the step S130 further includes steps S301 to S303.
S301: The distance between each first detection frame and the corresponding first true value frame is calculated respectively.
After a training sample image is detected by using a target detection network and first detection frames are obtained, first truth value frames corresponding to the first detection frames are respectively determined, and distances between the first detection frames and the corresponding first truth value frames are further respectively calculated after the first truth value frames are determined.
Further, in step S301, the loss distance between each first detection frame and the corresponding first truth frame is calculated respectively, so as to determine the accuracy of the first detection frame relative to the first truth frame according to the loss distance.
S302: A preset number of first detection frames with the smallest distance are selected from the first detection frames as candidate detection frames.
After calculating and obtaining the distance between each first detection frame and the corresponding first truth frame, further selecting a preset number of first detection frames with the minimum distance from the first detection frames as candidate detection frames. Wherein the preset number is a fixed value set according to an empirical value.
Further, in another embodiment, when the number of training sample images to be trained is not fixed, or the number of defects contained in the training sample images is not fixed, the number of first detection frames detected from the training sample images by the target detection network is also uncertain. Therefore, in step S302, a preset proportion of the first detection frames with the smallest distance may instead be selected from the first detection frames as candidate detection frames. A candidate detection frame is a detection frame selected from the first detection frames according to the distance between the first detection frame and the corresponding first true value frame, and is used for selecting the target detection frame. In other words, the first detection frames may be sorted by loss distance, and the preset number or preset proportion of frames with the smallest loss distance are taken as candidate detection frames.
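Both selection modes above (a preset number, or a preset proportion, of smallest-distance frames) can be sketched as:

```python
def select_candidates(distances, k=None, proportion=None):
    """Return indices of the first detection frames with the smallest
    distance to their first true value frame: either a preset number `k`
    or a preset `proportion` of all frames (exactly one should be given)."""
    order = sorted(range(len(distances)), key=lambda i: distances[i])
    if k is None:
        k = max(1, int(len(distances) * proportion))
    return order[:k]
```

Using a proportion keeps the candidate pool size proportional to however many frames the network happens to produce, matching the variable-count case described above.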
S303: A first intersection ratio of each candidate detection frame and the corresponding first true value frame is calculated, and at least part of the candidate detection frames are selected as target detection frames according to the first intersection ratio.
After the candidate detection boxes are determined, a first intersection ratio of each candidate detection box and the corresponding first true value box is further calculated. And selecting at least part of candidate detection frames as target detection frames according to the first intersection ratios.
Further, after the first intersection ratio between each candidate detection frame and the corresponding first true value frame is obtained, for example, the part of the candidate detection frames with the larger intersection ratios may be directly selected as target detection frames.
Further, after the first intersection ratio of each candidate detection frame and the corresponding first true value frame is calculated, the obtained intersection ratios may be processed further: for example, the mean value and the variance value of the first intersection ratios are calculated, an intersection ratio threshold for selecting the target detection frames is determined from the mean and variance, and the candidate detection frames whose intersection ratio is greater than or equal to that threshold are selected as target detection frames. Details are given in the description of the embodiment corresponding to fig. 4 below. In some embodiments, the selected target detection frames are defined as positive samples, and the first detection frames not selected as target detection frames are defined as negative samples.
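A hedged sketch of the mean-plus-deviation thresholding described above; using the standard deviation (rather than the raw variance) is an assumption about how the mean and variance values are combined into a threshold.

```python
import statistics

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) frames."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def select_targets(candidates, gt_box):
    """Keep candidate frames whose first intersection ratio with the true
    value frame meets a mean + standard deviation threshold (positives);
    the rest would be treated as negative samples."""
    ious = [iou(c, gt_box) for c in candidates]
    thr = statistics.mean(ious) + statistics.pstdev(ious)
    return [c for c, v in zip(candidates, ious) if v >= thr]
```

An adaptive threshold like this avoids hand-tuning a fixed IoU cutoff for each defect type.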
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a training method of a defect detection model according to another embodiment of the present application. In the current embodiment, the method provided by the present application includes:
S401: At least one training sample image is acquired.
S402: The training sample image is detected by using a target detection network to obtain a first detection result.
In the current embodiment, step S301 of calculating the distance between each first detection frame and the corresponding first true value frame further includes step S403.
S403: The distance between the center point of each first detection frame and the center point of the corresponding first true value frame is obtained respectively, as the distance between the first detection frame and the first true value frame.
In the current embodiment, after the first detection result including the first detection frames is obtained, the distance between the center point of each first detection frame and the center point of the corresponding first truth frame is calculated; here, this distance may refer to the actual (e.g., Euclidean) distance between the two center points.
Further, in another embodiment, after the first detection result including the first detection frames is obtained, the loss distance between the center point of each first detection frame and the first truth frame center point corresponding to the first detection frame is respectively calculated, where the loss distance may also be understood as a loss value between the center point of the first detection frame and the first truth frame center point corresponding to the first detection frame.
Further, in other embodiments, after the training sample image is detected by using the target detection network to obtain the first detection result, the distances between the end points of each first detection frame and the corresponding end points of the first truth frame are further obtained, and the obtained end-point distances are then weighted and summed to serve as the distance between the first detection frame and the first truth frame. It is understood that, in some embodiments, the average of the distances between the plurality of end points may also be used as the distance between the first detection frame and the first truth frame.
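The distance variants described above (center-point distance and weighted end-point distance) can be sketched as follows. The (x1, y1, x2, y2) box format and the specific weights are assumptions for illustration, not mandated by the embodiments:

```python
import math

def center_distance(box_a, box_b):
    """Euclidean distance between the center points of two boxes.

    Boxes are (x1, y1, x2, y2) tuples; this format is an illustrative
    assumption, not fixed by the description above.
    """
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    return math.hypot(cax - cbx, cay - cby)

def corner_distance(box_a, box_b, weights=(0.5, 0.5)):
    """Weighted sum of the distances between matching end points
    (top-left and bottom-right corners), as in the end-point variant.
    Equal weights reduce this to the average-of-distances variant."""
    d_tl = math.hypot(box_a[0] - box_b[0], box_a[1] - box_b[1])
    d_br = math.hypot(box_a[2] - box_b[2], box_a[3] - box_b[3])
    return weights[0] * d_tl + weights[1] * d_br
```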
S404: and selecting a preset number of first detection frames with the minimum distance from the first detection frames as candidate detection frames.
S405: a first intersection ratio of each candidate detection box and the corresponding first true value box is calculated.
In the present embodiment, the selecting at least a part of the candidate detection frames as the target detection frames according to the first intersection ratio in step S303 further includes steps S406 to S408.
S406: and acquiring the mean value and the variance value of the first intersection ratio of the candidate detection frames with the preset number and the corresponding first true value frame.
In the current embodiment, after the first intersection ratio between each candidate detection frame and the corresponding first true value frame is calculated, the mean value and the variance value of the first intersection ratios of the preset number of candidate detection frames with their corresponding first true value frames are further calculated.
S407: and taking the sum of the mean value and the variance value as a first selected threshold value of the target detection frame.
After the mean value and the variance value are obtained through calculation, the sum of the mean value and the variance value is used as the first selection threshold for the intersection ratio of the target detection frame, and step S408 is executed.
S408: and selecting the candidate detection frame with the first intersection ratio larger than or equal to a first selection threshold value as a target detection frame.
The candidate detection frames with a first intersection ratio greater than or equal to the first selection threshold are selected as target detection frames, i.e., output as target detection frames so as to train and optimize the defect detection model; meanwhile, the other candidate detection frames, whose first intersection ratio is smaller than the first selection threshold, are discarded.
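The selection procedure of steps S404 to S408 can be sketched as below. This is a minimal sketch assuming (x1, y1, x2, y2) box tuples, with `k` standing for the preset number of candidates, and interpreting the "variance value" as the standard deviation of the intersection ratios (as in ATSS-style sample selection); these readings are assumptions, not fixed by the embodiment:

```python
import statistics

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_targets(detections, truth_box, k=9):
    """S404: keep the k detections whose centers are closest to the truth
    box; S405-S408: keep those candidates whose IoU with the truth box is
    at least mean(IoU) + std(IoU) over the k candidates."""
    def center_dist(box):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        tx, ty = (truth_box[0] + truth_box[2]) / 2, (truth_box[1] + truth_box[3]) / 2
        return ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5

    candidates = sorted(detections, key=center_dist)[:k]
    ious = [iou(c, truth_box) for c in candidates]
    threshold = statistics.mean(ious) + (statistics.pstdev(ious) if len(ious) > 1 else 0.0)
    return [c for c, v in zip(candidates, ious) if v >= threshold]
```

The adaptive threshold makes the positive-sample cut stricter when most candidates already overlap the truth box well, and looser when overlaps are spread out.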
S409: and determining a network loss value based on the target detection frame and the first true value frame, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model.
In the current embodiment, after the first intersection ratio between each candidate detection frame and the corresponding first true value frame is calculated, the mean value and the variance value of the first intersection ratios of the preset number of candidate detection frames with the corresponding first true value frame are further obtained, the first selection threshold is determined from this mean value and variance value, and the target detection frames are then selected from the candidate detection frames based on the first selection threshold. This allows the target detection frames to be selected more accurately, which improves both the efficiency and the accuracy of training the defect detection model.
It should be noted that steps S401 to S402, S404 to S405, and S409 in the present embodiment are the same as some steps in fig. 1 or fig. 3, and may specifically refer to the descriptions of the corresponding parts above, and are not described in detail here.
Referring to fig. 5, fig. 5 is a schematic flowchart illustrating a training method of a defect detection model according to another embodiment of the present application. The present embodiment focuses on the steps performed after the parameters of the target detection network are updated with the network loss value to obtain the final defect detection model. After the parameters of the target detection network are updated by using the network loss value, i.e., a new defect detection model is obtained, the new defect detection model is evaluated to judge whether it is qualified, and further whether it can serve as the final defect detection model. Specifically, the process of evaluating the new defect detection model includes steps S501 to S504. In the current embodiment, the method provided by the present application further includes:
S501: and inputting the test sample image into the defect detection model to obtain a second detection result of the test sample image.
After the parameters of the target detection network are updated by using the network loss value, a new defect detection model is obtained, and then the test sample image is further input into the obtained new defect detection model to obtain a second detection result corresponding to the test sample image. The test sample image is a preset sample image for testing the defect detection model, and may specifically be an image set different from the training sample set. The second detection result includes a second detection frame corresponding to the defect, and the second detection result includes a defect type, specifically, the defect type is marked at the second detection frame by using the defect identification code. It should be noted that the defect identification code for identifying the defect type corresponding to the second detection frame and the defect identification code for identifying the defect corresponding to the first detection frame are the same set of identification codes.
Further, in other embodiments, steps S120 to S140 are first executed a predetermined number of times, preset according to an empirical value, so that a new defect detection model is obtained through repeated training and optimization, and the obtained defect detection model is then evaluated using steps S501 to S504. The preset number of training and optimization iterations is adjustable according to empirical values and is not limited herein.
S502: and judging whether the position and/or the size of the second detection frame of the defect conforms to the theoretical characteristics of the defect.
And after a second detection result comprising the second detection frame is obtained, further judging whether the position and/or the size of the second detection frame corresponding to the defect meet the theoretical characteristics of the defect. Further, after the second detection result is obtained, the position and/or the size of the second detection frame is further obtained, and then the obtained position and/or size of the second detection frame is compared with the preset theoretical characteristics of the defect, so that whether the position and/or the size of the second detection frame of the defect meets the theoretical characteristics of the defect or not is judged. Wherein the theoretical characteristics of the defect include at least a theoretical location range and a theoretical size of the defect.
Specifically, the theoretical features of the defect can also be preset by a user according to the product requirements and the characteristics of defects that frequently occur in the product, and can be understood as other common characteristics of the defect in the product. For example, when the product is a bottle cap, the abnormal code-spraying defect can be preset to be located on the side face of the cap, or the code-spraying position for abnormal code-spraying can be preset to lie above two thirds of the cap height. It can be understood that the defect detection model training method provided by the application can be used to train defect detection models for performing defect detection on different products; therefore, in practical application, a user can set the theoretical features of the defects according to the application scene of the defect detection model and the common characteristics of the defects in the product to be inspected, and the theoretical features of the defects are not limited herein.
In another embodiment, the theoretical features of the defect may also be derived from observation of the training sample images. For example, observation of the training sample images may yield: bottle cap breakage type defects can be located anywhere on the cap; cap-screwing defects are mostly present on the side surface and in the upper 1/3 to 1/2 of the cap; abnormal code-spraying defects are mostly present on the middle-left side of the cap; and cap breakpoint and broken-edge defects are mostly present on the lower part of the cap. The conclusions observed in this way are correspondingly set as the theoretical features of the defects. Based on this information, a method for verifying defect detection is further devised: the second detection result is input into a verification module, which checks, according to the defect type, whether the currently detected defect conforms to the theoretical features of that type; if so, the second detection frame corresponding to the defect is retained, otherwise the second detection frame is discarded.
S503: if yes, the second detection frame is reserved, and if not, the second detection frame is abandoned.
If the position and/or size of the second detection frame of the defect conforms to the theoretical features of the defect, the second detection frame is retained; otherwise, the second detection frame is discarded.
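The check of steps S502 to S503 can be sketched as below. The theoretical position bands are illustrative assumptions loosely modeled on the bottle-cap examples given above, expressed in normalized vertical image coordinates (0 = top of the cap, 1 = bottom); real deployments would set these ranges per product:

```python
# Hypothetical theoretical features for bottle-cap defects, as allowed
# vertical bands in normalized coordinates. The exact ranges and the
# defect-type keys are assumptions for demonstration only.
THEORETICAL_BANDS = {
    "breakage": (0.0, 1.0),        # may appear anywhere on the cap
    "screw": (0.0, 0.5),           # upper 1/3 to 1/2 of the cap
    "ink_jet_abnormal": (0.0, 2 / 3),  # above two thirds of cap height
    "break_point": (0.5, 1.0),     # breakpoints/broken edges: lower part
}

def verify_detection(defect_type, box, image_height):
    """Retain a second detection box only if its vertical center falls
    inside the theoretical band for its defect type; unknown types pass."""
    low, high = THEORETICAL_BANDS.get(defect_type, (0.0, 1.0))
    center_y = (box[1] + box[3]) / 2 / image_height
    return low <= center_y <= high
```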
S504: and evaluating the performance of the defect detection model by using the number of the reserved second detection frames.
And after judging whether the position and/or the size of the second detection frame of the defect accord with the theoretical characteristics of the defect or not and obtaining the reserved second detection frames, evaluating the performance of the current defect detection model by using the number of the reserved second detection frames. Specifically, for the performance evaluation of the defect detection model, see the following description in the embodiment corresponding to fig. 6.
In the embodiment corresponding to fig. 5, the second detection result of the test sample image is obtained by inputting the test sample image into the defect detection model, and whether the position and/or size of the second detection frame of the defect conforms to the theoretical characteristics of the defect is judged, if so, the second detection frame is retained, otherwise, the second detection frame is discarded, and then the number of the retained second detection frames is used to perform performance evaluation on the defect detection model, so that the detection accuracy of the obtained defect detection model can be better evaluated, and whether the obtained defect detection model is output is determined by using the evaluation result, thereby realizing obtaining the defect detection model with higher detection accuracy in training.
Further, please refer to fig. 6, where fig. 6 is a schematic flowchart of a defect detection model training method according to another embodiment of the present application. In the current embodiment, the test sample image is marked with second true value frames respectively corresponding to at least one type of defect.
The step S504 performs performance evaluation on the defect detection model by using the number of the reserved second detection frames, and further includes:
S601: and acquiring a second intersection ratio between the reserved second detection frame and the corresponding second truth frame.
After it is judged that the position and/or size of a second detection frame conforms to the theoretical features of the defect and the second detection frame is retained, the second intersection ratio between the retained second detection frame and the corresponding second truth frame is further calculated. In step S601, the second intersection ratio is calculated only between the second detection frame and second truth frames of the same defect type.
S602: and determining the accuracy and the recall rate of the defect detection model by using at least part of the second intersection ratio.
And after calculating and acquiring a second intersection ratio between the reserved second detection frame and the corresponding second truth frame, further determining the accuracy and the recall ratio of the defect detection model by using at least part of the second intersection ratio.
Further, in an embodiment, after a second intersection ratio between the reserved second detection box and the corresponding second truth-value box is obtained, a second intersection ratio greater than or equal to a preset second selection threshold is further selected, and then the accuracy and the recall rate of the defect detection model are determined by using the second intersection ratio whose intersection ratio is greater than or equal to the preset second selection threshold.
S603: and obtaining a performance evaluation value of the defect detection model according to the second intersection ratio, the recall rate and the accuracy so as to evaluate the defect detection model according to the performance evaluation value.
Whether the performance of the current new defect detection model meets the requirements is further judged according to the calculated second intersection ratio, recall rate and accuracy, so as to decide whether the training of the defect detection model can be stopped and the current defect detection model output as the final defect detection model.
The accuracy indicates how many of the samples predicted to be positive are true positive samples. A sample can be predicted as positive in two ways: a positive class predicted as positive (TP), or a negative class predicted as positive (FP). In the current embodiment, the accuracy is taken as the ratio of the total number of retained second detection frames to the total number of second true value frames, i.e.: accuracy = total number of retained second detection frames / total number of second true value frames.
The recall rate is defined with respect to the original samples and indicates how many of the positive examples in the samples were predicted correctly. Again there are two possibilities: an original positive class predicted as positive (TP), or an original positive class predicted as negative (FN). In the current embodiment, the recall rate is likewise taken as the ratio of the number of retained second detection frames to the total number of second true value frames. It should be noted that in other embodiments the accuracy and recall rate are also referred to as precision and recall.
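Since the translated formulas above are terse, one conventional reading of precision and recall over the retained detections, computed by IoU matching against the truth boxes, can be sketched as follows; the IoU threshold, greedy matching scheme, and (x1, y1, x2, y2) box format are assumptions for illustration:

```python
def _iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def precision_recall(detections, truths, iou_threshold=0.5):
    """A detection counts as a true positive (TP) if it matches a not-yet
    matched truth box with IoU >= threshold; unmatched detections are
    false positives (FP), unmatched truth boxes are false negatives (FN)."""
    matched = set()
    tp = 0
    for det in detections:
        best_j, best_iou = -1, 0.0
        for j, truth in enumerate(truths):
            if j in matched:
                continue
            v = _iou(det, truth)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= iou_threshold:
            matched.add(best_j)
            tp += 1
    fp = len(detections) - tp
    fn = len(truths) - tp
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    return precision, recall
```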
Further, an average precision mean value, namely mAP (mean average precision), is calculated according to the second intersection ratio, the recall rate and the accuracy. Specifically, the average precision of each defect type is calculated as

AP = ∫₀¹ P(R) dR,

and the mAP is the mean of the AP values over all defect types, wherein P and R are the accuracy (precision) and the recall rate (recall) respectively.
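The AP integral of precision over recall used in the mAP computation above can be approximated numerically from sampled precision-recall points; a minimal sketch (the rectangular-sum approximation is an assumption, not specified by the embodiment):

```python
def average_precision(points):
    """Approximate AP = integral of P over R with a rectangular sum.

    `points` is a list of (recall, precision) pairs sorted by
    increasing recall; each rectangle spans one recall increment.
    """
    ap, prev_recall = 0.0, 0.0
    for recall, precision in points:
        ap += precision * (recall - prev_recall)
        prev_recall = recall
    return ap

def mean_average_precision(ap_per_class):
    """mAP: the mean of the per-defect-type AP values."""
    return sum(ap_per_class) / len(ap_per_class) if ap_per_class else 0.0
```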
It should be noted that, in order to evaluate the defect detection model more comprehensively, the defect types included in the test sample images should at least cover all the defect types in the training sample images used for training.
It should be noted that, in some embodiments, updating the parameters of the target detection network with the network loss value means performing gradient backpropagation with the network loss value to update the parameters of the target detection network; further, when the target detection network is an SSD network, this step is understood as performing gradient backpropagation with the network loss value to update the parameters of the SSD network.
Referring to fig. 7, fig. 7 is a schematic flowchart illustrating a training method of a defect detection model according to another embodiment of the present application. In the present embodiment, the determining the network loss value based on the target detection box and the first true value box in step S140 further includes:
S701: and obtaining a position loss value according to the position difference between the target detection frame and the corresponding first truth value frame.
And after the target detection frame is selected and obtained, calculating a network loss value between the target detection frame and the first true value frame. Wherein the network loss values include at least a location loss value and a confidence loss value.
Specifically, the position loss value is computed using a Smooth L1 loss function.
S702: and obtaining a confidence loss value according to the confidence of the target detection frame in the first detection result.
Wherein the confidence loss value is computed using a cross-entropy loss function.
S703: a network loss value is derived based on the location loss value and the confidence loss value.
After the position loss value and the confidence loss value are obtained, they are further weighted and summed; the result of this weighted summation is used as the network loss value, and the parameters of the target detection network are updated according to the obtained network loss value to obtain the final defect detection model. When the position loss value and the confidence loss value are weighted and summed, the weights corresponding to the two loss values may be set according to empirical values, which is not limited herein.
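The weighted combination of the two loss terms can be sketched as follows. The scalar inputs and default weights are illustrative assumptions; in practice the Smooth L1 and cross-entropy terms would be computed over all target detection frames:

```python
def smooth_l1(residual):
    """Smooth L1 on a single residual: quadratic near zero so small
    position errors are penalized gently, linear beyond |x| = 1 so
    large errors do not dominate the gradient."""
    return 0.5 * residual * residual if abs(residual) < 1.0 else abs(residual) - 0.5

def network_loss(loc_loss, conf_loss, loc_weight=1.0, conf_weight=1.0):
    """Weighted sum of the position (Smooth L1) and confidence
    (cross-entropy) losses; the weights are empirical, as noted above."""
    return loc_weight * loc_loss + conf_weight * conf_loss
```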
With the method provided by the application, a defect detection model applicable to detecting multiple types of defects of a given product can be trained effectively; only the data of the defects to be detected needs to be provided for training.
Referring to fig. 8, fig. 8 is a schematic flowchart illustrating a defect detection method according to an embodiment of the present application. In the current embodiment, the method provided by the present application includes:
S810: and acquiring an image to be detected obtained by shooting an object to be detected.
When detecting an object to be detected, firstly, the object to be detected needs to be shot, and an image to be detected obtained by shooting the object to be detected is obtained.
Further, in another embodiment, in order to detect the object to be detected more accurately, step S810 acquires a plurality of images to be detected at different angles, obtained by multi-angle shooting of the object to be detected. When the object is detected from multiple angles, the shooting angle is further marked in each captured image, so that defects can be determined according to the shooting angle during subsequent defect detection. After the defect detection result is obtained, the result is output together with the corresponding shooting angle, so that a user or a downstream machine can quickly distinguish the defects according to the result, or combine the detected defects with the shooting angle and compare them against the theoretical features of the defects to judge the accuracy of the detection.
S820: and carrying out defect detection on the image to be detected by using the defect detection model to obtain a defect detection result.
Inputting the image to be detected into a defect detection model, carrying out defect detection on the image to be detected by using the defect detection model to obtain a defect detection result, and outputting the obtained defect detection result.
The object to be detected is a bottle cap, and/or the defect detection model is the final defect detection model obtained by the defect detection model training method described in any one of the embodiments corresponding to fig. 1 to 7.
Further, in another embodiment, please continue to refer to fig. 8, in step S820, after detecting the defect of the image to be detected by using the defect detection model, and obtaining the defect detection result, the method provided in the present application further includes:
S830: and classifying the bottle caps according to the types of the defects contained in the defect detection result.
After the defect detection model is used to perform defect detection on the image to be detected and a defect detection result is obtained, the bottle caps are classified according to the types of defects contained in the result. Specifically, the defect detection result is output to the processor, so that the processor determines the classification operation according to the defect types contained in the result and generates the corresponding control instruction, so as to output the bottle cap to a corresponding station or work line according to the defect type for subsequent processing of the defect.
Further, the classification processing means determining, according to the defect type of the bottle cap, the station or work line to which the cap is output, and feeding this back to the processor so as to generate the corresponding operation instruction; bottle caps containing different types of defects are thereby output to different stations or work lines for defect processing, so as to obtain qualified bottle caps.
Wherein, the types of the defects included in the defect detection result include: at least one of bottle cap breakage, bottle cap deformation, bottle cap broken edge, bottle cap screwing, bottle cap breaking point and code spraying abnormity.
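The classification processing of step S830 can be sketched as a simple dispatch from defect type to station. The station names and defect-type keys below are hypothetical, chosen only for illustration:

```python
# Hypothetical mapping from detected defect type to the downstream
# station or work line that handles it; all names are illustrative.
STATION_BY_DEFECT = {
    "breakage": "rework_station_1",
    "deformation": "rework_station_1",
    "broken_edge": "rework_station_2",
    "screw": "rework_station_2",
    "break_point": "rework_station_2",
    "ink_jet_abnormal": "reprint_station",
}

def route_cap(defect_types):
    """Route a cap to the station handling its first recognized defect;
    caps with no recognized defect go straight to the packing line."""
    for defect in defect_types:
        if defect in STATION_BY_DEFECT:
            return STATION_BY_DEFECT[defect]
    return "packing_line"
```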
Compared with the prior art, the technical scheme provided by the application can be used for detecting the defects only by using the images shot by the camera, and compared with other methods, the method does not need an additional image processing technology, is simple and easy to implement, and can be quickly applied to a factory.
In the technical scheme provided by any one of the embodiments corresponding to fig. 1 to 7 and 8 of the present application and the embodiments, an end-to-end deep neural network is used to train the RGB images by using a deep learning technology and combining the visual characteristics of the images, so as to realize accurate classification and positioning of the bottle cap defects. In addition, the defect detection method based on deep learning can realize the identification of various defect types by only replacing the data and a small number of parameters of the training sample image, and can be easily upgraded when being deployed in a product.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of a defect detection model training device according to the present application. In the current embodiment, the defect detection model training apparatus 900 provided herein includes a processor 901 and a memory 902 coupled to each other. The defect detection model training apparatus 900 may perform the defect detection model training method described in any one of the embodiments corresponding to fig. 1 to 7.
The memory 902 includes a local storage (not shown) and stores a computer program which, when executed, can implement the method described in any of the embodiments corresponding to fig. 1 to 7.
The processor 901 is coupled to the memory 902 and is configured to run the computer program to execute the defect detection model training method described in any one of the embodiments corresponding to fig. 1 to 7.
Further, in another embodiment, the defect detection model training apparatus 900 provided in the present application further includes an image acquisition unit (not shown). The image acquisition unit is connected to the processor 901 and is used for capturing an original image, a test sample image or a training sample image under the control of the processor 901.
Further, in another embodiment, the defect detection model training apparatus 900 provided by the present application may further include a communication circuit (not shown), which is connected to the processor 901 and is configured to perform data interaction with an external terminal device under the control of the processor 901 to obtain a training sample image, an original image or a test sample image, where the external terminal device may include a shooting device, a mobile terminal, etc.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a defect detection apparatus according to the present application. In the current embodiment, the defect detection apparatus 1000 provided by the present application includes a processor 1001 and a memory 1002 coupled to each other. The defect detection apparatus 1000 may execute the defect detection method described in fig. 8 and any corresponding embodiment thereof.
The memory 1002 includes a local storage (not shown) and stores a computer program which, when executed, can implement the method described in fig. 8 and any corresponding embodiment thereof.
The processor 1001 is coupled to the memory 1002 and is configured to execute the computer program to perform the defect detection method described in fig. 8 and any corresponding embodiment thereof.
Further, in another embodiment, the defect detection apparatus 1000 provided by the present application further includes an image acquisition unit (not shown). The image acquisition unit is connected to the processor 1001 and is configured to capture the object to be detected under the control of the processor 1001 so as to acquire an image to be detected.
Further, in another embodiment, the defect detection apparatus 1000 provided by the present application may further include a communication circuit (not shown), which is connected to the processor 1001 and is configured to perform data interaction with an external terminal device under the control of the processor 1001 to obtain an image to be detected, where the external terminal device may include a shooting device, a mobile terminal, etc.
Referring to fig. 11, fig. 11 is a schematic structural diagram of an embodiment of a computer storage medium according to the present application. The computer storage medium 1100 stores a computer program 1101 capable of being executed by a processor, where the computer program 1101 is used to implement the defect detection model training method described in any one of the embodiments corresponding to fig. 1 to 7, or the computer program 1101 is used to implement the defect detection method described in fig. 8 and any corresponding embodiment thereof. Specifically, the computer storage medium 1100 may be a memory, a personal computer, a server, a network device, a USB disk, or the like, which is not limited herein.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (14)

1. A method for training a defect detection model, the method comprising:
acquiring at least one training sample image, wherein the training sample image is marked with at least one first true value frame respectively corresponding to at least one type of defect;
detecting the training sample image by using a target detection network to obtain a first detection result, wherein the first detection result comprises a first detection frame corresponding to the defect;
selecting at least part of the first detection frame as a target detection frame by using the distance between the first truth value frame and the first detection frame;
and determining a network loss value based on the target detection box and the first true value box, and updating parameters of the target detection network by using the network loss value to obtain a final defect detection model.
2. The method of claim 1, wherein selecting at least a portion of the first detection box as a target detection box using a distance between the first truth box and the first detection box, further comprises:
respectively calculating the distance between each first detection frame and the corresponding first truth value frame;
selecting a preset number of first detection frames with the minimum distance from the first detection frames as candidate detection frames;
and calculating a first intersection ratio of each candidate detection frame and the corresponding first true value frame, and selecting at least part of the candidate detection frames as the target detection frames according to the first intersection ratio.
3. The method of claim 2, wherein calculating the distance between each first detection box and the corresponding first truth box comprises:
taking the distance between the centre point of each first detection box and the centre point of the corresponding first truth box as the distance between that first detection box and the first truth box;
and wherein selecting at least part of the candidate detection boxes as the target detection boxes according to the first intersection-over-union ratios comprises:
obtaining the mean value and the variance value of the first intersection-over-union ratios between the preset number of candidate detection boxes and the corresponding first truth box;
taking the sum of the mean value and the variance value as a first selection threshold for the target detection boxes;
and selecting, as the target detection boxes, the candidate detection boxes whose first intersection-over-union ratio is greater than or equal to the first selection threshold.
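The selection procedure of claims 2 and 3 can be sketched as follows. This is an illustrative Python reading, not the patented implementation: the `[x1, y1, x2, y2]` box layout, the value of `k`, and the use of the population variance for the "variance value" are assumptions (the related ATSS method uses the standard deviation instead).

```python
import numpy as np

def iou_one_to_many(box, boxes):
    # IoU between one truth box and an array of detection boxes, [x1, y1, x2, y2]
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def select_target_boxes(truth_box, det_boxes, k=9):
    # 1) distance between centre points (claim 3)
    det_centres = (det_boxes[:, :2] + det_boxes[:, 2:]) / 2.0
    truth_centre = (truth_box[:2] + truth_box[2:]) / 2.0
    dists = np.linalg.norm(det_centres - truth_centre, axis=1)
    # 2) the preset number of nearest detection boxes become candidates (claim 2)
    cand = np.argsort(dists)[:k]
    # 3) first selection threshold = mean + variance of the candidates' IoUs (claim 3)
    ious = iou_one_to_many(truth_box, det_boxes[cand])
    thresh = ious.mean() + ious.var()
    return cand[ious >= thresh]
```

Because the threshold is derived from the candidates' own IoU statistics, it adapts per truth box: a defect with many tight candidates sets a high bar, while a hard, poorly covered defect still receives positive samples.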
4. The method of claim 1, wherein after updating the parameters of the target detection network with the network loss value to obtain the final defect detection model, the method further comprises:
inputting a test sample image into the defect detection model to obtain a second detection result of the test sample image, wherein the second detection result comprises second detection boxes corresponding to the defects;
judging whether the position and/or size of the second detection box of a defect conforms to the theoretical characteristics of that defect;
if so, retaining the second detection box, and otherwise discarding the second detection box;
and evaluating the performance of the defect detection model using the number of retained second detection boxes.
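The retain-or-discard check of claim 4 amounts to a per-defect-type plausibility filter. A minimal sketch, in which the rule table, the defect-type names, and the size-only check are all hypothetical (a real filter would also encode position rules):

```python
# Hypothetical rule table: each defect type's plausible box size range in pixels.
RULES = {
    "broken_edge": {"min_w": 5, "max_w": 60, "min_h": 5, "max_h": 60},
    "code_spray":  {"min_w": 20, "max_w": 200, "min_h": 8, "max_h": 40},
}

def keep_detection(box, defect_type, rules=RULES):
    """Retain a second detection box [x1, y1, x2, y2] only if its size
    matches the defect's theoretical characteristics."""
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    r = rules[defect_type]
    return r["min_w"] <= w <= r["max_w"] and r["min_h"] <= h <= r["max_h"]
```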
5. The method of claim 4, wherein the test sample image is annotated with second truth boxes respectively corresponding to at least one type of defect;
and wherein evaluating the performance of the defect detection model using the number of retained second detection boxes comprises:
obtaining a second intersection-over-union ratio between each retained second detection box and the corresponding second truth box;
determining an accuracy and a recall of the defect detection model using at least part of the second intersection-over-union ratios;
and obtaining a performance evaluation value of the defect detection model from the second intersection-over-union ratios, the recall, and the accuracy, so as to evaluate the defect detection model according to the performance evaluation value.
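The evaluation of claim 5 can be sketched as below. The claim does not fix how the IoUs, recall, and accuracy are combined, so the F1-times-mean-IoU score here is one plausible choice, and the simple best-match rule allows a truth box to be matched more than once (a production metric would enforce one-to-one matching):

```python
import numpy as np

def pairwise_iou(a, b):
    # IoU matrix between retained detections a (N,4) and truth boxes b (M,4)
    x1 = np.maximum(a[:, None, 0], b[None, :, 0])
    y1 = np.maximum(a[:, None, 1], b[None, :, 1])
    x2 = np.minimum(a[:, None, 2], b[None, :, 2])
    y2 = np.minimum(a[:, None, 3], b[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = ((a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1]))[:, None]
    area_b = ((b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1]))[None, :]
    return inter / (area_a + area_b - inter)

def evaluate(dets, truths, iou_thresh=0.5):
    # A detection counts as a true positive when its best-matching truth
    # box clears the IoU threshold.
    best_iou = pairwise_iou(dets, truths).max(axis=1)
    tp = int((best_iou >= iou_thresh).sum())
    precision = tp / len(dets) if len(dets) else 0.0
    recall = tp / len(truths) if len(truths) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # Combined performance evaluation value (one plausible formula).
    score = f1 * best_iou[best_iou >= iou_thresh].mean() if tp else 0.0
    return precision, recall, score
```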
6. The method of claim 5, wherein determining a network loss value based on the target detection boxes and the first truth boxes comprises:
obtaining a position loss value from the positional difference between each target detection box and the corresponding first truth box;
obtaining a confidence loss value from the confidence of the target detection boxes in the first detection result;
and obtaining the network loss value based on the position loss value and the confidence loss value.
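Claim 6 combines a position loss and a confidence loss. The claim names no formulas, so this sketch assumes two common choices for SSD-style detectors (smooth L1 for position, binary cross-entropy for confidence) and a simple weighted sum:

```python
import numpy as np

def position_loss(pred, target):
    # Smooth L1 between target detection boxes and their truth boxes.
    d = np.abs(pred - target)
    return float(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5).sum())

def confidence_loss(conf, positive):
    # Binary cross-entropy on each target detection box's confidence.
    eps = 1e-7
    conf = np.clip(conf, eps, 1.0 - eps)
    y = positive.astype(float)
    return float(-(y * np.log(conf) + (1.0 - y) * np.log(1.0 - conf)).sum())

def network_loss(pred, target, conf, positive, alpha=1.0):
    # Claim 6: network loss = position loss + (weighted) confidence loss.
    return position_loss(pred, target) + alpha * confidence_loss(conf, positive)
```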
7. The method of claim 1, wherein acquiring at least one training sample image comprises:
acquiring an original training sample image, wherein the original training sample image is annotated with at least one first truth box respectively corresponding to at least one type of defect;
and performing data enhancement processing on the original training sample image to obtain a new training sample image.
8. The method of claim 7, wherein performing data enhancement processing on the original training sample image to obtain a new training sample image comprises:
acquiring a defect-free image, and transferring the defect characteristics of the original training sample image onto the defect-free image with a style transfer network to obtain a new training sample image.
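Claim 8 uses a style transfer network for this augmentation. As a much simpler stand-in that only shows the data flow (copy-paste blending, not style transfer; the function name, the blending rule, and reusing the original truth box as the new annotation are all assumptions):

```python
import numpy as np

def paste_defect(defect_img, truth_box, clean_img, alpha=1.0):
    # Crop the defect region given by its truth box [x1, y1, x2, y2] and
    # blend it onto a defect-free image at the same location; the new
    # training sample inherits the original truth box as its annotation.
    x1, y1, x2, y2 = truth_box
    out = clean_img.astype(np.float64).copy()
    patch = defect_img[y1:y2, x1:x2].astype(np.float64)
    out[y1:y2, x1:x2] = alpha * patch + (1.0 - alpha) * out[y1:y2, x1:x2]
    return out.astype(clean_img.dtype), truth_box
```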
9. The method of claim 1, wherein the target detection network comprises an SSD network, and the first detection boxes are the detection boxes detected by each convolutional layer in the target detection network.
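In an SSD network, each of several feature maps contributes detections at its own scale, which is why claim 9 speaks of detection boxes from each convolutional layer. The layout of one default box per feature-map cell can be sketched as follows (feature-map sizes and scales are illustrative SSD300-like values; aspect ratios are omitted):

```python
import numpy as np

def default_boxes(fmap_sizes=(38, 19, 10, 5, 3, 1),
                  scales=(0.1, 0.2, 0.375, 0.55, 0.725, 0.9)):
    # One square default box per feature-map cell, in relative coordinates;
    # deeper (smaller) feature maps get larger boxes.
    boxes = []
    for f, s in zip(fmap_sizes, scales):
        for i in range(f):
            for j in range(f):
                cx, cy = (j + 0.5) / f, (i + 0.5) / f   # cell centre
                boxes.append([cx - s / 2, cy - s / 2, cx + s / 2, cy + s / 2])
    return np.array(boxes)
```

The coarse 1x1 map contributes a single large box while the 38x38 map contributes 1444 small ones, which is what lets one network cover both tiny break points and whole-cap deformations.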
10. A method of defect detection, the method comprising:
acquiring an image to be detected, obtained by photographing an object to be detected;
performing defect detection on the image to be detected with a defect detection model to obtain a defect detection result;
wherein the object to be detected is a bottle cap, and/or the defect detection model is a model trained by the method of any one of claims 1 to 9.
11. The method of claim 10, wherein after performing defect detection on the image to be detected with the defect detection model to obtain the defect detection result, the method further comprises:
classifying the bottle caps according to the types of defects contained in the defect detection result;
and/or the types of defects contained in the defect detection result comprise at least one of: bottle cap breakage, bottle cap deformation, bottle cap broken edge, bottle cap mis-screwing, bottle cap break point, and code-spraying abnormality.
12. A defect detection model training apparatus, comprising a memory and a processor coupled to each other, wherein
the memory comprises local storage and stores a computer program;
and the processor is configured to run the computer program to perform the method of any one of claims 1 to 9.
13. A defect detection apparatus, comprising a memory and a processor coupled to each other, wherein
the memory comprises local storage and stores a computer program;
and the processor is configured to run the computer program to perform the method of any one of claims 10 to 11.
14. A computer storage medium, characterized in that it stores a computer program executable by a processor to implement the method of any one of claims 1 to 9 or claims 10 to 11.
CN202010573557.XA (priority and filing date 2020-06-22): Defect detection model training method, defect detection method and related device. Status: Active. Granted publication: CN111814850B (en).

Priority Applications (1)

CN202010573557.XA (CN111814850B): priority date 2020-06-22, filing date 2020-06-22, title: Defect detection model training method, defect detection method and related device

Publications (2)

CN111814850A (application publication): 2020-10-23
CN111814850B (granted publication): 2024-10-18

Family ID: 72845400

Family Applications (1): CN202010573557.XA (Active), filed 2020-06-22, title: Defect detection model training method, defect detection method and related device

Country Status (1): CN: CN111814850B (en)


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
