Disclosure of Invention
The technical problem to be solved by the embodiments of the present invention is to provide a training method for an LED lamp strip defect detection model that improves the robustness of the model and is easy to extend to different data sets.
A further technical problem to be solved by the embodiments of the present invention is to provide a training apparatus for an LED lamp strip defect detection model that improves the robustness of the model and is easy to extend to different data sets.
A further technical problem to be solved by the embodiments of the present invention is to provide a computer-readable storage medium storing a computer program that improves the robustness of the model and facilitates extension to different data sets.
A further technical problem to be solved by the embodiments of the present invention is to provide an LED lamp strip defect detection method that has strong robustness and can be extended to different data sets.
In order to solve the above technical problems, the embodiments of the present invention provide the following technical solution: a training method for an LED lamp strip defect detection model, comprising the following steps:
providing an LED lamp strip training set comprising only positive-class samples;
selecting K typical positive-class samples from the LED lamp strip training set based on a K-means++ cluster selection algorithm, and using a preset feature extraction network to obtain template image features of each typical positive-class sample at a plurality of different preset dimensions, which are stored in a memory pool as memory samples;
obtaining a pseudo-anomalous sample corresponding to each positive-class sample in the LED lamp strip training set based on a pseudo-anomalous sample generation method; and
taking the remaining positive-class samples in the LED lamp strip training set, other than the typical positive-class samples, together with the pseudo-anomalous samples as input images for the following model training:
extracting advanced feature information of the input image at each preset dimension with the feature extraction network;
calculating the Euclidean distance between the advanced feature information of the input image and each memory sample vector at each preset dimension to obtain the difference information between the input image and the memory samples, and determining the best difference information;
combining the difference information at each dimension in the best difference information with the advanced feature information of the input image to obtain concatenated information;
performing multi-scale feature fusion on the concatenated information based on a multi-scale feature fusion network model to obtain a fusion feature map at each preset dimension; and
obtaining the spatial attention map of each fusion feature map, passing each spatial attention map to a decoder through skip connections, and outputting a predicted image after the decoder makes a prediction from the features in each spatial attention map.
Further, during model training, the difference between the true value S and the predicted value S* is also calculated, and a minimized L1 loss function and a focal loss function are constructed to train the model, which specifically includes:
constructing the minimized L1 loss function L_l1 = ‖S − S*‖_1, where S represents the true pixel value, at the position corresponding to the input image, of an abnormal pixel point in the predicted image, and S* represents the predicted pixel value of the abnormal pixel point in the predicted image;
constructing the focal loss function L_f = −α_t(1 − p_t)^γ·log(p_t), where p_t represents the prediction probability of an abnormal pixel point in the predicted image, and α_t and γ are hyperparameters controlling the degree of weighting;
constructing a total loss function combining the minimized L1 loss function and the focal loss function, L = λ_l1·L_l1 + λ_f·L_f, where λ_l1 and λ_f are predetermined empirical constants; and
correcting the parameter values of each layer of the feature extraction network, the multi-scale feature fusion network model and the decoder so as to solve for the minimum of the total loss function, and outputting the LED lamp strip defect detection model when the total loss function reaches its minimum.
Further, the selecting of K typical positive-class samples from the LED lamp strip training set by the K-means++ cluster selection algorithm specifically includes:
randomly selecting one positive-class sample from the LED lamp strip training set as the current initial cluster center;
calculating a first actual distance between each positive-class sample in the LED lamp strip training set and its nearest current initial cluster center;
taking the positive-class sample with the largest first actual distance as a new current initial cluster center, and repeating the previous step until K initial cluster centers have been selected;
constructing an empty cluster space for each initial cluster center, calculating a second actual distance between each positive-class sample in the LED lamp strip training set and each initial cluster center, and assigning each positive-class sample to the cluster space of its nearest initial cluster center according to the second actual distances;
calculating the vector average of the positive-class samples in each cluster space, and updating the initial cluster center corresponding to that cluster space with the vector average; and
repeating the previous step until the initial cluster center of each cluster space no longer changes, and determining the positive-class sample corresponding to each initial cluster center as a typical positive-class sample.
Further, the obtaining of the pseudo-anomalous sample corresponding to each positive-class sample in the LED lamp strip training set based on the pseudo-anomalous sample generation method specifically includes:
binarizing each positive-class sample I in the LED lamp strip training set to generate a contour description map M_I, binarizing a randomly pre-generated two-dimensional Perlin noise field P at a preset threshold to generate a random mask map M_P, and multiplying each contour description map M_I pixel by pixel with the random mask map M_P to generate a target mask map M;
multiplying each target mask map M pixel by pixel with texture data I_n taken from the DTD data set to generate an abnormal region image I_n*, the specific calculation formula being I_n* = β(M ⊙ I_n) + (1 − β)(M ⊙ I), where ⊙ denotes pixel-wise multiplication and β is a transparency factor used to balance the fusion of the positive-class sample I and the abnormal region image I_n*, determined by random sampling in the range [0.15, 1]; and
combining the abnormal region image I_n* with the corresponding positive-class sample I to generate a pseudo-anomalous sample I_A, the combination formula being I_A = (1 − M) ⊙ I + I_n*.
Further, the multi-scale feature fusion network model includes:
a first convolution layer for performing a primary convolution on the concatenated information while keeping the number of channels of the concatenated information unchanged;
a coordinate attention weighting layer for performing coordinate attention weighting on the concatenated information of different dimensions to capture the channel information of the concatenated information;
an up-sampling layer for up-sampling the coordinate-attention-weighted concatenated information to align the dimensions;
a second convolution layer for convolving the dimension-aligned concatenated information to align the numbers of channels of the concatenated information; and
a pixel addition operation layer for adding, pixel by pixel, the concatenated information of different dimensions after the channel numbers have been aligned.
Further, the method further includes:
inputting a test set, which includes positive-class and negative-class samples and is of the same kind as the LED lamp strip training set, into the LED lamp strip defect detection model to obtain predicted images, and calculating the anomaly score of the predicted image corresponding to each sample in the test set, the anomaly score calculation formula being Score(x_test) = (G(x_test) − G(x_test)_min) / (G(x_test)_max − G(x_test)_min),
where the pixel points of each predicted image are ranked from high to low by pixel value, G(x_test) represents the sum of the pixel values of the top preset number of pixel points of the predicted image, and G(x_test)_max and G(x_test)_min represent the maximum and minimum values used for the normalization;
determining a score threshold for distinguishing positive-class samples from negative-class samples by synthesizing the anomaly score of each test predicted image and the corresponding sample type; and
when the LED lamp strip defect detection model is used to detect an image to be detected of an LED lamp strip and judge whether the LED lamp strip has a defect, detecting the image to be detected with the LED lamp strip defect detection model to generate a predicted image, calculating the anomaly score of the predicted image corresponding to the image to be detected according to the anomaly score calculation formula, comparing the anomaly score corresponding to the image to be detected with the score threshold, and classifying an image to be detected whose score is below the score threshold as defect-free and one whose score is above the score threshold as defective.
Further, the feature extraction network is a ResNet network pre-trained on ImageNet, and the parameters of the first three layers of the ResNet network are kept fixed during training.
On the other hand, in order to solve the above further technical problem, an embodiment of the present invention further provides the following technical solution: a training apparatus for an LED lamp strip defect detection model, the apparatus comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, wherein the processor, when executing the computer program, implements the training method for an LED lamp strip defect detection model according to any one of the above.
In order to solve the above further technical problem, an embodiment of the present invention further provides the following technical solution: a computer-readable storage medium comprising a stored computer program, wherein, when run, the computer program controls a device on which the computer-readable storage medium is located to perform the training method for an LED lamp strip defect detection model according to any one of the above.
In order to solve the above further technical problem, an embodiment of the present invention further provides the following technical solution: an LED lamp strip defect detection method, comprising the following steps:
acquiring an image to be detected of an LED lamp strip to be detected; and
detecting the image to be detected with an LED lamp strip defect detection model obtained by training with the above training method for an LED lamp strip defect detection model, so as to judge whether the LED lamp strip has a defect.
After adopting the above technical solution, the embodiments of the present invention have at least the following beneficial effects. In the embodiments of the present invention, K typical positive-class samples are selected, based on the K-means++ cluster selection algorithm, from the LED lamp strip training set comprising only positive-class samples; the K-means++ cluster selection algorithm can perform sample cluster selection for different kinds of data samples, so it is highly flexible and can effectively improve the robustness of the trained model. Each positive-class sample in the LED lamp strip training set is then processed, based on the pseudo-anomalous sample generation method, to obtain a corresponding pseudo-anomalous sample, and the generated pseudo-anomalous samples are used to expand the LED lamp strip training set. During training, the remaining positive-class samples in the LED lamp strip training set, other than the typical positive-class samples, together with the pseudo-anomalous samples are used as input images for network training, which strengthens the final model's learning of anomalous samples and improves its detection performance. Next, the Euclidean distance between corresponding features is calculated as the difference information between them, finally yielding the best difference information; the concatenated information generated by combining the difference information with the advanced feature information of the input image undergoes feature fusion through the multi-scale feature fusion network model; several spatial attention maps are computed through an attention guidance mechanism; and finally, each spatial attention map is passed to the decoder through skip connections, and the decoder makes a prediction and outputs the predicted image, thereby obtaining the LED lamp strip defect detection model and completing the training of the model.
Detailed Description
The application will be described in further detail with reference to the drawings and the specific examples. It should be understood that the following exemplary embodiments and descriptions are only for the purpose of illustrating the application and are not to be construed as limiting the application, and that the embodiments and features of the embodiments of the application may be combined with one another without conflict.
Referring to fig. 1, an alternative embodiment of the present invention provides a training method for an LED strip defect detection model, including the following steps:
S1: providing an LED lamp strip training set comprising only positive-class samples;
S2: selecting K typical positive-class samples from the LED lamp strip training set based on a K-means++ cluster selection algorithm, and using a preset feature extraction network to obtain template image features of each typical positive-class sample at a plurality of different preset dimensions, which are stored in a memory pool as memory samples;
S3: obtaining a pseudo-anomalous sample corresponding to each positive-class sample in the LED lamp strip training set based on a pseudo-anomalous sample generation method;
S4: taking the remaining positive-class samples in the LED lamp strip training set, other than the typical positive-class samples, together with the pseudo-anomalous samples as input images for the following model training:
S41: extracting advanced feature information of the input image at each preset dimension with the feature extraction network;
S42: calculating the Euclidean distance between the advanced feature information of the input image and each memory sample vector at each preset dimension to obtain the difference information between the input image and the memory samples, and determining the best difference information;
S43: combining the difference information at each dimension in the best difference information with the advanced feature information of the input image to obtain concatenated information;
S44: performing multi-scale feature fusion on the concatenated information based on a multi-scale feature fusion network model to obtain a fusion feature map at each preset dimension; and
S45: obtaining the spatial attention map of each fusion feature map, passing each spatial attention map to a decoder through skip connections, and outputting a predicted image after the decoder makes a prediction from the features in each spatial attention map.
In an alternative embodiment of the present invention, when performing model training, the difference between the true value S and the predicted value S* is also calculated, and a minimized L1 loss function and a focal loss function are constructed to train the model, which specifically includes:
constructing the minimized L1 loss function L_l1 = ‖S − S*‖_1, where S represents the true pixel value, at the position corresponding to the input image, of an abnormal pixel point in the predicted image, and S* represents the predicted pixel value of the abnormal pixel point in the predicted image;
constructing the focal loss function L_f = −α_t(1 − p_t)^γ·log(p_t), where p_t represents the prediction probability of an abnormal pixel point in the predicted image, and α_t and γ are hyperparameters controlling the degree of weighting;
constructing a total loss function combining the minimized L1 loss function and the focal loss function, L = λ_l1·L_l1 + λ_f·L_f, where λ_l1 and λ_f are predetermined empirical constants; and
correcting the parameter values of each layer of the feature extraction network, the multi-scale feature fusion network model and the decoder so as to solve for the minimum of the total loss function, and outputting the LED lamp strip defect detection model when the total loss function reaches its minimum.
In this embodiment, the model is constrained by the minimized L1 loss function and the focal loss function, which are combined to construct the total loss function; by correcting the relevant parameter values of each layer of the feature extraction network, the multi-scale feature fusion network model and the decoder during training so as to solve for the minimum of the total loss function, the LED lamp strip defect detection model can be obtained, completing the training of the model.
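The loss construction described above can be sketched in code. The following is a minimal NumPy sketch assuming the standard forms of the L1 and focal losses; the default values α_t = 0.25 and γ = 2.0 are common illustrative choices, not values specified by this embodiment:

```python
import numpy as np

def l1_loss(s_true, s_pred):
    # Minimized L1 loss: mean absolute difference between the ground-truth
    # anomaly mask S and the predicted mask S*.
    return np.mean(np.abs(s_true - s_pred))

def focal_loss(s_true, s_pred, alpha_t=0.25, gamma=2.0, eps=1e-7):
    # Focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t), where p_t is the
    # probability the model assigns to the true class of each pixel.
    p = np.clip(s_pred, eps, 1.0 - eps)
    p_t = np.where(s_true > 0.5, p, 1.0 - p)
    a_t = np.where(s_true > 0.5, alpha_t, 1.0 - alpha_t)
    return np.mean(-a_t * (1.0 - p_t) ** gamma * np.log(p_t))

def total_loss(s_true, s_pred, lambda_l1=1.0, lambda_f=1.0):
    # Total loss: weighted sum of the two terms, L = lambda_l1*L_l1 + lambda_f*L_f.
    return lambda_l1 * l1_loss(s_true, s_pred) + lambda_f * focal_loss(s_true, s_pred)
```

In practice the gradients of this total loss, computed by an automatic differentiation framework, would drive the parameter correction described above; the λ weights are the predetermined empirical constants.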
In particular, it should be understood that a positive-class sample is an image of an LED lamp strip without defects, and a negative-class sample is an image of an LED lamp strip with defects.
In an optional embodiment of the present invention, the selecting of K typical positive-class samples from the LED lamp strip training set based on the K-means++ cluster selection algorithm specifically includes:
randomly selecting one positive-class sample from the LED lamp strip training set as the current initial cluster center;
calculating a first actual distance between each positive-class sample in the LED lamp strip training set and its nearest current initial cluster center;
taking the positive-class sample with the largest first actual distance as a new current initial cluster center, and repeating the previous step until K initial cluster centers have been selected;
constructing an empty cluster space for each initial cluster center, calculating a second actual distance between each positive-class sample in the LED lamp strip training set and each initial cluster center, and assigning each positive-class sample to the cluster space of its nearest initial cluster center according to the second actual distances;
calculating the vector average of the positive-class samples in each cluster space, and updating the initial cluster center corresponding to that cluster space with the vector average; and
repeating the previous step until the initial cluster center of each cluster space no longer changes, and determining the positive-class sample corresponding to each initial cluster center as a typical positive-class sample.
In this embodiment, the improved K-means cluster selection algorithm consists of the above sample processing steps; it can select K typical positive-class samples from the LED lamp strip training set simply and quickly, with high data processing efficiency.
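The selection procedure above can be sketched as follows. This is a minimal NumPy sketch operating on flattened feature vectors; note that, as described, the initialization picks the farthest sample deterministically rather than sampling proportionally to distance as classical K-means++ does:

```python
import numpy as np

def select_typical_samples(samples, k, n_iter=100, seed=0):
    """Select k typical positive-class samples following the described
    procedure. `samples` is an (N, D) array of sample feature vectors."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    # 1) Pick one sample at random as the first initial cluster center.
    centers = [samples[rng.integers(n)]]
    # 2) Repeatedly add the sample farthest from its nearest current center.
    while len(centers) < k:
        d = np.min([np.linalg.norm(samples - c, axis=1) for c in centers], axis=0)
        centers.append(samples[int(np.argmax(d))])
    centers = np.stack(centers)
    # 3) Standard K-means refinement until the centers no longer change.
    for _ in range(n_iter):
        dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        new_centers = np.stack([
            samples[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # 4) The typical sample of each cluster is the member closest to its center.
    dists = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=2)
    return samples[np.argmin(dists, axis=0)]
```

In the described method the returned typical samples would then be passed through the feature extraction network and stored in the memory pool.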
In an optional embodiment of the present invention, the obtaining of the pseudo-anomalous sample corresponding to each positive-class sample in the LED lamp strip training set based on the pseudo-anomalous sample generation method specifically includes:
binarizing each positive-class sample I in the LED lamp strip training set to generate a contour description map M_I, binarizing a randomly pre-generated two-dimensional Perlin noise field P at a preset threshold to generate a random mask map M_P, and multiplying each contour description map M_I pixel by pixel with the random mask map M_P to generate a target mask map M;
multiplying each target mask map M pixel by pixel with texture data I_n taken from the DTD data set to generate an abnormal region image I_n*, the specific calculation formula being I_n* = β(M ⊙ I_n) + (1 − β)(M ⊙ I), where ⊙ denotes pixel-wise multiplication and β is a transparency factor used to balance the fusion of the positive-class sample I and the abnormal region image I_n*, determined by random sampling in the range [0.15, 1]; and
combining the abnormal region image I_n* with the corresponding positive-class sample I to generate a pseudo-anomalous sample I_A, the combination formula being I_A = (1 − M) ⊙ I + I_n*.
In this embodiment, the pseudo-anomalous sample generation method applies the above processing to each positive-class sample I in the LED lamp strip training set, and can simply and quickly generate the pseudo-anomalous sample I_A corresponding to each positive-class sample I.
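The generation steps above can be sketched as follows. This is a minimal NumPy sketch: the binarization thresholds are illustrative assumptions (the embodiment only says "preset threshold"), the `noise` argument stands in for a pre-generated two-dimensional Perlin noise field, and `texture` stands in for a DTD texture image:

```python
import numpy as np

def make_pseudo_anomaly(image, texture, noise, fg_thresh=0.1, noise_thresh=0.5, seed=0):
    """Generate a pseudo-anomalous sample I_A from a positive-class sample I.
    `image`, `texture` and `noise` are float arrays in [0, 1] of equal shape."""
    rng = np.random.default_rng(seed)
    # Contour description map M_I: binarize the sample itself.
    m_i = (image > fg_thresh).astype(float)
    # Random mask map M_P: binarize the Perlin-noise field at a preset threshold.
    m_p = (noise > noise_thresh).astype(float)
    # Target mask map M: pixel-wise product of the two masks.
    m = m_i * m_p
    # Abnormal region image I_n*: blend the masked texture and the masked image
    # with a transparency factor beta sampled from [0.15, 1].
    beta = rng.uniform(0.15, 1.0)
    i_n_star = beta * (m * texture) + (1.0 - beta) * (m * image)
    # Pseudo-anomalous sample: keep the image outside the mask and paste the
    # abnormal region inside it: I_A = (1 - M) * I + I_n*.
    i_a = (1.0 - m) * image + i_n_star
    return i_a, m
```

Restricting the mask to the product of M_I and M_P ensures the synthetic anomaly only lands on the lamp strip foreground rather than on the background.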
In an alternative embodiment of the present invention, as shown in fig. 2, the multi-scale feature fusion network model includes:
a first convolution layer for performing a primary convolution on the concatenated information while keeping the number of channels of the concatenated information unchanged;
a coordinate attention weighting layer for performing coordinate attention weighting on the concatenated information of different dimensions to capture the channel information of the concatenated information;
an up-sampling layer for up-sampling the coordinate-attention-weighted concatenated information to align the dimensions;
a second convolution layer for convolving the dimension-aligned concatenated information to align the numbers of channels of the concatenated information; and
a pixel addition operation layer for adding, pixel by pixel, the concatenated information of different dimensions after the channel numbers have been aligned.
In this embodiment, the first convolution layer is specifically a 3×3 convolution layer used to keep the number of channels of the concatenated information unchanged. Since the concatenated information at each dimension is a simple concatenation of the advanced feature information and the difference information of the model training sample, a coordinate attention block (CA-block, i.e. the coordinate attention weighting layer) is used to capture the channel information of the concatenated information. After the coordinate attention weighting, the concatenated information of different dimensions is first up-sampled by the up-sampling layer to align the dimensions, then convolved by the second convolution layer to align the numbers of channels, and finally added element by element by the pixel addition operation layer to achieve multi-scale feature fusion.
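The data flow of the fusion module can be sketched as follows. This is a minimal NumPy sketch under several simplifying assumptions: the primary 3×3 convolution is omitted, the coordinate attention is reduced to direction-wise pooling with a sigmoid gate (a much-simplified stand-in for a real CA-block), up-sampling is nearest-neighbor, and the 1×1 convolution weights are random placeholders rather than learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coordinate_attention(x):
    # Greatly simplified coordinate attention: pool along each spatial
    # direction and gate the feature map with both direction-wise maps.
    a_h = sigmoid(x.mean(axis=2, keepdims=True))   # (C, H, 1)
    a_w = sigmoid(x.mean(axis=1, keepdims=True))   # (C, 1, W)
    return x * a_h * a_w

def upsample_nearest(x, factor):
    # Up-sampling layer: align spatial dimensions by pixel repetition.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def conv1x1(x, w):
    # Second convolution (1x1) aligning channel counts; w is (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def fuse(features, out_channels, seed=0):
    """Fuse concatenated feature maps of different dimensions: weight each map
    with coordinate attention, up-sample to the largest resolution, align the
    channels with a 1x1 convolution, then add pixel by pixel."""
    rng = np.random.default_rng(seed)
    target_hw = max(f.shape[1] for f in features)
    fused = 0.0
    for f in features:
        g = coordinate_attention(f)
        g = upsample_nearest(g, target_hw // g.shape[1])
        w = rng.standard_normal((out_channels, g.shape[0])) * 0.1
        fused = fused + conv1x1(g, w)
    return fused
```

In a trained model the 1×1 weights and the attention block would of course be learned layers; the sketch only illustrates the align-then-add topology of the module.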
In an alternative embodiment of the invention, the method further comprises:
inputting a test set, which includes positive-class and negative-class samples and is of the same kind as the LED lamp strip training set, into the LED lamp strip defect detection model to obtain predicted images, and calculating the anomaly score of the predicted image corresponding to each sample in the test set, the anomaly score calculation formula being
Score(x_test) = (G(x_test) − G(x_test)_min) / (G(x_test)_max − G(x_test)_min),
where the pixel points of each predicted image are ranked from high to low by pixel value, G(x_test) represents the sum of the pixel values of the top preset number of pixel points of the predicted image, and G(x_test)_max and G(x_test)_min represent the maximum and minimum values used for the normalization;
determining and outputting a score threshold for distinguishing positive-class samples from negative-class samples by synthesizing the anomaly score of each test predicted image and the corresponding sample type; and
when the LED lamp strip defect detection model is used to detect an image to be detected of an LED lamp strip and judge whether the LED lamp strip has a defect, detecting the image to be detected with the LED lamp strip defect detection model to generate a predicted image, calculating the anomaly score of the predicted image corresponding to the image to be detected according to the anomaly score calculation formula, comparing the anomaly score corresponding to the image to be detected with the score threshold, and classifying an image to be detected whose score is below the score threshold as defect-free and one whose score is above the score threshold as defective.
When a traditional model is used for defect detection, usually only a predicted image is given, and a manually set threshold is required to distinguish positive and negative samples. In this embodiment, the trained model is tested: the 1000 largest pixel values of the predicted image output by the model are taken and the anomaly score is calculated. Assuming the sample image input for testing is x_test and the output is S*(x_test), the sum of the 1000 largest pixel values is G(x_test), and the anomaly score is obtained by normalization. Since the anomaly score corresponding to a positive-class sample is lower and that corresponding to a negative-class sample is higher, anomaly detection is performed by reasonably determining a dividing score threshold: samples below the score threshold are classified as positive (i.e. defect-free) and those above the threshold as negative (i.e. defective).
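The scoring step can be sketched as follows. This is a minimal NumPy sketch; since the normalization formula is not fully reproduced in the text, the sketch assumes min-max normalization of G over the set of predicted images, which is one consistent reading of the description:

```python
import numpy as np

def top_sum(pred, n=1000):
    # G(x): sum of the n largest pixel values of a predicted anomaly map.
    flat = np.sort(pred.ravel())[::-1]
    return float(flat[:n].sum())

def anomaly_scores(pred_maps, n=1000):
    """Normalized anomaly scores for a set of predicted images: compute
    G(x) for each prediction, then min-max normalize over the set."""
    g = np.array([top_sum(p, n) for p in pred_maps])
    g_min, g_max = g.min(), g.max()
    return (g - g_min) / (g_max - g_min + 1e-12)
```

A score threshold separating the resulting scores of known positive-class and negative-class test samples can then be chosen, e.g. midway between the two score populations.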
In an alternative embodiment of the present invention, the feature extraction network is a ResNet network pre-trained on ImageNet, and the parameters of its first three layers are kept fixed during training. In this embodiment, the ResNet network pre-trained on ImageNet can effectively extract the advanced features of a sample; meanwhile, to ensure consistency between the advanced features of the template images in the memory pool and the image features of the input image, the parameters of the first three layers of the ResNet network are fixed throughout training.
In a specific implementation, the ResNet network pre-trained on ImageNet performs feature extraction on each typical positive-class sample in the LED lamp strip training set and outputs template image features at three dimensions, 64×64×64, 128×32×32 and 256×16×16; likewise, feature extraction on each input image outputs advanced feature information I_I at the same three dimensions, 64×64×64, 128×32×32 and 256×16×16;
In step S42, by calculating the euclidean distance between the advanced feature information II and each of the template image features in the memory pool in the same dimension, summing the euclidean distances between the advanced feature information II and each of the template image features in different dimensions to obtain euclidean distances and values corresponding to the advanced feature information II corresponding to different template image features, and then, taking the minimum value of the euclidean distances and values as the best difference information between the advanced feature information II and each of the template image features in the memory pool, combining each of the euclidean distances in different dimensions corresponding to the best difference information with each of the template image features to obtain combined serial information, where, when calculating the euclidean distances between the advanced feature information and each of the template image features in the memory pool, a specific calculation formula is as follows:
(Equation 1)
Wherein N represents the total number of template image features of the same dimension in the memory pool, and MI represents the template image features in the memory pool;
The minimum of the Euclidean distance sums is selected as the optimal difference information DI* between the advanced feature information II and the template image features in the memory pool; the specific calculation formula is as follows:
(Equation 2)
Wherein, the value of N is 3, which represents the three dimensions;
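The equation images referenced above appear to have been lost in extraction. A plausible reconstruction, consistent with the surrounding description, is given below; the symbols II, MI and DI* come from the text, N is taken as the number of memory samples per the note under Equation 1, and the exact published form may differ:

```latex
% Equation 1 (reconstructed): Euclidean distance sum between the advanced
% feature information II and the i-th memory sample's template features,
% summed over the three dimensions:
D_i = \sum_{n=1}^{3} \left\lVert II_n - MI_{i,n} \right\rVert_2 ,
\qquad i = 1, \dots, N
% Equation 2 (reconstructed): the best difference information is taken with
% respect to the memory sample minimizing the distance sum:
i^{*} = \operatorname*{arg\,min}_{1 \le i \le N} D_i ,
\qquad DI^{*} = \left\{\, II_n - MI_{i^{*},n} \,\right\}_{n=1}^{3}
```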
The best difference information DI* indicates the difference between a model training sample and the typical positive class sample most similar to it; accordingly, the larger the difference value at a given position, the higher the probability that an anomaly occurs in that region of the model training sample.
Further, in step S43, the difference information (i.e., the Euclidean distances) in the three dimensions constituting the optimal difference information DI* is combined with the template image features of the corresponding model training sample to obtain the concatenation information CI1, CI2 and CI3, respectively.
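The selection and concatenation in steps S42 and S43 can be sketched as follows. This is a minimal NumPy illustration assuming a list-of-arrays representation for the multi-dimension features; the function and variable names are illustrative, not from the original:

```python
import numpy as np

def best_difference(ii_feats, memory_pool):
    """Select the memory sample whose template features are closest to the
    input's advanced feature information II, with Euclidean distances
    summed over all dimensions (Equations 1 and 2 in the text).

    ii_feats    : list of 3 arrays, one per dimension.
    memory_pool : list of K memory samples, each a list of 3 arrays with
                  the same shapes as ii_feats.
    Returns the per-dimension differences (II minus the best-matching
    template) and the index of the best-matching memory sample.
    """
    sums = []
    for templates in memory_pool:
        # Euclidean distance per dimension, summed across the 3 dimensions
        d = sum(np.linalg.norm(ii - t) for ii, t in zip(ii_feats, templates))
        sums.append(d)
    best = int(np.argmin(sums))  # minimum distance sum -> DI*
    diffs = [ii - t for ii, t in zip(ii_feats, memory_pool[best])]
    return diffs, best

def concatenate_info(diffs, templates):
    # Step S43: concatenate each per-dimension difference with the matching
    # template image feature along the channel axis -> CI1, CI2, CI3
    return [np.concatenate([d, t], axis=0) for d, t in zip(diffs, templates)]
```

Feature shapes here are arbitrary; in the embodiment they would be 64×64×64, 128×32×32 and 256×16×16.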
Next, in step S44, after feature fusion is performed on each piece of concatenation information by the multi-scale feature fusion module, fusion feature maps CIn* (n=1, 2, 3) of sizes 64×64×64, 128×32×32 and 256×16×16 are obtained, respectively. The channel pixel mean image of the feature map CI3* of size 256×16×16 is used directly as the spatial attention map M3; the spatial attention map M2 is obtained by upsampling M3 and multiplying it pixel by pixel with the channel pixel mean image of the feature map CI2* of size 128×32×32; the spatial attention map M1 is then obtained from the channel pixel mean image of the feature map CI1* by the same operation. The specific calculation formulas are as follows:
(Equation 3)
(Equation 4)
(Equation 5)
Wherein C1, C2 and C3 represent the channel numbers of CI1*, CI2* and CI3*, respectively, and M2U and M3U represent the feature maps obtained by upsampling M2 and M3. Finally, M1, M2 and M3 flow to the decoder through skip connections; in a specific implementation, the network structure of the decoder is a U-net structure, which is not described in detail herein.
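The attention-map construction described above can be sketched as follows, assuming NumPy arrays in (C, H, W) layout and nearest-neighbour 2× upsampling (the original does not specify the upsampling method); names are illustrative:

```python
import numpy as np

def channel_mean(x):
    # Channel pixel mean image of a (C, H, W) feature map -> (H, W)
    return x.mean(axis=0)

def upsample2x(m):
    # Nearest-neighbour 2x upsampling of an (H, W) map (one plausible choice)
    return m.repeat(2, axis=0).repeat(2, axis=1)

def spatial_attention_maps(ci1, ci2, ci3):
    """ci1: (C1, 64, 64), ci2: (C2, 32, 32), ci3: (C3, 16, 16).
    M3 is the channel mean of CI3*; M2 multiplies the upsampled M3
    pixel by pixel with the channel mean of CI2*; M1 repeats the same
    operation one level up."""
    m3 = channel_mean(ci3)
    m2 = upsample2x(m3) * channel_mean(ci2)
    m1 = upsample2x(m2) * channel_mean(ci1)
    return m1, m2, m3
```

Each map doubles in spatial size going from M3 to M1, matching the three feature dimensions of the embodiment.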
In addition, the input images are formed from the remaining positive class samples (other than the typical positive class samples) in the LED lamp strip training set together with the pseudo-abnormal samples. The target map S of a positive class sample is an all-ones image, indicating that no abnormal region is contained (S is not set to all zeros, so as to prevent the gradient from vanishing); the target map S of a pseudo-abnormal sample is composed of In, representing the abnormal region of the pseudo-abnormal sample.
Finally, for the total loss function, the relative importance of the losses Ll1 and Lf is controlled during model training through the parameters λl1 and λf, so that the objective function is better optimized; λl1 is empirically set to 0.6 and λf to 0.4.
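A minimal sketch of the target map S and the weighted total loss, under the assumption that Ll1 and Lf are scalar losses computed elsewhere (only the λ values 0.6 and 0.4 come from the text; everything else is illustrative):

```python
import numpy as np

LAMBDA_L1 = 0.6  # empirical weight for the Ll1 loss (from the text)
LAMBDA_F = 0.4   # empirical weight for the Lf loss (from the text)

def make_target_map(h, w, anomaly_region=None):
    """Target map S: an all-ones image for a positive class sample (no
    abnormal region; not all-zeros, to avoid vanishing gradients), or the
    pseudo-anomaly region In for a pseudo-abnormal sample."""
    if anomaly_region is None:
        return np.ones((h, w))
    return anomaly_region

def total_loss(l_l1, l_f, lambda_l1=LAMBDA_L1, lambda_f=LAMBDA_F):
    # Weighted sum controlling the relative importance of the two losses
    return lambda_l1 * l_l1 + lambda_f * l_f
```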
On the other hand, as shown in fig. 3, an embodiment of the present invention further provides a training device 1 for an LED strip defect detection model, where the device includes a processor 10, a memory 12, and a computer program stored in the memory 12 and configured to be executed by the processor 10, where the processor 10 implements the training method for an LED strip defect detection model according to the above embodiment when executing the computer program.
Illustratively, the computer program may be partitioned into one or more modules/units that are stored in the memory 12 and executed by the processor 10 to accomplish the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program in the training device 1 of the LED strip defect detection model. For example, the computer program may be divided into functional modules in the training device 1 of the LED strip defect detection model shown in fig. 4, where the training set providing module 21, the memory pool sample selecting module 22, the pseudo-abnormal sample generating module 23, and the model training module 25 respectively perform the above steps S1 to S4.
The training device 1 of the LED lamp strip defect detection model can be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The training device 1 of the LED lamp strip defect detection model can include, but is not limited to, the processor 10 and the memory 12. It will be understood by those skilled in the art that the schematic diagram is merely an example of the training device 1 of the LED lamp strip defect detection model and does not constitute a limitation thereof; the device may include more or fewer components than illustrated, combine certain components, or use different components. For example, the training device 1 of the LED lamp strip defect detection model may further include an input-output device, a network access device, a bus, and the like.
The processor 10 may be a Central Processing Unit (CPU), but may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor; the processor 10 is the control center of the training device 1 of the LED lamp strip defect detection model and connects the respective parts of the entire training device 1 by using various interfaces and lines.
The memory 12 may be used to store the computer program and/or modules, and the processor 10 implements the various functions of the training device 1 of the LED lamp strip defect detection model by running or executing the computer program and/or modules stored in the memory 12 and invoking data stored in the memory 12. The memory 12 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a pattern recognition function, a pattern layering function, etc.); the data storage area may store data created according to the use of the training device 1 of the LED lamp strip defect detection model (such as graphic data, etc.). In addition, the memory 12 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If implemented in the form of software functional modules or units and sold or used as a stand-alone product, the functionality of the embodiments of the present invention may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow of the methods of the foregoing embodiments may also be implemented by a computer program instructing the relevant hardware; the computer program may be stored in a computer-readable storage medium, and when executed by the processor 10, implements the steps of each of the foregoing method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
In still another aspect, an embodiment of the present invention further provides a computer readable storage medium, where the computer readable storage medium includes a stored computer program, where when the computer program runs, the device where the computer readable storage medium is located is controlled to execute the training method of the LED strip defect detection model according to any one of the foregoing embodiments of the present invention.
In still another aspect, an embodiment of the present invention further provides a method for detecting defects of an LED strip, including the following steps:
acquiring an image to be tested of an LED lamp strip to be tested; and
detecting the image to be tested by using the LED lamp strip defect detection model obtained through training by the above training method of the LED lamp strip defect detection model, so as to determine whether the LED lamp strip to be tested has a defect.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Many variations may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, all of which fall within the protection of the present invention.