Notebook appearance flaw segmentation method based on deep learning

Info

Publication number
CN112907560A
Authority
CN
China
Prior art keywords
deep learning
image
convolution
size
notebook
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110282633.6A
Other languages
Chinese (zh)
Inventor
王诚
程坦
刘涛
吕剑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongkehaituo Wuxi Technology Co ltd
Original Assignee
Zhongkehaituo Wuxi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2021-03-16
Filing date: 2021-03-16
Publication date: 2021-06-04
Application filed by Zhongkehaituo Wuxi Technology Co ltd
Priority to CN202110282633.6A
Publication of CN112907560A
Legal status: Pending


Abstract

The invention discloses a notebook appearance flaw segmentation method based on deep learning, which comprises the following steps. Step one: collecting training samples to make a data set, and training a model to convergence by using the data set; step two: collecting a target image; step three: finding the connected domain with the largest area through connected-domain analysis and cropping the image to the target size centered on that region for input, which preserves the sharpness of the image to the greatest extent; step four: modifying the structure of ResNet50, replacing the convolution modules of Res4 and Res5 with deformable convolutions, keeping the parameters of the earlier layers fixed, and retraining the parameters of Res4 and the layers behind it, which enhances the geometric transformation modeling capability of the model and reduces missed detections and false alarms; step five: performing K-Means clustering on the target boxes in the data set to obtain prior knowledge of the search-box sizes; step six: resizing the cropped image and inputting it into the deep learning model; step seven: judging the appearance flaws of the notebook through the deep learning model.

Description

Notebook appearance flaw segmentation method based on deep learning
Technical Field
The invention relates to the field of notebook appearance flaw segmentation, in particular to a notebook appearance flaw segmentation method based on deep learning.
Background
With the development of science and technology, the automation level of industrial production keeps rising. In the production of electronic devices, improving detection efficiency alongside production efficiency has become an urgent problem. At present, most factories rely on manual inspection for quality control; this approach is influenced by factors such as worker experience and working state, lacks objectivity, involves a large workload, and has low detection efficiency. Traditional visual detection methods applied to notebook appearance flaw detection are generally low in accuracy, poor in generalization capability, and difficult to adapt to a factory environment. Therefore, a notebook appearance flaw segmentation technology based on deep learning has important practical significance.
Therefore, a notebook appearance flaw segmentation method based on deep learning is provided.
Disclosure of Invention
The invention mainly aims to provide a notebook appearance flaw segmentation method based on deep learning, which can effectively solve the problems noted in the background art: most factories detect notebook appearance flaws by manual inspection, which requires a large amount of human resources and has low detection efficiency, while the traditional vision-based notebook appearance flaw detection algorithm is easily interfered with by factors such as the external environment, is difficult to design uniformly for different flaw characteristics, and has low detection accuracy and poor generalization capability.
In order to achieve this purpose, the invention adopts the following technical scheme:
a notebook appearance flaw segmentation method based on deep learning comprises the following steps:
Step one: collecting training samples, making a data set, and training a deep learning model with the data set until convergence;
Step two: collecting a target image, and segmenting the foreground and the background of the image by using the maximum inter-class variance (Otsu) method;
Step three: performing connected-domain analysis to find the connected domain with the largest area, and cropping the image to the target size centered on that region for input;
Step four: modifying the structure of ResNet50, replacing the convolution modules of Res4 and Res5 with deformable convolutions, keeping the parameters of the earlier layers fixed, and retraining the parameters of Res4 and the layers after Res4;
Step five: performing K-Means clustering on the target boxes in the data set to obtain prior knowledge of the search-box sizes;
Step six: resizing the cropped image and inputting it into the deep learning model;
Step seven: distinguishing the appearance flaws of the notebook computer through the deep learning model and outputting the inference result to an upper computer for display.
Further, the data set in step one includes a plurality of sample images and label information corresponding to each sample image. The label information includes the category of the detection target in the image, a segmentation mask, and a bounding-box position. The bounding-box position may be represented as (x, y, w, h), where x is the abscissa of the target box, y is the ordinate of the target box, w is the width of the target box, and h is the height of the target box; the segmentation mask is the outline of the actual detection object within the target box.
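For illustration only, a single label entry in such a data set might be organized as follows; the category name, coordinates, and polygon values are hypothetical, and the COCO-style layout is an assumption not specified by the patent:

# Hypothetical annotation entry for one defect instance (COCO-style layout assumed).
annotation = {
    "image_id": 17,                       # index of the sample image
    "category": "scratch",                # defect class (example value)
    "bbox": [412.0, 230.5, 86.0, 41.0],   # (x, y, w, h): top-left corner, width, height
    "segmentation": [                     # polygon outlining the defect inside the box
        [412.0, 230.5, 498.0, 236.0, 470.0, 271.5, 430.0, 260.0]
    ],
}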
Furthermore, the deformable convolution in step four mainly adds learning of offsets in the x and y directions to the original convolution unit, dynamically adjusting the size and position of the convolution kernel. The input of the deformable convolution is a feature map produced by a standard convolution; a convolution operation is applied to this feature map to generate N two-dimensional offsets (Δx, Δy), which are used to correct the value of each point on the input feature map. Denoting the feature map by P, the corrected value is P(x, y) = P(x + Δx, y + Δy); when x + Δx or y + Δy is fractional, P(x + Δx, y + Δy) is computed by bilinear interpolation. This yields N feature maps, which are then convolved one-to-one with N convolution kernels to obtain the output.
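As a rough illustration of this structure (not the patent's own code), the following PyTorch sketch pairs an offset-predicting convolution with torchvision's DeformConv2d; the channel counts, kernel size, and zero initialization of the offsets are assumptions for the example:

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """3x3 deformable convolution: a plain conv predicts (dx, dy) offsets for
    every kernel sample point, and DeformConv2d samples the input at the
    shifted locations using bilinear interpolation."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 = 18 channels
        self.offset_conv = nn.Conv2d(in_channels, 2 * 3 * 3, kernel_size=3,
                                     stride=stride, padding=1)
        nn.init.zeros_(self.offset_conv.weight)  # start from the regular sampling grid
        nn.init.zeros_(self.offset_conv.bias)
        self.deform_conv = DeformConv2d(in_channels, out_channels, kernel_size=3,
                                        stride=stride, padding=1)

    def forward(self, x):
        offset = self.offset_conv(x)          # (N, 18, H, W) offset field
        return self.deform_conv(x, offset)    # sample at the shifted positions

# Example: a feature map coming from a preceding standard convolution
feat = torch.randn(1, 256, 64, 64)
out = DeformableConvBlock(256, 256)(feat)
print(out.shape)  # torch.Size([1, 256, 64, 64])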
Further, before training the deep learning model, the sizes of the search boxes for the RPN candidate regions are set.
Further, the candidate boxes in step five have sizes of 32², 64², and 128², and the aspect ratios of the candidate boxes are 1:1, 1:3, and 3:1.
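As an illustrative sketch of how such size and ratio priors translate into concrete anchor shapes (the enumeration below is an assumption for illustration, not code from the patent):

import math

def make_anchor_shapes(areas=(32**2, 64**2, 128**2), ratios=(1.0, 1/3, 3.0)):
    """Enumerate (width, height) anchor shapes for every area/aspect-ratio pair,
    keeping the area fixed while varying the ratio (ratio = width / height)."""
    shapes = []
    for area in areas:
        for ratio in ratios:
            w = math.sqrt(area * ratio)
            h = math.sqrt(area / ratio)
            shapes.append((round(w, 1), round(h, 1)))
    return shapes

print(make_anchor_shapes())
# e.g. (32.0, 32.0), (18.5, 55.4), (55.4, 18.5), (64.0, 64.0), ...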
Compared with the prior art, the invention has the following beneficial effects:
1. when a sample image is processed, the foreground area is first cropped out by a traditional image algorithm and the cropped foreground image is then resized, which preserves the sharpness of the image to the greatest extent compared with resizing the original image directly;
2. the network structures of Res4 and Res5 of ResNet50 are modified, with DCN replacing the common convolution modules, which enhances the geometric transformation modeling capability of the model and reduces missed detections and false alarms to a certain extent;
3. the sizes and aspect ratios of the RPN candidate boxes are optimized using prior knowledge, making the method more suitable for notebook appearance flaw detection, further reducing missed detections and improving detection precision.
Drawings
In order to more clearly illustrate the technical solution of the present invention, the drawings needed to be used in the technical description of the present invention will be briefly introduced below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive labor.
FIG. 1 is a structural diagram of a deformable convolution of a notebook appearance flaw segmentation method based on deep learning according to the present invention.
Detailed Description
The present invention will be further described with reference to the following detailed description, wherein the drawings are for illustrative purposes only and are not intended to be limiting, wherein certain elements may be omitted, enlarged or reduced in size, and are not intended to represent the actual dimensions of the product, so as to better illustrate the detailed description of the invention.
Example 1
As shown in fig. 1, a notebook appearance flaw segmentation method based on deep learning includes the following steps:
Step one: collecting training samples, making a data set, and training a deep learning model with the data set until convergence;
Step two: collecting a target image, and segmenting the foreground and the background of the image by using the maximum inter-class variance (Otsu) method;
Step three: performing connected-domain analysis to find the connected domain with the largest area, and cropping the image to the target size centered on that region for input;
Step four: modifying the structure of ResNet50, replacing the convolution modules of Res4 and Res5 with deformable convolutions, keeping the parameters of the earlier layers fixed, and retraining the parameters of Res4 and the layers after Res4;
Step five: performing K-Means clustering on the target boxes in the data set to obtain prior knowledge of the search-box sizes;
Step six: resizing the cropped image and inputting it into the deep learning model;
Step seven: distinguishing the appearance flaws of the notebook computer through the deep learning model and outputting the inference result to an upper computer for display.
In step two, the foreground area is cropped out by a traditional image algorithm, and the cropped foreground image is then resized; compared with resizing the original image directly, this preserves the sharpness of the image to the greatest extent.
In step three, a larger input image increases detection precision but affects detection speed and consumes somewhat more GPU memory, so the size can be reduced appropriately to strike a balance between detection precision and detection speed.
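A minimal OpenCV sketch of steps two and three might look like the following; the file name, crop size, and the assumption that the notebook surface is brighter than the background are all illustrative and not specified by the patent:

import cv2
import numpy as np

def crop_foreground(image_bgr, target_size=1024):
    """Segment the foreground with Otsu's method (maximum inter-class variance),
    keep the largest connected domain, and crop a square of target_size
    centered on it."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu threshold: assumes the notebook surface is brighter than the background
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Connected-domain analysis; label 0 is the background
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    cx, cy = centroids[largest]

    # Crop a target_size window centered on the largest component (clamped to the image)
    h, w = gray.shape
    half = target_size // 2
    x0 = int(np.clip(cx - half, 0, max(w - target_size, 0)))
    y0 = int(np.clip(cy - half, 0, max(h - target_size, 0)))
    crop = image_bgr[y0:y0 + target_size, x0:x0 + target_size]

    # Resize to the model input size (a balance of precision vs. speed and memory)
    return cv2.resize(crop, (target_size, target_size))

img = cv2.imread("notebook.jpg")   # hypothetical file name
patch = crop_foreground(img)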
Example 2
As shown in fig. 1, a notebook appearance flaw segmentation method based on deep learning includes the following steps:
Step one: collecting training samples, making a data set, and training a deep learning model with the data set until convergence;
Step two: collecting a target image, and segmenting the foreground and the background of the image by using the maximum inter-class variance (Otsu) method;
Step three: performing connected-domain analysis to find the connected domain with the largest area, and cropping the image to the target size centered on that region for input;
Step four: modifying the structure of ResNet50, replacing the convolution modules of Res4 and Res5 with deformable convolutions, keeping the parameters of the earlier layers fixed, and retraining the parameters of Res4 and the layers after Res4;
Step five: performing K-Means clustering on the target boxes in the data set to obtain prior knowledge of the search-box sizes;
Step six: resizing the cropped image and inputting it into the deep learning model;
Step seven: distinguishing the appearance flaws of the notebook computer through the deep learning model and outputting the inference result to an upper computer for display.
The deformable convolution in step four mainly adds learning of offsets in the x and y directions to the original convolution unit, dynamically adjusting the size and position of the convolution kernel. The input of the deformable convolution is a feature map produced by a standard convolution; a convolution operation is applied to this feature map to generate N two-dimensional offsets (Δx, Δy), which are used to correct the value of each point on the input feature map. Denoting the feature map by P, the corrected value is P(x, y) = P(x + Δx, y + Δy); when x + Δx or y + Δy is fractional, P(x + Δx, y + Δy) is computed by bilinear interpolation. This yields N feature maps, which are then convolved one-to-one with N convolution kernels to obtain the output;
meanwhile, the network structures of Res4 and Res5 of ResNet50 are modified, with DCN replacing the common convolution modules, which enhances the geometric transformation modeling capability of the model and reduces missed detections and false alarms to a certain extent.
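A possible, purely illustrative way to realize this in PyTorch with torchvision's ResNet-50 (where layer3 and layer4 correspond to Res4 and Res5) is sketched below; the wrapper module, the initialization, and the exact set of frozen parameters are assumptions for the sketch, not the patent's implementation:

import torch.nn as nn
import torchvision
from torchvision.ops import DeformConv2d

class DeformConv3x3(nn.Module):
    """Drop-in replacement for a 3x3 conv: an internal conv predicts offsets,
    then DeformConv2d samples the input at the shifted positions."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.offset_conv = nn.Conv2d(in_ch, 18, 3, stride=stride, padding=1)
        nn.init.zeros_(self.offset_conv.weight)
        nn.init.zeros_(self.offset_conv.bias)
        self.deform = DeformConv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)

    def forward(self, x):
        return self.deform(x, self.offset_conv(x))

model = torchvision.models.resnet50()

# Freeze every parameter before Res4 (layer3); only Res4, Res5, and the head retrain.
for name, p in model.named_parameters():
    if not (name.startswith("layer3") or name.startswith("layer4") or name.startswith("fc")):
        p.requires_grad = False

# Replace the 3x3 convolution in every bottleneck of Res4 and Res5 with deformable conv.
for stage in (model.layer3, model.layer4):
    for block in stage:
        old = block.conv2
        block.conv2 = DeformConv3x3(old.in_channels, old.out_channels,
                                    stride=old.stride[0])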
Example 3
As shown in fig. 1, a notebook appearance flaw segmentation method based on deep learning includes the following steps:
Step one: collecting training samples, making a data set, and training a deep learning model with the data set until convergence;
Step two: collecting a target image, and segmenting the foreground and the background of the image by using the maximum inter-class variance (Otsu) method;
Step three: performing connected-domain analysis to find the connected domain with the largest area, and cropping the image to the target size centered on that region for input;
Step four: modifying the structure of ResNet50, replacing the convolution modules of Res4 and Res5 with deformable convolutions, keeping the parameters of the earlier layers fixed, and retraining the parameters of Res4 and the layers after Res4;
Step five: performing K-Means clustering on the target boxes in the data set to obtain prior knowledge of the search-box sizes;
Step six: resizing the cropped image and inputting it into the deep learning model;
Step seven: distinguishing the appearance flaws of the notebook computer through the deep learning model and outputting the inference result to an upper computer for display.
The data set in step one includes a plurality of sample images and label information corresponding to each sample image. The label information includes the category of the detection target in the image, a segmentation mask, and a bounding-box position. The bounding-box position may be represented as (x, y, w, h), where x is the abscissa of the target box, y is the ordinate of the target box, w is the width of the target box, and h is the height of the target box; the segmentation mask is the outline of the actual detection object within the target box.
Before training the deep learning model, the sizes of the RPN candidate-region search boxes are set. The invention clusters the sizes of the target boxes in the data set with the K-Means method and derives suitable candidate-box sizes and aspect ratios. The prior knowledge of the search-box sizes obtained by K-Means clustering is then used to set the search-box parameters, which effectively avoids the loss of detection precision caused by a mismatch between the search-box sizes and the sizes of the actual defects.
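As a hedged illustration of this clustering step (the box sizes and the choice of three clusters below are invented for the example, not taken from the patent), the widths and heights of the annotated target boxes could be clustered as follows:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (w, h) sizes of annotated defect boxes, in pixels.
boxes_wh = np.array([
    [30, 35], [28, 31], [70, 60], [66, 58], [120, 140], [135, 125],
    [95, 30], [100, 34], [33, 96], [29, 110],
], dtype=float)

# Cluster the box sizes; the cluster centers serve as search-box size priors.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(boxes_wh)
for w, h in kmeans.cluster_centers_:
    print(f"prior size ~ {w:.0f} x {h:.0f}, aspect ratio ~ {w / h:.2f}")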
The foregoing has shown and described the general principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the embodiments and the description in the specification only illustrate the principle of the present invention, and various changes and modifications may be made without departing from the spirit and scope of the present invention, all of which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (5)

3. The method for segmenting the notebook appearance flaws based on deep learning of claim 1, characterized in that: the deformable convolution in step four mainly adds learning of offsets in the x and y directions to the original convolution unit, dynamically adjusts the size and position of the convolution kernel, takes as input a feature map produced by a standard convolution, performs a convolution operation on the feature map to generate N two-dimensional offsets (Δx, Δy), corrects the value of each point on the input feature map, with the feature map denoted P, i.e., P(x, y) = P(x + Δx, y + Δy), computes P(x + Δx, y + Δy) by bilinear interpolation when x + Δx is fractional, forms N feature maps, and convolves them one-to-one with N convolution kernels to obtain the output.

Priority Applications (1)

Application Number: CN202110282633.6A | Priority Date: 2021-03-16 | Filing Date: 2021-03-16 | Title: Notebook appearance flaw segmentation method based on deep learning

Applications Claiming Priority (1)

Application Number: CN202110282633.6A | Priority Date: 2021-03-16 | Filing Date: 2021-03-16 | Title: Notebook appearance flaw segmentation method based on deep learning

Publications (1)

Publication Number: CN112907560A | Publication Date: 2021-06-04

Family

ID=76105249

Family Applications (1)

Application Number: CN202110282633.6A (Pending) | Publication: CN112907560A (en) | Priority Date: 2021-03-16 | Filing Date: 2021-03-16

Country Status (1)

Country: CN | Link: CN112907560A (en)



Legal Events

Code: PB01 | Title: Publication
Code: SE01 | Title: Entry into force of request for substantive examination
