CN110555839B - Defect detection and identification method, device, computer equipment and storage medium - Google Patents

Defect detection and identification method, device, computer equipment and storage medium

Info

Publication number
CN110555839B
CN110555839B, CN201910843972.XA, CN201910843972A
Authority
CN
China
Prior art keywords
target product
product image
merging
target
domain
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910843972.XA
Other languages
Chinese (zh)
Other versions
CN110555839A (en)
Inventor
高斌斌
高立钊
贾佳亚
戴宇荣
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Cloud Computing Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Cloud Computing Beijing Co Ltd
Priority to CN201910843972.XA
Publication of CN110555839A
Application granted
Publication of CN110555839B
Legal status: Active


Abstract

The invention discloses a defect detection and identification method, apparatus, computer device, and storage medium, belonging to the field of computer vision detection and identification. The method obtains a mask map by segmenting the background and foreground of a target product image, locates a defect target in the image according to the spatial position distribution and number of connected domains in the mask map, and then identifies the target product image block corresponding to the target positioning frame. The segmentation step converts prediction of the defect shape and boundary into segmentation of defect foreground and background, and the defect positioning frame contains a defect foreground and image background that satisfy target conditions. The proposed defect positioning method locates defect positions more accurately, helps extract the main defect features, reduces the influence of mask noise and target product image background on defect type identification, and improves the accuracy of defect type identification.

Description

Defect detection and identification method, device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer vision inspection, and in particular, to a defect inspection and identification method, apparatus, computer device, and computer readable storage medium.
Background
Defect detection and identification are widely applied in industrial production and manufacturing, quality monitoring, and similar fields, for example liquid crystal panel defect identification, workpiece surface quality inspection, and cloth surface flaw identification. Through defect detection, defects on a product surface can be found in time for repair by maintenance personnel, ensuring product quality. However, to accurately judge whether product quality is acceptable, decide which processes need rework, and so on, careful analysis and fine-grained recognition of the images is required after target product images suspected of containing defects are obtained. A defect detection and identification method is therefore needed that automatically locates defects on the product surface and intelligently identifies defect types.
Existing defect detection and identification methods fall mainly into three categories. In the first, the original target product image is scaled to a fixed size and a convolutional neural network (CNN) identifies the defect type; in practice, the original image is first sampled with a sliding window to obtain target product image blocks, and a CNN then performs defect localization and defect type identification on each block. In the second, a cascade detection network is built with a target detection algorithm such as the Single-Shot Detector (SSD) or You Only Look Once (YOLO) to locate a target product image block, and a CNN then identifies the defect type of the located block. In the third, a cascade auto-encoder architecture performs image segmentation on the original image of the product surface to obtain a mask map, from which the minimum enclosing bounding box is derived to locate the defect; the located target product image block is finally fed to a CNN for accurate defect type identification.
In carrying out the invention, the inventors have found that the prior art has at least the following problems:
In the first method, the original image is used as input when sampling with a sliding window; accurate identification is difficult when defects are too small, and sliding-window localization yields only a rough defect position, so localization is inaccurate. In the second method, the cascade detection network built on a target detection algorithm struggles to segment the defect boundary and shape accurately, which affects precise localization of the core defect position. In the third method, when the mask map of the target product image contains noise points or the mask is scattered, the result of the minimum-enclosing-box localization method poorly expresses the defect's position, degrading the performance of defect type identification and making the identification result inaccurate.
Disclosure of Invention
The embodiment of the invention provides a defect detection and identification method, a device, computer equipment and a computer readable storage medium, which can solve the problems of inaccurate defect positioning and poor defect type identification precision in the related technology. The technical scheme is as follows:
in one aspect, a defect detection and identification method is provided, the method comprising:
acquiring a mask image of a target product image based on the target product image;
determining a target positioning frame in the target product image according to the spatial position distribution and the number of the connected domains in the mask image of the target product image, wherein the connected domains and the image background contained in the target positioning frame meet target conditions;
And identifying the target product image block corresponding to the target positioning frame in the target product image.
In one aspect, there is provided a defect detection and identification apparatus, the apparatus comprising:
the segmentation module is used for acquiring a mask image of the target product image based on the target product image;
The positioning module is used for determining a target positioning frame in the target product image according to the spatial position distribution and the number of the connected domains in the mask image of the target product image;
and the identification module is used for identifying the target product image block corresponding to the target positioning frame in the target product image.
In one possible implementation, the positioning module is further configured to:
when only one connected domain exists in the mask map of the target product image, determining the positioning frame of that connected domain as the target positioning frame;
when two or more connected domains exist in the mask map of the target product image, determining the target positioning frame according to the merge frame area ratio and the merge mask ratio, wherein the first connected domain is the largest connected domain in the mask map.
In one possible implementation, the positioning module is further configured to:
when the merge frame area ratio meets a first value range or the merge mask ratio meets a second value range, determining the positioning frame of the first connected domain as the target positioning frame;
when the merge frame area ratio does not meet the first value range and the merge mask ratio does not meet the second value range, determining the positioning frame of the first merged domain in the mask map, which contains the positioning frame of the first connected domain, as the target positioning frame, wherein the first merged domain is obtained by merging all the connected domains.
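The two-branch decision above can be sketched in plain Python. The concrete first and second value ranges are not given in the text, so the thresholds below are hypothetical placeholders, not values from the patent.

```python
def choose_target_frame(frame_b1, merged_frame, box_ratio, mask_ratio,
                        first_range=(0.55, 1.0), second_range=(0.30, 1.0)):
    """Keep the largest domain's frame b1 when either ratio falls in its
    value range; otherwise fall back to the merged domain's frame.
    first_range / second_range are hypothetical placeholder thresholds."""
    in_first = first_range[0] <= box_ratio <= first_range[1]
    in_second = second_range[0] <= mask_ratio <= second_range[1]
    return frame_b1 if (in_first or in_second) else merged_frame
```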
In a possible implementation manner, the positioning module is further configured to determine a positioning frame of the first connected domain in the mask map as an initial positioning frame;
the positioning module is further configured to determine, based on the nearest connected domain of the first connected domain, a positioning frame of a second merged domain, where the second merged domain includes the first merged domain and the nearest connected domain of the first connected domain;
a calculating module, configured to calculate the merge frame area ratio and the merge mask ratio of the second merge domain;
and the positioning module is further used for determining the enlarged positioning frame as the target positioning frame when the merge frame area ratio meets the first value range or the merge mask ratio meets the second value range.
In one possible implementation, the apparatus further includes:
The extraction module is used for extracting a feature map of the target product image through a convolutional neural network of the defect detection model;
the pyramid module is used for inputting the feature map into the spatial pyramid module of the defect detection model to obtain feature maps of different granularities of the target product image;
the up-sampling module is used for up-sampling the feature graphs with different granularities through the space pyramid module to obtain a final feature graph;
and the segmentation mask extraction module is used for extracting the defect mask with a 1x1 convolution layer based on the final feature map, obtaining the mask map of the target product image.
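As a rough illustration of the 1x1-convolution mask extraction, the sketch below applies a per-pixel linear classifier over a C-channel feature map and takes the per-pixel argmax of two class scores. The [C][H][W] layout and the weights are illustrative assumptions, not the model's trained parameters.

```python
def mask_from_features(feat, weights):
    """1x1 convolution over a C-channel feature map followed by a
    per-pixel argmax, yielding a binary background/foreground mask.
    feat has shape [C][H][W]; weights has shape [2][C], one weight
    vector per output class (0 = background, 1 = defect foreground)."""
    h, w = len(feat[0]), len(feat[0][0])
    mask = []
    for y in range(h):
        row = []
        for x in range(w):
            # A 1x1 convolution is a dot product across channels.
            scores = [sum(wc[c] * feat[c][y][x] for c in range(len(feat)))
                      for wc in weights]
            row.append(1 if scores[1] > scores[0] else 0)
        mask.append(row)
    return mask
```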
In one possible implementation, the apparatus further includes:
The cropping module is used for cropping a square target product image block centred on the target positioning frame, with side length equal to the longest side of the target positioning frame;
the identification module is also used for identifying the square target product image block.
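A plain-Python sketch of the square cropping step. Clamping the crop window to the image bounds is an assumption; the text does not specify border behaviour.

```python
def square_crop(image, box):
    """Crop a square patch centred on the positioning frame, with side
    length equal to the frame's longest side, clamped to image bounds.
    image is a list of rows; box is (x0, y0, x1, y1)."""
    h, w = len(image), len(image[0])
    x0, y0, x1, y1 = box
    side = max(x1 - x0, y1 - y0)
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    # Shift the window back inside the image if it would overflow.
    left = max(0, min(cx - side // 2, w - side))
    top = max(0, min(cy - side // 2, h - side))
    return [row[left:left + side] for row in image[top:top + side]]
```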
In one aspect, a computer device is provided that includes one or more processors and one or more memories having stored therein at least one program code loaded and executed by the one or more processors to implement the operations performed by the defect detection identification method.
In one aspect, a computer readable storage medium having stored therein at least one program code loaded and executed by a processor to perform operations performed by the defect detection identification method is provided.
A mask map is obtained by segmenting the background and foreground of the target product image; a defect target frame is located in the image according to the spatial position distribution and number of connected domains in the mask map; and the target product image block corresponding to the target positioning frame is then identified. The segmentation method converts prediction of defect shape and boundary into segmentation of defect foreground and background. This localization method determines the defect position accurately, helps extract the main defect features, reduces the influence of mask noise and target product image background on defect type identification, and improves the accuracy of defect type identification.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment of a defect detection and identification method according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a defect detection recognition model according to an embodiment of the present invention;
FIG. 3 is a flowchart of a defect detection and identification method according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a spatial pyramid module according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of connected domain merging according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a positioning result of a target positioning frame according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a defect detecting and identifying apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Artificial intelligence (AI) comprises the theory, methods, techniques, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce new intelligent machines that react in ways similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines can perceive, reason, and make decisions.
Artificial intelligence technology is a comprehensive discipline spanning a wide range of fields, covering both hardware-level and software-level technology. Basic artificial intelligence technologies generally include sensors, dedicated AI chips, cloud computing, distributed storage, big-data processing, operation/interaction systems, and mechatronics. AI software technology mainly comprises computer vision, speech processing, natural language processing, and machine learning/deep learning.
Computer vision (CV) is the science of how to make machines "see": replacing human eyes with cameras and computers to recognize and measure targets, and further processing the result into images better suited for human observation or for transmission to instruments for inspection. As a scientific discipline, computer vision studies theories and techniques for building artificial intelligence systems that can acquire information from images or multidimensional data. Computer vision techniques typically include image processing, image recognition, image semantic understanding, image retrieval, optical character recognition (OCR), video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, three-dimensional (3D) techniques, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric techniques such as face recognition and fingerprint recognition.
In the image field, semantics refers to the content of an image, and segmentation means separating the different objects in the image at the pixel level, identifying each pixel in the original image.
Machine learning (ML) is a multi-field interdiscipline involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and more. It studies how computers simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of AI. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
A CNN is a feedforward neural network whose artificial neurons respond to surrounding units within a local receptive field; it performs excellently on large-scale image processing. A CNN typically comprises convolution layers, pooling layers, normalization layers, dropout layers, activation function layers, and the like.
With the research and progress of artificial intelligence technology, AI has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart healthcare, and smart customer service. As technology develops, AI will be applied in more fields and play an increasingly important role.
Defect detection generally refers to detecting surface defects of an article, using advanced computer vision inspection technology to detect surface defects such as spots, pits, scratches, color differences, and flaws on a workpiece surface.
The scheme provided by the embodiment of the invention relates to the technologies of artificial intelligence, such as machine learning, computer vision technology and the like, and is specifically described by the following embodiments:
fig. 1 is a schematic diagram of an implementation environment of a defect detection and identification method according to an embodiment of the present invention, referring to fig. 1, the implementation environment includes: a computer device 101.
The computer device 101 may be at least one of a desktop graphics processor (Graphic Processing Unit, GPU) computer, a GPU computing cluster, a neural network computer, or the like. The related technician can use the computer equipment 101 to process the product image, find out the defective product, and ensure the product quality. The computer device 101 may process the image input therein, illustratively, the computer device 101 is connected to the camera assembly to automatically acquire the image and process the image, or the related technician may input the image into the computer device to process the image, which is not limited by the image acquisition mode of the present invention. Optionally, the computer device 101 may also have at least one image database, such as a defect type database, a defect image database, etc., for storing possible defect types and acquired defect images.
Fig. 2 is a schematic structural diagram of a defect detection model provided in an embodiment of the present invention. Referring to fig. 2, the defect detection and recognition model comprises, in order, a CNN, a spatial pyramid module, a segmentation processing layer, defect positioning, and defect recognition. The computer device feeds the input target product image to the CNN to obtain its feature map, which serves as the input of the spatial pyramid module. Multi-level pooling kernels in the spatial pyramid module produce feature maps of different granularities; a convolution layer in the module reduces the dimensionality of these maps; bilinear interpolation then upsamples the reduced maps; and the upsampled maps are concatenated as the module's output to obtain a final feature map. The segmentation processing layer processes the final feature map to obtain a mask map of the target product image, based on which defect positioning and defect type identification are performed.
Fig. 3 is a flowchart of a defect detection and identification method according to an embodiment of the present invention, referring to fig. 3, the method includes:
301. The computer device obtains an image of the target product.
It should be noted that, the computer device may acquire the target product image through the camera component connected with the computer device, or the related technician may input the target product image into the computer device, and the embodiment of the present invention does not limit a specific manner of acquiring the target product image.
302. The computer device obtains a feature map of the target product image based on the target product image.
In one possible implementation, the computer device performs feature extraction on the target product image through feature extraction layers, each of which may have a corresponding weight matrix. The computer device slides a window over the target product image to obtain the sub-image currently to be processed, multiplies the pixel values in the sub-image element-wise by the weight matrix and sums them to obtain the value of one feature point, and by sliding the window repeatedly outputs the feature map of one feature extraction layer. That feature map becomes the input of the next feature extraction layer, and so on; the feature map output by the last layer is taken as the feature map of the target product image. This procedure merely illustrates one possible implementation of feature extraction and does not limit the method adopted by embodiments of the invention.
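The sliding-window multiply-and-sum described above is an ordinary 2-D convolution. A minimal single-channel, no-padding sketch:

```python
def conv2d(image, kernel, stride=1):
    """Sliding-window feature extraction: at each window position the
    pixel values are multiplied element-wise by the weight matrix and
    summed, giving one feature-map value (no padding, single channel)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(0, len(image) - kh + 1, stride):
        row = []
        for x in range(0, len(image[0]) - kw + 1, stride):
            row.append(sum(image[y + i][x + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```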
In step 302, multiple feature extraction layers may be used to implement feature extraction, and a network of more layers may iteratively extract more complex features from low-level features.
303. The computer equipment inputs the feature map into a space pyramid module to obtain feature maps with different granularities of the target product image.
It should be noted that the main purpose of the spatial pyramid module is to integrate context information of different levels to enrich the feature representation of the image. Fig. 4 is a schematic diagram of a specific structure of a spatial pyramid module according to an embodiment of the present invention, where the spatial pyramid module uses a plurality of hierarchical pooling kernels, and may obtain feature maps with different granularities, as shown in fig. 4, where the spatial pyramid module is a part of a defect detection model used in the mask prediction process in fig. 2.
The context information may be some or all of information that can affect the scene and the object in the image, and the context information is not directly obtained from the appearance of the object, but is obtained from data in the neighborhood, labels of the object, spatial position distribution of the object, or statistical data information. In the actual process, the interaction information between different objects can be captured, and the interaction information between the objects and the scene is used as a condition to identify and process the targets.
304. The computer device obtains a final feature map based on the feature maps of different granularities.
In one possible implementation manner, the computer device uses a convolution layer to perform dimension reduction processing on feature graphs with different granularities to obtain a feature graph after dimension reduction, uses bilinear interpolation to perform up-sampling processing on the feature graph after dimension reduction, and finally connects the feature graphs after up-sampling processing in series to obtain an output of the spatial pyramid module as a final feature graph.
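A toy sketch of the pyramid pooling step, using nearest-neighbour upsampling in place of the bilinear interpolation mentioned in the text, and a list of maps in place of channel-wise tensor concatenation:

```python
def pyramid_pool(feat, grid_sizes=(1, 2)):
    """Average-pool a single-channel feature map onto several grids,
    upsample each pooled map back to full size (nearest neighbour here;
    the text uses bilinear), and return the input plus all upsampled
    maps, standing in for the serial concatenation."""
    h, w = len(feat), len(feat[0])
    outputs = [feat]
    for g in grid_sizes:
        pooled = [[0.0] * g for _ in range(g)]
        for gy in range(g):
            for gx in range(g):
                y0, y1 = gy * h // g, (gy + 1) * h // g
                x0, x1 = gx * w // g, (gx + 1) * w // g
                cells = [feat[y][x] for y in range(y0, y1)
                                    for x in range(x0, x1)]
                pooled[gy][gx] = sum(cells) / len(cells)
        # Nearest-neighbour upsample back to (h, w).
        outputs.append([[pooled[y * g // h][x * g // w] for x in range(w)]
                        for y in range(h)])
    return outputs
```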
The final feature map is a final feature representation, which contains information about local and global contexts.
305. The computer device obtains a mask map of the target product image based on the final feature map.
In one possible implementation, the computer device uses the final feature map as an input to a segmentation processing layer of a defect detection model to obtain a mask map of the target product image, the mask map being a mask prediction result at a pixel level.
It should be noted that, the defect detection model may be a neural network model based on a depth semantic segmentation algorithm, for example, the depth semantic segmentation algorithm may be a two-class semantic segmentation algorithm, through which a mask image of the target product image may be obtained, so as to implement prediction of a pixel level of the target product image.
It should be noted that, the above steps 302 to 305 may be replaced by other methods to predict the defect mask map, and the embodiment of the present invention does not limit what method is specifically adopted, for example, a template matching method may be used to implement the prediction of the defect mask map.
306. The computer device detects the spatial position distribution and the number of connected domains in the mask map of the target product image, performs step 307 when only one connected domain in the mask map of the target product image is detected, and performs step 308 when two or more connected domains in the mask map of the target product image are detected.
It should be noted that, the computer device may determine, according to pixel values of different pixel points in the mask map of the target product image, pixel points having the same or similar pixel values and being adjacent to each other in position, so as to determine positions of connected domains, and pixel points having different or less similar pixel values may form different connected domains.
In the embodiment of the present invention, assume n connected domains exist in the mask map of the target product image; the set of n connected domains may be denoted C = {c1, c2, …, cn}, where n may be any positive integer greater than or equal to 1.
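The connected-domain set C = {c1, …, cn} can be computed with a standard flood fill. A plain-Python sketch; the text does not fix the connectivity, so 4-neighbourhood is an assumption:

```python
from collections import deque

def connected_domains(mask):
    """Label 4-connected foreground regions in a binary mask.
    Returns a list of pixel sets [c1, c2, ..., cn]; an empty list
    corresponds to the empty set C mentioned in the text."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    domains = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill from this seed pixel.
                comp, q = set(), deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                domains.append(comp)
    return domains
```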
It should be noted that, there may be a case where there is no connected domain in the mask map of the target product image, that is, C is an empty set, and in this case, the computer device may not execute the subsequent steps any more.
307. The computer device performs step 314 with the location frame of the connected domain as the target location frame.
In one possible implementation, when the computer device detects that only one connected domain exists in the mask map of the target product image, the positioning frame of that connected domain already contains all defects in the mask map to the greatest extent. The computer device may therefore predefine a positioning frame of arbitrary size and position in the mask map as the target positioning frame m, and then update its position to the positioning frame b1 of the connected domain, i.e. m ← b1.
It should be noted that, in the mask chart of the target product image, the positioning frame of any one of the connected domains ci may be denoted as bi, and the positioning frame of the connected domain may be denoted as b1, and the target positioning frame may be denoted as m.
308. The computer device determines a first connected domain from the two or more connected domains, determines a positioning frame of the first connected domain as an initial target positioning frame, and the first connected domain is a largest connected domain in the mask map.
In one possible implementation, the computer device detects the spatial position distribution and number of pixels contained in each of the two or more connected domains, determines the area of each connected domain from them, finds the largest connected domain in the mask map as the first connected domain, and determines its positioning frame as the initial positioning frame. For example, the computer device may take the largest connected domain in the mask map of the target product image as the first connected domain, i.e. the ci with i = arg maxi area(ci), and take the positioning frame of this largest connected domain as the initial positioning frame, i.e. m ← bi.
Here, arg maxi area(ci) refers to the value of i at which area(ci) reaches its maximum, and area(ci) denotes the area of the connected domain ci.
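A minimal sketch of the arg max selection and the positioning frame of the resulting first connected domain, with area(ci) taken as the pixel count of ci:

```python
def bounding_box(domain):
    """Axis-aligned positioning frame (x0, y0, x1, y1) of a pixel set
    of (row, col) pairs."""
    ys = [y for y, _ in domain]
    xs = [x for _, x in domain]
    return (min(xs), min(ys), max(xs), max(ys))

def largest_domain(domains):
    """The c_i maximizing area(c_i), with area taken as pixel count."""
    return max(domains, key=len)
```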
It should be noted that, when the computer device detects two or more connected domains in the mask map of the target product image, the embodiment of the invention balances between the positioning frame of the first connected domain and the positioning frame of all connected domains. The core idea is to adopt the largest connected domain as the initial solution for the defect position and keep absorbing neighboring connected domains until a certain balance is reached.
309. The computer device determines a second connected domain closest to the first connected domain and a positioning frame of the second connected domain based on the first connected domain.
In one possible implementation manner, the computer device may detect the center and the boundary of each connected domain in the mask map and, in combination with the spatial position distribution information of the connected domains, determine the connected domain whose distance from the center and boundary of the first connected domain is smallest, and determine that connected domain as the second connected domain.
Fig. 5 is a schematic diagram of connected domain merging provided in the embodiment of the present invention. Referring to fig. 5, the target positioning frame m and the positioning frame b corresponding to the connected domain closest to the target positioning frame are represented by rectangular frames, and the masks included in the rectangular frames are circular and crescent-shaped respectively. The circular mask region is the first connected domain, and the crescent mask region is the second connected domain.
310. The computer device determines a merge frame area ratio and a merge mask ratio of the first and second connected domains. The merge frame area ratio is used to represent the ratio between the union area of the positioning frames of the connected domains and the area of the positioning frame of the merged domain after the connected domains are merged. The merge mask ratio is used to represent the ratio between the sum of the mask areas of the connected domains and the area of the positioning frame of the merged domain after the connected domains are merged.
It should be noted that the merge frame area ratio is defined as the ratio of the union area of the target positioning frame and the positioning frame of the connected domain closest to it to the area of the positioning frame of the merged domain, namely area(uni)/area(clo), where area() may represent an area, the connected domain closest to the target positioning frame may be denoted as c, its corresponding positioning frame may be denoted as b, uni may represent the union region of b and the target positioning frame m, namely uni ≡ m ∪ b, and clo may represent the minimum enclosing frame of m and b, namely clo ≡ [m, b].
It should be noted that the merge mask ratio is defined as the ratio of the defect mask area within the target positioning frame and the positioning frame of the connected domain closest to it to the area of the merged domain, that is, mask(clo)/area(clo), where mask() may represent the defect mask area, area() may represent an area, the connected domain closest to the target positioning frame may be denoted as c, its corresponding positioning frame may be denoted as b, and clo may represent the minimum enclosing frame of m and b.
Referring to the connected domain merging schematic diagram shown in fig. 5, the merge frame area ratio is the ratio of the area jointly covered by frames m and b to the positioning frame area of the merged domain, and the merge mask ratio is the ratio of the sum of the circular and crescent mask areas to the positioning frame area of the merged domain.
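The two statistics above can be computed as follows. This is a sketch under the assumption that frames are tuples (row0, col0, row1, col1) with half-open extents; the helper names are illustrative, not from the patent.

```python
# Sketch of the merging statistics: area(uni)/area(clo) and mask(clo)/area(clo),
# where uni = m ∪ b and clo = [m, b] is the minimum enclosing frame.
import numpy as np

def frame_area(f):
    return max(0, f[2] - f[0]) * max(0, f[3] - f[1])

def enclosing_frame(m, b):
    """clo = [m, b]: smallest frame containing both positioning frames."""
    return (min(m[0], b[0]), min(m[1], b[1]), max(m[2], b[2]), max(m[3], b[3]))

def union_area(m, b):
    """area(uni) with uni = m ∪ b (inclusion-exclusion on the overlap)."""
    ov_h = max(0, min(m[2], b[2]) - max(m[0], b[0]))
    ov_w = max(0, min(m[3], b[3]) - max(m[1], b[1]))
    return frame_area(m) + frame_area(b) - ov_h * ov_w

def merge_ratios(m, b, mask):
    clo = enclosing_frame(m, b)
    a_clo = frame_area(clo)
    frame_ratio = union_area(m, b) / a_clo                          # area(uni)/area(clo)
    mask_ratio = mask[clo[0]:clo[2], clo[1]:clo[3]].sum() / a_clo   # mask(clo)/area(clo)
    return frame_ratio, mask_ratio

mask = np.zeros((10, 10), dtype=np.uint8)
mask[0:2, 0:2] = 1                     # domain inside frame m
mask[8:10, 8:10] = 1                   # domain inside frame b
m, b = (0, 0, 2, 2), (8, 8, 10, 10)
fr, mr = merge_ratios(m, b, mask)      # far-apart domains give small ratios
```

When the two frames are far apart, the enclosing frame is mostly empty, so both ratios become small and — per steps 311 to 313 — merging stops.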
It should be noted that, according to the calculated values of the merge frame area ratio and the merge mask ratio, the computer device may determine the target positioning frame by adopting the defect positioning strategy based on the spatial distribution of the defect mask map provided by the present invention. The specific implementation is shown in steps 311 to 313:
311. When the computer device detects that the merge frame area ratio satisfies the first value range or the merge mask ratio satisfies the second value range, step 312 is executed; otherwise, step 313 is executed.
It should be noted that the computer device may preset two thresholds, denoted as the merge frame area ratio threshold τ1 and the merge mask ratio threshold τ2, respectively. The merge frame area ratio satisfying the first value range may mean that the merge frame area ratio is smaller than the threshold τ1, and the merge mask ratio satisfying the second value range may mean that the merge mask ratio is smaller than the threshold τ2.
312. The computer device determines the location box of the first connected domain as a target location box, and performs step 314.
It should be noted that there may be a special case where the merge frame area ratio and the merge mask ratio are both 1. When the computer device detects that both ratios are 1, it may determine that the mask map contains only one connected domain, namely the first connected domain; therefore, the computer device may directly determine the positioning frame of the first connected domain as the target positioning frame. Fig. 6 is a schematic diagram of a positioning result of a target positioning frame according to an embodiment of the present invention; referring to fig. 6, the rectangular frame indicated by 601 is a target positioning frame.
313. The computer device takes the merged domain of the first and second connected domains as the new first connected domain, and continues to execute step 309 and the subsequent steps.
In one possible implementation manner, the computer device merges the second connected domain nearest to the first connected domain according to the comparison result, expands the representative region of the connected domain, and updates the target positioning frame, that is, m ← [m, b], where [m, b] may represent the minimum enclosing frame of frames m and b.
It should be noted that the computer device may preset two thresholds, denoted as the merge frame area ratio threshold τ1 and the merge mask ratio threshold τ2. When the computer device detects that the merge frame area ratio satisfies the first value range or the merge mask ratio satisfies the second value range, that is, the merge frame area ratio is smaller than τ1 or the merge mask ratio is smaller than τ2, it is unnecessary to search for the connected domain closest to the currently determined connected domain, and the positioning frame of the currently determined connected domain is the target positioning frame. Referring to fig. 6, the rectangular frame indicated by 603 is a target positioning frame.
There may also be a special case where the merge frame area ratio and the merge mask ratio are both 0. When the computer device detects that both ratios are 0, it determines the positioning frame of the merged domain of the first connected domain and the second connected domain in the mask map as the target positioning frame; referring to fig. 6, the rectangular frame indicated by 602 is a target positioning frame.
It should be noted that the above steps 308 to 313 provide a cyclic processing procedure. When the number of connected domains is 1, the target positioning frame may be directly determined as the positioning frame of that connected domain. When there are multiple connected domains, the target positioning frame may be determined through the cyclic procedure: in each iteration, the current largest connected domain and its nearest connected domain are merged and judged against the loop cutoff conditions. If any cutoff condition is met, the positioning frame of the currently merged connected domain is taken as the target positioning frame; if not, the currently merged connected domain is taken as the first connected domain and step 309 and the subsequent steps are executed again, until the merge frame area ratio is smaller than the threshold τ1, or the merge mask ratio is smaller than the threshold τ2, or no un-merged connected domain remains, at which point the positioning frame of the merged connected domain is taken as the target positioning frame.
It should be noted that, in the above process, the positional relationship between the connected domains and the areas of the connected domains may be determined by detecting the spatial position distribution of the connected domains in step 308. The computer device may obtain a connected domain set C = {c1, c2, …, cn} based on the detected connected domains. During the cyclic procedure, each time a merge is performed, the merged connected domain may be deleted from the set; when the set becomes empty, it may be determined that no un-merged connected domain remains, and the cyclic procedure stops.
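The cyclic procedure of steps 308 to 313 can be sketched end to end as follows. This is a self-contained illustration, not the patent's implementation: domains are given as lists of (row, col) pixels, the nearest-domain search uses centroid distance, and the thresholds tau1 and tau2 are illustrative values, all assumptions for the sketch.

```python
# Sketch of the merging loop: start from the largest domain, repeatedly absorb
# the nearest domain while merging remains "dense", stop when a cutoff fires.
import numpy as np

def frame_of(pixels):
    ys = [p[0] for p in pixels]; xs = [p[1] for p in pixels]
    return (min(ys), min(xs), max(ys) + 1, max(xs) + 1)   # half-open frame

def area(f):
    return (f[2] - f[0]) * (f[3] - f[1])

def enclose(m, b):
    return (min(m[0], b[0]), min(m[1], b[1]), max(m[2], b[2]), max(m[3], b[3]))

def union_area(m, b):
    ov = max(0, min(m[2], b[2]) - max(m[0], b[0])) * max(0, min(m[3], b[3]) - max(m[1], b[1]))
    return area(m) + area(b) - ov

def locate_target_frame(domains, mask, tau1=0.5, tau2=0.3):
    domains = sorted(domains, key=len, reverse=True)
    merged_pixels = list(domains[0])       # largest domain is the initial solution
    m = frame_of(merged_pixels)
    pool = domains[1:]                     # connected domain set C minus c_first
    while pool:
        cm = np.mean(merged_pixels, axis=0)
        nearest = min(pool, key=lambda d: np.linalg.norm(np.mean(d, axis=0) - cm))
        b = frame_of(nearest)
        clo = enclose(m, b)
        frame_ratio = union_area(m, b) / area(clo)
        mask_ratio = mask[clo[0]:clo[2], clo[1]:clo[3]].sum() / area(clo)
        if frame_ratio < tau1 or mask_ratio < tau2:
            break                          # absorbing this domain is wasteful: stop
        m = clo                            # m <- [m, b]: absorb the neighbor
        merged_pixels += nearest
        pool.remove(nearest)               # delete the merged domain from the set
    return m

mask = np.zeros((12, 12), dtype=np.uint8)
mask[0:4, 0:4] = 1; mask[0:4, 5:9] = 1; mask[10:12, 10:12] = 1
doms = [[(r, c) for r in range(0, 4) for c in range(0, 4)],
        [(r, c) for r in range(0, 4) for c in range(5, 9)],
        [(r, c) for r in range(10, 12) for c in range(10, 12)]]
target = locate_target_frame(doms, mask)   # absorbs the adjacent domain, skips the far one
```

In the example, the two adjacent domains are merged (high ratios), while absorbing the distant corner domain would leave the enclosing frame mostly empty, so the loop stops before including it.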
314. The computer device intercepts a square target product image block centered on the center of the target positioning frame, taking the longest side of the target positioning frame as the side length.
It should be noted that, when the segmentation processing layer of the defect detection model is used to process the target product image block, the image block is required to be square, so a square image block needs to be cut out from the mask map of the target product image. In addition, since the square target product image block is cut out with the longest side of the target positioning frame as the side length, all the connected domains used to determine the target positioning frame can be included in the square block, which ensures the accuracy of defect type identification.
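Step 314 can be sketched as follows; clamping the crop at the image border when the square would extend past it is an assumption for illustration, as is the (row0, col0, row1, col1) half-open frame convention.

```python
# Sketch: cut a square block centred on the target positioning frame,
# with side length equal to the frame's longest side.
import numpy as np

def square_patch(image, frame):
    """frame = (row0, col0, row1, col1), half-open. Returns a side x side crop."""
    h, w = image.shape[:2]
    side = max(frame[2] - frame[0], frame[3] - frame[1])   # longest side of the frame
    cy = (frame[0] + frame[2]) // 2
    cx = (frame[1] + frame[3]) // 2
    r0 = int(np.clip(cy - side // 2, 0, h - side))         # keep the crop inside the image
    c0 = int(np.clip(cx - side // 2, 0, w - side))
    return image[r0:r0 + side, c0:c0 + side]

img = np.arange(100).reshape(10, 10)
patch = square_patch(img, (2, 3, 6, 9))    # 4 x 6 frame -> 6 x 6 square block
```

Because the side length is the frame's longest side, the resulting square always covers the full target positioning frame, so every merged connected domain stays inside the block.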
315. The computer device identifies the square target product image block.
In one possible implementation, the computer device scales the target product image block to a fixed size, determines the bounding box of the defect mask in the target product image block, and identifies the defect type by combining the detected bounding box with the defect type data obtained through training.
It should be noted that the foregoing identification steps may be replaced by other defect type identification methods, and the embodiment of the present invention is not limited to the specific method adopted. For example, handcrafted features such as Scale-Invariant Feature Transform (SIFT), Histogram of Oriented Gradient (HOG), gray-level co-occurrence matrix, and wavelet features may be adopted; machine learning methods such as a multi-class support vector machine or a random forest may be adopted; and deep learning methods such as convolutional neural networks may also be used to identify the defect types.
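To illustrate that the recognizer is replaceable, the following is a minimal nearest-centroid classifier over fixed-size defect blocks. It stands in for the SVM, random forest, or CNN options listed above; the toy classes, the nearest-neighbour resize, and all names are assumptions of this sketch.

```python
# Sketch: rescale each block to a fixed size, flatten to a feature vector,
# and classify by distance to per-class centroids (a trivial stand-in
# for the machine-learning recognizers mentioned above).
import numpy as np

def resize_nn(block, size=16):
    """Nearest-neighbour rescale of a block to size x size."""
    h, w = block.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return block[np.ix_(ys, xs)]

def fit_centroids(blocks, labels):
    feats = np.stack([resize_nn(b).ravel().astype(float) for b in blocks])
    labels = np.asarray(labels)
    return {k: feats[labels == k].mean(axis=0) for k in np.unique(labels)}

def predict(block, centroids):
    f = resize_nn(block).ravel().astype(float)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

rng = np.random.default_rng(0)
bright = [rng.uniform(0.7, 1.0, (20, 20)) for _ in range(5)]   # toy "scratch" class
dark = [rng.uniform(0.0, 0.3, (24, 24)) for _ in range(5)]     # toy "pit" class
cents = fit_centroids(bright + dark, ["scratch"] * 5 + ["pit"] * 5)
pred = predict(rng.uniform(0.7, 1.0, (18, 18)), cents)
```

Scaling every block to a common size before feature extraction mirrors the fixed-size scaling described in the implementation above; any of the listed feature extractors and classifiers could be substituted at the same two points.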
The method obtains a mask map by segmenting the background and the foreground in the target product image, positions the defect target in the target product image according to the spatial position distribution and the number of connected domains in the mask map, and then identifies the target product image block corresponding to the target positioning frame. The segmentation method converts the prediction of the defect shape and boundary into the segmentation of the defect foreground and background, achieving a more accurate prediction of the defect mask. The defect positioning block comprises a defect foreground and an image background that meet the target conditions. Meanwhile, the defect positioning method provided by the invention can position the defect more accurately, is beneficial to extracting the main defect features, reduces the influence of mask noise and the target product image background on defect type identification, and improves the accuracy of defect type identification. The method supports the identification of defects of various morphologies, realizes high-precision identification of fine defects, and shows good classification performance especially for very small defects and defects with similar appearance characteristics.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present invention, which is not described herein.
Fig. 7 is a schematic diagram of a defect detecting and identifying device according to an embodiment of the present invention, referring to fig. 7, the device includes:
An obtaining module 701, configured to obtain a mask map of a target product image based on the target product image;
A determining module 702, configured to determine a target positioning frame in the target product image according to the spatial position distribution and the number of connected domains in the mask map of the target product image;
The identifying module 703 is configured to identify a target product image block corresponding to the target positioning frame in the target product image.
In one possible implementation, the determining module is further configured to:
when only one connected domain exists in the mask map of the target product image, determining a positioning frame of the connected domain as the target positioning frame, wherein the first connected domain is the largest connected domain in the mask map;
when two or more connected domains exist in the mask map of the target product image, the target positioning frame is determined according to the area occupation ratio of the merging frame and the merging mask occupation ratio.
In one possible implementation, the positioning module is further configured to:
When the area ratio of the merging frame meets a first value range or the area ratio of the merging mask meets a second value range, determining a positioning frame of the first connected domain as a target positioning frame;
when the merge frame area ratio does not satisfy the first value range and the merge mask ratio does not satisfy the second value range, determining, as the target positioning frame, a positioning frame in the mask map that lies within the positioning frame of the first merged domain and includes the positioning frame of the first connected domain, wherein the first merged domain is obtained by merging all the connected domains.
In a possible implementation manner, the positioning module is further configured to determine a positioning frame of the first connected domain in the mask map as an initial positioning frame;
the positioning module is further configured to determine, based on the nearest connected domain of the first connected domain, a positioning frame of a second merged domain, where the second merged domain includes the first merged domain and the nearest connected domain of the first connected domain;
a calculating module, configured to calculate the merge frame area ratio and the merge mask ratio of the second merge domain;
And the positioning module is further used for determining the amplified positioning frame as the target positioning frame when the area ratio of the combined frame meets the first value range or the combined mask meets the second value range.
In one possible implementation, the apparatus further includes:
The extraction module is used for extracting a feature map of the target product image through a convolutional neural network of the defect detection model;
The pyramid module is used for inputting the feature map into the spatial pyramid module of the defect detection model to obtain feature maps of different granularities of the target product image;
the up-sampling module is used for up-sampling the feature graphs with different granularities through the space pyramid module to obtain a final feature graph;
And the segmentation extraction module is used for obtaining a mask image of the target product image based on the final feature image and the convolution layer.
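The pipeline of the extraction, pyramid, up-sampling, and segmentation extraction modules can be illustrated with a toy NumPy sketch of the spatial-pyramid idea: pool a feature map at several granularities, up-sample each pooled map back to the original resolution, and fuse them into a final map. The grid sizes and mean-fusion are illustrative assumptions, not the model's actual layers.

```python
# Toy sketch of spatial pyramid pooling + up-sampling fusion on one channel.
import numpy as np

def avg_pool(fm, k):
    """Average-pool an (H, W) map into a (k, k) grid (H, W divisible by k)."""
    h, w = fm.shape
    return fm.reshape(k, h // k, k, w // k).mean(axis=(1, 3))

def upsample_nn(fm, h, w):
    """Nearest-neighbour up-sampling back to (h, w)."""
    ys = np.arange(h) * fm.shape[0] // h
    xs = np.arange(w) * fm.shape[1] // w
    return fm[np.ix_(ys, xs)]

def pyramid_features(fm, grids=(1, 2, 4)):
    h, w = fm.shape
    coarse = [upsample_nn(avg_pool(fm, k), h, w) for k in grids]  # granularities
    return np.mean([fm] + coarse, axis=0)                         # fused final map

fm = np.random.default_rng(1).normal(size=(8, 8))
final = pyramid_features(fm)   # same spatial size, multi-granularity context
```

The coarse grids summarize context at different scales, and fusing them with the original map gives each pixel both local detail and global context before the final convolution produces the mask.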
In one possible implementation, the apparatus further includes:
The intercepting module is used for intercepting a square target product image block by taking the longest side of the target positioning frame as the side length based on the center of the target positioning frame;
the identification module is also used for identifying the square target product image block.
The device obtains a mask map by segmenting the background and the foreground in the target product image, positions the defect target in the target product image according to the spatial position distribution and the number of connected domains in the mask map, and then identifies the target product image block corresponding to the target positioning frame. The segmentation converts the prediction of the defect shape and boundary into the segmentation of the defect foreground and background. The defect positioning block comprises a defect foreground and an image background that meet the target conditions. Meanwhile, the defect positioning method provided by the invention can position the defect more accurately, is beneficial to extracting the main defect features, reduces the influence of mask noise and the target product image background on defect type identification, and improves the accuracy of defect type identification.
It should be noted that: in the defect detection and identification device provided in the above embodiment, only the division of the above functional modules is used for illustration, and in practical application, the above functional allocation may be performed by different functional modules according to needs, that is, the internal structure of the computer device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the defect detection and identification device and the defect detection and identification method provided in the foregoing embodiments belong to the same concept, and detailed implementation processes of the defect detection and identification device and the defect detection and identification method are detailed in the method embodiments and are not repeated here.
Fig. 8 is a schematic structural diagram of a computer device according to an embodiment of the present invention. The computer device 800 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III, motion picture expert compression standard audio plane 3), an MP4 (Moving Picture Experts Group Audio Layer IV, motion picture expert compression standard audio plane 4) player, a notebook computer, or a desktop computer. Computer device 800 may also be referred to by other names as user device, portable computer device, laptop computer device, desktop computer device, etc.
In general, the computer device 800 includes: one or more processors 801, and one or more memories 802.
Processor 801 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and the like. The processor 801 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 801 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 801 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content required to be displayed by the display screen. In some embodiments, the processor 801 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 802 may include one or more computer-readable storage media, which may be non-transitory. Memory 802 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 802 is used to store at least one program code for execution by processor 801 to implement the defect detection identification method provided by the method embodiments of the present invention.
In some embodiments, the computer device 800 may optionally further include: a peripheral interface 803, and at least one peripheral. The processor 801, the memory 802, and the peripheral interface 803 may be connected by a bus or signal line. Individual peripheral devices may be connected to the peripheral device interface 803 by buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 804, a display 805, a camera 806, audio circuitry 807, and a power supply 809.
Peripheral interface 803 may be used to connect at least one Input/Output (I/O) related peripheral to processor 801 and memory 802. In some embodiments, processor 801, memory 802, and peripheral interface 803 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 801, the memory 802, and the peripheral interface 803 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 804 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 804 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 804 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 804 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 804 may communicate with other computer devices via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 804 may further include NFC (Near Field Communication) related circuits, which is not limited by the present invention.
The display 805 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 805 is a touch display, the display 805 also has the ability to collect touch signals at or above the surface of the display 805. The touch signal may be input as a control signal to the processor 801 for processing. At this time, the display 805 may also be used to provide virtual buttons and/or virtual keyboards, also referred to as soft buttons and/or soft keyboards. In some embodiments, the display 805 may be one, providing a front panel of the computer device 800; in other embodiments, the display 805 may be at least two, respectively disposed on different surfaces of the computer device 800 or in a folded design; in still other embodiments, the display 805 may be a flexible display disposed on a curved surface or a folded surface of the computer device 800. Even more, the display 805 may be arranged in an irregular pattern other than rectangular, i.e., a shaped screen. The display 805 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 806 is used to capture images or video. Optionally, the camera assembly 806 includes a front camera and a rear camera. Typically, the front camera is disposed on a front panel of the computer device and the rear camera is disposed on a rear surface of the computer device. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and Virtual Reality (VR) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 806 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation at different color temperatures.
Audio circuitry 807 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and the environment, converting the sound waves into electric signals, inputting the electric signals to the processor 801 for processing, or inputting the electric signals to the radio frequency circuit 804 for voice communication. For purposes of stereo acquisition or noise reduction, the microphone may be multiple, each disposed at a different location of the computer device 800. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 801 or the radio frequency circuit 804 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 807 may also include a headphone jack.
The power supply 809 is used to power the various components in the computer device 800. The power supply 809 may be an alternating current, direct current, disposable battery, or rechargeable battery. When the power supply 809 includes a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the computer device 800 also includes one or more sensors 810. The one or more sensors 810 include, but are not limited to: acceleration sensor 811, gyroscope sensor 812, pressure sensor 813, optical sensor 815, and proximity sensor 816.
The acceleration sensor 811 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the computer device 800. For example, the acceleration sensor 811 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 801 may control the display screen 805 to display a user interface in a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 811. Acceleration sensor 811 may also be used for the acquisition of motion data of a game or user.
The gyro sensor 812 may detect a body direction and a rotation angle of the computer device 800, and the gyro sensor 812 may collect a 3D motion of the user on the computer device 800 in cooperation with the acceleration sensor 811. The processor 801 may implement the following functions based on the data collected by the gyro sensor 812: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
Pressure sensor 813 may be disposed on a side frame of computer device 800 and/or on an underlying layer of display 805. When the pressure sensor 813 is disposed on a side frame of the computer device 800, a grip signal of the computer device 800 by a user may be detected, and the processor 801 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 813. When the pressure sensor 813 is disposed at the lower layer of the display screen 805, the processor 801 controls the operability control on the UI interface according to the pressure operation of the user on the display screen 805. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The optical sensor 815 is used to collect the ambient light intensity. In one embodiment, the processor 801 may control the display brightness of the display screen 805 based on the intensity of ambient light collected by the optical sensor 815. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 805 is turned up; when the ambient light intensity is low, the display brightness of the display screen 805 is turned down. In another embodiment, the processor 801 may also dynamically adjust the shooting parameters of the camera module 806 based on the ambient light intensity collected by the optical sensor 815.
A proximity sensor 816, also referred to as a distance sensor, is typically provided on the front panel of the computer device 800. The proximity sensor 816 is used to collect the distance between the user and the front of the computer device 800. In one embodiment, when the proximity sensor 816 detects a gradual decrease in the distance between the user and the front of the computer device 800, the processor 801 controls the display 805 to switch from the bright screen state to the off screen state; when the proximity sensor 816 detects that the distance between the user and the front of the computer device 800 gradually increases, the processor 801 controls the display 805 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is not limiting and that more or fewer components than shown may be included or that certain components may be combined or that a different arrangement of components may be employed.
In an exemplary embodiment, a computer readable storage medium, such as a memory including program code executable by a processor to perform the defect detection identification method of the above embodiment, is also provided. For example, the computer readable storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only Memory (CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
It will be appreciated by those of ordinary skill in the art that all or part of the steps of implementing the above embodiments may be implemented by hardware, or may be implemented by program code related hardware, where the program may be stored in a computer readable storage medium, where the storage medium may be a read only memory, a magnetic disk or optical disk, etc.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.

Claims (8)

CN201910843972.XA · 2019-09-06 · Defect detection and identification method, device, computer equipment and storage medium · Active · CN110555839B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910843972.XA (CN110555839B) | 2019-09-06 | 2019-09-06 | Defect detection and identification method, device, computer equipment and storage medium

Publications (2)

Publication Number | Publication Date
CN110555839A | 2019-12-10
CN110555839B | 2024-11-15

Family

ID=68739379

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date | Title
CN201910843972.XA | Active | CN110555839B | 2019-09-06 | 2019-09-06 | Defect detection and identification method, device, computer equipment and storage medium

Country Status (1)

Country | Link
CN (1) | CN110555839B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN111105410A (en) * | 2019-12-27 | 2020-05-05 | Second Affiliated Hospital of Army Medical University, Chinese PLA | Device and method for determining proportion of hematopoietic tissue based on bone marrow biopsy images
CN111105411B (en) * | 2019-12-30 | 2023-06-23 | Innovation Qizhi (Qingdao) Technology Co., Ltd. | Magnetic shoe surface defect detection method
CN111179253B (en) * | 2019-12-30 | 2023-11-24 | Goertek Inc. | Product defect detection method, device and system
CN111325713B (en) * | 2020-01-21 | 2023-05-23 | Hangzhou Weiming Xinke Technology Co., Ltd. | Wood defect detection method, system and storage medium based on neural network
CN111353983B (en) * | 2020-02-28 | 2023-05-23 | Tencent Technology (Shenzhen) Co., Ltd. | Defect detection identification method, device, computer readable medium and electronic equipment
CN111444921A (en) * | 2020-03-25 | 2020-07-24 | Zhejiang Huaray Technology Co., Ltd. | Scratch defect detection method, device, computing device and storage medium
CN111489348B (en) * | 2020-04-16 | 2023-01-20 | Innovation Qizhi (Chongqing) Technology Co., Ltd. | Method and device for simulating surface defects of magnetic material product
CN113538450B (en) * | 2020-04-21 | 2023-07-21 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and device for generating image
TWI732618B (en) * | 2020-07-02 | 2021-07-01 | TUL Corporation | Image recognition method and system
CN112287452B (en) * | 2020-10-12 | 2022-05-24 | Harbin Institute of Technology | Spacecraft maintainability intelligent modeling method
CN112461130A (en) * | 2020-11-16 | 2021-03-09 | Beijing Pingheng Intelligent Technology Co., Ltd. | Positioning method for visual inspection tool frame of adhesive product
CN114549390A (en) * | 2020-11-25 | 2022-05-27 | Hongfujin Precision Electronics (Chengdu) Co., Ltd. | Circuit board detection method, electronic device and storage medium
TWI748828B (en) * | 2020-12-29 | 2021-12-01 | Hon Hai Precision Industry Co., Ltd. | Method for detecting defects of product, computer device and storage medium
CN114943855B (en) * | 2021-02-09 | 2025-08-26 | Futaihua Industry (Shenzhen) Co., Ltd. | Image classification and annotation method, device, electronic device and storage medium
CN112926438B (en) * | 2021-02-22 | 2024-04-05 | Shenzhen Zhongke Feice Technology Co., Ltd. | Detection method and device, detection equipment and storage medium
CN113706440B (en) * | 2021-03-12 | 2024-10-15 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, device, computer equipment and storage medium
CN113362288B (en) * | 2021-05-24 | 2024-03-08 | Shenzhen Mingrui Ideal Technology Co., Ltd. | Golden finger scratch detection method and device and electronic equipment
CN113470024B (en) * | 2021-09-02 | 2021-12-21 | Shenzhen Xinrun Fulian Digital Technology Co., Ltd. | Hub internal defect detection method, device, equipment, medium and program product
CN114202543B (en) * | 2022-02-18 | 2022-04-26 | Chengdu Shuzhilian Technology Co., Ltd. | Method, device, equipment and medium for detecting dirt defects of PCB (printed circuit board)
CN115439476B (en) * | 2022-11-07 | 2023-03-14 | Chengdu Boshi Guangda Technology Co., Ltd. | Silk-screen defect detection method and device based on image analysis
CN115855950B (en) * | 2022-11-23 | 2025-06-24 | Huanwei Electronics (Shanghai) Co., Ltd. | Image detection method and system for tiny flaws and wrong parts
CN115631204A (en) * | 2022-11-30 | 2023-01-20 | Beijing Jushi Intelligent Technology Co., Ltd. | Workpiece surface defect area segmentation method and device
CN115690094B (en) * | 2022-12-12 | 2023-05-30 | Changzhou Weiyi Zhizao Technology Co., Ltd. | Industrial defect detection method and system based on self-supervision network
CN117390206B (en) * | 2023-10-26 | 2024-07-19 | Hangzhou Shifang Technology Co., Ltd. | Fresh image storage method, apparatus, electronic device and computer readable medium
CN117274239B (en) * | 2023-11-13 | 2024-02-20 | Jiangsu Yongding Co., Ltd. | Method for rapidly detecting defects of chip packaging technology

Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110009618A (en) * | 2019-04-02 | 2019-07-12 | Zhejiang University | Method and device for detecting surface quality of shaft parts

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP3625910B2 (en) * | 1995-09-11 | 2005-03-02 | Matsushita Electric Industrial Co., Ltd. | Moving object extraction device
US6185314B1 (en) * | 1997-06-19 | 2001-02-06 | NCR Corporation | System and method for matching image information to object model information
NL1015943C2 (en) * | 2000-08-16 | 2002-02-19 | Océ Technologies B.V. | Interpretation of colored documents
JP4213357B2 (en) * | 2001-03-16 | 2009-01-21 | Ricoh Co., Ltd. | Image processing apparatus, image processing method, and program for executing the method
KR20070053872A (en) * | 2005-11-22 | 2007-05-28 | Samsung SDI Co., Ltd. | Image display
AU2009201252B2 (en) * | 2009-03-31 | 2011-06-02 | Canon Kabushiki Kaisha | Colour correcting foreground colours for visual quality improvement
CN102542290B (en) * | 2011-12-22 | 2015-04-15 | National Computer Network and Information Security Management Center | Junk mail image recognition method and device
US9171204B2 (en) * | 2012-12-12 | 2015-10-27 | Qualcomm Incorporated | Method of perspective correction for Devanagari text
JP5796107B2 (en) * | 2013-05-24 | 2015-10-21 | Canon Inc. | Method and apparatus for text detection
CN104112370B (en) * | 2014-07-30 | 2016-08-17 | Harbin Institute of Technology Shenzhen Graduate School | Intelligent parking-space recognition method and system for parking lots based on monitoring images
US20160069903A1 (en) * | 2014-09-10 | 2016-03-10 | Fundació Institut de Ciències Fotòniques | Method for detecting cells
CN104574418A (en) * | 2015-01-27 | 2015-04-29 | Xi'an Technological University | Pressure vessel weld defect identification method and device based on neural network
US20160357784A1 (en) * | 2015-06-02 | 2016-12-08 | Thomson Licensing | Method and apparatus for scoring an image
CN104902258A (en) * | 2015-06-09 | 2015-09-09 | Third Research Institute of the Ministry of Public Security | Multi-scene pedestrian volume counting method and system based on stereoscopic vision and binocular cameras
CN105957081B (en) * | 2016-04-28 | 2019-01-08 | North China Electric Power University (Baoding) | Glass insulator string-drop fault detection method
US10210608B2 (en) * | 2017-02-07 | 2019-02-19 | Xerox Corporation | System and method for detecting defects in an image
CN107025639A (en) * | 2017-04-05 | 2017-08-08 | Zhongke Weizhi Intelligent Manufacturing Technology Jiangsu Co., Ltd. | Barcode positioning method under complex environments
CN106951900B (en) * | 2017-04-13 | 2019-10-22 | Hangzhou Shenhao Technology Co., Ltd. | Automatic identification method for arrester meter readings
CN107563999A (en) * | 2017-09-05 | 2018-01-09 | Huazhong University of Science and Technology | Chip defect recognition method based on convolutional neural networks
CN108154510A (en) * | 2018-01-17 | 2018-06-12 | Shenzhen Yitu Vision Automation Technology Co., Ltd. | Method, device and computer-readable storage medium for detecting surface defects of products
CN108230324B (en) * | 2018-01-31 | 2023-10-20 | Zhejiang Sci-Tech University | Visual detection method for micro-defects on the surface of magnetic shoe

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110009618A (en) * | 2019-04-02 | 2019-07-12 | Zhejiang University | Method and device for detecting surface quality of shaft parts

Also Published As

Publication number | Publication date
CN110555839A (en) | 2019-12-10

Similar Documents

Publication | Publication Date | Title
CN110555839B (en) | Defect detection and identification method, device, computer equipment and storage medium
CN110210571B (en) | Image recognition method and device, computer equipment and computer readable storage medium
CN110544272B (en) | Face tracking method, device, computer equipment and storage medium
CN110570460B (en) | Target tracking method, device, computer equipment and computer readable storage medium
CN110400304B (en) | Object detection method, device, equipment and storage medium based on deep learning
CN111192262A (en) | Product defect classification method, device, equipment and medium based on artificial intelligence
WO2020224479A1 (en) | Method and apparatus for acquiring positions of target, and computer device and storage medium
WO2020048308A1 (en) | Multimedia resource classification method and apparatus, computer device, and storage medium
CN112749613B (en) | Video data processing method, device, computer equipment and storage medium
CN111597922B (en) | Cell image recognition method, system, device, equipment and medium
CN113706440B (en) | Image processing method, device, computer equipment and storage medium
CN113205515B (en) | Target detection method, device and computer storage medium
CN111368116B (en) | Image classification method and device, computer equipment and storage medium
CN110647881B (en) | Method, device, equipment and storage medium for determining card type corresponding to image
CN114462580B (en) | Text recognition model training method, text recognition method, device and equipment
CN112818979A (en) | Text recognition method, device, equipment and storage medium
CN113821658B (en) | Method, device, equipment and storage medium for training encoder
CN114511864A (en) | Text information extraction method, target model acquisition method, device and equipment
CN114298268A (en) | Image acquisition model training method, image detection method, device and equipment
CN113343709B (en) | Method for training intention recognition model, method, device and equipment for intention recognition
CN111753813B (en) | Image processing method, device, equipment and storage medium
CN111080630B (en) | Fundus image detection method, device, equipment, and storage medium
CN113705292A (en) | Time sequence action detection method and device, computer equipment and storage medium
CN111639639B (en) | Method, device, equipment and storage medium for detecting text area
CN113743186B (en) | Medical image processing method, device, equipment and storage medium

Legal Events

Date | Code | Title | Description
PB01 | Publication
REG | Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40019334; Country of ref document: HK)
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
