CN112446388A - Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model - Google Patents

Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model

Info

Publication number
CN112446388A
CN112446388A (application CN202011410890.5A)
Authority
CN
China
Prior art keywords
detection model
network
training
lightweight
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202011410890.5A
Other languages
Chinese (zh)
Inventor
孟庆宽
杨晓霞
都泽鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Original Assignee
Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Technology and Education China Vocational Training Instructor Training Center
Priority to CN202011410890.5A
Publication of CN112446388A
Legal status: Withdrawn

Abstract

Translated from Chinese



The invention discloses a multi-category vegetable seedling identification method and system based on a lightweight two-stage detection model. The method includes: acquiring a multi-category vegetable seedling image data set and applying data enhancement to it; labeling the enhanced data set and dividing the labeled data into a training set, a validation set, and a test set; building a lightweight two-stage detection model on the TensorFlow deep learning framework, with a mixed depthwise-separable convolutional neural network designed as the front-end base network, a feature pyramid network fusing feature information from different levels of that base network, and a detection head whose channel dimensions and number of fully connected layers are compressed; initializing the parameters of the lightweight two-stage detection model and training it on the training set with stochastic gradient descent; and, after training, feeding the image to be recognized into the detection model, which outputs the vegetable seedling category and location information. This method addresses the low accuracy and poor real-time performance of traditional vegetable seedling detection algorithms.


Description

Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model
Technical Field
The invention relates to the field of agricultural crop detection and identification, in particular to a method and a system for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model.
Background
Vegetables are rich in vitamins, minerals and dietary fiber, and are among the foods most important for maintaining human nutritional balance and health. In recent years, China's vegetable planting area has stabilized at about 300 million mu (roughly 20 million hectares), with an annual output of about 700 million tons, exceeding grain output to become the country's largest agricultural product. The rapid development of the vegetable industry meets people's daily needs, but problems such as excessive fertilization and pesticide overuse during planting adversely affect the ecological environment and human health. With the development of electronic and computer technology, automated intelligent agricultural equipment is gradually being applied to agricultural production, improving the yield and safety quality of vegetable crops through means such as targeted pesticide spraying, variable-rate fertilization, and mechanical weeding.
Traditional vegetable seedling detection methods identify and locate vegetables based on one or more combinations of features such as color, shape, texture, spectrum and position. In practice, however, they can only detect crops in specific environments and are easily affected by natural illumination, background noise, and occlusion by branches and leaves, which reduces identification accuracy.
Compared with traditional methods, target detection based on deep learning has developed rapidly in recent years: convolutional, pooling and fully connected layers extract features of the input image from shallow to deep, and accurate detection is achieved through information classification and position regression. In precision agriculture, deep-learning detection models are increasingly applied to crop identification and detection with notable results. Depending on the number of steps required to classify and localize targets, detection models can be divided into one-stage and two-stage families: one-stage models are represented by the SSD and YOLO series, and two-stage models by Faster R-CNN and R-FCN. Compared with one-stage models, two-stage target detection models achieve higher identification precision but take longer, making it difficult to meet the demand for rapid crop detection in complex agricultural environments.
Disclosure of Invention
The invention provides a multi-class vegetable seedling identification method based on a lightweight two-stage detection model, aiming to improve the detection precision and speed for vegetable seedlings in natural environments. The lightweight two-stage detection model uses mixed depthwise-separable convolution as the preposed basic network to process the input image, improving the speed and efficiency of image feature extraction; a feature pyramid network is introduced to fuse different levels of feature information from the preposed basic network, enhancing the detection model's identification precision for multi-scale targets; and by compressing the detection head's channel dimensions and number of fully connected layers, the model's parameter scale and computational complexity are reduced.
In a first aspect, the invention provides a method for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model, which comprises the following steps:
s01, acquiring multi-category vegetable seedling image data sets, and performing data enhancement on the image data sets;
s02, labeling the enhanced data set, and dividing the labeled data set into a training set, a verification set and a test set;
s03, building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of feature information of the preposed basic network by adopting a feature pyramid network, and compressing the network channel dimension and the number of full connection layers of a detection head;
s04, initializing the parameters of the lightweight two-stage detection model, and inputting a training set into the detection model to train based on a random gradient descent method;
and S05, inputting the image to be recognized into the detection model after training is finished, and outputting the type and position information of the vegetable seedling.
Optionally, the acquiring an image dataset of the multi-class vegetable seedlings in step S01, and performing data enhancement on the image dataset specifically includes:
(1.1) enabling the camera and the horizontal direction of crop rows to form an included angle of 80-90 degrees, enabling the camera to be 80cm away from the ground, and acquiring images of various vegetable seedlings under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
(1.2) data enhancing the image dataset by geometric transformation and color transformation.
Optionally, in the step S02, the enhancing data set is marked, and the marked data set is divided into a training set, a verification set, and a test set, which specifically includes:
(2.1) adopting labeling software to label the type and the position of the vegetable seedling in the enhanced data set;
and (2.2) randomly splitting the labeled data set into a training set, a verification set and a test set in a ratio of 7:2:1.
Optionally, in step S03, on the TensorFlow deep learning framework, a lightweight two-stage detection model is built, a hybrid deep separation convolutional neural network is designed as a pre-base network, a feature pyramid network is adopted to fuse different levels of feature information of the pre-base network, and the network channel dimension and the number of full connection layers of the detection head are compressed, which specifically includes:
(3.1) fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separable convolution neural network, and taking the mixed depth separable convolution neural network as a preposed basic network to perform feature acquisition on an input image;
(3.2) introducing a characteristic pyramid network to fuse different levels of characteristics of the preposed basic network, and inputting the fused characteristic diagram into a regional suggestion network to generate a series of sparse prediction frames;
and (3.3) in the detection head network, operating on the output features of the final stage of the mixed deep separation convolutional neural network with asymmetric convolution to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into 1 fully connected layer to obtain global features of the detection target, and completing target classification and position prediction with 2 parallel branches.
Optionally, the initializing the parameters of the lightweight two-stage detection model in step S04, inputting a training set to the detection model, and training the detection model based on a stochastic gradient descent method, specifically including:
(4.1) using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
(4.2) setting hyper-parameters related to model training, and training by adopting a multi-task loss function as a target function based on a random gradient descent method;
and (4.3) during training, using an online hard example mining strategy to compute the loss of each input sample, sorting the losses from large to small, and selecting the top 1% of samples with the largest losses for back-propagation to update the model weight parameters.
In a second aspect, the present invention further provides a light-weight two-stage detection model-based multi-class vegetable seedling recognition system, including:
the image acquisition and enhancement module is used for acquiring image data sets of the multi-category vegetable seedlings and performing data enhancement on the image data sets; the image labeling and classifying module is used for labeling the enhanced data set and dividing the labeled data set into a training set, a verification set and a test set;
the detection model building module is used for building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of the detection head;
the detection model training module is used for initializing the lightweight two-stage detection model parameters and inputting a training set into the detection model to train the detection model based on a random gradient descent method;
and the detection result output module is used for inputting the images to be recognized into the detection model after the training is finished and outputting the type and the position information of the vegetable seedlings.
Optionally, the image acquisition enhancing module specifically includes:
the image acquisition unit is used for enabling the camera to form an included angle of 80-90 degrees with the horizontal direction of the crop row and to be about 80cm away from the ground, and acquiring various vegetable seedling images under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set; and the image enhancement unit is used for performing data enhancement on the image data set through geometric transformation and color transformation.
Optionally, the image labeling and classifying module specifically includes:
the labeling unit is used for labeling the category and the position of the vegetable seedling in the enhanced data set by adopting labeling software;
and the classification unit is used for randomly splitting the labeled data set into a training set, a verification set and a test set in a ratio of 7:2:1.
Optionally, the detection model building module specifically includes:
the pre-basic network unit is used for fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separation convolution neural network, and the mixed depth separation convolution neural network is used as a pre-basic network to carry out feature acquisition on an input image;
the feature information fusion unit is used for introducing a feature pyramid network to fuse different levels of features of the preposed basic network, and inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames;
and the lightweight detection head unit is used for calculating the output characteristics of the mixed deep separation convolutional neural network at the last stage by using asymmetric convolution in the detection head network to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches.
Optionally, the detection model training module specifically includes:
the initialization unit is used for using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
the training unit is used for setting hyper-parameters related to model training and training on the basis of a random gradient descent method by adopting a multi-task loss function as a target function;
and the hard example mining unit is used for computing the loss of each input sample with an online hard example mining strategy during training, sorting the losses in descending order, and selecting the top 1% of samples with the largest losses for back-propagation to update the model weight parameters.
As can be seen from the above technical solutions, the invention provides a method and a system for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model, which have the following advantages:
firstly, a mixed depth separation convolutional neural network is used as a preposed basic network to extract the characteristics of an input image, so that the calculated characteristic image pixels have different receptive fields, and the image characteristic extraction speed and efficiency are effectively improved;
secondly, a feature pyramid network is adopted to fuse different levels of features of the preposed basic network; the fused feature map has sufficient resolution and stronger semantic information, which enhances detection precision for multi-scale targets;
thirdly, the detection head network is designed in a light weight mode, redundant parameters are reduced by compressing the number of network channel dimensions and the number of full connection layers, the calculated amount of the model is reduced, and the reasoning speed of the model is improved;
and fourthly, the multi-category vegetable seedling identification method and system based on the lightweight two-stage detection model have high identification precision and high reasoning speed, and can be applied to embedded agricultural mobile equipment with limited computing capacity and storage resources.
Drawings
Fig. 1 is a schematic flow chart of a multi-class vegetable seedling identification method based on a lightweight two-stage detection model according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a hybrid deep separation convolutional neural network according to an embodiment of the present invention;
fig. 3 is a schematic diagram of different-level feature structures of a feature pyramid network fusion hybrid depth separation convolutional neural network provided in the embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a lightweight two-stage target detection model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a multi-class vegetable seedling identification system based on a lightweight two-stage detection model according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention are described in detail below with reference to the accompanying drawings; the following embodiments are only used to clearly illustrate the technical solutions of the present invention and should not be used to limit its protection scope.
Fig. 1 is a schematic flow chart of a method for identifying multi-class vegetable seedlings based on a lightweight two-stage detection model according to an embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
101. acquiring multi-category vegetable seedling image data sets, and performing data enhancement on the image data sets;
102. labeling the enhanced data set, and dividing the labeled data set into a training set, a verification set and a test set;
103. building a lightweight two-stage detection model on a TensorFlow deep learning framework, designing a mixed deep separation convolutional neural network as a preposed basic network, fusing different levels of characteristic information of the preposed basic network by adopting a characteristic pyramid network, and compressing the network channel dimension and the number of full connection layers of a detection head;
104. initializing parameters of the lightweight two-stage detection model, and inputting a training set into the detection model to train based on a random gradient descent method;
105. and after the training is finished, inputting the image to be recognized into the detection model, and outputting the type and position information of the vegetable seedling.
Step 101 comprises the following specific steps:
(1.1) enabling the camera and the horizontal direction of crop rows to form an included angle of 80-90 degrees, enabling the camera to be 80cm away from the ground, and acquiring images of various vegetable seedlings under different weather conditions, different illumination directions and different environmental backgrounds to construct an image data set;
(1.2) data enhancing the image dataset by geometric transformation and color transformation;
for example, in the present embodiment, a Matlab tool is used for data enhancement. Geometric transformation: randomly dividing an original image data set into 2 parts, carrying out image rotation on one part, and selecting a rotation angle of-20 degrees, -5 degrees, 5 degrees and 20 degrees to generate a new image; and the other part randomly performs mirror image turning, horizontal turning and vertical turning on the image. Color transformation: the original image is transformed from RGB color space to HVS color space, and the brightness (Value) and Saturation (Saturation) of the image are randomly adjusted, wherein the brightness adjustment Value is 0.8 times, 0.9 times, 1.1 times and 1.2 times of the original Value, and the Saturation adjustment Value is 0.85 times, 0.95 times, 1.05 times and 1.15 times of the original Value. And combining the original image data set, the geometric transformation data set and the color transformation data set to form an enhanced image data set.
Step 102 comprises the following specific steps:
(2.1) adopting labeling software to label the type and the position of the vegetable seedling in the enhanced data set;
for example, LabelImg software is used for image annotation in this embodiment. Firstly, double-clicking LabelImg software to enter an operation interface, and opening a folder (Open Dir) where an image to be marked is located; then, setting a marked image storage directory (Change Save Dir), marking a target area in the current image by using Create \ nRectBox and setting a class name; finally, the labeled image (Save) is saved, and the Next image is clicked for marking (Next). The marked image is generated under the condition of saving a file path, the name of the xml file is consistent with the name of the marked image, and the file comprises information such as the name, the path, the target quantity, the category, the size and the like of the marked image;
and (2.2) randomly splitting the labeled data set into a training set, a verification set and a test set in a ratio of 7:2:1.
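The 7:2:1 split can be sketched as a generic shuffle-and-slice (the patent does not specify the splitting tool):

```python
import random

def split_dataset(samples, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly split a list of samples into train/val/test
    in the 7:2:1 ratio described in the text."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])
```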
Step 103 comprises the following specific steps:
(3.1) fusing a plurality of convolution kernels with different sizes into a single depth separable convolution operation to form a mixed depth separable convolution neural network, and taking the mixed depth separable convolution neural network as a preposed basic network to perform feature acquisition on an input image, wherein the specific process comprises the following steps:
in this embodiment, the deep learning framework selects TensorFlow, and performs program design based on Python language on a Windows 10 operating system, and the design idea of the hybrid deep separation convolutional neural network is as follows: let the input feature map be X(h,w,c)H represents the height of the feature map, w represents the width, c represents the number of channels, and the feature map is divided into g groups of sub-feature maps along the channel direction
Figure BDA0002818515940000071
cs(s 1,2.. g) represents the number of channels of the s-th group of sub-feature maps, and c1+c2+...+cgC. Establishing g groups of different-size depth convolution kernels
Figure BDA0002818515940000072
m denotes a channel multiplier, kt×kt(t ═ 1,2.. g) denotes the t-th group convolution kernel size. And (3) operating the t group of input sub-feature maps and the corresponding depth convolution kernels to obtain a t group of output sub-feature maps, wherein the specific definition formula is as follows:
Figure BDA0002818515940000081
wherein x represents the characteristic image pixel line number, y represents the characteristic image pixel column number, ztRepresenting the number of channels of the t-th group of output feature maps, h representing the height of the input feature map, w representing the width of the input feature map, i representing the row number of the convolution kernel elements, j representing the column number of the convolution kernel elements,
Figure BDA0002818515940000082
showing the output sub-feature map of the t-th group,
Figure BDA0002818515940000083
representing the t-th group of input sub-feature maps,
Figure BDA0002818515940000084
representing a t-th set of deep convolution kernels;
according to the calculation result of the formula, all the output sub-feature graphs are spliced in the channel dimension in an addition mode to obtain a final output feature graph, and the calculation formula is as follows:
Figure BDA0002818515940000085
wherein z represents the number of channels of the output characteristic diagram, and z is equal to z1+...+zg,Yx,y,zRepresenting the spliced output characteristic diagram;
the structure of the mixed deep separation convolutional neural network in this embodiment is shown in fig. 2, the maximum grouping number g of the feature map is 5, each group has the same number of channels, the size of the corresponding deep convolutional kernel is {3 × 3, 5 × 5,7 × 7,9 × 9,11 × 11}, the feature map is grouped and then operated with convolutional kernels of different sizes, and then the result is spliced to obtain an output. FIG. 2 is a graph obtained by dividing the convolution neural network into 5 stages (stages) according to the size of a feature map, wherein the feature map with the same size is the same Stage, and the scale ratio of the feature map in the adjacent stages is 2;
(3.2) introducing a feature pyramid network to fuse different levels of features of the preposed basic network, inputting the fused feature map into a regional suggestion network to generate a series of sparse prediction frames, wherein the specific process comprises the following steps:
in the embodiment, a feature pyramid network is merged into the hybrid depth separation convolutional neural network, as shown in fig. 3. In fig. 3, the mixed depth separation convolution sequentially generates feature maps of different stages in the bottom-up order, wherein x in Stage x/y (x is 1,2,3,4, 5; y is 2,4,8,16,32) represents the number of stages in which the feature maps are located, and y represents the reduction factor of the feature map size relative to the input image at this Stage. The stages 2-5 are respectively input to the feature pyramid network after being subjected to 1 × 1 convolution operation, wherein the 1 × 1 convolution has the function of keeping the number of channels input to the feature pyramid network by each Stage feature diagram consistent. And the feature pyramid network unit performs up-sampling on the input high-level feature map according to the top-down sequence to enlarge the resolution, and then performs fusion with the adjacent low-level features in an addition mode. On one hand, the fused feature graph is input into a subsequent network for predictionReasoning, on the other hand, continues to fuse with the underlying feature map through upsampling. The mixed depth separation convolution stages 2-5 correspond to the P2-P5 levels of the feature pyramid network, and P6 is obtained by downsampling Stage5 and is used for generating a prediction box in the area suggestion network without participating in fusion operation. Each level of { P2, P3, P4, P5, P6} is responsible for information processing of a single scale, and corresponds to {16 }2,322,642,1282, 25625 scale prediction frames, each prediction frame has 3 length-width ratios of {1:1,1:2,2:1}, and the prediction frames totally comprise 15 prediction frames for predicting the target object and the background;
(3.3) in a detection head network, computing the output characteristics of the mixed deep separation convolutional neural network at the last stage by using asymmetric convolution to generate a characteristic diagram with less channel dimensions, comprehensively accessing the characteristic diagram and a prediction frame into 1 full-connection layer to obtain global characteristics of a detection target, and finishing target classification and position prediction based on 2 parallel branches, wherein the specific process comprises the following steps:
in this embodiment, the lightweight detection head unit is constructed by compressing the network channel dimension and the parameter scale, and the specific design method is as follows: generating a feature map of an alpha multiplied by p channel by adopting a large-size asymmetric convolution aiming at a feature map output by a final stage of a mixed deep separation convolutional neural network, wherein alpha is a number which is irrelevant to a category and has a small numerical value, the value of alpha is 10, the value of p multiplied by p is equal to the number of grids after pooling of a candidate area, the value of p multiplied by p is 49, and a feature map of a 490 channel is obtained through calculation; then, introducing ROI Align operation to pool the feature information corresponding to the prediction frames with different sizes to generate a feature map with a fixed size, wherein the ROI Align operation acquires the numerical value of a pixel point with coordinates as floating point numbers by using a bilinear difference method, and the whole feature aggregation process is converted into a continuous operation; finally, accessing 1 full-connection layer to obtain global characteristics of the detected target, and completing target classification and position prediction based on 2 parallel branches; as used herein, large-scale asymmetric convolution consists of 1 × 15 and 15 × 12 convolution kernels; FIG. 4 is a schematic block diagram of a lightweight two-stage target detection model.
Step 104 comprises the following specific steps:
(4.1) using a pre-trained mixed deep separation convolutional neural network weight parameter model for the pre-feature extraction network, and randomly initializing the rest layers by using Gaussian distribution with the mean value of 0 and the standard deviation of 0.01;
(4.2) setting hyper-parameters related to model training, and training by adopting a multi-task loss function as a target function based on a random gradient descent method, wherein the specific process comprises the following steps:
the momentum factor is 0.9, and the weight attenuation coefficient is 5X 10-4The initial learning rate is 0.002, the attenuation rate is 0.9, the attenuation is 1 time after every 2000 iterations, the accuracy rate of the training model is tested on the verification set, and the total iteration number of the model training is 50000;
secondly, during training a multi-task loss function is used to perform target-class confidence discrimination and position regression; it is specifically defined as follows:
L_Total = L_RPN(p_l, a_l) + L_HEAD(p, u, o, g)

wherein

L_RPN(p_l, a_l) = (1/N_cls) Σ_l L_cls(p_l, p_l*) + λ (1/N_reg) Σ_l p_l* L_reg(a_l, a_l*)

L_HEAD(p, u, o, g) = L_cls(p, u) + λ'[u ≥ 1] L_DIOU(o, g)

L_cls(p, u) = −log p_u

L_DIOU(o, g) = 1 − IoU(A, B) + ρ²(A_ctr, B_ctr) / c²

The loss function of this embodiment comprises two parts, the region proposal network loss and the detection head loss, and each part comprises a classification loss and a position regression loss. In the formulas, L_Total is the detection model loss; L_RPN is the region proposal network loss; L_HEAD is the detection head network loss; l is the anchor box index; p_l is the two-class prediction probability of the l-th anchor box and p_l* is its ground-truth label; a_l is the prediction box corresponding to the l-th anchor box and a_l* is the ground-truth box corresponding to the l-th anchor box; p is the predicted class probability; u is the ground-truth class label; λ and λ' are weight parameters; L_cls is the classification loss; N_cls is the number of sampled anchor boxes; N_reg is the number of sampled positive and negative samples; o is a prediction box output by the region proposal network and g is its corresponding ground-truth box; L_DIOU is the Distance-IoU (DIoU) loss; A is a prediction box and B is a ground-truth box; c is the diagonal length of the minimum bounding box enclosing A and B; ρ(·) is the Euclidean distance; A_ctr and B_ctr are the center-point coordinates of the prediction box and the ground-truth box; and IoU (Intersection over Union) is the intersection-over-union ratio of the prediction box and the ground-truth box;
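The DIoU term described above can be sketched directly. Below is a minimal pure-Python version for axis-aligned boxes given as (x1, y1, x2, y2); the function name is illustrative:

```python
def diou_loss(box_a, box_b):
    """DIoU loss: 1 - IoU(A, B) + rho^2(A_ctr, B_ctr) / c^2,
    where c is the diagonal of the smallest box enclosing A and B."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection and union areas
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # Squared distance between box centres
    rho2 = ((ax1 + ax2 - bx1 - bx2) / 2) ** 2 + ((ay1 + ay2 - by1 - by2) / 2) ** 2
    # Squared diagonal of the minimum enclosing box
    c2 = (max(ax2, bx2) - min(ax1, bx1)) ** 2 + (max(ay2, by2) - min(ay1, by1)) ** 2
    return 1.0 - iou + rho2 / c2
```

Unlike plain IoU loss, the distance penalty keeps a useful gradient even when the two boxes do not overlap at all.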
(4.3) during training, an online hard example mining strategy computes the loss of each input sample, sorts the losses in descending order, and selects the top 1% hardest samples (those with the largest losses) for back-propagation to update the model weight parameters.
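The hard-example selection in (4.3) amounts to a sort-and-slice over per-sample losses; a minimal sketch (names are mine, and in practice the selected indices are the only ones that contribute gradients in the backward pass):

```python
import numpy as np

def select_hard_examples(losses, fraction=0.01):
    """Online hard example mining: sort per-sample losses in descending
    order and return the indices of the top `fraction` hardest samples."""
    losses = np.asarray(losses)
    k = max(1, int(len(losses) * fraction))  # always keep at least one sample
    return np.argsort(losses)[::-1][:k]
```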
Step 105 comprises the following specific steps:
(5.1) in the trained detection model, setting the category confidence threshold to 0.5 and the intersection-over-union threshold to 0.5;
(5.2) inputting the image to be recognized into the trained detection model to obtain the multi-class vegetable seedling recognition result, which comprises the target class label, the class confidence, and the target position box.
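The two thresholds in (5.1) are typically applied as a confidence filter followed by greedy non-maximum suppression at the IoU threshold; a minimal sketch under that assumption (function names are illustrative):

```python
import numpy as np

def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def filter_detections(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Drop low-confidence boxes, then greedy NMS at the IoU threshold."""
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= score_thr]
    keep = []
    for i in order:
        # Keep a box only if it does not overlap a kept box too much
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(int(i))
    return keep
```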
Fig. 5 is a schematic structural diagram of a multi-class vegetable seedling identification system based on a lightweight two-stage detection model according to an embodiment of the present invention. As shown in Fig. 5, the system comprises:
the image acquisition and enhancement module 501, used for acquiring an image data set of multi-category vegetable seedlings and performing data enhancement on the image data set;
the image labeling and classification module 502, used for labeling the enhanced data set and dividing the labeled data set into a training set, a validation set, and a test set;
the detection model building module 503, used for building a lightweight two-stage detection model on the TensorFlow deep learning framework, designing a mixed depthwise separable convolutional neural network as the front-end base network, fusing feature information from different levels of the front-end base network with a feature pyramid network, and compressing the channel dimension and the number of fully connected layers of the detection head network;
the detection model training module 504, used for initializing the parameters of the lightweight two-stage detection model and inputting the training set into the detection model for training based on stochastic gradient descent;
the detection result output module 505, used for inputting the image to be recognized into the detection model after training is finished and outputting the vegetable seedling category and position information.
The image acquisition and enhancement module 501 specifically comprises:
the image acquisition unit, used for positioning the camera at an included angle of 80° to 90° to the horizontal direction of the crop rows, about 80 cm above the ground, and acquiring images of multiple classes of vegetable seedlings under different weather conditions, illumination directions, and environmental backgrounds to construct an image data set; and the image enhancement unit, used for performing data enhancement on the image data set through geometric and color transformations.
The image labeling and classification module 502 specifically comprises:
the labeling unit, used for labeling the category and position of the vegetable seedlings in the enhanced data set with labeling software;
the classification unit, used for randomly splitting the labeled data set into a training set, a validation set, and a test set in a 7:2:1 ratio.
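The 7:2:1 random split can be sketched as follows (the function name is illustrative):

```python
import random

def split_dataset(samples, seed=0):
    """Randomly split samples into train/val/test at 7:2:1."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # fixed seed for reproducibility
    n = len(samples)
    n_train, n_val = int(n * 0.7), int(n * 0.2)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```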
The detection model building module 503 specifically comprises:
the front-end base network unit, used for fusing several convolution kernels of different sizes into a single depthwise separable convolution operation to form a mixed depthwise separable convolutional neural network, which serves as the front-end base network for feature extraction from the input image;
the feature information fusion unit, used for introducing a feature pyramid network to fuse features from different levels of the front-end base network, the fused feature map being input into the region proposal network to generate a series of sparse prediction boxes;
the lightweight detection head unit, used for applying asymmetric convolution in the detection head network to the final-stage output features of the mixed depthwise separable convolutional neural network to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into one fully connected layer to obtain the global features of the detection target, and completing target classification and position prediction with two parallel branches.
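The mixed depthwise separable convolution in the front-end base network unit assigns a different kernel size to each group of channels, so one layer mixes several receptive fields. The sketch below shows only the channel-to-kernel assignment; the kernel sizes 3/5/7 are illustrative assumptions, as the text does not list them here:

```python
def mixconv_groups(channels, kernel_sizes=(3, 5, 7)):
    """Split the channel dimension into one group per depthwise kernel
    size, distributing any remainder over the first groups.
    Returns (kernel_size, (start, end)) channel ranges."""
    base, rem = divmod(channels, len(kernel_sizes))
    groups, start = [], 0
    for i, k in enumerate(kernel_sizes):
        size = base + (1 if i < rem else 0)
        groups.append((k, (start, start + size)))
        start += size
    return groups
```

Each group would then be convolved depthwise with its own k × k kernel and the results concatenated back along the channel axis.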
The detection model training module 504 specifically comprises:
the initialization unit, used for initializing the front-end feature extraction network with a pre-trained mixed depthwise separable convolutional neural network weight parameter model, and randomly initializing the remaining layers from a Gaussian distribution with mean 0 and standard deviation 0.01;
the training unit, used for setting the hyper-parameters involved in model training and training based on stochastic gradient descent with a multi-task loss function as the objective function;
the hard example mining unit, used for computing the loss of each input sample with an online hard example mining strategy during training, sorting the losses in descending order, and selecting the top 1% hardest samples for back-propagation to update the model weight parameters.
The detection result output module 505 specifically comprises:
the threshold setting unit, used for setting a category confidence threshold of 0.5 and an intersection-over-union threshold of 0.5 in the trained detection model;
the detection output unit, used for inputting the image to be recognized into the trained detection model to obtain the multi-class vegetable seedling recognition result, which comprises the target class label, the class confidence, and the target position box.
Since the system and the method of the invention correspond one to one, the parameter calculations described for the method also apply to the system, and their detailed description is not repeated here.
In the description of the present invention, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; such modifications and substitutions do not depart from the scope of the technical solutions of the embodiments, and are intended to be included within the scope of the claims and the specification.

Claims (10)

1. A multi-category vegetable seedling identification method based on a lightweight two-stage detection model, characterized by comprising:
S01, acquiring a multi-category vegetable seedling image data set, and performing data enhancement on the image data set;
S02, labeling the enhanced data set, and dividing the labeled data set into a training set, a validation set, and a test set;
S03, building a lightweight two-stage detection model on the TensorFlow deep learning framework, designing a mixed depthwise separable convolutional neural network as the front-end base network, fusing feature information from different levels of the front-end base network with a feature pyramid network, and compressing the channel dimension and the number of fully connected layers of the detection head network;
S04, initializing the parameters of the lightweight two-stage detection model, and inputting the training set into the detection model for training based on stochastic gradient descent;
S05, after training is completed, inputting the image to be recognized into the detection model, and outputting the vegetable seedling category and position information.
2. The method according to claim 1, characterized in that step S01 specifically comprises:
(1.1) positioning the camera at an included angle of 80° to 90° to the horizontal direction of the crop rows, about 80 cm above the ground, and acquiring images of multiple classes of vegetable seedlings under different weather conditions, illumination directions, and environmental backgrounds to construct an image data set;
(1.2) performing data enhancement on the image data set through geometric and color transformations.
3. The method according to claim 1, characterized in that step S02 specifically comprises:
(2.1) labeling the category and position of the vegetable seedlings in the enhanced data set with labeling software;
(2.2) randomly splitting the labeled data set into a training set, a validation set, and a test set in a 7:2:1 ratio.
4. The method according to claim 1, characterized in that step S03 specifically comprises:
(3.1) fusing several convolution kernels of different sizes into a single depthwise separable convolution operation to form a mixed depthwise separable convolutional neural network, which serves as the front-end base network for feature extraction from the input image;
(3.2) introducing a feature pyramid network to fuse features from different levels of the front-end base network, the fused feature map being input into a region proposal network to generate a series of sparse prediction boxes;
(3.3) in the detection head network, applying asymmetric convolution to the final-stage output features of the mixed depthwise separable convolutional neural network to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into one fully connected layer to obtain the global features of the detection target, and completing target classification and position prediction with two parallel branches.
5. The method according to claim 1, characterized in that step S04 specifically comprises:
(4.1) initializing the front-end feature extraction network with a pre-trained mixed depthwise separable convolutional neural network weight parameter model, and randomly initializing the remaining layers from a Gaussian distribution with mean 0 and standard deviation 0.01;
(4.2) setting the hyper-parameters involved in model training, and training based on stochastic gradient descent with a multi-task loss function as the objective function;
(4.3) during training, computing the loss of each input sample with an online hard example mining strategy, sorting the losses in descending order, and selecting the top 1% hardest samples for back-propagation to update the model weight parameters.
6. A multi-category vegetable seedling identification system based on a lightweight two-stage detection model, characterized by comprising:
an image acquisition and enhancement module, for acquiring a multi-category vegetable seedling image data set and performing data enhancement on the image data set;
an image labeling and classification module, for labeling the enhanced data set and dividing the labeled data set into a training set, a validation set, and a test set;
a detection model building module, for building a lightweight two-stage detection model on the TensorFlow deep learning framework, designing a mixed depthwise separable convolutional neural network as the front-end base network, fusing feature information from different levels of the front-end base network with a feature pyramid network, and compressing the channel dimension and the number of fully connected layers of the detection head network;
a detection model training module, for initializing the parameters of the lightweight two-stage detection model and inputting the training set into the detection model for training based on stochastic gradient descent;
a detection result output module, for inputting the image to be recognized into the detection model after training is completed and outputting the vegetable seedling category and position information.
7. The system according to claim 6, characterized in that the image acquisition and enhancement module specifically comprises:
an image acquisition unit, for positioning the camera at an included angle of 80° to 90° to the horizontal direction of the crop rows, about 80 cm above the ground, and acquiring images of multiple classes of vegetable seedlings under different weather conditions, illumination directions, and environmental backgrounds to construct an image data set;
an image enhancement unit, for performing data enhancement on the image data set through geometric and color transformations.
8. The system according to claim 6, characterized in that the image labeling and classification module specifically comprises:
a labeling unit, for labeling the category and position of the vegetable seedlings in the enhanced data set with labeling software;
a classification unit, for randomly splitting the labeled data set into a training set, a validation set, and a test set in a 7:2:1 ratio.
9. The system according to claim 6, characterized in that the detection model building module specifically comprises:
a front-end base network unit, for fusing several convolution kernels of different sizes into a single depthwise separable convolution operation to form a mixed depthwise separable convolutional neural network, which serves as the front-end base network for feature extraction from the input image;
a feature information fusion unit, for introducing a feature pyramid network to fuse features from different levels of the front-end base network, the fused feature map being input into a region proposal network to generate a series of sparse prediction boxes;
a lightweight detection head unit, for applying asymmetric convolution in the detection head network to the final-stage output features of the mixed depthwise separable convolutional neural network to generate a feature map with fewer channel dimensions, feeding the feature map together with the prediction boxes into one fully connected layer to obtain the global features of the detection target, and completing target classification and position prediction with two parallel branches.
10. The system according to claim 6, characterized in that the detection model training module specifically comprises:
an initialization unit, for initializing the front-end feature extraction network with a pre-trained mixed depthwise separable convolutional neural network weight parameter model, and randomly initializing the remaining layers from a Gaussian distribution with mean 0 and standard deviation 0.01;
a training unit, for setting the hyper-parameters involved in model training and training based on stochastic gradient descent with a multi-task loss function as the objective function;
a hard example mining unit, for computing the loss of each input sample with an online hard example mining strategy during training, sorting the losses in descending order, and selecting the top 1% hardest samples for back-propagation to update the model weight parameters.
CN202011410890.5A (priority 2020-12-05, filed 2020-12-05) · Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model · Withdrawn · CN112446388A (en)

Priority Applications (1)

Application Number · Priority Date · Filing Date · Title
CN202011410890.5A (CN112446388A (en)) · 2020-12-05 · 2020-12-05 · Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model

Applications Claiming Priority (1)

Application Number · Priority Date · Filing Date · Title
CN202011410890.5A (CN112446388A (en)) · 2020-12-05 · 2020-12-05 · Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model

Publications (1)

Publication NumberPublication Date
CN112446388Atrue CN112446388A (en)2021-03-05

Family

ID=74739341

Family Applications (1)

Application Number · Title · Priority Date · Filing Date
CN202011410890.5A · Withdrawn · CN112446388A (en)

Country Status (1)

CountryLink
CN (1)CN112446388A (en)


Citations (1)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN111340141A (en)* · 2020-04-20 · 2020-06-26 · Tianjin University of Technology and Education (China Vocational Training Instructor Training Center) · A method and system for detecting crop seedlings and weeds based on deep learning


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
ALYOSHA507: https://blog.csdn.net/weixin_41059269/article/details/99232245, 11 August 2019 *
IFREEWOLF99: https://blog.csdn.net/ifreewolf_csdn/article/details/101352352, 25 September 2019 *
QIRUI REN ET AL.: "Slighter Faster R-CNN for real-time detection of steel strip surface defects", 2018 Chinese Automation Congress (CAC) *
TSUNG-YI LIN ET AL.: "Feature Pyramid Networks for Object Detection", arXiv:1612.03144v2 *
ZEMING LI ET AL.: "Light-Head R-CNN: In Defense of Two-Stage Object Detector", arXiv:1711.07264v2 *
SUN Zhe et al.: "Image detection method for field broccoli seedlings based on Faster R-CNN", Transactions of the Chinese Society for Agricultural Machinery *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication numberPriority datePublication dateAssigneeTitle
CN113065446A (en)*2021-03-292021-07-02青岛东坤蔚华数智能源科技有限公司Depth inspection method for automatically identifying ship corrosion area
CN113096079A (en)*2021-03-302021-07-09四川大学华西第二医院 Image analysis system and construction method thereof
CN113096079B (en)*2021-03-302023-12-29四川大学华西第二医院Image analysis system and construction method thereof
CN113096080B (en)*2021-03-302024-01-16四川大学华西第二医院Image analysis method and system
CN113096080A (en)*2021-03-302021-07-09四川大学华西第二医院Image analysis method and system
CN113076873A (en)*2021-04-012021-07-06重庆邮电大学Crop disease long-tail image identification method based on multi-stage training
CN112926605B (en)*2021-04-012022-07-08天津商业大学Multi-stage strawberry fruit rapid detection method in natural scene
CN112926605A (en)*2021-04-012021-06-08天津商业大学Multi-stage strawberry fruit rapid detection method in natural scene
CN113076873B (en)*2021-04-012022-02-22重庆邮电大学Crop disease long-tail image identification method based on multi-stage training
CN113052255A (en)*2021-04-072021-06-29浙江天铂云科光电股份有限公司Intelligent detection and positioning method for reactor
CN113192040A (en)*2021-05-102021-07-30浙江理工大学Fabric flaw detection method based on YOLO v4 improved algorithm
CN113192040B (en)*2021-05-102023-09-22浙江理工大学Fabric flaw detection method based on YOLO v4 improved algorithm
CN113449611A (en)*2021-06-152021-09-28电子科技大学Safety helmet identification intelligent monitoring system based on YOLO network compression algorithm
CN113408423A (en)*2021-06-212021-09-17西安工业大学Aquatic product target real-time detection method suitable for TX2 embedded platform
CN113468992A (en)*2021-06-212021-10-01四川轻化工大学Construction site safety helmet wearing detection method based on lightweight convolutional neural network
CN113468992B (en)*2021-06-212022-11-04四川轻化工大学 Construction site safety helmet wearing detection method based on lightweight convolutional neural network
CN113408423B (en)*2021-06-212023-09-05西安工业大学Aquatic product target real-time detection method suitable for TX2 embedded platform
CN113435302B (en)*2021-06-232023-10-17中国农业大学Hydroponic lettuce seedling state detection method based on GridR-CNN
CN113435302A (en)*2021-06-232021-09-24中国农业大学GridR-CNN-based hydroponic lettuce seedling state detection method
CN113420819A (en)*2021-06-252021-09-21西北工业大学Lightweight underwater target detection method based on CenterNet
CN113420819B (en)*2021-06-252022-12-06西北工业大学Lightweight underwater target detection method based on CenterNet
CN113572742A (en)*2021-07-022021-10-29燕山大学Network intrusion detection method based on deep learning
CN113486781A (en)*2021-07-022021-10-08国网电力科学研究院有限公司Electric power inspection method and device based on deep learning model
CN113486781B (en)*2021-07-022023-10-24国网电力科学研究院有限公司Electric power inspection method and device based on deep learning model
CN113572742B (en)*2021-07-022022-05-10燕山大学 Network intrusion detection method based on deep learning
CN113822265B (en)*2021-08-202025-04-25北京工业大学 A non-metallic lighter detection method in X-ray security inspection images based on deep learning
CN113822265A (en)* 2021-08-20 2021-12-21 Beijing University of Technology - A method for detecting non-metallic lighters in X-ray security images based on deep learning
CN113887567A (en)* 2021-09-08 2022-01-04 South China University of Technology - Vegetable quality detection method, system, medium and equipment
CN113837058A (en)* 2021-09-17 2021-12-24 Nantong University - A lightweight rain grate detection method coupled with a context aggregation network
CN114187606A (en)* 2021-10-21 2022-03-15 Jiangyin Zhixing Industrial Control Technology Co., Ltd. - Lightweight garage pedestrian detection method and system adopting a branch fusion network
CN113971731A (en)* 2021-10-28 2022-01-25 Yanshan University - A target detection method, device and electronic device
CN113989620A (en)* 2021-11-11 2022-01-28 Beijing Guowang Fuda Science and Technology Development Co., Ltd. - Line defect edge identification method and system
CN114373113A (en)* 2021-12-03 2022-04-19 Zhejiang Zhenshan Technology Co., Ltd. - Wild animal species image identification system based on AI technology
CN114417966A (en)* 2021-12-09 2022-04-29 Jinhua Power Transmission and Transformation Engineering Co., Ltd. - A target detection method based on multiple network fusion in complex environments
CN114359546B (en)* 2021-12-30 2024-03-26 Taiyuan University of Science and Technology - Day lily maturity identification method based on convolutional neural network
CN114359546A (en)* 2021-12-30 2022-04-15 Taiyuan University of Science and Technology - A method for identifying the maturity of daylily based on a convolutional neural network
CN114494773A (en)* 2022-01-20 2022-05-13 Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University - Part sorting and identifying system and method based on deep learning
CN114332849A (en)* 2022-03-16 2022-04-12 Keda Tiangong Intelligent Equipment Technology (Tianjin) Co., Ltd. - Method, device and storage medium for joint monitoring of crop growth state
CN116543301A (en)* 2023-04-13 2023-08-04 Agricultural Information Institute, Chinese Academy of Agricultural Sciences - Method for identifying whole desert plants against natural complex backgrounds, and electronic equipment
CN116229052A (en)* 2023-05-09 2023-06-06 Whale Cloud Technology Co., Ltd. - Method for detecting state changes of substation equipment based on a twin network
CN116229052B (en)* 2023-05-09 2023-07-25 Whale Cloud Technology Co., Ltd. - Method for detecting state changes of substation equipment based on a twin network
CN119672103A (en)* 2024-10-24 2025-03-21 Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences - Leafy vegetable seedling identification and transplanting clamping position detection method and device
CN119672103B (en)* 2024-10-24 2025-10-10 Information Technology Research Center, Beijing Academy of Agriculture and Forestry Sciences - Leafy vegetable seedling identification and transplanting clamping position detection method and device

Similar Documents

Publication | Publication Date | Title
CN112446388A (en) Multi-category vegetable seedling identification method and system based on lightweight two-stage detection model
Lu et al. A hybrid model of ghost-convolution enlightened transformer for effective diagnosis of grape leaf disease and pest
Liu et al. Grape leaf disease identification using improved deep convolutional neural networks
Jiao et al. AF-RCNN: An anchor-free convolutional neural network for multi-categories agricultural pest detection
Le et al. Deep learning for noninvasive classification of clustered horticultural crops – A case for banana fruit tiers
Chen et al. Citrus fruits maturity detection in natural environments based on convolutional neural networks and visual saliency map
Wambugu et al. A hybrid deep convolutional neural network for accurate land cover classification
CN103955702B (en) SAR image terrain classification method based on deep RBF network
Wang et al. Precision detection of dense plums in orchards using the improved YOLOv4 model
Su et al. LodgeNet: Improved rice lodging recognition using semantic segmentation of UAV high-resolution remote sensing images
Gao et al. Multi-branch fusion network for hyperspectral image classification
CN110770752A (en) Automatic pest counting method combining a multi-scale feature fusion network with a positioning model
Chen et al. YOLOv8-CML: A lightweight target detection method for color-changing melon ripening in intelligent agriculture
CN113435254A (en) Farmland deep learning extraction method based on Sentinel-2 images
Zhang et al. Deep learning based automatic grape downy mildew detection
CN107832797B (en) Multispectral image classification method based on a deep fusion residual network
CN108416270B (en) A traffic sign recognition method based on multi-attribute joint features
Hao et al. Growing period classification of Gynura bicolor DC using GL-CNN
Tanwar et al. Red rot disease prediction in sugarcane using the deep learning approach
CN110222767A (en) Three-dimensional point cloud classification method based on a nested neural network and grid map
CN113902901B (en) Object separation method and system based on lightweight detection
CN114492634B (en) Fine-grained equipment picture classification and identification method and system
CN113032613A (en) Three-dimensional model retrieval method based on an interactive attention convolutional neural network
Xu et al. Two-level attention and score consistency network for plant segmentation
CN115909086A (en) SAR target detection and recognition method based on a multi-level enhanced network

Legal Events

Date | Code | Title | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination
WW01 | Invention patent application withdrawn after publication | Application publication date: 2021-03-05

