CN118379633B - A method for building a tea green leafhopper damage symptom detection model and its application - Google Patents

A method for building a tea green leafhopper damage symptom detection model and its application

Info

Publication number
CN118379633B
CN118379633B (application CN202410528963.2A)
Authority
CN
China
Prior art keywords
tea
model
data
hyperspectral
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410528963.2A
Other languages
Chinese (zh)
Other versions
CN118379633A (en)
Inventor
丁兆堂
徐阳
王玉
范凯
毛艺霖
孙立涛
申加枝
李晓江
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Agricultural University
Original Assignee
Qingdao Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Agricultural University
Priority to CN202410528963.2A
Publication of CN118379633A
Application granted
Publication of CN118379633B
Status: Active
Anticipated expiration

Abstract


The present invention discloses a method for building a tea green leafhopper damage symptom detection model and its application. The building method comprises the following steps: Step 1: Acquire a tea bud image dataset with different degrees of damage caused by the tea green leafhopper; Step 2: Preprocess the acquired tea bud image datasets respectively; Step 3: Build a tea green leafhopper damage symptom detection model and perform model evaluation; Step 4: Train and test the model built in step 3 to obtain a deep learning model for tea green leafhopper damage symptom detection. The present invention also provides a method for tea green leafhopper damage symptom detection and a device for tea green leafhopper damage symptom detection. The present invention reveals the tea green leafhopper damage symptom classification potential based on RGB and hyperspectral imaging, and provides a necessary technical approach for the construction of smart tea gardens. At the same time, the present invention also provides a specific reference for accurate, efficient and non-destructive monitoring of other crop diseases and insect pests.

Description

A method for building a tea green leafhopper damage symptom detection model and its application
Technical Field
The invention relates to the technical fields of intelligent agriculture and tea green leafhopper detection, in particular to a method for building a tea green leafhopper damage symptom detection model and its application.
Background
The tea green leafhopper (Empoasca onukii Matsuda) is one of the most damaging pests in Asian tea-producing areas. About 10-17 generations can occur annually, usually living on the backs of the second and third leaves below the buds. Because of its small size and high mobility, it often escapes attention before mass reproduction. The tea green leafhopper uses its mouthparts to suck phloem sap from tender buds and leaves, draining plant nutrients and water, so that the tea plant withers and yellows, leaves wrinkle and fall prematurely, ultimately seriously affecting tea yield and quality. Therefore, an accurate, nondestructive and efficient detection method for tea green leafhopper damage is important for pest control.
Traditional diagnosis of tea green leafhopper damage symptoms mostly relies on manual inspection and sampling, including on-site investigation of symptoms, incidence and severity. Although these conventional methods can obtain relatively accurate and reliable results, they are subject to subjective error and are time-consuming and labor-intensive, limiting large-scale rapid surveys. Thus, there is a need for a reliable, efficient, nondestructive method to detect and control the pest in time.
In recent years, the rapid development of image processing and computer vision has provided effective means for monitoring plant leaf pests. RGB imaging in particular has great advantages in cost, operation and portability, and is widely applied in crop classification, growth monitoring and yield estimation. Hyperspectral imaging is not limited to the three red, green and blue bands, and offers a large amount of information with high precision. By integrating spectra and images, hyperspectral imaging techniques have shown significant advantages in providing objective, accurate, nondestructive and intuitive plant pest diagnoses. However, previous studies used only a single sensor and did not fully exploit the complementarity and fusion of multi-sensor data. Therefore, the value of multi-source remote sensing data deserves further mining.
Disclosure of Invention
In the invention, tea buds with different degrees of tea green leafhopper damage were collected, and RGB and hyperspectral images were acquired. By rotating the RGB image data by 90°, 180° and 270° and flipping it vertically and horizontally, the sample data became 6 times the original sample size, after which wavelet-transform enhancement was applied to the samples. For the acquired hyperspectral images, the spectral data were preprocessed with Multiplicative Scatter Correction (MSC), Savitzky-Golay (S-G) smoothing, first-derivative (1D) and second-derivative (2D) algorithms, and the characteristic bands were screened with the UVE, CARS and SPA algorithms. Grading models of tea green leafhopper damage level were built on the RGB image data using deep learning algorithms such as ResNet18, VGG16 and AlexNet, and on the hyperspectral image data using the SVM and LSTM algorithms; the models were evaluated with four indices. This work demonstrates the potential of grading the damage level of the tea green leafhopper based on RGB and hyperspectral imaging, providing an accurate, nondestructive and efficient method for monitoring the occurrence of this pest. The general framework of the invention is shown in fig. 1.
The specific technical scheme is as follows:
In a first aspect, the invention provides a method for building a tea green leafhopper damage symptom detection model, comprising the following steps:
Step 1: acquire tea bud image datasets with different degrees of tea green leafhopper damage;
Step 2: preprocess the acquired tea bud image datasets respectively;
Step 3: build a tea green leafhopper damage symptom detection model and perform model evaluation;
Step 4: train and test the model built in step 3 to obtain a deep learning model for tea green leafhopper damage symptom detection.
Further, the tea bud image sets in step 1 comprise an RGB image set and/or a hyperspectral image set.
The RGB image set consists of tea bud images with different degrees of tea green leafhopper damage photographed in the field environment of the tea garden, and the hyperspectral image set is obtained by push-broom scanning, in a hyperspectral imaging system, of tea buds with different degrees of damage picked in the field.
Further, preprocessing the acquired tea bud image sets in step 2 comprises RGB data preprocessing and hyperspectral data preprocessing.
The RGB data preprocessing comprises data amplification and data enhancement.
The hyperspectral data preprocessing comprises spectral reflectance extraction, spectral preprocessing and characteristic band screening. The data amplification comprises rotating the RGB image samples taken in step 1 by 90°, 180° and 270°, and flipping them vertically and horizontally, so that the sample set becomes 6 times its original size.
The data enhancement comprises enhancing the RGB image set with a 2D-DWT. When an image is decomposed by the 2D-DWT, the LL component may be cycled multiple times until the requirement is met; in the present invention, for the discrete wavelet transform of an image f(x, y) of size M×N, the LL component is computed only once, with the formula:

W_\varphi(j_0, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, \varphi_{j_0,m,n}(x, y)

where j_0 is an arbitrary starting scale, W_\varphi(j_0, m, n) is the approximation coefficient at scale j_0, and \varphi denotes the scale function.
The spectral reflectance extraction comprises: performing lens correction and reflectance correction on the hyperspectral image dataset acquired in step 1 with the SpecVIEW analysis tool, opening the hyperspectral images in RAW format with ENVI 5.3 software, selecting the whole region of the sample as the region of interest (ROI), and calculating the average spectral reflectance of the ROI as the spectral data of the sample.
The spectral preprocessing comprises preprocessing the hyperspectral data with MSC, S-G smoothing, the first derivative and the second derivative.
The characteristic band screening comprises selecting representative bands from the full-band spectral data as characteristic bands using the UVE, CARS and SPA algorithms.
Further, in step 3, the tea green leafhopper damage symptom detection model is built by modeling the RGB data with the ResNet18, VGG16, AlexNet, WT-ResNet18, WT-VGG16 and WT-AlexNet deep learning algorithms, and modeling the hyperspectral data with the SVM and LSTM algorithms.
Further, the modeling algorithms in step 3 model the RGB image data using the ResNet18, VGG16 and AlexNet algorithms, and the hyperspectral image data using the SVM and LSTM algorithms.
Further, the model evaluation in step 3 is performed using Accuracy, Precision, Recall and F1-score, respectively.
In particular, the model evaluation in step 3 may also be performed with a confusion matrix.
In a second aspect, based on the same inventive concept, the invention provides the application, to tea plants, of a tea green leafhopper damage symptom detection model built by any of the above building methods.
In a third aspect, the invention provides a method for detecting tea green leafhopper damage symptoms, which comprises acquiring tea bud images in the natural environment, inputting them into the tea green leafhopper damage symptom detection model obtained by the building method of any one of the first aspect, and obtaining the degree of damage symptoms output by the model.
In a fourth aspect, the invention also provides a device for detecting tea green leafhopper damage symptoms, comprising at least a processor and at least one memory, wherein the memory stores program instructions of the tea green leafhopper damage symptom detection model obtained by any building method of the first aspect, and the processor, when executing the program instructions, can detect the degree of tea green leafhopper damage symptoms.
The invention has the following beneficial effects:
The invention provides a detection model for grading the damage degree of the tea green leafhopper based on RGB and hyperspectral imaging technologies. RGB and hyperspectral images of tea buds with different damage degrees were collected. Grading models of the damage degree were built on the RGB image data using the ResNet18, VGG16, AlexNet, WT-ResNet18, WT-VGG16 and WT-AlexNet deep learning algorithms. The hyperspectral image data were preprocessed with four methods (MSC, S-G, 1D and 2D), the spectral characteristic bands were screened with three methods (UVE, CARS and SPA), and grading models of damage symptom severity were built on the hyperspectral image data using the SVM and LSTM algorithms.
The tea green leafhopper damage symptom detection model disclosed by the invention can monitor the occurrence degree of the pest accurately, efficiently and nondestructively. The accuracies of the 14 classification models (ResNet18, VGG16, AlexNet, WT-ResNet18, WT-VGG16, WT-AlexNet, UVE-SVM, CARS-SVM, SPA-SVM, NONE-SVM, UVE-LSTM, CARS-LSTM, SPA-LSTM and NONE-LSTM) were 65%, 70%, 62%, 78%, 80%, 69%, 69%, 86%, 89%, 74%, 82%, 94%, 96% and 90%, respectively. The SPA-LSTM model outperformed the other grading models and is more suitable for monitoring tea green leafhopper damage symptoms.
Among the RGB imaging grading models, WT-VGG16, which uses a wavelet-transform-enhanced neural network, performed best. Among the hyperspectral imaging grading models, the SPA band-screening method performed better than CARS and UVE. The deep learning algorithm performed better than the machine learning algorithm, and hyperspectral imaging performed better than RGB imaging.
The SPA-LSTM model has excellent effect in monitoring the harm symptoms of the tea leafhoppers, and the accuracy is 96%. The invention discloses the hazard symptom grading potential of tea lesser leafhoppers based on RGB and hyperspectral imaging, and provides a necessary technical approach for the construction of intelligent tea gardens. Meanwhile, the invention also provides specific reference for accurate, efficient and nondestructive monitoring of other crop diseases and insect pests.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is apparent that the drawings in the following description are only one embodiment of the present invention, and that other embodiments of the drawings may be derived from the drawings provided without inventive effort for a person skilled in the art.
FIG. 1 is a general frame diagram of the model building method of the present invention.
FIG. 2 is a diagram of the experimental development position of the invention.
FIG. 3: A shows original images, where (a) is a grade-one asymptomatic tea bud, (b) a grade-two mild-symptom tea bud, and (c) a grade-three severe-symptom tea bud; B shows a schematic of the 2D-DWT; C shows tea bud images of different pest-damage degrees after 2D-DWT processing, where (a)-(c) are as in A.
FIG. 4 shows a hyperspectral imaging device used in the invention, wherein A is a hyperspectral device live view diagram, B is a hyperspectral device mode diagram, (a) a cube cassette, (B) a hyperspectral camera, (c) a 200W halogen light source, (d) a tea bud sample, and (e) a computer.
FIG. 5 shows the network structures of ResNet18, VGG16 and AlexNet used in the invention, where A is ResNet18, B is VGG16 and C is AlexNet.
FIG. 6 shows the raw data and spectra of the tea leaf sample after pretreatment according to the invention, (a) the raw spectra of the tea leaf sample, and (b) the spectra after pretreatment according to the SNV+2D+S-G algorithm scheme.
FIG. 7 shows the characteristic bands of the present invention for (a) UVE, (b) CARS, (c) SPA.
FIG. 8 shows the evaluation results of the different network models of the invention on the different damage levels: (a) asymptomatic, (b) mild and (c) severe.
FIG. 9 shows the confusion matrices of the network models of the invention: (a) ResNet; (b) VGGNet; (c) AlexNet; (d) WT-ResNet; (e) WT-VGGNet; (f) WT-AlexNet; (g) UVE-SVM; (h) CARS-SVM; (i) SPA-SVM; (j) NONE-SVM; (k) UVE-LSTM; (l) CARS-LSTM; (m) SPA-LSTM; (n) NONE-LSTM.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to fall within the scope of the invention. In the present invention, the equipment, materials, etc. used are commercially available or commonly used in the art, unless otherwise specified. The methods in the following examples are conventional in the art unless otherwise specified.
Test area:
The test was carried out in a tea garden (117°77′E, 35°22′N) in Shandong Province, China. The area has a warm temperate continental monsoon climate: dry in spring, hot and rainy in summer, clear and refreshing in autumn, and dry in winter. As a high-latitude tea area, the temperature difference between day and night is large. The annual mean air temperature is 13 °C, the annual mean sunshine duration is about 2500 hours, and the annual mean precipitation is about 800 mm. The tea garden covers 300 mu (about 20 hectares). The soil bulk density was 1.50 g/cm³, the organic matter content was 1.65%, and the pH was 5.8. The tea cultivars planted in the garden include Hemerocallis fulva, Zhongcha 108 (Chinese tea 108), Qimen seed and Huangshan seed, among others. The location of the test area is shown in fig. 2.
Sample preparation:
The study was conducted in the season (autumn) when tea plants in multiple plots of the tea garden showed obvious tea green leafhopper damage symptoms. Since the pest damages tea buds, about 400 test samples (one bud with one leaf, or one bud with two leaves) were randomly collected, and the collected tea buds were classified into three grades according to damage severity (grade one: asymptomatic; grade two: mild; grade three: severe). As shown in fig. 3 (A), (a)-(c) show tea buds with no symptoms, mild symptoms and severe symptoms, respectively. RGB and hyperspectral data were collected immediately after the tea buds were picked.
Test environment:
The conditions for processing data in the present invention are as follows:
Hardware: Intel Xeon CPU E5-2640 v4 @ 2.40 GHz (two processors), 128 GB RAM. Software environment: CUDA Toolkit 10.1; cuDNN v7.6.0; MATLAB 2020; Python 3.8; PyTorch (GPU) 1.6.0. Operating system: Windows 10.
Example 1 RGB data acquisition and Pre-processing
1.1RGB data acquisition
RGB image data were collected under natural light conditions using a digital camera (EOS 6D, Canon Co., Ltd., Beijing, China) with an image resolution of 5184 × 3456 pixels.
Of these, 357, 430 and 435 samples of first-level asymptomatic, second-level mild and third-level severe symptoms were taken, respectively, for a total of 1222 photos. The images are stored in JPEG format with the shooting angle perpendicular to the ground.
1.2 Two-dimensional discrete wavelet transform of images of different hazard levels
The two-dimensional discrete wavelet transform (2D-DWT) meets the requirements of time-frequency signal analysis, can focus on arbitrary details of a signal, and offers multi-resolution analysis. To better restore and represent the images of tea buds damaged by the tea green leafhopper, the RGB images were enhanced with the 2D-DWT, which effectively improved model accuracy and gave the resulting models stronger generalization ability.
To reduce or eliminate correlation between different features of the extracted images, the 2D-DWT was used to separate the main information of the signal from the detail information.
The RGB images of tea shoots with different hazard degrees are shown in fig. 3 (A), wherein (a) in fig. 3 (A) is a first-stage asymptomatic tea shoot, (b) is a second-stage mild-symptom tea shoot, and (c) is a third-stage severe-symptom tea shoot.
The 2D-DWT passes the image data through a row filter and a column filter to obtain four components at high and low frequency signals, as shown in fig. 3 (B). Wherein the LL component represents low-frequency information of the image, the HL component represents high-frequency information in the horizontal direction of the image, the LH component represents high-frequency information in the vertical direction of the image, and the HH component represents high-frequency information in the diagonal line of the image.
Tea bud images of different damage degrees after 2D-DWT processing are shown in fig. 3 (C), where (a) is a grade-one asymptomatic tea bud image, (b) a grade-two mild-symptom image, and (c) a grade-three severe-symptom image after 2D-DWT processing.
When an image is decomposed by the 2D-DWT, the LL component may be cycled multiple times until the requirement is met. In the present invention, for the discrete wavelet transform of an image f(x, y) of size M×N, the LL component is computed only once:

W_\varphi(j_0, m, n) = \frac{1}{\sqrt{MN}} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\, \varphi_{j_0,m,n}(x, y)

where j_0 is an arbitrary starting scale, W_\varphi(j_0, m, n) is the approximation coefficient at scale j_0, and \varphi denotes the scale function.
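To make the one-level decomposition concrete, the four subbands can be computed with plain NumPy using the Haar wavelet. This is an illustrative sketch only: the patent does not state which wavelet was used, and subband naming follows the LL/HL/LH/HH convention of fig. 3 (B).

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT; returns (LL, HL, LH, HH) subbands.

    img must have even height and width. LL is the approximation,
    HL/LH/HH are horizontal/vertical/diagonal detail (naming
    conventions vary between libraries).
    """
    img = np.asarray(img, dtype=float)
    # Column-wise low-pass (average) and high-pass (difference)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    # Row-wise low/high-pass on each intermediate result
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0  # approximation (low frequency)
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0  # vertical detail
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0  # horizontal detail
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0  # diagonal detail
    return LL, HL, LH, HH
```

On a smooth gradient image the detail subbands are nearly zero, which is why most of the energy concentrates in LL, the component the invention cycles.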
EXAMPLE 2 hyperspectral data acquisition and Pre-treatment
2.1 Hyperspectral devices
The hyperspectral imaging system used in the invention mainly comprises a computer, a dark box, a light source system and an imaging system. The light source system comprises four 200 W halogen light sources (HSIA-LS-T-200W, China), and the imaging system comprises a hyperspectral camera (GaiaField-Pro-V10, Jiangsu Dualix Spectral Image Technology Co., Ltd., China) and a lens. The hyperspectral camera captures images in the 397.9-1001.4 nm wavelength range, measures the reflectance of 176 bands, has a spectral resolution of 3.5 nm, and a pixel format of 1936 × 1456 (spatial × spectral). The imaging mode is built-in push-broom, the frame rate is 7 s/cube, the data interface is USB 2.0, and the weight is 3 kg. The hyperspectral imaging apparatus is shown in fig. 4.
2.2 Hyperspectral data acquisition
The specific steps for acquiring the hyperspectral image are as follows:
First, the hyperspectral camera and halogen light source system were switched on in the arranged dark box and preheated until the light source was stable, and the camera exposure time was set below 19.6 ms. Black-and-white correction was performed to compensate for the uneven spatial distribution of light intensity and the dark current in the camera: a white board was scanned to obtain an all-white calibration image, and the camera lens was covered with its cap to obtain an all-black calibration image.
Secondly, placing the prepared tea bud sample in a cube camera bellows, placing black flannelette under the tea bud sample for collecting hyperspectral images in order to ensure that the tea bud sample is not influenced by other reflecting light sources, and repeating the same operation to obtain hyperspectral images of all the tea bud samples.
Finally, the data were lens-corrected and reflectance-corrected using the analysis tool of the data preprocessing software SpecVIEW (Jiangsu Dualix Spectral Image Technology Co., Ltd., China). The hyperspectral images in RAW format were opened in ENVI 5.3 (Research Systems Inc., Boulder, CO, USA), the whole region of each sample was selected as the region of interest (ROI), and the average spectral reflectance of the ROI was calculated as the spectral data of the sample. In total, 983 spectral records were collected as the initial data of the tea bud samples.
2.3 Hyperspectral data pretreatment
Because the spectral data are susceptible to unwanted signal interference from the hyperspectral acquisition instrument or environmental factors, the extracted spectral data were preprocessed using Multiplicative Scatter Correction (MSC), Savitzky-Golay (S-G) smoothing, the first derivative (1D) and the second derivative (2D).
The MSC algorithm can eliminate artifact or defect spectrum in the data matrix, and the processed spectrum data can effectively eliminate scattering effect and improve spectrum information quality. Savitzky-Golay (S-G) fits or averages the data in the spectrum data points to obtain the best estimated value of the smooth point, so that the random noise of the average reflection spectrum is effectively reduced. The first derivative (1D) and the second derivative (2D) can eliminate baseline drift and spectral line overlap to obtain a clearer spectral profile variation.
The relevant formulas are as follows:
Multiplicative scatter correction:

\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i, \qquad X_i = K_i \bar{X} + b_i, \qquad X_{i(\mathrm{msc})} = \frac{X_i - b_i}{K_i}

where X_i is the spectrum of the i-th sample, \bar{X} is the average of all the spectral data, K_i and b_i are the slope and baseline shift obtained by regressing X_i against \bar{X}, and X_{i(msc)} is the MSC-corrected spectrum of the i-th sample.
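The MSC step described here can be sketched in a few lines of NumPy: each spectrum is regressed against the mean spectrum, and the fitted slope and offset are removed. This is an illustrative sketch with hypothetical names, not the patent's implementation.

```python
import numpy as np

def msc(spectra):
    """Multiplicative scatter correction.

    spectra: (n_samples, n_bands) array of reflectance values.
    Returns the corrected array of the same shape.
    """
    mean_spec = spectra.mean(axis=0)  # reference (mean) spectrum
    corrected = np.empty_like(spectra, dtype=float)
    for i, xi in enumerate(spectra):
        # least-squares fit: xi ≈ k * mean_spec + b
        k, b = np.polyfit(mean_spec, xi, deg=1)
        corrected[i] = (xi - b) / k   # remove multiplicative and additive scatter
    return corrected
```

After correction, spectra that differ only by a multiplicative/additive scatter effect collapse onto the same curve, which is exactly the artifact-removal behavior the text describes.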
Savitzky-Golay (S-G):

\hat{X}_i = \frac{1}{W} \sum_{j=-r}^{r} W_j X_{i+j}, \qquad W = \sum_{j=-r}^{r} W_j

where X_i and \hat{X}_i are the spectral data before and after S-G smoothing, respectively, and W_j is the weight factor of the smoothing window of width 2r+1.
First derivative:

y'_i = \frac{y_{i+1} - y_i}{\Delta\lambda}

Second derivative:

y''_i = \frac{y_{i+1} - 2 y_i + y_{i-1}}{\Delta\lambda^2}

where y is the spectral absorbance, \lambda is the wavelength, y_i is the spectrum at the i-th wavelength point, and \Delta\lambda is the wavelength interval.
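The 1D and 2D preprocessing amounts to finite differencing of each spectrum. A minimal NumPy sketch follows; the 3.5 nm default for Δλ is taken from the camera's stated spectral resolution and is only an assumption.

```python
import numpy as np

def first_derivative(y, d_lambda=3.5):
    # Forward difference, matching y'_i = (y[i+1] - y[i]) / Δλ
    return np.diff(y) / d_lambda

def second_derivative(y, d_lambda=3.5):
    # Central second difference: (y[i+1] - 2*y[i] + y[i-1]) / Δλ²
    return np.diff(y, n=2) / d_lambda ** 2
```

As a sanity check, the second derivative of a quadratic spectrum is constant, which is why these operators remove baseline drift (constant and linear offsets vanish).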
2.4 Hyperspectral data characteristic band screening
In the 397-1001 nm spectral range, 176 bands of spectral data were obtained. Too many band variables increase the computational load and can also degrade the prediction accuracy and stability of the model.
To improve the efficiency of subsequent modeling, the UVE, CARS and SPA algorithms were used to select representative bands from the full-band spectral data as "characteristic bands", which were compared with the full-band (NONE) data. The Uninformative Variable Elimination (UVE) algorithm removes wavelength variables that contribute little to the regression coefficients, reducing the complexity of the spectral data, optimizing the model variables and improving the predictive ability of the model. Competitive Adaptive Reweighted Sampling (CARS) is a variable screening method based on the "survival of the fittest" principle of Darwinian evolution theory: wavelength points whose regression coefficients have large absolute values are retained and those with small weights are removed, effectively finding the optimal spectral combination. The Successive Projections Algorithm (SPA) extracts the variables with minimal collinearity in the full band, thereby eliminating redundant information in the original spectral matrix.
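The SPA selection loop can be sketched as follows: starting from one band, each step picks the band with the largest projection onto the orthogonal complement of the bands already chosen, which is what minimizes collinearity. This is a simplified illustration; real SPA implementations add calibration-model feedback to choose the start band and subset size, which is omitted here.

```python
import numpy as np

def spa(X, n_select, start=0):
    """Successive Projections Algorithm (simplified).

    X: (n_samples, n_bands) spectral matrix.
    Returns the list of selected band indices.
    """
    Xp = np.asarray(X, dtype=float).copy()
    selected = [start]
    for _ in range(n_select - 1):
        xk = Xp[:, selected[-1]]
        # Project every column onto the orthogonal complement of xk
        proj = Xp - np.outer(xk, xk @ Xp) / (xk @ xk)
        proj[:, selected] = 0.0            # never re-select a chosen band
        nxt = int(np.argmax(np.linalg.norm(proj, axis=0)))
        selected.append(nxt)
        Xp = proj                          # iterate in the projected space
    return selected
```

A band that is a copy of an already-selected band projects to zero and is never chosen, so the selected set carries minimally redundant information.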
The basic parameters of the UVE, CARS and SPA algorithms used in this study are shown in table 1.
Table 1 parameters of the characteristic band screening algorithm.
Example 3 RGB data modeling
In the present invention, the RGB image data were modeled using deep learning algorithms such as ResNet18, VGG16 and AlexNet.
3.1ResNet18
The basic architecture of the ResNet18 network is ResNet, and the network contains 18 weight layers; network depth here refers to the weight layers (convolutional and fully connected layers) and excludes pooling and BN layers. ResNet18 is a classical and effective deep convolutional neural network model with good feature extraction and classification capability, applicable to computer vision tasks such as image classification and object detection. The specific structure is shown in fig. 5 (A).
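The defining feature of ResNet is the residual (skip) connection. A toy NumPy forward pass of one basic block shows the idea; this is a sketch on feature vectors, not the patent's implementation (real ResNet18 blocks use 3×3 convolutions and batch normalization in place of the plain weight matrices W1 and W2 below).

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Basic residual block on a feature vector: ReLU(F(x) + x).

    W1, W2 stand in for the two weight layers of a real block.
    """
    out = relu(W1 @ x)      # first weight layer + ReLU
    out = W2 @ out          # second weight layer
    return relu(out + x)    # identity shortcut, then ReLU
```

Because of the shortcut, a block with zero weights still passes its input through (up to the ReLU), which is what makes very deep networks like ResNet trainable.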
3.2VGG16
VGG16 is a convolutional neural network (CNN) whose weights, pretrained on a large dataset, can be transferred to the target task. The VGG16 network has 16 weight layers in total, of which 13 are convolutional layers and 3 are fully connected layers, plus 5 max-pooling layers. The VGG family has 7 different configurations, basically consisting of 3×3 convolutional layers and 2×2 max-pooling layers. In the feature extraction part, the VGG network replaces single larger convolution kernels with stacks of 3×3 small kernels, which reduces the number of parameters while preserving the receptive field, increases network depth and enhances feature learning ability. In addition, all hidden layers use the ReLU function, greatly improving the nonlinear fitting capacity of the model. Thanks to its small filters, the VGG16 model is widely used in image classification and localization tasks. The specific structure is shown in fig. 5 (B).
3.3AlexNet
The AlexNet network has 5 convolutional layers, 3 pooling layers and 3 fully connected layers (including the output layer). The AlexNet model was the first CNN to use ReLU as the activation function, which greatly reduces computation and speeds up convergence, much faster than the equivalent tanh. Overlapping pooling, together with Dropout and data augmentation during training, avoids overfitting during model training, and the LRN layer it introduced enhances the generalization ability of the model. The specific structure is shown in fig. 5 (C).
EXAMPLE 4 hyperspectral data modeling
In the invention, the effect of the SVM algorithm and the LSTM algorithm on modeling hyperspectral image data is explored and compared.
4.1Support Vector Machine(SVM)
The core idea of the SVM algorithm is to map the sample space nonlinearly from a low-dimensional space to a high-dimensional feature space, where a linear learner is applied to achieve classification. The specific parameters are shown in Table 2.
4.2Long Short-Term Memory(LSTM)
LSTM is a type of RNN that overcomes the inability of ordinary neural networks to exploit long-term dependencies. A single LSTM cell mainly contains four structures: the forget gate, the input gate, the output gate and the cell state. LSTM controls forgetting and memory updates of the cell state through gates, stores long-term state in a dedicated memory cell (the cell state), and thereby learns long-term dependencies. The validation set is the dataset used to evaluate model performance and tune the hyperparameters; the model parameters with the highest single-epoch accuracy were selected and loaded for training. The parameters giving the best model performance are shown in Table 2.
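The four-gate structure just described can be written as one NumPy time step. This is a generic LSTM sketch for illustration (names and the stacked parameter layout are assumptions, not the patent's code).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    W, U, b each hold the four gate parameter sets, ordered
    (forget f, input i, candidate g, output o).
    """
    Wf, Wi, Wg, Wo = W
    Uf, Ui, Ug, Uo = U
    bf, bi, bg, bo = b
    f = sigmoid(Wf @ x + Uf @ h_prev + bf)   # forget gate
    i = sigmoid(Wi @ x + Ui @ h_prev + bi)   # input gate
    g = np.tanh(Wg @ x + Ug @ h_prev + bg)   # candidate cell state
    o = sigmoid(Wo @ x + Uo @ h_prev + bo)   # output gate
    c = f * c_prev + i * g                   # new cell (long-term) state
    h = o * np.tanh(c)                       # new hidden state
    return h, c
```

The cell state c is what carries long-term memory: the forget gate scales the previous state and the input gate decides how much new information to add at each band of the spectrum sequence.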
Table 2 main parameters of SVM and LSTM models.
Example 5 model Performance index
To accurately measure the classification performance of the above models and observe each model's behavior in every category, four indices are used: Accuracy, Precision, Recall and F1-score. Accuracy is the proportion of correctly identified samples and evaluates the overall capability of the tea green leafhopper damage grading model. Precision is the ratio of correctly identified damaged samples to the total number of samples identified as damaged. Recall is the ratio of correctly identified damaged samples to the total number of damaged samples. F1-score is an evaluation index that balances Precision and Recall.
The evaluation indices are calculated as follows: Accuracy = (TP+TN)/(TP+TN+FP+FN); Precision = TP/(TP+FP); Recall = TP/(TP+FN); F1-score = 2×Precision×Recall/(Precision+Recall).
"TP" (true positive) means the number of samples correctly identified as tea leafhoppers. "FN" (false negative) refers to the number of samples that were not identified as tea leafhoppers. "FP" (false positive) refers to the number of samples that were incorrectly identified as tea leafhoppers. "TN" (true negative) refers to the number of samples that are correctly identified as healthy samples.
Experimental results and data analysis
1 Data preprocessing
1.1 Preprocessing of RGB data
The RGB image samples taken in Example 1 were rotated by 90°, 180° and 270° and flipped vertically and horizontally, so that the sample data became 6 times the original sample amount. The RGB image samples were then divided into a training set, a test set and a validation set at a ratio of 3:1:1.
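A minimal sketch of the six-fold augmentation and the 3:1:1 split described above; numpy arrays stand in for the actual RGB images, and the helper names are illustrative, not from the invention.

```python
import numpy as np

def augment_six(img):
    """Original plus rotations by 90/180/270 degrees and vertical and
    horizontal flips: 6 samples per image, as described above."""
    return [img,
            np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),
            np.flipud(img), np.fliplr(img)]

img = np.arange(12).reshape(3, 4)        # stand-in for one image plane
samples = augment_six(img)
print(len(samples))                      # 6x the original sample count

def split_3_1_1(items):
    """3:1:1 split into training, test and validation sets."""
    n = len(items)
    a, b = (3 * n) // 5, (4 * n) // 5
    return items[:a], items[a:b], items[b:]

train, test, val = split_3_1_1(list(range(100)))
print(len(train), len(test), len(val))   # 60 20 20
```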
1.2 Preprocessing of hyperspectral data
The original average spectral reflectance curves of the different damage degrees differ, but their trends of change are similar. In the visible range of 397-673 nm, the reflectance fluctuates while rising and its values are relatively small; it increases rapidly in the 673-811 nm range and remains high and stable in the 811-1018 nm range. In addition, the spectra of the three damage levels differ significantly and can be clearly distinguished.
In addition to sample-related information, the raw spectral data contain baseline drift, noise and other interference, which can reduce the robustness and accuracy of a prediction or classification model. To reduce noise and unwanted signal interference, the hyperspectral data were preprocessed with a combination of MSC, S-G smoothing, and first-derivative (1D) and second-derivative (2D) transforms. The preprocessed spectral curves are shown in FIG. 6 (b, d).
The results show that, compared with the original spectra, the spectral curves after the combined MSC, S-G, 1D and 2D preprocessing are more stable, the absorption peaks and reflection valleys can be clearly observed, and the resolution and sensitivity of the spectra are greatly improved. The preprocessed spectra of the three damage severity levels show similar overall trends in reflectance: increasing exponentially in the 673-742 nm range, decreasing rapidly in the 742-811 nm range, with distinct absorption peaks at 535 nm and 742 nm. Moreover, over the 397-1018 nm range the reflectance curves of different damage severities differ significantly, and the more severe the damage, the larger the reflectance; these differences are the basis for the subsequent hyperspectral damage-symptom grading model.
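As a hedged sketch of the preprocessing chain described above: MSC can be implemented by regressing each spectrum against the mean spectrum, and S-G smoothing with first (1D) and second (2D) derivatives is available in scipy. The window length, polynomial order and synthetic data below are assumed values, not those of the invention.

```python
import numpy as np
from scipy.signal import savgol_filter

def msc(spectra):
    """Multiplicative scatter correction: regress each spectrum against
    the mean spectrum and remove the fitted slope and offset."""
    ref = spectra.mean(axis=0)
    out = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, intercept = np.polyfit(ref, s, 1)
        out[i] = (s - intercept) / slope
    return out

rng = np.random.default_rng(2)
base = np.sin(np.linspace(0, 3, 176))            # 176 bands, as in the text
spectra = base * rng.uniform(0.8, 1.2, (10, 1)) \
          + rng.uniform(-0.1, 0.1, (10, 1))      # scatter + baseline effects
corrected = msc(spectra)

# Savitzky-Golay smoothing, then first- and second-derivative variants
sg = savgol_filter(corrected, window_length=11, polyorder=2)
d1 = savgol_filter(corrected, window_length=11, polyorder=2, deriv=1)
d2 = savgol_filter(corrected, window_length=11, polyorder=2, deriv=2)
print(sg.shape, d1.shape, d2.shape)
```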
2 Characteristic band selection for spectral data
The screened spectral data comprise 176 bands. Considering band collinearity and data redundancy, characteristic bands were selected with the UVE, CARS and SPA algorithms to simplify model complexity and improve model efficiency and reliability.
The number and distribution of the characteristic bands are shown in FIG. 7 and Table 3. The results show that, among the band screening methods, UVE selects the most characteristic bands (85) and CARS the fewest (12). Overall, the variable selection capacity of CARS and SPA is superior to that of UVE, while the characteristic bands retained by the UVE algorithm are more numerous than those of CARS and SPA.
Table 3. Band screening results.
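Of the three screening algorithms, SPA lends itself to a compact sketch: it greedily selects the band whose column has the largest projection onto the orthogonal complement of the bands already chosen. The simplified version below is illustrative only; the starting band, random data and band count are assumptions, not the invention's settings.

```python
import numpy as np

def spa_select(X, n_bands, start=0):
    """Successive Projections Algorithm (simplified): greedily pick the
    band with the largest column norm after projecting out the subspace
    spanned by the bands already selected."""
    selected = [start]
    P = X.astype(float).copy()
    for _ in range(n_bands - 1):
        v = P[:, selected[-1]]
        # Project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        P[:, selected] = 0.0                     # never re-select a band
        selected.append(int(np.argmax(np.linalg.norm(P, axis=0))))
    return sorted(selected)

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 176))                   # 50 samples x 176 bands
bands = spa_select(X, n_bands=12)                # e.g. 12 bands, as CARS found
print(len(bands), bands[:5])
```

Because each new band is (numerically) orthogonal to the previous selections, SPA tends to yield a small, low-collinearity subset, which is consistent with its strong performance reported below.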
3 Overall accuracy of different network models in grading damage severity
Accuracy is widely used to evaluate model precision; by comparing the accuracy of the different models under the RGB imaging technique and the hyperspectral imaging technique, the performance of the two imaging techniques under different models can be examined.
All models of both imaging techniques were tested on the same test samples and in the same test environment. As can be seen from Table 4, overall the accuracy of the 8 hyperspectral imaging models (69%-96%) is better than that of the 6 RGB imaging models (62%-80%), and the accuracy of the wavelet-transform-enhanced RGB models (69%-80%) is better than that of the ordinary RGB models (62%-70%). The SPA-LSTM model has the highest accuracy (96%).
Under the RGB imaging technique, the model accuracies of ResNet, VGG16, AlexNet, WT-ResNet, WT-VGG16 and WT-AlexNet were 65%, 70%, 62%, 78%, 80% and 69%, respectively, with WT-VGG16 the highest (80%). Wavelet transform enhancement improved model accuracy markedly (by 7-13%), but the RGB models remained lower overall than the hyperspectral imaging models, indicating that the hyperspectral imaging technique is clearly superior to the RGB imaging technique.
The accuracies of the models under the hyperspectral imaging technique were further compared: UVE-SVM, CARS-SVM, SPA-SVM, NONE-SVM, UVE-LSTM, CARS-LSTM, SPA-LSTM and NONE-LSTM reached 69%, 86%, 89%, 74%, 82%, 94%, 96% and 90%, respectively. Among them, SPA-LSTM has the highest model accuracy (96%). The results show that, under the same characteristic band screening algorithm (UVE, CARS or SPA), the accuracy of LSTM is markedly higher (by 7-16%) than that of SVM, i.e. the deep learning method is clearly superior to the traditional machine learning method. Under the same modeling algorithm (SVM or LSTM), the classification accuracy of the band screening methods ranks SPA > CARS > NONE > UVE, indicating that the SPA method screens characteristic bands better than the CARS method, the UVE method and the full band.
Table 4. Accuracy of different classification models under the two imaging techniques.
4 Performance comparison of different network models for damage symptom classification
To intuitively compare the classification performance of the RGB imaging and hyperspectral imaging techniques under different models, we evaluated the 14 models using three evaluation indices: Recall, Precision and F1-score (FIG. 8).
The results show that, overall, the indices of the 8 models under the hyperspectral imaging technique are clearly better than those of the 6 models under the RGB imaging technique, and that the SPA-LSTM model has the best comprehensive performance in classifying the damage severity of tea lesser leafhoppers.
Among the 6 models under the RGB imaging technique, the models enhanced by wavelet transform improve markedly in every index compared with their unenhanced counterparts; the WT-VGG16 model, whose accuracy is 80%, performs best under the RGB imaging technique. Among the 8 models under the hyperspectral imaging technique, the LSTM models have obvious advantages over the SVM models in every index; the SPA-LSTM model, whose accuracy is 96%, stands out under the hyperspectral imaging technique.
Overall, the SPA-LSTM model has the highest three evaluation indices for grades 1 and 2, probably because the damage symptoms of grades 1 and 2 are relatively similar and the combination of characteristic band screening and the deep learning network distinguishes them well. For grade 3, the SPA-LSTM model has the highest Recall, while its Precision and F1-score are 95.74% and 97.83%, respectively, differing only slightly (by 3.14% and 0.49%) from the 98.88% and 98.32% of the NONE-LSTM model. This is because grade 3 is severe damage, its spectral features differ clearly from the other two grades, and every model can distinguish grade 3 damage symptoms well. In short, the SPA-LSTM model provides the best comprehensive severity classification for tea lesser leafhopper damage.
5 Confusion matrix
The confusion matrix is one method of evaluating model performance. To compare the misclassifications of the three damage symptom severity levels by the different network models, we plotted confusion matrices for the 14 network models (FIG. 9). In general, the misclassification rate of the 8 hyperspectral imaging models (FIG. 9 g-n) is significantly lower than that of the 6 RGB imaging models (FIG. 9 a-f), and that of the wavelet-transform-enhanced RGB models (FIG. 9 d-f) is significantly lower than that of the ordinary RGB models (FIG. 9 a-c). The SPA-LSTM model (FIG. 9 m) shows the fewest misclassifications, indicating that the model accurately distinguishes the 3 damage symptom severity levels.
In all 6 RGB imaging models, grades 1 and 2 cannot be well distinguished: about 45% of the samples belonging to grade 1 and about 35% of those belonging to grade 2 are mispredicted into the other two grades, while only about 26% of the grade 3 samples are mispredicted. This shows that the models have difficulty distinguishing grades 1 and 2, but still distinguish grade 3 well. There are two reasons. First, the damage characteristics of grades 1 and 2 are similar, making them difficult for the models to distinguish. Second, RGB images provide limited information, and their accuracy in identifying the extent of damage symptoms is far below that of 176-band hyperspectral images, so the features of the two damage grades are confused. In subsequent studies, we can therefore enlarge the dataset and use a higher-resolution RGB camera to improve model accuracy.
The misclassification rates of the 8 hyperspectral imaging models are much lower than those of the RGB imaging models. Under hyperspectral imaging, the misclassification rate of the LSTM models (FIG. 9 k-n) is lower than that of the SVM models (FIG. 9 g-j), again indicating that the deep learning method is clearly superior to the traditional machine learning method. Finally, we found that the SPA-LSTM model classifies most accurately, with the fewest misclassifications: samples belonging to grade 1 are all predicted correctly, only about 6% of grade 2 samples and 4% of grade 3 samples are mispredicted into the other grades, and the overall accuracy of the model is 96%. These results show that the SPA-LSTM model is robust, demonstrate the effectiveness and stability of the method, and confirm that the damage degree of tea lesser leafhoppers can be accurately graded.
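A confusion matrix of the kind plotted in FIG. 9 can be built by tallying (true grade, predicted grade) pairs; the counts below are hypothetical and only illustrate how overall accuracy is read off the diagonal.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows: true severity grade; columns: predicted grade."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Hypothetical predictions for grades 0..2 (grades 1-3 in the text)
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 2]
y_pred = [0, 1, 1, 1, 0, 2, 2, 2, 2, 1]
cm = confusion_matrix(y_true, y_pred)
print(cm)
print(np.trace(cm) / cm.sum())   # overall accuracy from the matrix diagonal
```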
In summary, as shown by the examples and the data analysis, the best model of the invention is the SPA-LSTM model, with an accuracy as high as 96%.
The present invention has been described above by way of example, but the present invention is not limited to the above-described embodiments, and any modifications or variations based on the present invention fall within the scope of the present invention.

Claims (4)

CN202410528963.2A, filed 2024-04-29: A method for building a tea green leafhopper damage symptom detection model and its application (CN118379633B, Active)

Publications (2)
CN118379633A, published 2024-07-23
CN118379633B, published 2025-04-15





