Vehicle detector training method based on multi-subregion image feature automatic learning
Technical Field
The invention relates to the technical field of computer vision, in particular to a vehicle detector training method based on multi-subregion image feature automatic learning.
Background
With the rapid development of sensor and electronic technology, Advanced Driver Assistance Systems (ADAS) have become an important development direction in the automotive industry. Detection of the vehicle ahead plays a crucial role in vision-based ADAS and is the basis of applications such as vision-based distance measurement and forward collision avoidance.
Classifiers for object detection are trained by learning, and the appearance features of a whole candidate area are generally extracted as the basis for classification. If the area is large, some interfering content from the background image is inevitably included; if a small area is used, the resolution is reduced, which tends to reduce the separability of the area.
In an image area containing a vehicle, some parts carry abundant appearance information for distinguishing the vehicle from other objects, while other parts contain hardly any valuable appearance content. Based on this observation, the invention divides the image area into sub-regions and represents the image area by a set of several sub-regions. The features of each sub-region are extracted, concatenated, and reduced in dimension to serve as the features of the area. Under this representation, the number of different sub-region combinations determines the variety of features that can be extracted; the invention selects the better-performing features for vehicle detection with the RealBoost algorithm, each feature corresponds to a weak classifier, several weak classifiers are combined into a strong classifier, and several strong classifiers are cascaded to form a vehicle detector. The advantage of this representation is that not only can the appearance features of the image be extracted, but the geometrical relations of the sub-regions that produce these appearance features are also implicitly modeled.
Disclosure of Invention
The invention aims to solve the problems in the prior art and provides a vehicle detector training method based on multi-subregion image feature automatic learning.
To solve the above technical problem, the invention adopts the following technical scheme:
the vehicle detector training method based on multi-subregion image feature automatic learning comprises the following steps:
S1, dividing the sample image used for training into sub-regions, and representing the image region by a set of several sub-regions, recorded as: R = (r(x_1, y_1), r(x_2, y_2), ..., r(x_m, y_m); w, h), where R represents the image region, r(x_k, y_k) represents the k-th sub-region in the image region, (x_k, y_k) are the X- and Y-coordinates of the upper-left corner point of the k-th sub-region, m is the number of sub-regions in the image region, and all sub-regions in the same set have the same width w and height h;
S2, calculating the Histogram of Oriented Gradients (HOG) features of each sub-region in the image region, sorting the sub-regions according to the positions of their upper-left corners, concatenating the HOGs of the sub-regions into a vector in the sorted order, and performing normalization and dimension-reduction processing on the concatenated vector to obtain the features of the image region;
S3, supposing the image region contains M sub-regions in total and m sub-regions are taken to form a sub-region set, there are
C(M, m) = M! / (m! (M − m)!)
different combination modes; each combination mode corresponds to a weak classifier, and the weak classifiers whose vehicle-detection performance meets the preset standard are selected by the RealBoost algorithm and combined into a strong classifier;
S4, cascading a plurality of strong classifiers to form the vehicle detector, where each stage corresponds to one strong classifier.
Based on the technical scheme, the steps can be realized in the following preferred mode.
The calculating of the HOG feature of each sub-region in the image region in S2 includes:
Let G_x and G_y be the gradient images of the input image in the X and Y directions, respectively, and let G_x(u, v) and G_y(u, v) be the gradient strengths of pixel (u, v) in the X and Y directions. The gradient strength G(u, v) and gradient direction α(u, v) of pixel (u, v) are calculated according to the following formulas:
G(u, v) = sqrt( G_x(u, v)^2 + G_y(u, v)^2 )
α(u, v) = arctan( G_y(u, v) / G_x(u, v) )
the gradient directions are quantized uniformly into b levels, and the HOG feature of each sub-region is a vector containing b elements, the ith element of the vector being the sum of the gradient strengths of all pixels in the sub-region having the gradient directions covered by the ith quantization level.
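For illustration, a minimal NumPy sketch of this per-sub-region HOG computation is given below. The helper names (gradient_maps, sub_region_hog) are chosen here for illustration only, and the direction is quantized over the full circle with arctan2; the α(u, v) of the formulas above can be binned analogously.

```python
import numpy as np

def gradient_maps(image):
    """Central-difference gradients, magnitude and orientation of a grayscale image."""
    img = image.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # G_x(u, v) = I(u+1, v) - I(u-1, v)
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # G_y(u, v) = I(u, v+1) - I(u, v-1)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)   # gradient strength G(u, v)
    orientation = np.arctan2(gy, gx)         # gradient direction, in (-pi, pi]
    return magnitude, orientation

def sub_region_hog(magnitude, orientation, x, y, w, h, b):
    """b-bin HOG of the w x h sub-region whose upper-left corner is (x, y):
    bin i accumulates the gradient strengths of all pixels whose quantized
    gradient direction falls in the i-th level."""
    mag = magnitude[y:y + h, x:x + w].ravel()
    ang = orientation[y:y + h, x:x + w].ravel()
    bins = np.floor((ang + np.pi) / (2 * np.pi) * b).astype(int)  # uniform quantization
    bins = np.clip(bins, 0, b - 1)
    hog = np.zeros(b)
    np.add.at(hog, bins, mag)                # sum gradient strengths per level
    return hog
```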
In S2, the method of connecting the HOG of each sub-region into a vector, and performing normalization and dimension reduction on the connected vector includes the following steps:
S21, supposing there are m sub-regions and the HOG of each sub-region is a vector containing b elements, the concatenated vector is denoted H and contains b × m elements; Min-Max normalization is applied to H according to the following formula:
H(j) = (H(j) − min(H)) / (max(H) − min(H))
where min(H) and max(H) represent the minimum and maximum element values of the vector H, respectively, and H(j) is the j-th element of H;
S22, for each weak classifier, calculating the HOG of each sub-region according to the sub-region combination defined by that weak classifier, concatenating the HOGs, and applying the Min-Max normalization of S21; let H_i denote the resulting column vector for the i-th positive sample. The covariance matrix is calculated according to the following formula:
S = (1/N_p) · Σ_{i=1}^{N_p} (H_i − μ)(H_i − μ)^T
where N_p is the number of positive samples and μ is the mean vector calculated over all positive samples;
S23, solving the eigenvalues of the covariance matrix S, sorting all eigenvalues of S from largest to smallest, selecting in turn the d eigenvalues λ_0, λ_1, ..., λ_{d−1} that are larger than the threshold, taking the eigenvector c_i corresponding to each λ_i, forming a matrix C with c_0, c_1, ..., c_{d−1} as its column vectors, and performing the dimension-reduction operation on H_i as follows:
H_i = C^T H_i.
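For illustration, S21–S23 can be sketched with NumPy as follows, assuming the normalized concatenated HOG vectors of all positive samples are stacked as the columns of a matrix; the helper names (min_max_normalize, fit_projection, reduce_dim) and the eig_threshold parameter are illustrative.

```python
import numpy as np

def min_max_normalize(h):
    """S21: Min-Max normalization of a concatenated HOG vector H."""
    lo, hi = h.min(), h.max()
    return (h - lo) / (hi - lo) if hi > lo else np.zeros_like(h)

def fit_projection(H_pos, d, eig_threshold=0.0):
    """S22-S23: covariance over the positive samples and the PCA projection matrix C.

    H_pos: array of shape (b*m, N_p), one normalized column vector per positive sample.
    Returns C of shape (b*m, d) whose columns are eigenvectors of the d largest
    eigenvalues above eig_threshold."""
    mu = H_pos.mean(axis=1, keepdims=True)        # mean vector over positive samples
    diff = H_pos - mu
    S = diff @ diff.T / H_pos.shape[1]            # covariance matrix
    eigval, eigvec = np.linalg.eigh(S)            # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]              # sort from largest to smallest
    keep = [i for i in order if eigval[i] > eig_threshold][:d]
    return eigvec[:, keep]                        # matrix C

def reduce_dim(C, h):
    """Dimension reduction H_i = C^T H_i."""
    return C.T @ h
```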
the method for combining the weak classifiers which have the performance meeting the preset standard for vehicle detection and are selected by the RealBoost algorithm to form the strong classifier in the S3 comprises the following steps:
S31, iterating t from 1 to T, where T is the maximum number of weak classifiers allowed in a preset strong classifier, and selecting one weak classifier in each iteration;
S32, taking the positive samples and negative samples in the training sample set as input, calculating the weighted misclassification loss function of each weak classifier, selecting the weak classifier with the minimum loss value as the optimal weak classifier, and adding the optimal weak classifier to the current strong classifier;
S33, updating the weight of each sample according to the following formula:
w_i ← w_i · exp(−y_i · f_t(x_i)) / Z
where x_i represents the i-th sample, y_i is the label of the sample (y_i is +1 for a positive sample and −1 for a negative sample), w_i is the weight of sample x_i, f_t(x_i) represents the output produced by the optimal weak classifier selected in the t-th iteration when classifying sample x_i, and Z is a normalization factor calculated as:
Z = Σ_i w_i · exp(−y_i · f_t(x_i))
S34, classifying the test samples with the current strong classifier; if the detection rate is greater than a preset first threshold and the false-alarm rate is less than a preset second threshold, ending the iteration and outputting the strong classifier F(x), otherwise continuing with the next iteration. The strong classifier F(x) is given by the following equation:
F(x) = sign( Σ_{t=1}^{T'} f_t(x) + q )
where sign(·) denotes the sign operation, T' is the number of weak classifiers actually contained in the strong classifier, q is a constant with a default value of 0, and the detection rate η and the false-alarm rate e are calculated respectively according to the following formulas:
η = N_pp / N_p
e = N_fp / N_f
where N_p is the number of all positive samples, N_pp is the number of positive samples correctly detected, N_f is the number of all negative samples, and N_fp is the number of negative samples erroneously detected as positive.
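A compact NumPy illustration of the S33 weight update and the S34 detection/false-alarm rates follows; the exponential update shown is the standard RealBoost form assumed here for the formulas above, and the function names are illustrative.

```python
import numpy as np

def update_weights(w, y, f_out):
    """S33: w_i <- w_i * exp(-y_i * f_t(x_i)) / Z  (standard RealBoost-style update)."""
    w = w * np.exp(-y * f_out)
    return w / w.sum()                      # Z renormalizes the weights to sum to 1

def detection_and_false_alarm(F_out, y):
    """S34: detection rate eta = N_pp / N_p, false-alarm rate e = N_fp / N_f."""
    pred_pos = F_out > 0
    n_p = np.sum(y == 1)
    n_f = np.sum(y == -1)
    n_pp = np.sum(pred_pos & (y == 1))      # positive samples correctly detected
    n_fp = np.sum(pred_pos & (y == -1))     # negatives wrongly detected as positive
    return n_pp / n_p, n_fp / n_f
```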
When the optimal weak classifier with the minimum misclassification function value is selected in S32, the method for calculating the misclassification function is as follows:
Set a weak classifier to be selected and, according to the procedure of S2, normalize and dimension-reduce its HOG features to form a vector H; regard H as a multidimensional random vector composed of several variables, H = (h_0, h_1, ..., h_{d−1}), where d is the dimension of the reduced vector. For each variable h_i, divide its value range into L intervals: interval 0 corresponds to h_i ≤ −1, interval L − 1 corresponds to h_i > +1, and the n-th interval, where 0 < n < L − 1, corresponds to the n-th of the L − 2 equal sub-intervals into which (−1, +1] is uniformly divided;
The sum of the weights of the training samples falling in a given combination of intervals is calculated according to the following formula:
P(a_0, a_1, ..., a_{d−1}) = Σ_i w_i · Π_{j=0}^{d−1} δ( Tr(H_i^(j)), a_j )
where a_j ∈ [0, L − 1] denotes one of the L intervals, δ is the Kronecker function, H_i denotes the vector obtained for training sample x_i through the concatenation, normalization and dimension-reduction processing corresponding to the weak classifier to be selected, H_i^(j) denotes the j-th variable of that vector, and Tr(·) is a mapping function: if H_i^(j) falls within the u-th of the L intervals, Tr(·) maps it to u;
The weak classifier corresponding to the feature to be selected is:
f(H) = V( Tr(h_0), Tr(h_1), ..., Tr(h_{d−1}) )
where V(a_0, a_1, ..., a_{d−1}) is computed from P_positive(a_0, a_1, ..., a_{d−1}) and P_negative(a_0, a_1, ..., a_{d−1}), the sums of weights calculated over the positive samples and over the negative samples, respectively.
If the weak classifier misclassifies sample x_i, that is, sign(f(x_i)) ≠ sign(y_i), then the misclassification loss produced by the weak classifier to be selected when classifying all samples is calculated according to the following formula:
Loss(f) = Σ_{i : sign(f(x_i)) ≠ sign(y_i)} w_i
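As an illustrative sketch of this table-based weak classifier, the code below builds the joint weight histograms P_positive and P_negative over interval combinations and evaluates the misclassification loss. The lookup value V = 0.5·ln((P_positive + ε)/(P_negative + ε)) and the uniform binning of (−1, +1] used in tr() are assumptions in the common RealBoost style, since the exact formulas are not reproduced above; the names and the smoothing term ε are illustrative.

```python
import numpy as np

def tr(value, L):
    """Map a variable value to one of L intervals: 0 for v <= -1, L-1 for v > +1,
    and a uniform partition of (-1, +1] in between (assumed uniform binning)."""
    if value <= -1.0:
        return 0
    if value > 1.0:
        return L - 1
    return min(1 + int((value + 1.0) / 2.0 * (L - 2)), L - 2)

def build_weak_classifier(H, y, w, L, eps=1e-6):
    """Joint weight histograms over interval combinations and the lookup table V.

    H: (N, d) reduced feature vectors; y: labels in {+1, -1}; w: sample weights."""
    N, d = H.shape
    P_pos = np.zeros((L,) * d)
    P_neg = np.zeros((L,) * d)
    for i in range(N):
        idx = tuple(tr(H[i, j], L) for j in range(d))
        if y[i] == 1:
            P_pos[idx] += w[i]
        else:
            P_neg[idx] += w[i]
    return 0.5 * np.log((P_pos + eps) / (P_neg + eps))   # assumed RealBoost lookup V

def weak_output(V, h, L):
    """Evaluate the weak classifier on one reduced feature vector h."""
    return V[tuple(tr(v, L) for v in h)]

def misclassification_loss(V, H, y, w, L):
    """Sum of the weights of samples whose output sign disagrees with the label."""
    out = np.array([weak_output(V, H[i], L) for i in range(len(y))])
    return np.sum(w[np.sign(out) != y])
```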
the plurality of strong classifiers described in S4 constitute the vehicle detector in a cascade, including the steps of:
S41, after a strong classifier is newly trained, if the existing cascade classifier is empty, the strong classifier becomes the stage-0 classifier; if the existing cascade classifier already contains a stage-k classifier, the strong classifier is added as the stage-(k+1) classifier of the cascade;
S42, detecting the test images with the cascade classifier to which the strong classifier has just been added; if the detection rate is greater than a preset third threshold and the false-detection rate is less than a preset fourth threshold, or the total number of stages of the cascade classifier has reached a preset number, outputting the cascade classifier and ending the training process; otherwise, collecting the misclassified positive and negative samples, adding them to the positive and negative sample sets respectively, deleting from the negative sample set the negative samples correctly classified by the currently trained strong classifier, and training a new strong classifier with the updated positive and negative sample sets.
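For illustration, the cascade construction of S41–S42 can be written as the training loop below; train_strong_classifier, evaluate, and the threshold names are placeholders for the procedures described above, not a fixed interface.

```python
def train_cascade(pos_set, neg_set, test_set, max_stages,
                  det_threshold, fa_threshold,
                  train_strong_classifier, evaluate):
    """S41-S42: cascade strong classifiers, bootstrapping hard samples between stages.

    train_strong_classifier(pos, neg) -> strong classifier (callable, sign output)
    evaluate(cascade, samples) -> (detection_rate, false_rate, false_pos, false_neg)"""
    cascade = []                                   # stage 0, 1, ... classifiers
    while True:
        strong = train_strong_classifier(pos_set, neg_set)
        cascade.append(strong)                     # add as the next stage

        det, fa, false_pos, false_neg = evaluate(cascade, test_set)
        if (det > det_threshold and fa < fa_threshold) or len(cascade) >= max_stages:
            return cascade                         # training finished

        # Bootstrapping: add misclassified samples to the training sets and drop
        # the negatives that the newly trained stage already rejects correctly.
        pos_set = pos_set + false_neg
        neg_set = [n for n in neg_set if strong(n) > 0] + false_pos
```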
The present invention has the advantage over the prior art that not only appearance features of an image can be extracted, but also the geometrical relations of the sub-regions that produce these appearance features are implicitly modeled.
Drawings
FIG. 1 shows a schematic diagram of a positive sample image for training;
FIG. 2 illustrates representing an image region by a set of sub-regions;
FIG. 3 is a schematic flow chart of a vehicle detector training method based on multi-subregion image feature automatic learning according to an embodiment of the present invention;
fig. 4 illustrates a flow diagram for selecting the optimal weak classifier.
Detailed Description
The invention will be further elucidated and described with reference to the drawings and the detailed description.
The invention discloses a vehicle detector training method based on multi-subregion image feature automatic learning, which comprises the following steps of:
S1, dividing the sample image used for training into sub-regions, and representing the image region by a set of several sub-regions, recorded as: R = (r(x_1, y_1), r(x_2, y_2), ..., r(x_m, y_m); w, h), where R represents the image region, r(x_k, y_k) represents the k-th sub-region in the image region, (x_k, y_k) are the X- and Y-coordinates of the upper-left corner point of the k-th sub-region, m is the number of sub-regions in the image region, and all sub-regions in the same set have the same width w and height h;
S2, calculating the HOG features of each sub-region in the image region, sorting the sub-regions according to the positions of their upper-left corners, concatenating the HOGs of the sub-regions into a vector in the sorted order, and performing normalization and dimension-reduction processing on the concatenated vector to obtain the features of the image region;
the specific implementation manner of calculating the HOG feature of each sub-region in the image region is as follows:
Let G_x and G_y be the gradient images of the input image in the X and Y directions, respectively, and let G_x(u, v) and G_y(u, v) be the gradient strengths of pixel (u, v) in the X and Y directions. The gradient strength G(u, v) and gradient direction α(u, v) are calculated according to the following formulas:
G(u, v) = sqrt( G_x(u, v)^2 + G_y(u, v)^2 )
α(u, v) = arctan( G_y(u, v) / G_x(u, v) )
the gradient directions are quantized uniformly into b levels, and the HOG feature of each sub-region is a vector containing b elements, the ith element of the vector being the sum of the gradient strengths of all pixels in the sub-region having the gradient directions covered by the ith quantization level.
The HOGs of all the sub-areas are connected into a vector, and the vector obtained by the connection is subjected to normalization and dimension reduction treatment, and the method comprises the following steps:
S21, supposing there are m sub-regions and the HOG of each sub-region is a vector containing b elements, the concatenated vector is denoted H and contains b × m elements; Min-Max normalization is applied to H according to the following formula:
H(j) = (H(j) − min(H)) / (max(H) − min(H))
where min(H) and max(H) represent the minimum and maximum element values of the vector H, respectively, and H(j) is the j-th element of H;
S22, for each weak classifier, calculating the HOG of each sub-region according to the sub-region combination defined by that weak classifier, concatenating the HOGs, and applying the Min-Max normalization of S21; let H_i denote the resulting column vector for the i-th positive sample. The covariance matrix is calculated according to the following formula:
S = (1/N_p) · Σ_{i=1}^{N_p} (H_i − μ)(H_i − μ)^T
where N_p is the number of positive samples and μ is the mean vector calculated over all positive samples;
S23, solving the eigenvalues of the covariance matrix S, sorting all eigenvalues of S from largest to smallest, selecting in turn the d eigenvalues λ_0, λ_1, ..., λ_{d−1} that are larger than the threshold, taking the eigenvector c_i corresponding to each λ_i, forming a matrix C with c_0, c_1, ..., c_{d−1} as its column vectors, and performing the dimension-reduction operation on H_i as follows:
H_i = C^T H_i.
S3, supposing the image region contains M sub-regions in total and m sub-regions are taken to form a sub-region set, there are
C(M, m) = M! / (m! (M − m)!)
different combination modes; each combination mode corresponds to a weak classifier, and the weak classifiers whose vehicle-detection performance meets the preset standard are selected by the RealBoost algorithm and combined into a strong classifier;
The method of selecting, with the RealBoost algorithm, the weak classifiers whose vehicle-detection performance meets the preset standard and combining them into a strong classifier comprises the following steps:
S31, iterating t from 1 to T, where T is the maximum number of weak classifiers allowed in a preset strong classifier, and selecting one weak classifier in each iteration;
S32, taking the positive samples and negative samples in the training sample set as input, calculating the weighted misclassification loss function of each weak classifier, selecting the weak classifier with the minimum loss value as the optimal weak classifier, and adding it to the current strong classifier. The misclassification loss function is calculated as follows:
Set a weak classifier to be selected and, according to the procedure of S2, normalize and dimension-reduce its HOG features to form a vector H; regard H as a multidimensional random vector composed of several variables, H = (h_0, h_1, ..., h_{d−1}), where d is the dimension of the reduced vector. For each variable h_i, divide its value range into L intervals: interval 0 corresponds to h_i ≤ −1, interval L − 1 corresponds to h_i > +1, and the n-th interval, where 0 < n < L − 1, corresponds to the n-th of the L − 2 equal sub-intervals into which (−1, +1] is uniformly divided;
The sum of the weights of the training samples falling in a given combination of intervals is calculated according to the following formula:
P(a_0, a_1, ..., a_{d−1}) = Σ_i w_i · Π_{j=0}^{d−1} δ( Tr(H_i^(j)), a_j )
where a_j ∈ [0, L − 1] denotes one of the L intervals, δ is the Kronecker function, H_i denotes the vector obtained for training sample x_i through the concatenation, normalization and dimension-reduction processing corresponding to the weak classifier to be selected, H_i^(j) denotes the j-th variable of that vector, and Tr(·) is a mapping function: if H_i^(j) falls within the u-th of the L intervals, Tr(·) maps it to u;
The weak classifier corresponding to the feature to be selected is:
f(H) = V( Tr(h_0), Tr(h_1), ..., Tr(h_{d−1}) )
where V(a_0, a_1, ..., a_{d−1}) is computed from P_positive(a_0, a_1, ..., a_{d−1}) and P_negative(a_0, a_1, ..., a_{d−1}), the sums of weights calculated over the positive samples and over the negative samples, respectively.
If the weak classifier misclassifies sample x_i, that is, sign(f(x_i)) ≠ sign(y_i), then the misclassification loss produced by the weak classifier to be selected when classifying all samples is calculated according to the following formula:
Loss(f) = Σ_{i : sign(f(x_i)) ≠ sign(y_i)} w_i
S33, updating the weight of each sample according to the following formula:
w_i ← w_i · exp(−y_i · f_t(x_i)) / Z
where x_i represents the i-th sample, y_i is the label of the sample (y_i is +1 for a positive sample and −1 for a negative sample), w_i is the weight of sample x_i, f_t(x_i) represents the output produced by the optimal weak classifier selected in the t-th iteration when classifying sample x_i, and Z is a normalization factor calculated as:
Z = Σ_i w_i · exp(−y_i · f_t(x_i))
S34, classifying the test samples with the current strong classifier; if the detection rate is greater than a preset first threshold and the false-alarm rate is less than a preset second threshold, ending the iteration and outputting the strong classifier F(x), otherwise continuing with the next iteration. The strong classifier F(x) is given by the following equation:
F(x) = sign( Σ_{t=1}^{T'} f_t(x) + q )
where sign(·) denotes the sign operation, T' is the number of weak classifiers actually contained in the strong classifier, q is a constant with a default value of 0, and the detection rate η and the false-alarm rate e are calculated respectively according to the following formulas:
η = N_pp / N_p
e = N_fp / N_f
where N_p is the number of all positive samples, N_pp is the number of positive samples correctly detected, N_f is the number of all negative samples, and N_fp is the number of negative samples erroneously detected as positive.
S4, a plurality of strong classifiers are cascaded to form the vehicle detector, wherein each stage corresponds to one strong classifier. The method specifically comprises the following steps:
S41, after a strong classifier is newly trained, if the existing cascade classifier is empty, the strong classifier becomes the stage-0 classifier; if the existing cascade classifier already contains a stage-k classifier, the strong classifier is added as the stage-(k+1) classifier of the cascade;
S42, detecting the test images with the cascade classifier to which the strong classifier has just been added; if the detection rate is greater than a preset third threshold and the false-detection rate is less than a preset fourth threshold, or the total number of stages of the cascade classifier has reached a preset number, outputting the cascade classifier and ending the training process; otherwise, collecting the misclassified positive and negative samples, adding them to the positive and negative sample sets respectively, deleting from the negative sample set the negative samples correctly classified by the currently trained strong classifier, and training a new strong classifier with the updated positive and negative sample sets.
Examples
Based on the above method, the embodiment describes, with reference to specific examples, an implementation process of a vehicle detector training method based on multi-subregion image feature automatic learning.
Fig. 1 is a schematic diagram of a positive sample image for training a vehicle detector according to an embodiment of the present invention. Referring to Fig. 1, in this embodiment the vehicle region is labeled manually; the labeling criterion is that the positive sample image must cover the left and right edges of the vehicle, each extended outward by 2%, and the top and bottom of the vehicle, each extended outward by 2%. The labeled vehicle regions are extracted and saved as positive sample images, all of which are scaled to a uniform size; this embodiment sets the size to 30 pixels high and 36 pixels wide. The negative sample images are road or roadside natural-scene images containing no vehicles.
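As a purely illustrative sketch of this sample preparation, the following function expands a labeled vehicle box by 2% on each side, crops it, and scales it to 36×30 pixels; the box format (x, y, w, h) and the use of OpenCV's cv2.resize are assumptions of this sketch.

```python
import cv2

def make_positive_sample(image, box, out_w=36, out_h=30, margin=0.02):
    """Expand the labeled vehicle box by 2% on every side, crop, and scale."""
    x, y, w, h = box                          # labeled vehicle region, in pixels
    dx, dy = int(round(w * margin)), int(round(h * margin))
    x0, y0 = max(x - dx, 0), max(y - dy, 0)
    x1 = min(x + w + dx, image.shape[1])
    y1 = min(y + h + dy, image.shape[0])
    crop = image[y0:y1, x0:x1]
    return cv2.resize(crop, (out_w, out_h))   # uniform 36 x 30 positive sample
```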
The sample image areas of uniform size are divided into sub-regions; several sub-regions r1, r2, r3 are taken (see Fig. 2), the image area is represented by the set of these sub-regions, and this representation is recorded as: R = (r(x_1, y_1), r(x_2, y_2), ..., r(x_m, y_m); w, h). In this representation, r(x_k, y_k) denotes the k-th sub-region in the sub-region set, whose upper-left corner point has X- and Y-coordinates (x_k, y_k). The number of sub-regions in a set is m, and all sub-regions in the same set have the same width w and height h.
The image area is represented by a set of sub-regions. If the image area contains M sub-regions in total and m of them form a set, there are
C(M, m) = M! / (m! (M − m)!)
different possible combinations. The number of sub-regions depends on the size of the image region, the size of the sub-regions, the amount of overlap between sub-regions, and other factors. In this embodiment, the size of a sub-region is 8 × 8, adjacent sub-regions overlap by 5 pixels in the X and Y directions respectively, and the number of sub-regions in a set is 2 or 3. With this arrangement, and with the height and width of the positive samples in the training set set to 30 and 36 pixels respectively, the number of sub-regions is easily calculated to be 80. According to the formula for the number of combinations, taking 2 of the 80 sub-regions yields about 3000 different combinations, and taking 3 of the 80 sub-regions yields about 80000 different combinations.
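These counts can be verified directly; the short check below uses only the Python standard library.

```python
from math import comb

# 8x8 sub-regions overlapping by 5 pixels -> 3-pixel stride over a 36x30 sample:
# 10 positions horizontally and 8 vertically.
n_sub = ((36 - 8) // 3 + 1) * ((30 - 8) // 3 + 1)

print(n_sub)        # 80
print(comb(80, 2))  # 3160  (~3000 combinations of 2 sub-regions)
print(comb(80, 3))  # 82160 (~80000 combinations of 3 sub-regions)
```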
The image area is represented as a set of several sub-areas, and the image features for vehicle detection come from the sub-areas in the set. Specifically, the present embodiment calculates the HOG features of each sub-region, sorts the sub-regions from top to bottom and from left to right according to the top left corner positions of the sub-regions, connects the HOG of each sub-region into a vector according to the sorted order, performs dimension reduction processing on the vector obtained by the connection, and uses the vector after dimension reduction as the features of the image region.
Some sub-regions and combinations of sub-regions in the vehicle image have appearance features that clearly distinguish the vehicle from other objects at those locations, while others have little texture variation. Clearly, extracting features from the single or multiple sub-regions that carry vehicle-specific appearance and designing a classifier on those features is advantageous for improving the performance of the vehicle detector. In this embodiment, sub-region combinations correspond one-to-one to weak classifiers: one combination corresponds to one weak classifier, the weak classifiers with better discriminability for vehicle detection are selected by the RealBoost algorithm, a strong classifier is constructed from several weak classifiers, and the vehicle detector is constructed from several strong classifiers in a cascade.
As shown in FIG. 3, the vehicle detector training method flow based on multi-subregion image feature automatic learning of the present invention may include the following steps:
step 301, initializing a cascade classifier to be null;
the vehicle detector of the embodiment is a cascade classifier, wherein each stage comprises a strong classifier, the first added strong classifier is a 0 th stage classifier, the last added strong classifier is a K-1 th stage classifier, and K is the maximum preset stage number;
Step 302, inputting the positive and negative sample sets, where the number of positive samples is N_p and the number of negative samples is N_f; initializing the weight of each sample, where the weight of each positive sample is 1/(2N_p) and the weight of each negative sample is 1/(2N_f); initializing the strong classifier to be trained to contain 0 weak classifiers;
Step 303, iterating t from 1 to T, where T is the maximum number of weak classifiers allowed in a preset strong classifier; one weak classifier is selected in each iteration;
step 304, calculating a weighted misclassification function of each weak classifier to be selected, selecting the weak classifier with the minimum misclassification function value as an optimal weak classifier, and adding the optimal weak classifier into the current strong classifier;
Step 305, updating the weight of each sample according to the following formula:
w_i ← w_i · exp(−y_i · f_t(x_i)) / Z
where f_t(·) denotes the optimal weak classifier selected in the t-th iteration (when a sample is correctly classified, the output for a positive sample is a positive real number and the output for a negative sample is a real number not greater than 0), x_i denotes a sample, y_i is the label corresponding to x_i, with y_i = +1 if x_i is a positive sample and y_i = −1 otherwise, and Z is a normalization factor calculated as follows:
Z = Σ_i w_i · exp(−y_i · f_t(x_i))
Step 306, classifying the test samples using the current strong classifier and calculating the detection rate η and the false-alarm rate e, where the detection rate is calculated as
η = N_pp / N_p
in which N_pp is the number of positive samples correctly identified, and the false-alarm rate is calculated as
e = N_fp / N_f
in which N_fp is the number of negative samples incorrectly identified as positive samples;
Step 307, if the detection rate η is greater than a predetermined threshold η_S and the false-alarm rate e is less than a preset threshold e_S, go to step 308; otherwise go to step 303. In this embodiment of the invention, η_S and e_S are set to 0.995 and 0.3, respectively;
Step 308, ending the iteration and outputting the strong classifier given by the following formula:
F(x) = sign( Σ_{t=1}^{T'} f_t(x) + q )
where sign(·) represents the sign function; if the function value is positive, the strong classifier judges the sample to be a positive sample, otherwise a negative sample; T' is the number of weak classifiers actually included in the strong classifier, and q is a constant with a default value of 0;
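For illustration, evaluating the strong classifier of step 308 on a sample can be sketched as follows; weak_classifiers stands for the T' selected weak classifiers f_t and is an illustrative name.

```python
def strong_classify(weak_classifiers, x, q=0.0):
    """F(x) = sign(sum_t f_t(x) + q): positive -> vehicle, otherwise -> non-vehicle."""
    score = sum(f(x) for f in weak_classifiers) + q
    return 1 if score > 0 else -1
```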
Step 309, adding the strong classifier output in step 308 to the cascade classifier; if the existing cascade classifier is empty, the strong classifier becomes the stage-0 classifier; if the existing cascade classifier already contains a stage-k classifier, it is added as the stage-(k+1) classifier;
step 310, detecting a test image by using a current cascade classifier;
Step 311, if the detection rate is greater than the preset threshold η_T and the false-alarm rate is less than the preset threshold e_T, or the total number of stages of the cascade classifier has reached the preset number, the cascade classifier is output and the training process ends; otherwise, step 312 is executed. In this embodiment, η_T and e_T are set to 0.995 and 0.5 × 10^−5, respectively;
Step 312, collecting the misclassified positive and negative samples, adding them to the positive and negative sample sets respectively, deleting from the negative sample set the negative samples correctly classified by the currently trained strong classifier, and going to step 302.
In step 304, the weighted misclassification loss function of each weak classifier to be selected is calculated, and the weak classifier with the smallest loss value is selected as the optimal weak classifier; Fig. 4 shows the specific steps, which may include:
Step 401, calculating the gradient intensity map and the gradient direction map of the sample image. In this embodiment, the gradient images of the sample image in the X and Y directions are calculated according to the following formulas:
G_x(u, v) = I(u+1, v) − I(u−1, v) (6)
G_y(u, v) = I(u, v+1) − I(u, v−1) (7)
where G_x(u, v) and G_y(u, v) represent the values of the gradient images in the X and Y directions at pixel (u, v), respectively, and I is the sample image; the gradient intensity and gradient direction are then calculated as
G(u, v) = sqrt( G_x(u, v)^2 + G_y(u, v)^2 ) (8)
α(u, v) = arctan( G_y(u, v) / G_x(u, v) ) (9)
Step 402, calculating the features corresponding to the different sub-region combinations. In this embodiment an image region is represented by a set of sub-regions, and different sub-region combinations correspond to different weak classifiers. For a weak classifier to be selected, the features of a sample image corresponding to that weak classifier are calculated as follows. First, the HOG features of each sub-region are calculated: the gradient directions (in angle) obtained from formula (9) are uniformly quantized into b levels; the HOG feature of each sub-region is a vector containing b elements, and the i-th element of the vector is the sum of the gradient strengths of all pixels in the sub-region whose gradient directions fall in the i-th quantization level. Second, the m sub-regions in the sub-region set are sorted from top to bottom and from left to right according to the X- and Y-coordinates of their upper-left corner points, and the HOGs of the sub-regions are concatenated in that order into a vector containing b × m elements, denoted H. The concatenated vector H is Min-Max normalized according to the following formula:
H(i) = (H(i) − min(H)) / (max(H) − min(H)) (10)
where min(H) and max(H) represent the minimum and maximum element values of the vector H, respectively, and H(i) is the i-th element of H. Finally, dimension reduction is applied to the vector H, and the reduced vector is used as the feature of the image region;
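The ordering and concatenation in step 402 can be sketched as below, reusing the illustrative sub_region_hog helper from the earlier sketch; the function name region_feature is likewise an illustration, not part of the method itself.

```python
import numpy as np

def region_feature(magnitude, orientation, sub_regions, w, h, b):
    """Concatenate the HOGs of a sub-region set in top-to-bottom, left-to-right
    order of their upper-left corners, then Min-Max normalize (formula (10))."""
    ordered = sorted(sub_regions, key=lambda p: (p[1], p[0]))    # sort by (y, x)
    hogs = [sub_region_hog(magnitude, orientation, x, y, w, h, b)
            for (x, y) in ordered]
    H = np.concatenate(hogs)                                      # b * m elements
    lo, hi = H.min(), H.max()
    return (H - lo) / (hi - lo) if hi > lo else np.zeros_like(H)
```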
Step 403, determining the corresponding weak classifier from the features. For a specific weak classifier to be selected, each sample yields a feature value, expressed as a vector, corresponding to that weak classifier; the dimension-reduced feature is regarded as a multidimensional random vector H = (h_0, h_1, ..., h_{d−1}), where d is the dimension of the reduced vector, and d is 2 or 3 in this embodiment. For each variable h_i, its value range is divided into L intervals: interval 0 corresponds to h_i ≤ −1, interval L − 1 corresponds to h_i > +1, and the n-th interval, where 0 < n < L − 1, corresponds to the n-th of the L − 2 equal sub-intervals into which (−1, +1] is uniformly divided;
In this embodiment, the sum of the weights of the training samples falling in each combination of intervals is calculated according to the following formula:
P(a_0, a_1, ..., a_{d−1}) = Σ_i w_i · Π_{j=0}^{d−1} δ( Tr(H_i^(j)), a_j ) (12)
where a_j ∈ [0, L − 1] denotes one of the L intervals, δ is the Kronecker function, H_i denotes the vector obtained for training sample x_i through the concatenation, normalization and dimension-reduction processing corresponding to the weak classifier to be selected, H_i^(j) denotes the j-th variable of that vector, and Tr(·) is a mapping function: if H_i^(j) falls within the u-th of the L intervals, Tr(·) maps it to u;
If the dimension d after reduction is 2, P(a_0, a_1) in formula (12) is obtained by summing the weights of all samples whose variable h_0 falls within interval a_0 and whose variable h_1 falls within interval a_1; if the dimension d after reduction is 3, P(a_0, a_1, a_2) in formula (12) is obtained by summing the weights of all samples whose h_0 falls within interval a_0, h_1 falls within interval a_1, and h_2 falls within interval a_2;
The weak classifier to be selected is determined according to the following formula:
f(H) = V( Tr(h_0), Tr(h_1), ..., Tr(h_{d−1}) )
where V(a_0, a_1, ..., a_{d−1}) is computed from P_positive(a_0, a_1, ..., a_{d−1}) and P_negative(a_0, a_1, ..., a_{d−1}), the sums of weights calculated by formula (12) over the positive samples and over the negative samples, respectively;
Step 404, calculating the misclassification loss produced by each weak classifier to be selected when classifying the samples, and selecting the weak classifier with the minimum loss value as the optimal weak classifier. First, a weak classifier f misclassifies sample x_i if sign(f(x_i)) ≠ sign(y_i); the misclassification loss produced by the weak classifier to be selected over all samples is calculated according to the following formula:
Loss(f) = Σ_{i : sign(f(x_i)) ≠ sign(y_i)} w_i
where x_i and y_i are a sample and its corresponding label, with y_i = 1 if x_i is a positive sample and y_i = −1 otherwise, and f(x_i) is the classification output of the weak classifier for the sample; if the sign of the output value is inconsistent with the label, the sample is misclassified. Second, the weak classifier with the minimum misclassification loss value is selected as the optimal weak classifier:
f_t = argmin_f Loss(f)
in the foregoingstep 402, the performing dimension reduction on the vector is performed, and the vector after dimension reduction is used as a feature value of the image region, optionally, this embodiment is implemented in a principal component analysis manner, and specifically, the method may include the following steps:
First, for a weak classifier to be selected, let H_i be the column vector formed by Min-Max normalization of the i-th positive sample image according to formula (10); the covariance matrix is calculated according to the following formula:
S = (1/N_p) · Σ_{i=1}^{N_p} (H_i − μ)(H_i − μ)^T
where N_p is the number of positive samples and μ is the mean vector calculated over all positive samples;
Second, solving the eigenvalues of the covariance matrix S, sorting all eigenvalues of S from largest to smallest, selecting in turn the d largest eigenvalues λ_0, λ_1, ..., λ_{d−1}, taking the eigenvector c_i corresponding to each λ_i, forming a matrix C with c_0, c_1, ..., c_{d−1} as its column vectors, and performing the dimension-reduction operation on H_i by the following equation:
H_i = C^T H_i (18)
wherein the eigenvalue λ of the matrix S is a scalar quantity which makes the following expression true, c is an eigenvector corresponding to the eigenvalue λ,
Sc=λc (19)。
the above-described embodiments are merely preferred embodiments of the present invention, which should not be construed as limiting the invention. Various changes and modifications may be made by one of ordinary skill in the pertinent art without departing from the spirit and scope of the present invention. Therefore, the technical scheme obtained by adopting the mode of equivalent replacement or equivalent transformation is within the protection scope of the invention.