Disclosure of Invention
The invention provides a cross-modal pedestrian re-identification method based on a symmetric convolutional neural network, aimed at the problem of inter-modal and intra-modal differences, so as to reduce these differences and improve the effect and accuracy of re-identification.
In order to achieve the purpose, the invention adopts the following technical scheme:
The invention relates to a cross-modal pedestrian re-identification method based on a symmetric convolutional neural network, characterized by comprising the following steps:
Step 1, collecting a visible light image set V of N pedestrians, wherein the j visible light images of the i-th pedestrian are recorded as V_i, with V_i = {V_i1, V_i2, ..., V_ij}; V_ij represents the j-th visible light image of the i-th pedestrian, and the i-th pedestrian is assigned the i-th identity information y_i; i = 1, 2, ..., N;
collecting an infrared light image set T of the N pedestrians with an infrared camera or a depth camera, wherein the m infrared images of the i-th pedestrian are recorded as T_i, with T_i = {T_i1, T_i2, ..., T_im}; T_im represents the m-th infrared image of the i-th pedestrian;
constructing a search library by visible light pictures and infrared light images of other pedestrians with known identity information;
step 2, constructing a symmetrical convolutional neural network consisting of a generator and a discriminator;
the generator consists of two independent ResNet50 networks, where each ResNet50 network consists of d residual submodules; a fully connected layer S_1 is added after the (d-1)-th residual submodule, and a fully connected layer S_2 is added after the d-th residual submodule;
The discriminator consists of a visible light image classifier and an infrared light image classifier;
initializing network weights for the ResNet50 network;
initializing parameters of the full connection layer and the discriminator by adopting a random initialization mode;
Step 3, respectively inputting the visible light image set V and the infrared image set T of the N pedestrians into the two independent ResNet50 networks; the (d-1)-th residual submodule outputs the (d-1)-th group of visible light feature information v_{d-1} and infrared feature information t_{d-1}, which are respectively input into the d-th residual submodule, which outputs the d-th group of visible light feature information v_d and infrared feature information t_d;
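As a minimal sketch (not the patented ResNet50 implementation), the staged feature extraction of step 3, which keeps both the (d-1)-th and d-th submodule outputs, can be illustrated in NumPy; `residual_block`, the toy dimensions and the random weights are hypothetical stand-ins for the actual residual submodules:

```python
import numpy as np

def residual_block(x, w):
    # toy residual submodule: identity shortcut plus a nonlinear transform
    return x + np.tanh(x @ w)

def forward_stages(x, weights):
    """Run d residual submodules and return the (d-1)-th and d-th outputs,
    mirroring how v_{d-1}/t_{d-1} and v_d/t_d are taken from the generator."""
    outputs = []
    for w in weights:
        x = residual_block(x, w)
        outputs.append(x)
    return outputs[-2], outputs[-1]

rng = np.random.default_rng(0)
d = 4  # four residual submodules, as in the described embodiment
weights = [rng.normal(scale=0.1, size=(64, 64)) for _ in range(d)]
batch = rng.normal(size=(8, 64))  # 8 images, 64-dim toy features
v_dm1, v_d = forward_stages(batch, weights)
```

The same forward pass is applied to the infrared stream with its own independent weights.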
Step 4, constructing the (d-1)-th sample feature space X_{d-1};
selecting the visible light feature information and infrared feature information of P pedestrians from all the feature information output by the (d-1)-th residual submodule, where K pieces of feature information are selected from each pedestrian's visible light feature information v_{i,d-1} and K from the infrared feature information t_{i,d-1}, to construct the (d-1)-th sample feature space X_{d-1};
the (d-1)-th sample feature space X_{d-1} is input as a whole into the subsequent fully connected layer S_1, which outputs the (d-1)-th group of visible light feature vectors v'_{d-1} and infrared feature vectors t'_{d-1};
Step 5, constructing the d-th sample feature space X_d;
selecting the visible light feature information and infrared feature information of P pedestrians from all the feature information output by the d-th residual submodule, where K pieces of feature information are selected from each pedestrian's visible light feature information v_{i,d} and K from the infrared feature information t_{i,d}, to construct the d-th sample feature space X_d;
the d-th sample feature space X_d is then input as a whole into the subsequent fully connected layer S_2, which outputs the d-th group of visible light feature vectors v'_d and infrared feature vectors t'_d;
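The P-by-K sampling used to build the sample feature spaces X_{d-1} and X_d can be sketched as follows; the dictionary layout and the helper name `build_sample_space` are illustrative assumptions, not the patent's code:

```python
import numpy as np

def build_sample_space(vis_feats, ir_feats, ids, P, K, seed=0):
    """Pick P pedestrian identities, then K visible and K infrared feature
    vectors per identity (the patent's embodiment fixes P=16, K=4)."""
    rng = np.random.default_rng(seed)
    chosen = rng.choice(ids, size=P, replace=False)
    feats, labels = [], []
    for pid in chosen:
        for pool in (vis_feats[pid], ir_feats[pid]):
            idx = rng.choice(len(pool), size=K, replace=False)
            feats.append(pool[idx])
            labels.extend([pid] * K)
    return np.concatenate(feats), np.array(labels)

rng = np.random.default_rng(1)
ids = np.arange(6)  # toy pool of 6 identities
vis = {i: rng.normal(size=(5, 8)) for i in ids}  # 5 features of dim 8 each
ir = {i: rng.normal(size=(5, 8)) for i in ids}
X, y = build_sample_space(vis, ir, ids, P=4, K=2, seed=1)
```

Each identity thus contributes 2K samples (K per modality), so the space holds P * 2K feature vectors.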
Step 6, inputting the (d-1)-th group of visible light feature vectors v'_{d-1} into the visible light image classifier, which outputs an initial probability distribution GV of visible light, and inputting the (d-1)-th group of infrared feature vectors t'_{d-1} into the infrared image classifier, which outputs an initial probability distribution GT of infrared light;
constructing the identity loss function L_ID using equation (1):
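The body of equation (1) is not reproduced in this text; a common instantiation of an identity loss over classifier probability outputs is the cross-entropy against the identity labels y_i, sketched here purely as an assumption:

```python
import numpy as np

def identity_loss(probs, labels, eps=1e-12):
    """Cross-entropy identity loss over a batch of classifier probability
    distributions; an assumed form for L_ID, since equation (1) is not
    reproduced in the text."""
    probs = np.clip(probs, eps, 1.0)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels])))

gv = np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])  # visible light distributions
gt = np.array([[0.6, 0.3, 0.1], [0.1, 0.8, 0.1]])  # infrared distributions
y = np.array([0, 1])  # identity labels y_i
l_id = identity_loss(gv, y) + identity_loss(gt, y)
```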
Step 7, from the (d-1)-th sample feature space X_{d-1}, the k-th feature information of the a-th pedestrian is selected and recorded as the anchor sample feature vector x_{a,k}; the z-th feature information of the a-th pedestrian, which has the same identity information, is recorded as the z-th positive sample feature vector x_{a,z}; and the c-th feature information of the f-th pedestrian, which has different identity information, is recorded as the c-th negative sample feature vector x_{f,c}; the mixed ternary loss function L_TRI1(X_{d-1}) is then established using equation (2):
L_TRI1(X_{d-1}) = Σ [ρ_1 + d(x_{a,k}, x_{a,z}) - d(x_{a,k}, x_{f,c})]_+ (2)
In equation (2), the sum runs over the selected anchor, positive and negative sample feature vectors and [·]_+ = max(·, 0); d(x_{a,k}, x_{a,z}) represents the Euclidean distance between the anchor sample feature vector and the z-th positive sample feature vector; d(x_{a,k}, x_{f,c}) represents the Euclidean distance between the anchor sample feature vector and the c-th negative sample feature vector of the f-th pedestrian; ρ_1 is a predefined minimum margin of the mixed ternary loss function L_TRI1(X_{d-1});
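Under the definitions above, equation (2) can be sketched as a modality-agnostic triplet hinge loss; the function name and batch layout are assumptions for illustration:

```python
import numpy as np

def mixed_triplet_loss(feats, labels, rho=0.5):
    """Equation (2)-style hinge: for each anchor, every same-identity sample is
    a positive and every different-identity sample is a negative, regardless of
    modality; rho is the predefined minimum margin (rho_1)."""
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    n, total, count = len(feats), 0.0, 0
    for a in range(n):
        pos = dists[a][same[a] & (np.arange(n) != a)]
        neg = dists[a][~same[a]]
        if len(pos) and len(neg):
            hinges = np.maximum(0.0, rho + pos[:, None] - neg[None, :])
            total += hinges.sum()
            count += hinges.size
    return total / max(count, 1)

feats = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 0.0], [10.1, 0.0]])
labels = np.array([0, 0, 1, 1])
loss = mixed_triplet_loss(feats, labels)  # well-separated identities give zero loss
```

The same function with margin ρ_2 serves for L_TRI2 over X_d.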
Step 8, from the d-th sample feature space X_d, the s-th feature information of the r-th pedestrian is selected and recorded as the anchor sample feature vector x_{r,s}; the b-th feature information of the r-th pedestrian, which has the same identity information, is recorded as the b-th positive sample feature vector x_{r,b}; and the q-th feature information of the h-th pedestrian, which has different identity information, is recorded as the q-th negative sample feature vector x_{h,q}; the mixed ternary loss function L_TRI2(X_d) is then established using equation (3):
L_TRI2(X_d) = Σ [ρ_2 + d(x_{r,s}, x_{r,b}) - d(x_{r,s}, x_{h,q})]_+ (3)
In equation (3), the sum runs over the selected anchor, positive and negative sample feature vectors and [·]_+ = max(·, 0); d(x_{r,s}, x_{r,b}) represents the Euclidean distance between the anchor sample feature vector and the b-th positive sample feature vector; d(x_{r,s}, x_{h,q}) represents the Euclidean distance between the anchor sample feature vector and the q-th negative sample feature vector of the h-th pedestrian; ρ_2 is a predefined minimum margin of the mixed ternary loss function L_TRI2(X_d);
Step 9, establishing the mixed ternary loss function L_TRI using equation (4):
L_TRI = L_TRI1 + L_TRI2 (4)
Establishing the global loss function L_ALL using equation (5):
L_ALL = L_ID + β·L_TRI (5)
In equation (5), β represents the coefficient of the mixed ternary loss function L_TRI;
equation (5) is optimized by stochastic gradient descent with gradient back-propagation, training all parameters of the symmetric convolutional neural network, to obtain a preliminarily trained symmetric convolutional neural network model;
Step 10, the (d-1)-th group of visible light feature vectors v'_{d-1} is input into the visible light image classifier of the preliminarily trained symmetric convolutional neural network model, which outputs the probability distribution GV' of visible light; the (d-1)-th group of infrared feature vectors t'_{d-1} is input into the infrared image classifier of the preliminarily trained model, which outputs the probability distribution GT' of infrared light; the (d-1)-th group of visible light feature vectors v'_{d-1} is also input into the infrared light classifier of the preliminarily trained model to obtain the pseudo visible light probability distribution GV'';
constructing the divergence loss function L_KL between the pseudo visible light probability distribution GV'' and the visible light probability distribution GV' using equation (6):
L_KL = KL(GV'', GV') (6)
In equation (6), KL(·,·) represents the difference between the two probability distributions;
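A minimal sketch of the divergence loss of equation (6), assuming the standard KL-divergence formula applied row-wise to batched probability distributions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Mean KL(p || q) over a batch of probability distributions, measuring the
    gap between the pseudo visible light distribution GV'' and the visible
    light distribution GV' as in equation (6)."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.mean(np.sum(p * np.log(p / q), axis=-1)))

gv_pseudo = np.array([[0.7, 0.2, 0.1]])  # GV'': visible features through the infrared classifier
gv_direct = np.array([[0.5, 0.3, 0.2]])  # GV': visible features through the visible classifier
gap = kl_divergence(gv_pseudo, gv_direct)
```

Driving this gap toward zero is what confuses the two modalities in the adversarial game.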
establishing the discriminator loss function L_DIS using equation (7):
L_DIS = L_ID - α·L_KL (7)
In equation (7), α represents the coefficient of L_KL;
Step 11, establishing the generator loss function L_GEN using equation (8):
L_GEN = α·L_KL + β·L_TRI (8)
Step 12, sequentially optimizing equations (5), (7) and (8) by gradient descent:
first, equation (5) is optimized and all parameters of the network are trained;
second, equation (7) is optimized; during gradient back-propagation only the discriminator gradients are propagated while the generator gradients are set to zero, thereby freezing the generator parameters and training the discriminator parameters;
finally, equation (8) is optimized; during gradient back-propagation only the generator gradients are propagated while the discriminator gradients are set to zero, thereby freezing the discriminator parameters and training the generator parameters;
After training in turn, L_ALL, L_DIS and L_GEN converge to their optima in adversarial learning; when L_DIS reaches its optimum the discriminator is optimal, and when L_GEN reaches its optimum the generator is optimal, so that the final cross-modal pedestrian re-identification model of the symmetric convolutional neural network is obtained;
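The alternating freeze-and-train scheme of step 12 can be sketched with explicit parameter dictionaries; the naming convention (`generator.*`, `discriminator.*`) and the plain gradient-descent update are illustrative assumptions:

```python
import numpy as np

def alternating_step(params, grads, lr=0.01, freeze=()):
    """One update of the alternating scheme in step 12: parameters whose prefix
    is in `freeze` keep their value, which is equivalent to zeroing their
    gradients before the descent step."""
    for name in params:
        if name.split('.')[0] in freeze:
            continue  # frozen sub-network: gradient treated as zero
        params[name] = params[name] - lr * grads[name]
    return params

params = {'generator.w': np.ones(3), 'discriminator.w': np.ones(3)}
grads = {'generator.w': np.full(3, 2.0), 'discriminator.w': np.full(3, 2.0)}
# discriminator phase (equation (7)): freeze the generator
alternating_step(params, grads, freeze=('generator',))
```

The generator phase (equation (8)) is the mirror image, with `freeze=('discriminator',)`.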
Step 13, using the final symmetric convolutional neural network model for query and matching in cross-modal pedestrian re-identification;
the pedestrian image to be queried is input into the final symmetric convolutional neural network model to extract features, which are then compared for similarity with the features of the pedestrians in the search library; the corresponding pedestrian identity information is found from the ranking list in order of similarity, thereby obtaining the identification result.
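The query-and-match procedure of step 13 can be sketched as a similarity ranking over the search library; cosine similarity is an assumed choice of metric, since the text only specifies a "similarity comparison":

```python
import numpy as np

def rank_gallery(query, gallery, gallery_ids):
    """Rank search-library identities by similarity to a query feature, as in
    step 13; cosine similarity is an illustrative choice of metric."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    order = np.argsort(-sims)  # most similar first
    return [gallery_ids[i] for i in order]

gallery = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])  # toy library features
ids = ['ped_A', 'ped_B', 'ped_C']
ranking = rank_gallery(np.array([1.0, 0.05]), gallery, ids)
```

The top of the returned ranking list gives the predicted identity.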
Compared with the prior art, the invention has the beneficial effects that:
1. Aiming at inter-modal differences, the invention combines a probability-distribution-based modality-confusion idea with adversarial learning to construct a symmetric convolutional neural network composed of a generator and a discriminator; the network generates modality-invariant features by minimizing the difference between the output probability distributions of the classifiers in the discriminator, thereby achieving modality confusion and realizing high detection accuracy under occlusion, pedestrian posture changes, illumination changes and modality changes.
2. In order to address both inter-modal and intra-modal differences, the invention combines the triplet loss with adversarial learning and proposes a mixed ternary loss. While modality confusion is achieved through adversarial learning, positive and negative samples are selected without distinguishing modality, performing feature alignment and reducing modality differences, so that high detection accuracy is achieved even when the modality difference is large, giving the method strong adaptability.
3. Exploiting the ability of hidden-layer convolutional features to describe structural and spatial information, the invention uses the (d-1)-th layer hidden convolutional features (i.e., features from the ResNet50 residual submodule) as the input of the subsequent fully connected layer S_1 and the following discriminator, so that the network can learn more spatial structure information, reduce the influence of color differences, and reduce the difference between the two modalities, thereby improving the detection accuracy of the invention and giving it strong applicability in fields such as target tracking, video surveillance and public security.
4. The invention aligns features at different depths of the symmetric convolutional neural network, so that the network learns more deep information and its robustness is improved; this greatly alleviates the inaccurate detection of existing pedestrian re-identification methods in the cross-modal setting and enables accurate detection under appearance differences and other problems.
Detailed Description
In this embodiment, a cross-modal pedestrian re-identification method based on a symmetric convolutional neural network mainly reduces inter-modal and intra-modal differences by using the symmetric convolutional neural network and adversarial learning; the network is optimized at different depths, and shallow features carrying more spatial structure information are used to reduce appearance differences. Referring to fig. 1, a schematic diagram of images in the two different modalities is shown. The detailed steps are as follows:
Step 1, collecting a visible light image set V of N pedestrians, wherein the j visible light images of the i-th pedestrian are recorded as V_i, with V_i = {V_i1, V_i2, ..., V_ij}; V_ij represents the j-th visible light image of the i-th pedestrian, and the i-th pedestrian is assigned the i-th identity information y_i; i = 1, 2, ..., N;
collecting an infrared light image set T of the N pedestrians with an infrared camera or a depth camera, wherein the m infrared images of the i-th pedestrian are recorded as T_i, with T_i = {T_i1, T_i2, ..., T_im}; T_im represents the m-th infrared image of the i-th pedestrian;
constructing a search library by visible light pictures and infrared light images of other pedestrians with known identity information;
This embodiment uses the RegDB dataset and the SYSU-MM01 dataset. SYSU-MM01 is a large-scale cross-modal pedestrian re-identification dataset collected by four visible light cameras and two infrared cameras. The dataset covers two scenes, indoor and outdoor; its training set contains 395 pedestrian identities, with 11909 infrared pedestrian images and 22258 visible light pedestrian images in total.
The RegDB dataset contains 412 pedestrian identities captured by a dual-camera system. Each pedestrian ID contains 10 visible light images and 10 infrared images in total. The invention adopts the recognized dataset processing method of randomly dividing all data in the dataset into two parts and randomly selecting one part for training.
Step 2, constructing a symmetrical convolutional neural network consisting of a generator and a discriminator;
the generator consists of two independent ResNet50 networks, where each ResNet50 network consists of d residual submodules; a fully connected layer S_1 is added after the (d-1)-th residual submodule, and a fully connected layer S_2 is added after the d-th residual submodule; S_1 and S_2 are used for extracting modality-shared information. The ResNet50 network adopted by the invention consists of 4 residual submodules, so d = 4 and d-1 = 3; the number of neurons in the fully connected layers S_1 and S_2 is set to 1024;
the discriminator consists of a visible light image classifier and an infrared light image classifier, which are shown in fig. 2;
initializing network weights for the ResNet50 network;
initializing parameters of the full connection layer and the discriminator by adopting a random initialization mode;
Step 3, respectively inputting the visible light image set V and the infrared image set T of the N pedestrians into the two independent ResNet50 networks to extract pedestrian feature information; the (d-1)-th residual submodule outputs the (d-1)-th group of visible light feature information v_{d-1} and infrared feature information t_{d-1}, which are respectively input into the d-th residual submodule, which outputs the d-th group of visible light feature information v_d and infrared feature information t_d;
Step 4, constructing the (d-1)-th sample feature space X_{d-1};
selecting the visible light feature information and infrared feature information of P pedestrians from all the feature information output by the (d-1)-th residual submodule, where K pieces of feature information are selected from each pedestrian's visible light feature information v_{i,d-1} and K from the infrared feature information t_{i,d-1}, to construct the (d-1)-th sample feature space X_{d-1}; in the invention, P = 16 and K = 4;
the (d-1)-th sample feature space X_{d-1} is input as a whole into the subsequent fully connected layer S_1, which extracts modality-shared information and outputs the (d-1)-th group of visible light feature vectors v'_{d-1} and infrared feature vectors t'_{d-1};
Step 5, constructing the d-th sample feature space X_d;
selecting the visible light feature information and infrared feature information of P pedestrians from all the feature information output by the d-th residual submodule, where K pieces of feature information are selected from each pedestrian's visible light feature information v_{i,d} and K from the infrared feature information t_{i,d}, to construct the d-th sample feature space X_d; P = 16, K = 4;
the d-th sample feature space X_d is then input as a whole into the subsequent fully connected layer S_2, which extracts modality-shared information and outputs the d-th group of visible light feature vectors v'_d and infrared feature vectors t'_d;
Step 6, inputting the (d-1)-th group of visible light feature vectors v'_{d-1} into the visible light image classifier, which outputs the initial probability distribution GV of visible light, and inputting the (d-1)-th group of infrared feature vectors t'_{d-1} into the infrared image classifier, which outputs the initial probability distribution GT of infrared light;
constructing the identity loss function L_ID using equation (1):
Step 7, from the (d-1)-th sample feature space X_{d-1}, the k-th feature information of the a-th pedestrian is selected and recorded as the anchor sample feature vector x_{a,k}; the z-th feature information of the a-th pedestrian, which has the same identity information, is recorded as the z-th positive sample feature vector x_{a,z}; and the c-th feature information of the f-th pedestrian, which has different identity information, is recorded as the c-th negative sample feature vector x_{f,c}; the mixed ternary loss function L_TRI1(X_{d-1}) is then established using equation (2):
L_TRI1(X_{d-1}) = Σ [ρ_1 + d(x_{a,k}, x_{a,z}) - d(x_{a,k}, x_{f,c})]_+ (2)
In equation (2), the sum runs over the selected anchor, positive and negative sample feature vectors and [·]_+ = max(·, 0); d(x_{a,k}, x_{a,z}) represents the Euclidean distance between the anchor sample feature vector and the z-th positive sample feature vector; d(x_{a,k}, x_{f,c}) represents the Euclidean distance between the anchor sample feature vector and the c-th negative sample feature vector of the f-th pedestrian; ρ_1 is a predefined minimum margin of the mixed ternary loss function L_TRI1(X_{d-1}) and is set to 0.5. Optimizing equation (2) reduces the distance between the anchor sample feature vector and the positive sample feature vectors and increases the distance between the anchor sample feature vector and the negative sample feature vectors, as shown in fig. 3;
Step 8, from the d-th sample feature space X_d, the s-th feature information of the r-th pedestrian is selected and recorded as the anchor sample feature vector x_{r,s}; the b-th feature information of the r-th pedestrian, which has the same identity information, is recorded as the b-th positive sample feature vector x_{r,b}; and the q-th feature information of the h-th pedestrian, which has different identity information, is recorded as the q-th negative sample feature vector x_{h,q}; the mixed ternary loss function L_TRI2(X_d) is then established using equation (3):
L_TRI2(X_d) = Σ [ρ_2 + d(x_{r,s}, x_{r,b}) - d(x_{r,s}, x_{h,q})]_+ (3)
In equation (3), the sum runs over the selected anchor, positive and negative sample feature vectors and [·]_+ = max(·, 0); d(x_{r,s}, x_{r,b}) represents the Euclidean distance between the anchor sample feature vector and the b-th positive sample feature vector; d(x_{r,s}, x_{h,q}) represents the Euclidean distance between the anchor sample feature vector and the q-th negative sample feature vector of the h-th pedestrian; ρ_2 is a predefined minimum margin of the mixed ternary loss function L_TRI2(X_d) and is set to 0.5.
Step 9, establishing the mixed ternary loss function L_TRI using equation (4):
L_TRI = L_TRI1 + L_TRI2 (4)
Establishing the global loss function L_ALL using equation (5):
L_ALL = L_ID + β·L_TRI (5)
In equation (5), β represents the coefficient of the mixed ternary loss function L_TRI; β is set to 1.4.
Equation (5) is optimized by stochastic gradient descent with gradient back-propagation, training all parameters of the symmetric convolutional neural network, to obtain a preliminarily trained symmetric convolutional neural network model;
Step 10, the (d-1)-th group of visible light feature vectors v'_{d-1} is input into the visible light image classifier of the preliminarily trained symmetric convolutional neural network model, which outputs the probability distribution GV' of visible light; the (d-1)-th group of infrared feature vectors t'_{d-1} is input into the infrared image classifier of the preliminarily trained model, which outputs the probability distribution GT' of infrared light; the (d-1)-th group of visible light feature vectors v'_{d-1} is also input into the infrared light classifier of the preliminarily trained model to obtain the pseudo visible light probability distribution GV'';
constructing the divergence loss function L_KL between the pseudo visible light probability distribution GV'' and the visible light probability distribution GV' using equation (6):
L_KL = KL(GV'', GV') (6)
In equation (6), KL(·,·) represents the difference between the two probability distributions;
establishing the discriminator loss function L_DIS using equation (7):
L_DIS = L_ID - α·L_KL (7)
In equation (7), α represents the coefficient of L_KL; α is set to 1.
Step 11, establishing the generator loss function L_GEN using equation (8):
L_GEN = α·L_KL + β·L_TRI (8)
The invention performed verification experiments on the settings of α and β. Fig. 4 shows the effect of the coefficient α on the RegDB dataset, and fig. 5 shows the effect of the coefficient α on the SYSU-MM01 dataset; performance is best when α = 1.
Fig. 6 shows the effect of the coefficient β on the RegDB dataset, and fig. 7 shows the effect of the coefficient β on the SYSU-MM01 dataset. Performance is optimal when α = 1 and β = 1.4, and experiments show that good results are obtained over a wide range of α and β values, reflecting the superiority of the invention.
Step 12, sequentially optimizing equations (5), (7) and (8) by gradient descent. The invention optimizes the network model using the adaptive gradient optimizer Adam.
First, equation (5) is optimized and all parameters of the network are trained;
second, equation (7) is optimized; during gradient back-propagation only the discriminator gradients are propagated while the generator gradients are set to zero, thereby freezing the generator parameters and training the discriminator parameters;
finally, equation (8) is optimized; during gradient back-propagation only the generator gradients are propagated while the discriminator gradients are set to zero, thereby freezing the discriminator parameters and training the generator parameters;
After training in turn, L_ALL, L_DIS and L_GEN converge to their optima in adversarial learning; when L_DIS reaches its optimum the discriminator is optimal, and when L_GEN reaches its optimum the generator is optimal, so that the final cross-modal pedestrian re-identification model of the symmetric convolutional neural network is obtained;
Step 13, using the final symmetric convolutional neural network model for query and matching in cross-modal pedestrian re-identification;
the pedestrian image to be queried is input into the final symmetric convolutional neural network model to extract features, which are then compared for similarity with the features of the pedestrians in the search library; the corresponding pedestrian identity information is found from the ranking list in order of similarity, thereby obtaining the identification result.
Example:
In order to prove the effectiveness of the invention, comparative tests were carried out against other methods. As shown in table 1, the invention clearly outperforms other prior-art methods, which proves its effectiveness. Ablation experiments were also performed on each module of the network, with results shown in table 2, demonstrating the effectiveness of each module.
Table 1 compares the effectiveness of the invention with other methods
Table 2 shows the related ablation experiments of the invention
Experiments prove that the method can greatly alleviate the inaccurate detection of existing pedestrian re-identification methods in the cross-modal setting and still achieves high detection accuracy when the modality difference is large.