CN103793721A - Pedestrian repeat recognition method and system based on area related feedback - Google Patents

Pedestrian repeat recognition method and system based on area related feedback

Info

Publication number
CN103793721A
CN103793721A
Authority
CN
China
Prior art keywords
region
feedback
weight
formula
query
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410076028.3A
Other languages
Chinese (zh)
Other versions
CN103793721B (en)
Inventor
胡瑞敏
王正
梁超
冷清明
李文刚
陈军
严岩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU
Priority to CN201410076028.3A
Publication of CN103793721A
Application granted
Publication of CN103793721B
Expired - Fee Related
Anticipated expiration


Abstract

Translated from Chinese

The present invention provides a pedestrian re-identification method and system based on region-based relevance feedback. The method performs an initial query match and collects feedback samples, selecting irrelevant images as feedback samples and labeling their types; it then determines neighbor sets and adjusts region weights and feature weights; feature expression and distance measurement yield a query matching result; if the result meets the requirements it is output, otherwise the method returns to iteratively update the feedback samples until the requirements are met. The proposed region-based relevance feedback technique makes full use of the local feature information of pedestrian images: starting from local features, it dynamically adjusts the local feature weights in real time in combination with other information, and, combined with conventional pedestrian re-identification methods, finds and successfully matches the target suspect accurately and quickly.

Description

Translated from Chinese
A pedestrian re-identification method and system based on region-based relevance feedback

Technical Field

The present invention relates to the process of re-identifying target suspects in surveillance video in the field of video investigation, and in particular to a pedestrian re-identification method and system based on region-based relevance feedback.

Background Art

With the large-scale construction of safe cities and the spread of surveillance to all kinds of locations, the volume of video surveillance data keeps growing, which poses great challenges for criminal investigation. Quickly and accurately extracting a target suspect from these massive databases has become the key to solving cases.

Traditional pedestrian re-identification methods can effectively avoid the missed and false detections that long manual retrieval may cause, but their matching efficiency is relatively low. They mainly improve the original retrieval ranking through feature expression and distance measurement, and are non-interactive query methods. In recent years some interactive relevance feedback methods have been applied to pedestrian re-identification systems, but most of them rely on holistic matching against positive sample images, without considering the irrelevant samples in the pedestrian gallery or the effect of their local features on improving the query ranking.

Among existing pedestrian re-identification methods, feature-expression-based methods are widely used in practice. During re-identification, various appearance features and motion features of the target are extracted to find a suitable representation, and samples with similar or identical feature expressions are then matched directly in the pedestrian gallery until the suspect is found.

The patent CN102663366A, "Pedestrian target recognition method and system", proposes a pedestrian target recognition method based on feature expression. The method collects video frames and extracts their HOG features as well as LBP features encoding pedestrian direction and intensity information, and then recognizes specific pedestrian targets in the surveillance video scene from the HOG and LBP features. The algorithm is simple and efficient, but its robustness is poor: it is sensitive to changes in camera viewing angle and lighting conditions, which easily leads to false matches, so it is not suitable for pedestrian re-identification in complex environments.

Relevance feedback methods are also widely used in content-based image retrieval. Unlike the specific setting of pedestrian re-identification, content-based image retrieval offers a large number of positive sample images for matching, which makes it easier to learn from many irrelevant sample images. During human-computer interaction, feature expressions are selected according to the information content of the relevant features and the image scene, then compared, matched, ranked, and fed back to the retrieval subsystem, so as to refine the initial retrieval results. This approach is effective and robust, but it requires training on all features of a large number of samples; the system overhead is high, the algorithm is relatively complex, and real-time requirements are hard to meet, so it is not suitable for pedestrian re-identification systems.

The patent CN101539930A, "A relevance feedback image retrieval method", retrieves target images through an image retrieval method based on segmented similarity measurement and multi-round joint feedback. The matching quality is good, but the computation is relatively complex and multiple image features must be trained repeatedly, making it unsuitable for practical deployment in a pedestrian re-identification system.

Summary of the Invention

The purpose of the present invention is to overcome the defects of the prior art and propose a pedestrian re-identification method and system based on region-based relevance feedback.

The technical solution of the present invention provides a pedestrian re-identification method based on region-based relevance feedback, comprising the following steps.

Step S1: perform the initial query match and collect feedback samples, comprising the following sub-steps.

Step S1.1: perform the initial query match, including taking the input target person image as the query image, running the initial query, and outputting the initial ranking.

Step S1.2: collect feedback samples. The first time step S1.2 is executed, irrelevant images are selected from a preset number of top-ranked images in the initial ranking, labeled by type, and used to form the feedback sample set. In subsequent executions of step S1.2, irrelevant images are selected from the query ranking produced by step S4 of the previous iteration, labeled by type, and added to the feedback sample set.

Labeling works as follows. Suppose the body is divided into U regions 1, 2, ..., U. Each feedback sample is labeled with one of 2U types: similar to the query image in region 1, dissimilar in region 1, similar in region 2, dissimilar in region 2, ..., similar in region U, dissimilar in region U. When labeling a feedback sample, visual features are extracted in each region according to the region division, yielding an M-dimensional feature vector, any dimension of which is denoted the m-th dimension; these feature vectors are then compared for similarity with those of the corresponding regions of the query image.
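The per-region labeling above can be sketched as follows; the helper name, the cosine similarity measure, and the 0.5 threshold are illustrative assumptions, since the patent leaves the per-region similarity judgment to the user or to an automatic measure.

```python
# Sketch: label a feedback sample per region as "similar" or "dissimilar"
# to the query, one label per region (U regions, M-dim features each).

import numpy as np

def label_feedback_sample(query_regions, sample_regions, threshold=0.5):
    """query_regions / sample_regions: lists of U M-dimensional feature vectors."""
    labels = []
    for j, (qf, sf) in enumerate(zip(query_regions, sample_regions), start=1):
        # cosine similarity as an illustrative per-region measure
        sim = float(np.dot(qf, sf) /
                    (np.linalg.norm(qf) * np.linalg.norm(sf) + 1e-12))
        labels.append((j, "similar" if sim >= threshold else "dissimilar"))
    return labels

q = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # U=2 regions, M=2 features
g = [np.array([0.9, 0.1]), np.array([1.0, 0.0])]
print(label_feedback_sample(q, g))  # [(1, 'similar'), (2, 'dissimilar')]
```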

Step S2: determine neighbor sets and adjust region weights and feature weights, comprising the following sub-steps.

Step S2.1: for the query image, first find the region-similar and region-dissimilar sample sets using a region K-nearest-neighbor method; then apply a dynamic k-nearest-neighbor rule: for each sample labeled similar in some region, update and obtain a new region-similar set containing its k nearest neighbors, and for each sample labeled dissimilar in some region, update and obtain a new region-dissimilar set containing its k nearest neighbors.
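The region k-nearest-neighbor selection underlying step S2.1 can be sketched as follows; this is a minimal illustration of picking the k nearest gallery samples for one region's features, not the patent's full dynamic rule.

```python
# Sketch: given one region's feature vector of a labeled feedback sample
# (the anchor), return the k gallery samples nearest to it in that region,
# forming a region-similar (or region-dissimilar) neighbor set.

import numpy as np

def region_knn(anchor_feat, gallery_feats, k):
    """Indices of the k gallery features closest to anchor_feat (L2 distance)."""
    d = np.linalg.norm(gallery_feats - anchor_feat, axis=1)
    return list(np.argsort(d)[:k])

gallery = np.array([[0.0], [1.0], [2.0], [10.0]])
print(region_knn(np.array([1.2]), gallery, k=2))  # [1, 2]
```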

Step S2.2: update the region weights and feature weights.

Let the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample in the feedback sample set be computed as

S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} W_{FO_j}(m) · S(F_{p_j}(m), F_{g_i,j}(m)) ]    (Formula 1)

where W_{FO_j}(m) denotes the feature weight of the j-th region in the m-th feature dimension; F_{p_j}(m) and F_{g_i,j}(m) denote the m-th dimensional feature of the j-th region of the query image and of the feedback sample, respectively; W_j(p, g_i) denotes the region weight of the j-th region between the query image p and the feedback sample; S(F_{p_j}(m), F_{g_i,j}(m)) denotes the similarity between the query image p and the feedback sample in the m-th feature dimension of the j-th region; j takes the values 1, 2, ..., U.
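Formula 1 transcribes directly into code. The patent leaves the per-dimension similarity S(·,·) abstract, so a Gaussian kernel is assumed here purely for illustration.

```python
# Formula 1: overall similarity between query p and feedback sample g_i as a
# region-weighted sum of feature-weighted per-dimension similarities.

import numpy as np

def overall_similarity(Fp, Fg, W_region, W_feat, s=None):
    """Fp, Fg: U x M feature matrices; W_region: length U; W_feat: U x M."""
    if s is None:
        s = lambda a, b: np.exp(-(a - b) ** 2)  # assumed similarity S(.,.)
    total = 0.0
    for j in range(Fp.shape[0]):               # outer sum over regions j
        inner = sum(W_feat[j, m] * s(Fp[j, m], Fg[j, m])
                    for m in range(Fp.shape[1]))   # inner sum over dims m
        total += W_region[j] * inner           # weighted by W_j(p, g_i)
    return total

Fp = np.array([[1.0, 2.0], [3.0, 4.0]])
Fg = np.array([[1.0, 2.0], [3.0, 4.0]])        # identical features: S = 1 per dim
W_region = np.array([0.5, 0.5])
W_feat = np.ones((2, 2))
print(overall_similarity(Fp, Fg, W_region, W_feat))  # 0.5*2 + 0.5*2 = 2.0
```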

Step S2.2.1: using a distance-metric method from machine learning, the region weights are updated as

W_j(p, g_i) = W_j(p, g_i) × β1,  β1 > 1    (Formula 2)

W_j(p, g_i) = W_j(p, g_i) × β2,  0 < β2 < 1    (Formula 3)

where W_j(p, g_i) is the region weight of the j-th region and β1, β2 are preset coefficients.

Step S2.2.2: using a distance-metric method from machine learning, the feature weights are updated as

W_{FO_j}(m) = W_{FO_j}(m) × α / (1 + μ_m · σ_m)    (Formula 4)

where μ_m and σ_m denote the mean and variance of the feature values in dimension m, and α is a preset parameter.
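The weight updates of Formulas 2-4 can be sketched as follows; the concrete β1, β2, α values and the rule deciding whether a region counts as informative are illustrative assumptions, not values fixed by the patent.

```python
# Formulas 2/3: a region weight is boosted (beta1 > 1) or damped
# (0 < beta2 < 1). Formula 4: each feature weight is rescaled by
# alpha / (1 + mu_m * sigma_m), so dimensions with a large spread
# over the neighbor set lose influence.

import numpy as np

def update_region_weight(w, informative, beta1=1.2, beta2=0.8):
    return w * beta1 if informative else w * beta2   # Formulas 2 / 3

def update_feature_weights(w_feat, neighbor_feats, alpha=1.0):
    mu = neighbor_feats.mean(axis=0)                 # per-dimension mean
    sigma = neighbor_feats.var(axis=0)               # per-dimension variance
    return w_feat * alpha / (1.0 + mu * sigma)       # Formula 4

print(update_region_weight(1.0, informative=True))   # 1.2
feats = np.array([[1.0, 0.0], [1.0, 2.0]])
print(update_feature_weights(np.ones(2), feats))     # [1.  0.5]
```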

Step S3: using the adjusted region weights from step S2 and Formula 1, perform feature expression and distance measurement to obtain the query matching result.

Step S4: display the query matching result of step S3. If it meets the requirements, output the result; otherwise, return to step S1.2 and iterate until the requirements are met.
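The overall S1-S4 loop can be sketched as follows; every helper behavior (initial ranking, weight update, re-ranking) is replaced by a stand-in, so this shows only the iterate-until-accepted control flow, not the patent's actual computations.

```python
# Control flow of steps S1-S4: query, let the user mark an irrelevant image,
# adjust and re-rank, repeat until the result is accepted.

def reid_with_feedback(query, gallery, accept, max_rounds=5):
    ranking = sorted(gallery)                 # stand-in for the initial ranking (S1.1)
    feedback = []
    for _ in range(max_rounds):
        if accept(ranking):                   # step S4: user checks the result
            return ranking
        feedback.append(ranking[0])           # step S1.2: mark an irrelevant image
        # steps S2-S3 would update weights and re-rank here; as a stand-in,
        # demote the marked sample to the end of the list
        ranking = ranking[1:] + ranking[:1]
    return ranking

gallery = ["g1", "g2", "target"]
result = reid_with_feedback("q", gallery, accept=lambda r: r[0] == "target")
print(result[0])  # target
```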

Furthermore, when the feedback samples are compared region by region for similarity with the corresponding regions of the query image, the similarity measure used for each region is computed as

S_j(p, g_i) = Σ_m W_{FO_j}(m) · S(F_{p_j}(m), F_{g_i,j}(m))    (Formula 5)

where W_{FO_j}(m) denotes the feature weight of the j-th region; the first execution of step S1.2 uses a preset initial value, and subsequent executions use the weight values updated by step S2 in the previous iteration; F_{p_j}(m) and F_{g_i,j}(m) denote the m-th dimensional feature of the j-th region; S(F_{p_j}(m), F_{g_i,j}(m)) denotes the similarity between the query image p and the feedback sample g_i in the m-th feature dimension of the j-th region.

Furthermore, suppose two regions are used, the torso and the legs. When comparing similarity with the corresponding regions of the query image, the similarity measures for the torso and legs are computed as

S_t(p, g_i) = Σ_m W_{FO_t}(m) · S(F_{p_t}(m), F_{g_i,t}(m))    (Formula 6)

S_l(p, g_i) = Σ_m W_{FO_l}(m) · S(F_{p_l}(m), F_{g_i,l}(m))    (Formula 7)

where W_{FO_l}(m) and W_{FO_t}(m) denote the feature weights of the leg and torso regions, respectively; F_{p_l}(m) and F_{p_t}(m) denote the m-th dimensional features of the leg and torso regions; S(F_{p_l}(m), F_{g_i,l}(m)) and S(F_{p_t}(m), F_{g_i,t}(m)) denote the similarities between the query image and the feedback sample in the m-th feature dimension of the leg and torso regions; S_t(p, g_i) is the torso similarity value and S_l(p, g_i) is the leg similarity value.

Furthermore, the similarity between the query image p and the image of the i-th sample in the feedback sample set is computed as

S_a(p, g_i) = W_t(p, g_i) · Σ_m W_{FO_t}(m) · S(F_{p_t}(m), F_{g_i,t}(m)) + W_l(p, g_i) · Σ_m W_{FO_l}(m) · S(F_{p_l}(m), F_{g_i,l}(m))    (Formula 8)

where W_{FO_l}(m) and W_{FO_t}(m) denote the feature weights of the leg and torso regions in the m-th feature dimension; F_{p_l}(m) and F_{p_t}(m) denote the m-th dimensional features of the leg and torso regions; W_l(p, g_i) and W_t(p, g_i) denote the region weights of the leg and torso regions between the suspect target and the sample; S(F_{p_l}(m), F_{g_i,l}(m)) and S(F_{p_t}(m), F_{g_i,t}(m)) denote the similarities between the suspect target and the sample in the m-th feature dimension of the leg and torso regions.

Furthermore, in step S2.2.1, using a distance-metric method from machine learning, the region weights are updated as

W_{t,l}(p, g_i) = W_{t,l}(p, g_i) × β1,  β1 > 1    (Formula 9)

W_{t,l}(p, g_i) = W_{t,l}(p, g_i) × β2,  0 < β2 < 1    (Formula 10)

where W_{t,l}(p, g_i) denotes W_t(p, g_i) or W_l(p, g_i), and β1, β2 are preset coefficients.

Furthermore, in step S2.2.2, using a distance-metric method from machine learning, the feature weights are updated as

W_{FO_l}(m) = W_{FO_l}(m) × α / (1 + μ_m · σ_m)    (Formula 11)

W_{FO_t}(m) = W_{FO_t}(m) × α / (1 + μ_m · σ_m)    (Formula 12)

where μ_m and σ_m denote the mean and variance of the feature values in dimension m, and α is a preset parameter.

The present invention correspondingly provides a pedestrian re-identification system based on region-based relevance feedback, comprising the following modules.

A feedback module for performing the initial query match and collecting feedback samples, comprising the following sub-modules.

An initial query matching sub-module for taking the input target person image as the query image, running the initial query, and outputting the initial ranking.

A feedback sample collection sub-module: the first time feedback samples are collected, irrelevant images are selected from a preset number of top-ranked images in the initial ranking, labeled by type, and used to form the feedback sample set; in subsequent collections, irrelevant images are selected from the query ranking produced by the result display module in the previous iteration, labeled by type, and added to the feedback sample set.

Labeling works as follows. Suppose the body is divided into U regions 1, 2, ..., U. Each feedback sample is labeled with one of 2U types: similar to the query image in region 1, dissimilar in region 1, similar in region 2, dissimilar in region 2, ..., similar in region U, dissimilar in region U. When labeling a feedback sample, visual features are extracted in each region according to the region division, yielding an M-dimensional feature vector, any dimension of which is denoted the m-th dimension; these feature vectors are then compared for similarity with those of the corresponding regions of the query image.

A weight module for determining neighbor sets and adjusting region weights and feature weights, comprising the following sub-modules. A neighbor set determination sub-module: for the query image, first find the region-similar and region-dissimilar sample sets using a region K-nearest-neighbor method; then apply a dynamic k-nearest-neighbor rule: for each sample labeled similar in some region, update and obtain a new region-similar set containing its k nearest neighbors, and for each sample labeled dissimilar in some region, update and obtain a new region-dissimilar set containing its k nearest neighbors.

A region weight and feature weight update sub-module for performing the following operations.

Let the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample in the feedback sample set be computed as

S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} W_{FO_j}(m) · S(F_{p_j}(m), F_{g_i,j}(m)) ]    (Formula 1)

where W_{FO_j}(m) denotes the feature weight of the j-th region in the m-th feature dimension; F_{p_j}(m) and F_{g_i,j}(m) denote the m-th dimensional feature of the j-th region of the query image and of the feedback sample, respectively; W_j(p, g_i) denotes the region weight of the j-th region between the query image p and the feedback sample; S(F_{p_j}(m), F_{g_i,j}(m)) denotes the similarity between the query image p and the feedback sample in the m-th feature dimension of the j-th region.

Using a distance-metric method from machine learning, the region weights are updated as

W_j(p, g_i) = W_j(p, g_i) × β1,  β1 > 1    (Formula 2)

W_j(p, g_i) = W_j(p, g_i) × β2,  0 < β2 < 1    (Formula 3)

where W_j(p, g_i) is the region weight of the j-th region and β1, β2 are preset coefficients.

Using a distance-metric method from machine learning, the feature weights are updated as

W_{FO_j}(m) = W_{FO_j}(m) × α / (1 + μ_m · σ_m)    (Formula 4)

where μ_m and σ_m denote the mean and variance of the feature values in dimension m, and α is a preset parameter.

A query matching module for performing feature expression and distance measurement according to the adjusted region weights from the weight module and Formula 1, obtaining the query matching result.

A result display module for displaying the query matching result of the query matching module. If it meets the requirements, the result is output; otherwise the feedback sample collection sub-module is notified to update the feedback sample set, until the requirements are met.

The present invention improves the effectiveness and accuracy of matching in pedestrian re-identification systems. The main innovations are the following two points:

1. Local image regions, rather than the whole image, are used as the feedback unit in the pedestrian re-identification system, and each region has its own weight value.

2. A dynamic K-nearest-neighbor rule is used to adjust the weight of each region and its corresponding features, and the ranking is then optimized and improved according to the total similarity value.

Brief Description of the Drawings

Fig. 1 is a flowchart of an embodiment of the present invention.

Detailed Description of Embodiments

The pedestrian re-identification method based on region-based relevance feedback provided by the present invention addresses the significant differences between pedestrian images in video surveillance scenes and images in other settings. Based on the structural composition of the human body in the walking state, the body is divided into several fixed component regions, and the region is taken as the basic processing unit. According to the different feature information contained in each region and the influence of each region on the final matching result, the adjusted dynamic weight values are updated and fed back promptly, and the results are finally re-ranked according to the optimized similarity, realizing the whole process of the region-based relevance feedback method. In concrete implementations, computer software may be used to support the workflow. Referring to Fig. 1, the flow of the embodiment comprises the following steps:

Step S1, initial query match and feedback sample collection: according to the region division, mark region-similar and region-dissimilar samples to carry out the feedback information collection process.

This can be implemented with the following sub-steps:

Step S1.1: perform the initial query match, including inputting a target person image as the query image, for example an image of the suspect in a criminal investigation, running the initial query, and outputting the initial ranking. In concrete implementations, a conventional method such as Euclidean distance or L1 distance can be used for the initial query match, and the matching results are sorted by similarity from high to low to obtain the initial ranking.
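The L1-distance ranking mentioned above can be sketched as:

```python
# Initial query match: rank gallery images by ascending L1 (cityblock)
# distance between their feature vectors and the query's feature vector.

import numpy as np

def initial_ranking_l1(query_feat, gallery_feats):
    d = np.abs(gallery_feats - query_feat).sum(axis=1)  # L1 distance per sample
    return list(np.argsort(d))                          # most similar first

q = np.array([1.0, 1.0])
g = np.array([[5.0, 5.0], [1.0, 2.0], [0.0, 0.0]])
print(initial_ranking_l1(q, g))  # [1, 2, 0]
```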

Step S1.2: collect feedback information. The first time step S1.2 is executed, irrelevant images are selected from a preset number of top-ranked images in the initial ranking, labeled by type, and used to form the feedback sample set. Here an irrelevant image is a pedestrian image that is not the queried target person, even if the appearance is very similar. Suppose the body is divided into U regions 1, 2, ..., U; each feedback sample is labeled with one of 2U types: similar to the query image in region 1, dissimilar in region 1, similar in region 2, dissimilar in region 2, ..., similar in region U, dissimilar in region U. In subsequent iterations, if the current query ranking at step S4 does not meet the requirements, control returns to step S1.2: irrelevant images are again selected from a preset number of top-ranked images, labeled by type, and added to the feedback sample set, after which steps S2-S4 are repeated based on the enlarged feedback sample set.

The embodiment divides the body into two regions, the torso and the legs. Each feedback sample is labeled as torso-similar (PSNt), torso-dissimilar (PDNt), leg-similar (PSNl), or leg-dissimilar (PDNl).

In concrete implementations, for efficiency, the user may select non-target images as irrelevant images and label them based on visual inspection and prior knowledge. To give the user a reference, region-based similarity can also be measured automatically and the results output; the measurement method can be specified by those skilled in the art. For example, according to the structural composition of the human body, the pedestrian body is divided into an upper torso part and a lower leg part; following this two-region division, appearance, shape, and color features are extracted in each region, yielding an M-dimensional feature vector, any dimension of which is denoted the m-th dimension.
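The fixed torso/leg split described above can be sketched as follows; the 0.5 split ratio is an assumption for illustration, not a value fixed by the patent.

```python
# Split a pedestrian bounding-box image into an upper torso region
# and a lower leg region at a fixed height ratio.

import numpy as np

def split_torso_legs(image, ratio=0.5):
    """image: H x W (x C) array; returns (torso, legs) sub-arrays."""
    h = image.shape[0]
    cut = int(h * ratio)
    return image[:cut], image[cut:]

img = np.arange(12).reshape(6, 2)   # toy 6x2 "image"
torso, legs = split_torso_legs(img)
print(torso.shape, legs.shape)      # (3, 2) (3, 2)
```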

Let p be the query image and G the unlabeled gallery, with G = {g_i | i = 1, ..., n}, where n is the number of samples in the gallery. After labeling, the feedback sample set contains n feedback samples. When the feedback samples are compared region by region for similarity with the corresponding regions of the query image, the similarity measure used for each region is computed as

SSjj((pp,,ggii))==&Sigma;&Sigma;WWFfOojj((mm))SS((Ffppjj((mm)),,Ffggii,,jj((mm))))------((11))

where WFO_j(m) denotes the feature weight of the j-th region part; the first execution of step S1.2 uses a preset initial value, and subsequent executions of step S1.2 use the weight value updated by step S2 in the previous iteration; F_p,j(m) denotes the m-dimensional feature vector of the j-th region; S(F_p,j(m), F_gi,j(m)) denotes the similarity between the query image p and the feedback sample g_i for the j-th region under the m-th feature dimension.
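The weighted sum of formula (1) can be sketched in a few lines. This is a minimal illustration only: the per-dimension similarity S is assumed here to be one minus the absolute feature difference, and all names are illustrative rather than taken from the patent.

```python
def region_similarity(query_feat, sample_feat, feat_weights):
    """Formula (1): S_j(p, g_i) = sum_m WFO_j(m) * S(F_p,j(m), F_gi,j(m)).

    query_feat, sample_feat: M-dimensional feature vectors of region j.
    feat_weights: the M feature weights WFO_j(m).
    Assumption: the per-dimension similarity S is 1 - |difference|.
    """
    return sum(w * (1.0 - abs(q - g))
               for w, q, g in zip(feat_weights, query_feat, sample_feat))

# Example: a 3-dimensional region feature vector with uniform initial weights.
q = [0.2, 0.5, 0.9]
g = [0.2, 0.4, 0.6]
w = [1.0, 1.0, 1.0]
score = region_similarity(q, g, w)  # 1.0 + 0.9 + 0.7 = 2.6
```

Any other per-dimension similarity (for example a histogram intersection) can be substituted without changing the weighting structure.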

In the embodiment, within the irrelevant-image class, the torso and leg regions are each compared for similarity against the corresponding region of the suspect target in the query image; the similarity measures adopted for the torso and the legs are computed as follows:

S_t(p, g_i) = Σ_m WFO_t(m) · S(F_p,t(m), F_gi,t(m))    (2)

S_l(p, g_i) = Σ_m WFO_l(m) · S(F_p,l(m), F_gi,l(m))    (3)

where WFO_l(m) and WFO_t(m) denote the feature weights of the leg and torso parts respectively; the initial values used on the first execution of step S1.2 may be preset by the user according to the similarity between the pedestrian image and the feedback samples under the various features, and subsequent executions of step S1.2 may use the weight values updated by step S2 in the previous iteration; F_p,l(m) and F_p,t(m) denote the m-dimensional feature vectors of the leg and torso parts; S(F_p,l(m), F_gi,l(m)) and S(F_p,t(m), F_gi,t(m)) denote the similarity between the suspect target and the sample for the leg and torso regions under the m-th feature dimension; S_t(p, g_i) is the torso similarity value, and S_l(p, g_i) is the leg similarity value.

After the initial ranking is obtained from the similarity values, the user, through human-computer interaction and based on visual inspection and prior knowledge, marks the torso-similar samples (PSNt), torso-dissimilar samples (PDNt), leg-similar samples (PSNl), and leg-dissimilar samples (PDNl).

Step S2 comprises determining the neighbor sets, adjusting the region weights, and adjusting the feature weights:

This step uses region K-nearest-neighbor sets to find, for each region, a pair of sample sets, one region-similar and one region-dissimilar; for each such pair, dynamic K-nearest-neighbor selection is applied to re-determine a new set of K neighbors, and the weights of the different regions are adjusted accordingly. The feature information contained in different regions influences the final re-identification result to different degrees, so different regions carry different weight values. Within a single region, which contains many features, some features distinguish the target markedly from other samples and strongly affect the final matching result, so they receive larger weights; other features do not differ significantly from other samples, affect the result little, and receive smaller weights. The embodiment therefore first determines the neighbor sets and then adjusts the weights. For each region-dissimilar neighbor set, all feature information of that region is collected into a multi-dimensional feature vector; for the updated k neighbors, the variation of the feature values along each dimension determines that feature's influence on the matching result. Features that represent the common properties of the query target well are given larger weights; conversely, features that do not represent the query target's properties well are given smaller weights. In this way the feature weights of the different features are adjusted.

This can be implemented in the following sub-steps:

Step S2.1: for the query image, first find the region-similar and region-dissimilar sample sets via region K-nearest-neighbor sets; then, applying the dynamic k-nearest-neighbor rule, for each sample labeled similar in a region, update and obtain a new region-similar set of k neighbors, and for each sample labeled dissimilar in a region, update and obtain a new region-dissimilar set of k neighbors. The concrete implementation is as follows:

For each region, formula (1) yields a ranking based on that region's similarity. After searching with the region K-nearest-neighbor method for the query image, let Set_PSNl and Set_PDNl denote the pair of neighbor sets for leg similarity and dissimilarity, and Set_PSNt and Set_PDNt the pair for torso similarity and dissimilarity. The K samples contained in each neighbor set are obtained from the feedback samples via the K-nearest-neighbor rule; K may be preset, and the embodiment uses K = 5.

For each sample labeled similar in a region, the region-dissimilar neighbor set serves as the boundary: samples are selected from the top of the ranking downward until a selected sample belongs to the region-dissimilar neighbor set, yielding a new region-similar set of k neighbors. Likewise, for each sample labeled dissimilar in a region, the region-similar neighbor set serves as the boundary, and the same procedure yields a new region-dissimilar set of k neighbors. The value of k adjusts dynamically according to where the boundary falls.
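The boundary-based selection just described can be sketched as a walk down the region ranking that stops at the first member of the opposing set, so that k adapts to where the boundary falls. A minimal Python sketch (identifiers are illustrative, not from the patent):

```python
def dynamic_neighbors(ranking, boundary_set):
    """Select samples from the top of `ranking` (sample ids ordered by
    decreasing region similarity) until a sample belonging to
    `boundary_set` (the opposing neighbor set for this region) is met.
    The number k of selected neighbors thus adjusts dynamically."""
    selected = []
    for sample_id in ranking:
        if sample_id in boundary_set:
            break  # the boundary has been reached; stop here
        selected.append(sample_id)
    return selected

# Example: ranking for the torso region; samples 'd' and 'f' belong to
# the torso-dissimilar neighbor set, which acts as the boundary.
ranking = ['a', 'b', 'c', 'd', 'e', 'f']
new_similar_set = dynamic_neighbors(ranking, boundary_set={'d', 'f'})
# new_similar_set == ['a', 'b', 'c'], so k = 3 for this query
```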

Step S2.2: update the region weight values and feature weights:

Let the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample in the feedback sample set be computed as

S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} WFO_j(m) · S(F_p,j(m), F_gi,j(m)) ]    (4)

where WFO_j(m) denotes the feature weight of the j-th region part under the m-th feature dimension; F_p,j(m) denotes the m-th dimension of the feature vector of the j-th region; W_j(p, g_i) denotes the region weight of the j-th region between the query image p and the feedback sample; S(F_p,j(m), F_gi,j(m)) denotes the similarity between the query image p and the feedback sample for the j-th region under the m-th feature dimension; j takes values 1, 2, …, U.

Step S2.2.1: using distance-metric learning methods from machine learning, the region weights are updated by the following formulas:

W_j(p, g_i) = W_j(p, g_i) × β1,  β1 > 1    (5)

W_j(p, g_i) = W_j(p, g_i) × β2,  0 < β2 < 1    (6)

where W_j(p, g_i) denotes the region weight of the j-th region part, and β1 and β2 are preset coefficients;

Step S2.2.2: using distance-metric learning methods from machine learning, the feature weights are updated by the following formula:

WFO_j(m) = WFO_j(m) × α / (1 + μ_m σ_m)    (7)

where μ_m and σ_m denote the mean and variance of the m-th dimensional feature values (m = 1, 2, …, M), and α is a preset parameter.
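The multiplicative updates of formulas (5) through (7) can be sketched directly. The constants β1 = 100, β2 = 0.01, and α = 0.5 follow the embodiment values given elsewhere in the description; the function names and the rest of the setup are illustrative assumptions.

```python
def update_region_weight(w_region, region_similar, beta1=100.0, beta2=0.01):
    """Formulas (5)/(6): scale the region weight up (beta1 > 1) for
    region-similar samples and down (0 < beta2 < 1) for dissimilar ones."""
    return w_region * (beta1 if region_similar else beta2)

def update_feature_weight(w_feat, mu_m, sigma_m, alpha=0.5):
    """Formula (7): WFO_j(m) = WFO_j(m) * alpha / (1 + mu_m * sigma_m),
    where mu_m and sigma_m are the mean and variance of the m-th
    dimensional feature values over the new neighbor set."""
    return w_feat * alpha / (1.0 + mu_m * sigma_m)

w = update_region_weight(1.0, region_similar=True)     # 1.0 * 100 = 100.0
w2 = update_region_weight(1.0, region_similar=False)   # 1.0 * 0.01 = 0.01
f = update_feature_weight(1.0, mu_m=1.0, sigma_m=1.0)  # 0.5 / 2 = 0.25
```

Because the updates are purely multiplicative, they compound over iterations; the weights would typically be re-normalized before the next similarity computation.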

In the embodiment:

The similarity between the suspect target (image p) and the image of the i-th sample in the feedback sample set can be computed as:

S_a(p, g_i) = W_t(p, g_i) · Σ_m WFO_t(m) · S(F_p,t(m), F_gi,t(m)) + W_l(p, g_i) · Σ_m WFO_l(m) · S(F_p,l(m), F_gi,l(m))    (8)

where WFO_l(m) and WFO_t(m) denote the feature weights of the leg and torso parts under the m-th feature dimension; F_p,l(m) and F_p,t(m) denote the m-dimensional feature vectors of the leg and torso parts; W_l(p, g_i) and W_t(p, g_i) denote the region weights of the leg and torso regions between the suspect target and the sample; S(F_p,l(m), F_gi,l(m)) and S(F_p,t(m), F_gi,t(m)) denote the similarity between the suspect target and the sample for the leg and torso regions under the m-th feature dimension.
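The combination of region weights and per-region weighted similarities in formula (8) can be sketched as follows; the per-dimension similarity is again assumed, for illustration, to be one minus the absolute difference, and the region data layout is an assumption.

```python
def overall_similarity(regions):
    """Formula (8)/(4): S_a(p, g_i) = sum over regions j of
    W_j(p, g_i) * sum_m WFO_j(m) * S(F_p,j(m), F_gi,j(m)).
    `regions` maps a region name to a tuple
    (region_weight, feat_weights, query_feat, sample_feat)."""
    total = 0.0
    for w_region, feat_weights, q_feat, g_feat in regions.values():
        total += w_region * sum(
            w * (1.0 - abs(q - g))
            for w, q, g in zip(feat_weights, q_feat, g_feat))
    return total

regions = {
    'torso': (2.0, [1.0, 1.0], [0.5, 0.5], [0.5, 0.5]),  # perfect match
    'legs':  (1.0, [1.0, 1.0], [0.0, 0.0], [0.5, 0.5]),  # weaker match
}
s = overall_similarity(regions)  # 2.0 * 2.0 + 1.0 * 1.0 = 5.0
```

Here the torso, with the larger region weight, dominates the combined score, which is exactly the behavior the region-weight updates are meant to produce.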

Step S2.2.1: using distance-metric learning methods from machine learning, update the region weight values.

The region weight values are updated by formulas (9) and (10):

W_t,l(p, g_i) = W_t,l(p, g_i) × β1,  β1 > 1    (9)

W_t,l(p, g_i) = W_t,l(p, g_i) × β2,  0 < β2 < 1    (10)

where W_t,l(p, g_i) stands for W_t(p, g_i) or W_l(p, g_i). On the first execution of this step, the initial values of W_t(p, g_i) and W_l(p, g_i) may be preset according to the differences between the selected query pedestrian image (image p of the suspect target) and the feedback samples in the different regions; for example, in environments where clothing colors are distinct and appearance varies little, the upper body receives a larger weight. On subsequent executions, the values of W_t(p, g_i) and W_l(p, g_i) before the calculation are the weight values updated by the previous iteration of this step. In formula (9), β1 > 1 applies to samples labeled region-similar, so that the new set of k neighbors they form lies closer to the selected query image, increasing the region weight of that region; in formula (10), 0 < β2 < 1 applies to region-dissimilar samples, so that the new k neighbors they form lie farther from the selected query image, decreasing the region weight of that region. The embodiment presets β1 = 100 and β2 = 0.01. Because the new K neighbor samples are reselected by the computer under the dynamic K-nearest-neighbor rule, the newly formed sets include, compared with the original samples, some samples whose features in certain regions are more pronounced (for example, a larger body size); when such samples are matched against the query image, those regions influence the result more strongly, so the region weights change accordingly (β1 increases them). The same reasoning applies to the region-dissimilar neighbor sets.

Step S2.2.2: using distance-metric learning methods from machine learning, update the feature weight values.

The feature weights are adjusted and updated by the following formulas:

WFO_l(m) = WFO_l(m) × α / (1 + μ_m σ_m)    (11)

WFO_t(m) = WFO_t(m) × α / (1 + μ_m σ_m)    (12)

In formulas (11) and (12), on the first execution of this step the initial values of WFO_l(m) and WFO_t(m) may be preset by the user according to the similarity between the pedestrian image and the feedback samples under the various features; μ_m and σ_m denote the mean and variance of the m-th dimensional feature values (m = 1, 2, …, M), and the preset parameter α generally takes a value in the range [0.5, 1], for example 0.5. On subsequent executions, the values of WFO_l(m) and WFO_t(m) before the calculation are the weight values updated by the previous iteration of this step. Within the updated region-dissimilar set of k neighbors, a large similarity value S(F_p,l(m), F_gi,l(m)) or S(F_p,t(m), F_gi,t(m)) indicates that the feature of that dimension does not truly reflect the user's actual intention, and its matching weight should be reduced. All similarity sequences S(F_p,l(m), F_gi,l(m)) and S(F_p,t(m), F_gi,t(m)) are stacked into an L_pn × (2M) matrix, where 2M is the number of features; each column of the matrix is the similarity sequence, of length L_pn, under one feature dimension. When all labeled region-dissimilar samples have close similarity values under a given feature dimension, that feature represents the common properties of the query target well, and the reciprocal of the standard deviation of the sequence becomes an unbiased estimator of the feature weight; formulas (11) and (12) are applied when the parameter μ_m ranks in the top M/4. If, under the m-th feature dimension, some gallery image instead shows a large similarity to the query image, this does not reflect the user's true intention (the goal is to find the samples whose feature values differ most and push them as far as possible from the cluster center, so that similar samples rank correspondingly higher, close to the cluster center); hence, by empirical convention, formulas (11) and (12) are applied only when μ_m falls in the top M/4. μ_m and σ_m are the mean and variance of the feature vectors extracted from the newly obtained neighbor sample set (they vary with the samples), so the feature weights update automatically.
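The stacking just described can be sketched numerically: the per-dimension similarity sequences over the new region-dissimilar neighbors form the columns of an L_pn × (2M) matrix, and each column's mean and variance drive the update of formulas (11)/(12). A hedged Python sketch (helper names are assumptions; a low-variance column keeps more of its weight, matching the reciprocal-of-standard-deviation intuition above):

```python
def column_stats(sim_matrix):
    """For each column (feature dimension) of the stacked similarity
    matrix, return (mean, variance) over the L_pn neighbor rows."""
    n = len(sim_matrix)
    stats = []
    for m in range(len(sim_matrix[0])):
        col = [row[m] for row in sim_matrix]
        mu = sum(col) / n
        var = sum((x - mu) ** 2 for x in col) / n
        stats.append((mu, var))
    return stats

# Two dissimilar neighbors, three feature dimensions: dimension 0 has
# tightly clustered similarities (small variance, so it retains more
# weight under formula (11)); dimension 2 varies widely.
sims = [[0.9, 0.5, 0.1],
        [0.9, 0.3, 0.9]]
stats = column_stats(sims)
alpha = 0.5
new_weights = [1.0 * alpha / (1.0 + mu * var) for mu, var in stats]
```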

Step S3 comprises feature expression and distance measurement to obtain the query matching result:

Feature expression and distance measurement may use mature methods from existing person re-identification techniques.

Feature expression builds on the features already extracted from the image: with the adjusted weight values, the different features are trained and learned repeatedly to select those that best characterize the difference between the image and the other samples.

In distance measurement, the adjusted weight values obtained in step S2 above are used with formula (4) to re-measure similarity and obtain a new query matching result. This makes full use of the regions whose adjusted weights increased, together with the feature values they contain, to find samples of higher similarity and improve the accuracy of person re-identification.

Step S4: display the query matching result obtained in step S3; if it meets the requirements, output the result; if not, return to step S1.2 and iterate until the requirements are met. In practice, the optimized, re-ranked result can be displayed on the interactive interface, and the user can combine the matching target with other information to judge comprehensively whether the query matching result meets the requirements.
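The overall flow of steps S1 through S4 can be sketched as a feedback loop. This is a structural outline only, under assumed callables (rank, collect_feedback, update_weights, and satisfied stand in for the operations described above):

```python
def reid_with_region_feedback(query, gallery, weights,
                              rank, collect_feedback, update_weights,
                              satisfied, max_rounds=3):
    """Structural sketch of steps S1-S4: rank, collect feedback, update
    region/feature weights, re-rank, and repeat until the user accepts."""
    ranking = rank(query, gallery, weights)                 # initial query (S1.1)
    rounds = 0
    while not satisfied(ranking) and rounds < max_rounds:   # step S4 check
        feedback = collect_feedback(ranking)                # step S1.2
        weights = update_weights(query, feedback, weights)  # step S2
        ranking = rank(query, gallery, weights)             # step S3
        rounds += 1
    return ranking, rounds

# Tiny stub run: the weight flips sign after one feedback round, which
# corrects the ranking and satisfies the acceptance check.
rank_fn = lambda q, g, w: sorted(g, key=lambda x: -w * x)
result, n_rounds = reid_with_region_feedback(
    query=0.0, gallery=[1, 2, 3], weights=-1.0,
    rank=rank_fn,
    collect_feedback=lambda r: r[:1],
    update_weights=lambda q, fb, w: abs(w) * 2,
    satisfied=lambda r: r[0] == 3)
```

In a real system, `satisfied` would be the user's judgment on the displayed ranking rather than an automatic predicate.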

A person re-identification system based on region relevance feedback uses relevance feedback to optimize the ranking results. In practice it can provide a human-computer interaction interface through which the user selects irrelevant images and marks, on each image, the regions that are similar or dissimilar to the query image; this information drives the relevance feedback. The region-based relevance feedback technique makes full use of the local feature information of pedestrian images and combines it with conventional person re-identification methods to optimize the ranking results. The system can be implemented as software modules, comprising the following:

A feedback module for initial query matching and feedback sample collection, comprising the following sub-modules:

an initial query matching sub-module for taking the input target person image as the query image, performing the initial query, and outputting the initial query ranking;

a feedback sample collection sub-module: on the first collection, irrelevant images are chosen from a preset number of top-ranked images in the initial ranking, labeled by type, and form the feedback sample set; on subsequent collections, irrelevant images are chosen from the query ranking produced by the result display module in the previous iteration, labeled by type, and added to the feedback sample set.

The labeling scheme is as follows: U regions 1, 2, …, U are defined, and each feedback sample is labeled as one of 2U types, namely similar to the query image in region 1, dissimilar in region 1, similar in region 2, dissimilar in region 2, …, similar in region U, dissimilar in region U. When a feedback sample is labeled, visual features are extracted per region according to the region division, yielding an M-dimensional feature vector whose arbitrary dimension is denoted the m-th, and the feature vector is compared for similarity against the corresponding region of the query image.

A weight module for determining the neighbor sets and adjusting the region and feature weights, comprising the following sub-modules: a neighbor-set sub-module which, for the query image, first finds the region-similar and region-dissimilar sample sets via region K-nearest-neighbor sets, and then, applying the dynamic k-nearest-neighbor rule, for each sample labeled similar in a region updates and obtains a new region-similar set of k neighbors, and for each sample labeled dissimilar in a region updates and obtains a new region-dissimilar set of k neighbors;

a region-weight and feature-weight update sub-module for performing the following operations:

Let the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample in the feedback sample set be computed as

S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} WFO_j(m) · S(F_p,j(m), F_gi,j(m)) ]    (4)

where WFO_j(m) denotes the feature weight of the j-th region part under the m-th feature dimension; F_p,j(m) denotes the m-th dimension of the feature vector of the j-th region; W_j(p, g_i) denotes the region weight of the j-th region between the query image p and the feedback sample; S(F_p,j(m), F_gi,j(m)) denotes the similarity between the query image p and the feedback sample for the j-th region under the m-th feature dimension;

using distance-metric learning methods from machine learning, the region weights are updated by the following formulas:

W_j(p, g_i) = W_j(p, g_i) × β1,  β1 > 1    (5)

W_j(p, g_i) = W_j(p, g_i) × β2,  0 < β2 < 1    (6)

where W_j(p, g_i) denotes the region weight of the j-th region part, and β1 and β2 are preset coefficients;

using distance-metric learning methods from machine learning, the feature weights are updated by the following formula:

WFO_j(m) = WFO_j(m) × α / (1 + μ_m σ_m)    (7)

where μ_m and σ_m denote the mean and variance of the m-th dimensional feature values (m = 1, 2, …, M), and α is a preset parameter;

A query matching module for performing feature expression and distance measurement according to the adjusted region weights from the weight module and formula (4), obtaining the query matching result;

A result display module which displays the query matching result from the query matching module; if it meets the requirements, the result is output; if not, the feedback sample collection sub-module is notified to update the feedback sample set until the requirements are met.

For the specific implementation of each module, refer to the steps of the method flow; details are not repeated here.

The specific embodiments described herein merely illustrate the spirit of the present invention. Those skilled in the art may make various modifications or additions to the described embodiments, or substitute similar means, without departing from the spirit of the invention or exceeding the scope defined by the appended claims.

Claims (7)

1. A pedestrian re-identification method based on region relevance feedback, characterized by comprising the following steps:
Step S1, performing initial query matching and feedback sample collection, comprising the following sub-steps:
Step S1.1, performing initial query matching, comprising taking an input target person image as the query image, performing an initial query, and outputting the initial query ranking;
Step S1.2, performing feedback sample collection: when step S1.2 is executed for the first time, choosing irrelevant images as typed feedback samples from a preset number of top-ranked images in the initial ranking, forming a feedback sample set; on subsequent executions of step S1.2, choosing irrelevant images as typed feedback samples from the query ranking obtained in step S4 of the previous iteration and adding them to the feedback sample set;
the typing scheme being: U regions 1, 2, …, U are defined, and each feedback sample is labeled as one of 2U types, namely similar to the query image in region 1, dissimilar in region 1, similar in region 2, dissimilar in region 2, …, similar in region U, dissimilar in region U; when a feedback sample is labeled, visual features are extracted per region according to the region division, yielding an M-dimensional feature vector whose arbitrary dimension is denoted the m-th, and the feature vector is compared for similarity against the corresponding region of the query image;
Step S2, determining neighbor sets and adjusting region weights and feature weights, comprising the following sub-steps:
Step S2.1, for the query image, first finding the region-similar and region-dissimilar sample sets via region K-nearest-neighbor sets; then, applying the dynamic k-nearest-neighbor rule, for each sample labeled similar in a region, updating and obtaining a new region-similar set of k neighbors, and for each sample labeled dissimilar in a region, updating and obtaining a new region-dissimilar set of k neighbors;
Step S2.2, updating the region weights and feature weights:
letting the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample in the feedback sample set be computed as
S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} WFO_j(m) · S(F_p,j(m), F_gi,j(m)) ]    (formula one)
where WFO_j(m) denotes the feature weight of the j-th region part under the m-th feature dimension; F_p,j(m) denotes the m-th dimension of the feature vector of the j-th region; W_j(p, g_i) denotes the region weight of the j-th region between the query image p and the feedback sample; S(F_p,j(m), F_gi,j(m)) denotes the similarity between the query image p and the feedback sample for the j-th region under the m-th feature dimension; j takes values 1, 2, …, U;
Step S2.2.1, using distance-metric learning methods from machine learning, updating the region weights by the following formulas:
W_j(p, g_i) = W_j(p, g_i) × β1,  β1 > 1    (formula two)
W_j(p, g_i) = W_j(p, g_i) × β2,  0 < β2 < 1    (formula three)
where W_j(p, g_i) denotes the region weight of the j-th region part, and β1 and β2 are preset coefficients;
Step S2.2.2, using distance-metric learning methods from machine learning, updating the feature weights by the following formula:
WFO_j(m) = WFO_j(m) × α / (1 + μ_m σ_m)    (formula four)
where μ_m and σ_m denote the mean and variance of the m-th dimensional feature values (m = 1, 2, …, M), and α is a preset parameter;
Step S3, according to the adjusted region weights obtained in step S2 and formula one, performing feature expression and distance measurement to obtain the query matching result;
Step S4, displaying the query matching result of step S3; if it meets the requirements, outputting the result; if not, returning to step S1.2 to iterate until the requirements are met.
2. The pedestrian re-identification method based on region relevance feedback according to claim 1, characterized in that: when the feedback samples, divided by region, are compared for similarity against the corresponding regions of the query image, the similarity measure adopted for each region is computed as
S_j(p, g_i) = Σ_m WFO_j(m) · S(F_p,j(m), F_gi,j(m))    (formula five)
where WFO_j(m) denotes the feature weight of the j-th region part; the first execution of step S1.2 uses a preset initial value, and subsequent executions of step S1.2 use the weight value updated by step S2 in the previous iteration; F_p,j(m) denotes the m-dimensional feature vector of the j-th region; S(F_p,j(m), F_gi,j(m)) denotes the similarity between the query image p and the feedback sample g_i for the j-th region under the m-th feature dimension.
3. The pedestrian re-identification method based on region relevance feedback according to claim 2, characterized in that: two regions are defined, comprising the torso and the legs; when they are compared for similarity against the corresponding regions of the query image, the similarity measures adopted for the torso and the legs are computed as
S_t(p, g_i) = Σ_m WFO_t(m) · S(F_p,t(m), F_gi,t(m))    (formula six)
S_l(p, g_i) = Σ_m WFO_l(m) · S(F_p,l(m), F_gi,l(m))    (formula seven)
where WFO_l(m) and WFO_t(m) denote the feature weights of the leg and torso parts respectively; F_p,l(m) and F_p,t(m) denote the m-dimensional feature vectors of the leg and torso parts; S(F_p,l(m), F_gi,l(m)) and S(F_p,t(m), F_gi,t(m)) denote the similarity between the query image and the feedback sample for the leg and torso parts under the m-th feature dimension; S_t(p, g_i) is the torso similarity value, and S_l(p, g_i) is the leg similarity value.
4. The pedestrian re-identification method based on region relevance feedback according to claim 3, characterized in that: the similarity between the query image p and the image of the i-th sample of the feedback sample set is computed as

    S_a(p, g_i) = W_t(p, g_i) · Σ_m W_FO^t(m) · S(F_p^t(m), F_{g_i}^t(m)) + W_l(p, g_i) · Σ_m W_FO^l(m) · S(F_p^l(m), F_{g_i}^l(m))        (Formula 8)

where W_FO^t(m) and W_FO^l(m) denote respectively the feature weights of the torso and leg regions under the m-th feature dimension; F^t(m) and F^l(m) denote respectively their m-th dimensional feature vectors; W_t(p, g_i) and W_l(p, g_i) denote respectively the region weights between the suspect target and the sample based on the torso and leg regions; and S(F_p^t(m), F_{g_i}^t(m)) and S(F_p^l(m), F_{g_i}^l(m)) denote respectively the similarities between the suspect target and the sample on the m-th dimensional feature value of the torso and leg regions.
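A sketch of Formula 8 under the same assumed per-dimension similarity (1 − |difference|): the torso and leg scores are each feature-weighted, then combined with their region weights W_t and W_l. All numeric values are illustrative.

```python
import numpy as np

def weighted_region_sim(w_fo, fp, fg):
    # inner sum of Formula 8; per-dimension similarity 1 - |a - b| (assumed)
    return float(np.sum(np.asarray(w_fo) * (1.0 - np.abs(np.asarray(fp) - np.asarray(fg)))))

def overall_similarity(W_t, W_l, w_fo_t, w_fo_l, p_torso, p_leg, g_torso, g_leg):
    """Formula 8: S_a(p, g_i) = W_t * S_t(p, g_i) + W_l * S_l(p, g_i)."""
    return (W_t * weighted_region_sim(w_fo_t, p_torso, g_torso)
            + W_l * weighted_region_sim(w_fo_l, p_leg, g_leg))

# Identical torso features, completely different leg features.
s_a = overall_similarity(0.6, 0.4,
                         [0.5, 0.5], [0.5, 0.5],
                         [1.0, 0.0], [1.0, 0.0],
                         [1.0, 0.0], [0.0, 1.0])
print(s_a)  # 0.6: only the torso term contributes
```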
5. The pedestrian re-identification method based on region relevance feedback according to claim 4, characterized in that: in step S2.2.1, using distance-metric learning, the region weights are updated by the formulas

    W_{t,l}(p, g_i) = W_{t,l}(p, g_i) × β_1,  β_1 > 1        (Formula 9)
    W_{t,l}(p, g_i) = W_{t,l}(p, g_i) × β_2,  0 < β_2 < 1        (Formula 10)

where W_{t,l}(p, g_i) denotes W_t(p, g_i) or W_l(p, g_i), and β_1, β_2 are preset coefficients.
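Formulas 9 and 10 can be sketched as a single multiplicative update. The claim does not state which feedback condition selects β_1 versus β_2; the mapping below (boost when the region is labeled similar, shrink when dissimilar) and the β values are assumptions for illustration.

```python
def update_region_weight(w, region_marked_similar, beta1=1.25, beta2=0.8):
    """Formulas 9-10: W <- W * beta1 (beta1 > 1) or W <- W * beta2 (0 < beta2 < 1).
    The similar/dissimilar mapping is an assumed reading of the claim."""
    assert beta1 > 1 and 0 < beta2 < 1
    return w * (beta1 if region_marked_similar else beta2)

print(update_region_weight(1.0, True))   # 1.25
print(update_region_weight(1.0, False))  # 0.8
```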
6. The pedestrian re-identification method based on region relevance feedback according to claim 4, characterized in that: in step S2.2.2, using distance-metric learning, the feature weights are updated by the formulas

    W_FO^l(m) = W_FO^l(m) × α / (1 + μ_m · σ_m)        (Formula 11)
    W_FO^t(m) = W_FO^t(m) × α / (1 + μ_m · σ_m)        (Formula 12)

where μ_m and σ_m (m = 1, 2, …) are respectively the mean and the variance of the m-th dimensional feature values, and α is a preset parameter.
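Formulas 11 and 12 share one form, shown below. μ_m and σ_m are taken here as the mean and the variance of the m-th feature over the region's feedback samples (the claim says "of the m-th dimensional feature values" without fixing the sample set); α = 1 is illustrative.

```python
import numpy as np

def update_feature_weights(w_fo, region_feats, alpha=1.0):
    """Formulas 11-12: W_FO(m) <- W_FO(m) * alpha / (1 + mu_m * sigma_m).
    Dimensions that vary widely across feedback samples are down-weighted."""
    feats = np.asarray(region_feats, dtype=float)
    mu = feats.mean(axis=0)       # mean of each feature dimension
    sigma = feats.var(axis=0)     # variance of each feature dimension
    return np.asarray(w_fo, dtype=float) * alpha / (1.0 + mu * sigma)

w_new = update_feature_weights([1.0, 1.0], [[0.0, 0.0], [0.0, 2.0]])
print(w_new.tolist())  # [1.0, 0.5]: the unstable second dimension is halved
```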
7. A pedestrian re-identification system based on region relevance feedback, characterized in that it comprises the following modules:
a feedback module for performing the initial query matching and the feedback sample collection, comprising the following submodules:
an initial query matching submodule for inputting a target-person image as the query image, performing the initial query, and outputting the initial query ranking result;
a feedback sample collection submodule which, on its first execution, selects irrelevant images from a preset number of the top-ranked images of the initial ranking result as feedback samples, labels their types, and forms the feedback sample set; on subsequent executions, it selects irrelevant images from the query ranking result produced by the result display module in the previous iteration, labels their types, and adds them to the feedback sample set;
the labeling scheme is as follows: if U regions 1, 2, …, U are divided, each feedback sample is labeled as one of 2U types: similar to the query image based on region 1, dissimilar based on region 1, similar based on region 2, dissimilar based on region 2, …, similar based on region U, dissimilar based on region U; when a feedback sample is labeled, visual features are extracted from each region according to the region division; suppose an M-dimensional feature vector is obtained, any dimension of which is denoted m; the similarity comparison is then performed between the feature vector and the corresponding region of the query image;
a weights module for determining the neighbor sets and performing the region weight adjustment and the feature weight adjustment, comprising the following submodules:
a neighbor-set determination submodule which, for the query image, first finds the region-similar and region-dissimilar sample sets by the region k-nearest-neighbor method; then, using a dynamic k-nearest-neighbor rule, for each sample labeled similar in a given region it updates and obtains a new region-similar set of that region containing the k nearest neighbors, and for each sample labeled dissimilar in a given region it updates and obtains a new region-dissimilar set of that region containing the k nearest neighbors;
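The neighbor-set determination can be sketched as below: among feedback samples labeled similar (or dissimilar) for a region, keep the k nearest to the query in that region's feature space. Euclidean distance and the label strings are assumptions; the patent does not fix the metric.

```python
import numpy as np

def region_knn_sets(query_feat, sample_feats, labels, k=2):
    """For one region: split feedback samples by their similar/dissimilar
    label, then keep the k nearest to the query within each group."""
    q = np.asarray(query_feat, dtype=float)
    def k_nearest(indices):
        dists = [np.linalg.norm(q - np.asarray(sample_feats[i], dtype=float)) for i in indices]
        return [indices[j] for j in np.argsort(dists)[:k]]
    similar = [i for i, lab in enumerate(labels) if lab == "similar"]
    dissimilar = [i for i, lab in enumerate(labels) if lab == "dissimilar"]
    return k_nearest(similar), k_nearest(dissimilar)

sim_set, dis_set = region_knn_sets(
    [0.0, 0.0],
    [[0.0, 1.0], [0.0, 3.0], [0.0, 2.0], [5.0, 5.0]],
    ["similar", "similar", "similar", "dissimilar"], k=2)
print(sim_set, dis_set)  # [0, 2] [3]
```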
an update-region-weight-and-feature-weight submodule for performing the following operations:
let the similarity S_a(p, g_i) between the query image p and the image of the i-th feedback sample of the feedback sample set be computed as

    S_a(p, g_i) = Σ_{j=1..U} [ W_j(p, g_i) · Σ_{m=1..M} W_FO^j(m) · S(F_p^j(m), F_{g_i}^j(m)) ]        (Formula 1)

where W_FO^j(m) denotes the feature weight of region j under the m-th feature dimension; F^j(m) denotes the m-th dimensional feature vector of region j; W_j(p, g_i) denotes the region weight between query image p and the feedback sample based on region j; and S(F_p^j(m), F_{g_i}^j(m)) denotes the similarity between query image p and the feedback sample on the m-th dimensional feature value of region j;
using distance-metric learning, the region weights are updated by the formulas

    W_j(p, g_i) = W_j(p, g_i) × β_1,  β_1 > 1        (Formula 2)
    W_j(p, g_i) = W_j(p, g_i) × β_2,  0 < β_2 < 1        (Formula 3)

where W_j(p, g_i) denotes the region weight of region j, and β_1, β_2 are preset coefficients;
using distance-metric learning, the feature weights are updated by the formula

    W_FO^j(m) = W_FO^j(m) × α / (1 + μ_m · σ_m)        (Formula 4)

where μ_m and σ_m (m = 1, 2, …) are respectively the mean and the variance of the m-th dimensional feature values, and α is a preset parameter;
a query matching module for performing feature representation and distance measurement according to the region weights adjusted by the weights module and Formula 1, obtaining the query matching result;
a result display module for displaying the query matching result obtained by the query matching module; if it meets the requirements, the result is output; if not, the feedback sample collection submodule is notified to update the feedback sample set, until the requirements are met.
CN201410076028.3A | 2014-03-04 | 2014-03-04 | Pedestrian repeat recognition method and system based on area related feedback | Expired - Fee Related | CN103793721B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410076028.3A | 2014-03-04 | 2014-03-04 | Pedestrian repeat recognition method and system based on area related feedback | CN103793721B (en)

Publications (2)

Publication Number | Publication Date
CN103793721A | true | 2014-05-14
CN103793721B (en) | 2017-05-10

Family

ID=50669363

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201410076028.3A | Expired - Fee Related | CN103793721B (en) | 2014-03-04 | 2014-03-04 | Pedestrian repeat recognition method and system based on area related feedback

Country Status (1)

Country | Link
CN (1) | CN103793721B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101539930A (en)* | 2009-04-21 | 2009-09-23 | 武汉大学 | Search method of related feedback images
CN102663366A (en)* | 2012-04-13 | 2012-09-12 | 中国科学院深圳先进技术研究院 | Method and system for identifying pedestrian target
CN103325122A (en)* | 2013-07-03 | 2013-09-25 | 武汉大学 | Pedestrian retrieval method based on bidirectional sequencing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Chunxiao Liu et al., "POP: Person Re-Identification Post-Rank Optimisation", Proceedings of the IEEE International Conference on Computer Vision.*
Fischer M. et al., "Interactive person re-identification in TV series", 2010 International Workshop on Content-Based Multimedia Indexing.*
Kherfi M.L. et al., "Relevance Feedback for CBIR: A New Approach Based on Probabilistic Feature Weighting With Positive and Negative Examples", IEEE Transactions on Image Processing.*

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104200206A (en)* | 2014-09-09 | 2014-12-10 | 武汉大学 | Double-angle sequencing optimization based pedestrian re-identification method
CN104376334A (en)* | 2014-11-12 | 2015-02-25 | 上海交通大学 | Pedestrian comparison method based on multi-scale feature fusion
CN104376212A (en)* | 2014-11-17 | 2015-02-25 | 深圳市银雁金融配套服务有限公司 | Method and device for assessing operation accuracy
CN104376212B (en)* | 2014-11-17 | 2016-12-21 | 深圳市银雁金融服务有限公司 | The method and device of assessment operation accuracy
CN104462550A (en)* | 2014-12-25 | 2015-03-25 | 武汉大学 | Pedestrian re-recognition method based on similarity and dissimilarity fusion ranking optimization
CN104462550B (en)* | 2014-12-25 | 2017-07-11 | 武汉大学 | Pedestrian's recognition methods again of sorting consistence is merged based on similitude and dissimilarity
CN106557533A (en)* | 2015-09-24 | 2017-04-05 | 杭州海康威视数字技术股份有限公司 | A kind of method and apparatus of many image retrieval-by-unifications of single goal
CN106557533B (en)* | 2015-09-24 | 2020-03-06 | 杭州海康威视数字技术股份有限公司 | Single-target multi-image joint retrieval method and device
CN105488502B (en)* | 2015-11-27 | 2018-12-21 | 北京航空航天大学 | Object detection method and device
CN105488502A (en)* | 2015-11-27 | 2016-04-13 | 北京航空航天大学 | Target detection method and device
CN108229521B (en)* | 2017-02-23 | 2020-09-15 | 北京市商汤科技开发有限公司 | Object recognition network training method, device and system and application thereof
CN108229521A (en)* | 2017-02-23 | 2018-06-29 | 北京市商汤科技开发有限公司 | Training method, device, system and its application of Object identifying network
CN108960013B (en)* | 2017-05-23 | 2020-09-15 | 深圳荆虹科技有限公司 | Pedestrian re-identification method and device
CN108960013A (en)* | 2017-05-23 | 2018-12-07 | 上海荆虹电子科技有限公司 | A kind of pedestrian recognition methods and device again
CN107563327B (en)* | 2017-08-31 | 2021-07-20 | 武汉大学 | A pedestrian re-identification method and system based on self-paced feedback
CN107563327A (en)* | 2017-08-31 | 2018-01-09 | 武汉大学 | It is a kind of that the pedestrian fed back recognition methods and system again are walked based on oneself
WO2019080669A1 (en)* | 2017-10-23 | 2019-05-02 | 北京京东尚科信息技术有限公司 | Method for person re-identification in enclosed place, system, and terminal device
US11263446B2 (en) | 2017-10-23 | 2022-03-01 | Beijing Jingdong Shangke Information Technology Co., Ltd. | Method for person re-identification in closed place, system, and terminal device
CN108052665A (en)* | 2017-12-29 | 2018-05-18 | 深圳市中易科技有限责任公司 | A kind of data cleaning method and device based on distributed platform
CN108052665B (en)* | 2017-12-29 | 2020-05-05 | 深圳市中易科技有限责任公司 | Data cleaning method and device based on distributed platform
CN108710824A (en)* | 2018-04-10 | 2018-10-26 | 国网浙江省电力有限公司信息通信分公司 | A kind of pedestrian recognition method divided based on regional area
WO2020052513A1 (en)* | 2018-09-14 | 2020-03-19 | 阿里巴巴集团控股有限公司 | Image identification and pedestrian re-identification method and apparatus, and electronic and storage device
CN109740541A (en)* | 2019-01-04 | 2019-05-10 | 重庆大学 | A kind of pedestrian weight identifying system and method
CN110377774A (en)* | 2019-07-15 | 2019-10-25 | 腾讯科技(深圳)有限公司 | Carry out method, apparatus, server and the storage medium of personage's cluster
CN110377774B (en)* | 2019-07-15 | 2023-08-01 | 腾讯科技(深圳)有限公司 | Method, device, server and storage medium for person clustering
CN110536305A (en)* | 2019-08-29 | 2019-12-03 | 武汉赛可锐信息技术有限公司 | Wi-Fi hotspot methods of investigation, device, terminal device and storage medium
CN110536305B (en)* | 2019-08-29 | 2023-09-12 | 武汉赛可锐信息技术有限公司 | WiFi hotspot detection method, device, terminal equipment and storage medium
CN111914241A (en)* | 2020-08-06 | 2020-11-10 | 上海熙菱信息技术有限公司 | Method for dynamically identifying unstructured object identity information

Also Published As

Publication number | Publication date
CN103793721B (en) | 2017-05-10

Similar Documents

Publication | Publication Date | Title
CN103793721B (en) | Pedestrian repeat recognition method and system based on area related feedback
Liu et al. | Patch attention convolutional vision transformer for facial expression recognition with occlusion
Deng et al. | Image aesthetic assessment: An experimental survey
CN112750148B (en) | Multi-scale target perception tracking method based on twin network
Wang et al. | Large-scale isolated gesture recognition using convolutional neural networks
Zhang et al. | Attend to the difference: Cross-modality person re-identification via contrastive correlation
CN110516536B (en) | A Weakly Supervised Video Behavior Detection Method Based on Complementarity of Temporal Category Activation Maps
CN111709295A (en) | A real-time gesture detection and recognition method and system based on SSD-MobileNet
CN114787865A (en) | Light Tracking: A System and Method for Online Top-Down Human Pose Tracking
CN103559196B (en) | Video retrieval method based on multi-core canonical correlation analysis
Zhu et al. | Convolutional relation network for skeleton-based action recognition
CN110163117B (en) | Pedestrian re-identification method based on self-excitation discriminant feature learning
CN109753891A (en) | Soccer player posture calibration method and system based on human key point detection
CN103336835B (en) | Image retrieval method based on weight color-sift characteristic dictionary
CN109934258B (en) | Image retrieval method based on feature weighting and region integration
CN103778227A (en) | Method for screening useful images from retrieved images
CN107767416B (en) | Method for identifying pedestrian orientation in low-resolution image
CN102750347B (en) | Method for reordering image or video search
Wang et al. | Human activity prediction using temporally-weighted generalized time warping
Wang et al. | AMC-Net: Attentive modality-consistent network for visible-infrared person re-identification
CN104376308B (en) | A kind of human motion recognition method based on multi-task learning
Pang et al. | Analysis of computer vision applied in martial arts
Zhou et al. | A multidimensional feature fusion network based on MGSE and TAAC for video-based human action recognition
Jindal et al. | Spatio-temporal attention and gaussian processes for personalized video gaze estimation
Zhang et al. | 3D Graph Convolutional Feature Selection and Dense Pre-Estimation for Skeleton Action Recognition

Legal Events

Date | Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
CF01 | Termination of patent right due to non-payment of annual fee

Granted publication date: 2017-05-10
