Technical Field
The present invention relates to the field of intelligent video analysis, and more specifically to a character labeling method based on search and matching.
Background
With the vigorous development of the film and television industry, a large number of film and TV drama programs are produced every year, greatly enriching people's entertainment. The stories of most dramas revolve around characters. These characters are played by real actors, and the plot develops and deepens as the characters appear and interact. Character labeling of dramas, i.e. attaching the corresponding character name to each face appearing in a drama and establishing the face-to-character-name mapping, yields the temporal segments and spatial regions in which each character appears, and has therefore become an important topic with wide application value. Character labeling has become a fundamental supporting technology for services such as the intelligent and personalized management, browsing and retrieval of large-scale drama data, and serves as a core module in applications such as character-centric drama browsing, intelligent video summarization, and character-oriented video retrieval.
Several character labeling methods for film and TV dramas have been proposed; they fall roughly into face-model-based methods and script-based methods. Face-model-based methods collect a number of face images for each character as training samples and use them to build a per-character face model; a face in the drama is then labeled according to its similarity to the different character face models. Although such methods have been applied successfully in many systems, they require manual collection of training samples, which costs time and effort, and the trained face models are generally hard to transfer to other dramas: even for the same actor, her/his visual appearance may differ considerably across dramas, which makes face-model-based methods difficult to scale to the processing and analysis of large drama collections. Script-based methods, on the other hand, achieve character labeling by mining the temporal consistency between the textual and visual modalities of a drama. Typically, such a method first obtains the script and subtitle text of the program from external sources such as the Internet, and by aligning script and subtitles derives which character is speaking at which time. Combined with the time points of the faces detected in the drama, an initial face-to-character-name mapping is established, which is then refined using the visual similarity between faces. The advantage of script-based methods is that the labeling process is automatic (no human intervention). However, scripts and subtitles are not easy to obtain for all dramas: many dramas do not publish their scripts, scripts and subtitles may not fully correspond, and many dubbed productions have no Chinese script or subtitles. These factors limit the applicability of script-based methods.
Besides the above, several search-based celebrity image annotation methods have recently been proposed. These methods first use a search engine to collect celebrity face images and build a celebrity database. For an image to be annotated, a small number of highly similar images are retrieved from the database by visual similarity, and the image is then annotated with the celebrities those images belong to. However, the effectiveness of such methods has so far only been verified on databases containing a few hundred celebrities; moreover, this work targets the image domain rather than the video domain, and therefore cannot exploit valuable labeling cues such as video structure.
The prosperity of the Internet has put a large number of person images online. For a reasonably well-known actor, using her/his real name as the query, an image search engine retrieves many of her/his face images. These faces typically have the following characteristics: 1) the result images cover the actor's appearances in different dramas as well as in daily life, so the faces exhibit a certain amount of visual variation; 2) the results usually contain some noise, e.g. images showing other people's faces; 3) the fraction of correct images is usually higher among top-ranked results than among lower-ranked ones. On the other hand, using the drama title plus the name of the character played by the actor as the query, the retrieved face images have different characteristics because the query is more restrictive. Generally, when the queried character is a leading character in the drama, most top-ranked result images are face images of that character in that drama; when the character is not a leading one, the noise ratio among top-ranked results is usually higher, and faces of other leading characters of the same drama appear in the results with higher probability.
The face images obtained by searching for drama characters, together with the characteristics above, can obviously be exploited for better character labeling. However, the prior art does not make good use of this information, especially when it comes to mining the characteristics of the result images retrieved by different queries. The present invention is proposed exactly on this insight. Specifically, the images retrieved with drama title plus character name usually contain face images of that character as he or she appears in the drama, so a visual-matching approach can already achieve good labeling results. However, the retrieved image set may also contain some or even considerable noise, and identifying and removing this noise is a key difficulty. To this end, the present invention innovatively exploits the fact that the image set retrieved with the real name usually has a low noise ratio: the actor's visual attributes are mined from the "real name" face set, and these attributes are then used to denoise the "title plus character name" face set, yielding the actor's role face set. On this basis, high-precision character labeling is achieved using the visual similarity between role faces and faces in the drama, as well as the visual similarity among faces within the drama. Compared with traditional face-model-based methods, the labeling process of the present invention is automatic and requires no human intervention, and the role face images are determined adaptively per drama, giving good scalability. Compared with script-based methods, the present invention only requires the drama's cast list; obtaining a cast list is a much easier task than obtaining script and subtitles, and even if no cast list is available, compiling one manually is far easier than manually compiling a script and subtitle text. The present invention is therefore more universally applicable and can be applied to more dramas. Furthermore, whereas search-based celebrity annotation methods collect face images using person names only, the present invention fully mines the correlation between the face images returned by different queries and thereby achieves a highly targeted collection of drama character faces. In addition, the present invention mines the structural information of the video to further improve character labeling, and is thus technically more advanced and more accurate. For reference, see the invention patent with application number 201210215951.1, entitled "A Method for Automatically Generating Summaries of Main Characters within TV Programs", and the invention patent with application number 201110406765.1, entitled "A Character-based TV Drama Video Analysis Method".
Summary of the Invention
The object of the present invention is to fully mine and effectively utilize the face images of drama characters available on the Internet, and to provide an automatic, scalable, universally applicable and highly accurate character labeling method that serves as a fundamental supporting technology for services such as the intelligent and personalized management, browsing and retrieval of massive drama data.
To achieve the above object, the present invention provides a character labeling method based on search and matching, comprising the following steps:
S1. According to the list of objects to be labeled, obtain the set of objects to be labeled for the annotation scene and the information of all objects to be labeled;
S2. Construct text keywords for each object to be labeled, and obtain the corresponding sets of search result images with an image search engine;
S3. Perform face detection and visual attribute analysis on the obtained search result images, remove noise using the consistency of facial visual attributes, and obtain for each object to be labeled a role face set closely related to the annotation scene;
S4. Perform face detection and tracking on the annotation scene to obtain all face sequences therein; S5. Label the characters of the annotation scene based on the visual similarity between face sequences and on the analysis of visual similarity between the face sequences and the role faces of the objects to be labeled.
According to the present invention, a search-and-matching-based method for labeling drama characters is proposed. By mining the relationship between the face images retrieved by different queries, the method obtains role face images closely related to the drama, and then performs character labeling based on the visual similarity between the obtained role face images and the face sequences in the drama, as well as the visual similarity among the face sequences themselves. The method has the advantages of a fully automatic labeling process requiring no human intervention, high labeling accuracy, suitability for large-scale drama data processing, strong scalability, applicability to many types of dramas, and strong universality. It can also serve as a fundamental supporting technology for the intelligent and personalized management, browsing and retrieval of large-scale drama data, acting as a core module in applications such as character-centric drama browsing, intelligent video summarization, and character-oriented video retrieval.
Description of Drawings
FIG. 1 is a flow chart of a character labeling method based on search and matching according to an embodiment of the present invention.
Detailed Description
To make the object, technical solution and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
As shown in FIG. 1, the character labeling method based on search and matching of the present invention comprises the following steps:
S1. According to the cast list, i.e. the list of objects to be labeled, obtain the set of objects to be labeled for the annotation scene and the information of every object to be labeled: real name and character name;
S2. Construct text keywords for each actor, and obtain the corresponding sets of search result images with an image search engine;
S3. Perform face detection and visual attribute analysis on the obtained search result image sets, remove noise using the consistency of facial visual attributes, and obtain for each actor the role face set closely related to the drama;
S4. Perform face detection and tracking on the drama to obtain all face sequences in the drama;
S5. Label the characters of the drama based on the visual similarity between face sequences and on the analysis of visual similarity between the face sequences and the actors' role faces.
According to a preferred embodiment of the present invention, the specific process of obtaining the real names and character names of all objects to be labeled from the cast list is as follows:
Step 11: Visit websites specializing in drama cast lists and plot synopses, such as Aiyanyuan (http://www.ayanyuan.com/) and IMDb (http://www.imdb.com/), and query with the drama title to obtain the webpage of the drama, i.e. the webpage related to the annotation scene;
Step 12: According to the page layout of that webpage, scrape the cast list section to obtain the set of actors of the drama and, for each actor, information such as real name and character name.
According to a preferred embodiment of the present invention, for the actor set obtained in step 12, two groups of text keywords are constructed for each actor, namely the real name and the drama title plus character name, and search result images are obtained with an image search engine as follows:
Step 21: For each actor in the actor set obtained in step 12, construct two text keywords: one is the actor's real name, and the other is the combination of the full drama title and the name of the character played by the actor;
Step 22: After the text keywords have been constructed, use an image search engine, for example by calling the application programming interface provided by Google, to submit the two text keywords in turn to the Google image search engine, with the search parameters set to retrieve images containing faces, and obtain multiple search result images for the actor. For example, if the number of result images is set to 64, the Google image search engine returns the uniform resource locators (URLs) of the top 64 ranked face images to the retrieval client, which then downloads the corresponding images from those addresses. That is, in the ideal case where every image downloads successfully, this step yields 64 search result images; in practice, the number of images actually downloaded per keyword is usually between 50 and 64. The image sets downloaded with the real name and with the drama title plus character name are called the "real name" and "title plus character name" image sets, respectively.
Repeating the above process for each actor in the actor set yields each actor's "real name" and "title plus character name" image sets.
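The keyword construction of step 21 is a simple string template; a minimal sketch is given below (the function name, field names and placeholder values are illustrative assumptions, and the actual search-engine call of step 22 is not reproduced here):

```python
def build_queries(drama_title, real_name, role_name):
    """Step 21: two queries per actor -- the real name alone, and the
    full drama title combined with the character name."""
    return {
        "real_name": real_name,
        "title_plus_role": f"{drama_title} {role_name}",
    }

# e.g. for a hypothetical actor record:
queries = build_queries("Drama X", "Actor A", "Role R")
```

Each of the two query strings would then be submitted to the image search engine, and the downloaded results kept as the corresponding "real name" or "title plus character name" image set.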
According to a preferred embodiment of the present invention, face detection and visual attribute analysis are performed on the "real name" and "title plus character name" image sets obtained in step 2, noise is removed using the consistency of facial visual attributes, and the role face set of each actor closely related to the drama is obtained as follows:
Step 31: Call tools such as the face detection interface of the face recognition cloud service Face++ (http://www.faceplusplus.com.cn/) to perform face detection on the "real name" and "title plus character name" image sets, and represent the image sets as the corresponding "real name" and "title plus character name" face sets according to the detection results. At the same time, extract the visual attributes of each face to be labeled; in one embodiment of the present invention, the visual attributes comprise gender, age and race. Also locate M key facial regions of each face; in one embodiment of the present invention, nine key facial regions are used: the left and right corners of the two eyes, the lower-left, lower-middle and lower-right edges of the nose, and the left and right corners of the mouth. In each key facial region, an N-dimensional feature vector is extracted (for example a 128-dimensional SIFT feature vector), and the nine 128-dimensional feature vectors are concatenated into a 1152-dimensional facial visual feature descriptor. Repeating the above process for each actor in the actor set yields each actor's "real name" and "title plus character name" face sets, together with the three visual attributes and the key facial region positions of every face;
Step 32: On each actor's "real name" face set, generate statistical histograms of the three visual attributes, for example: a 2-dimensional histogram for gender, whose two dimensions correspond to male and female; an 8-dimensional histogram for age, whose 1st and 8th dimensions correspond to faces under 10 and over 70 years old respectively, a face whose age falls in the interval [10*(i-1), 10*i) corresponding to the i-th dimension; and a 3-dimensional histogram for race, whose three dimensions correspond to "Asian", "White" and "Black". Vote for the corresponding dimension of each histogram according to the visual attributes observed on each face. After all faces in the actor's "real name" face set have voted, compute the ratio of the vote count of the winning dimension to the number of faces; if this ratio exceeds a set threshold, for example 0.5, the corresponding visual attribute is considered salient on the "real name" face set. An actor is defined as recognizable if and only if all three of her/his visual attributes are salient; these three salient attributes are also defined as the actor's person attributes. Repeating the above process on the "real name" face sets of all actors yields all recognizable actors and her/their person attributes. Actors not defined as recognizable, whose person attributes cannot be identified from the web face images, are not considered in the subsequent character labeling;
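The voting scheme of step 32 can be sketched as follows; this is a minimal version assuming each detected face arrives as a record with `gender`, `age` and `race` fields (the field names are assumptions about the attribute analyzer's output; the 0.5 significance threshold is the one named above):

```python
from collections import Counter

def age_bin(age):
    # dim 1: under 10; dim i: [10*(i-1), 10*i); dim 8: 70 and above
    return min(int(age) // 10 + 1, 8)

def salient_value(values, threshold=0.5):
    """Return the winning histogram value if its vote share exceeds
    `threshold`, else None (the attribute is not salient)."""
    if not values:
        return None
    value, votes = Counter(values).most_common(1)[0]
    return value if votes / len(values) > threshold else None

def actor_profile(faces, threshold=0.5):
    """Step 32: an actor is recognizable iff gender, age bin and race
    are all salient on her/his "real name" face set."""
    profile = {
        "gender": salient_value([f["gender"] for f in faces], threshold),
        "age_bin": salient_value([age_bin(f["age"]) for f in faces], threshold),
        "race": salient_value([f["race"] for f in faces], threshold),
    }
    return profile if all(v is not None for v in profile.values()) else None
```

A `None` return marks the actor as not recognizable, to be skipped in the subsequent labeling.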
Step 33: For each recognizable actor obtained in step 32, perform face clustering on her/his "title plus character name" face set based on the 1152-dimensional facial visual feature descriptors obtained in step 31 (without loss of generality, the actor's character and her/his "title plus character name" face set are denoted Per_i and CF_i, respectively). In one embodiment of the present invention, the Affinity Propagation algorithm is used for face clustering. The clustering algorithm requires the face similarity matrix S ≡ [s_{i,j}]_{T×T}, where the element s_{i,j} is the visual similarity of faces f_i and f_j: for i ≠ j, it is the cosine similarity of the descriptors of f_i and f_j; for i = j, it is the average of all pairwise face similarities in the set; T is the number of faces in CF_i. According to the clustering result, CF_i can be expressed in the form of formula (1):

CF_i = {cf_i^1, cf_i^2, ..., cf_i^w}    (1)

where w is the number of clusters generated, cf_i^j is the j-th cluster of the set CF_i, and each face in cf_i^j is represented by its descriptor. Only clusters containing at least 3 faces are retained.
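The clustering of step 33 can be sketched with an off-the-shelf Affinity Propagation implementation; the sketch below uses scikit-learn's `AffinityPropagation` with a precomputed similarity matrix built as described (off-diagonal cosine similarity, mean similarity as the preference), which is one possible realization rather than the patent's own implementation:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_faces(descriptors, min_size=3):
    """Step 33: Affinity Propagation on a precomputed similarity matrix.
    Off-diagonal entries are cosine similarities of the face descriptors;
    the preference (diagonal) is set to the mean pairwise similarity."""
    X = np.asarray(descriptors, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize
    S = X @ X.T                                         # cosine similarities
    off_diag = S[~np.eye(len(S), dtype=bool)]
    ap = AffinityPropagation(affinity="precomputed",
                             preference=off_diag.mean(),
                             max_iter=500,
                             random_state=0).fit(S)
    clusters = {}
    for idx, label in enumerate(ap.labels_):
        clusters.setdefault(label, []).append(idx)
    # keep only clusters with at least `min_size` faces
    return [members for members in clusters.values() if len(members) >= min_size]
```

The returned clusters play the role of the cf_i^j of formula (1).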
For each cluster cf_i^j obtained by formula (1), count the occurrence ratios within the cluster of the actor's three person attributes (gender, age and race) obtained in step 32. If the occurrence ratios of all three attributes exceed a predetermined threshold, for example 0.6, all faces in cf_i^j are regarded as candidate role faces of actor Per_i closely related to the drama. Repeating this process for all clusters yields all candidate role faces of Per_i; repeating it for all recognizable actors yields her/their respective candidate role face sets;
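The attribute-ratio filtering at the end of step 33 is a straightforward count over each retained cluster. A sketch, assuming faces carry the same attribute fields as in step 32 with ages already mapped to histogram bins:

```python
def candidate_role_faces(clusters, faces, profile, ratio=0.6):
    """Keep a cluster cf_i^j only if the actor's gender, age bin and race
    each occur in more than `ratio` of the cluster's faces; the kept face
    indices form Per_i's candidate role face set."""
    kept = []
    for members in clusters:
        n = len(members)
        if all(
            sum(faces[m][attr] == profile[attr] for m in members) / n > ratio
            for attr in ("gender", "age_bin", "race")
        ):
            kept.extend(members)
    return kept
```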
Step 34: Perform image deduplication on the candidate role face sets of the objects to be labeled. Since web face images usually contain a certain number of visual copies, visual copy detection is performed on the face images in the candidate role face set of actor Per_i obtained in step 33 to remove the influence of copied images. In one embodiment of the present invention, the visual copy detection and retrieval toolbox SOTU is used as the detection toolkit (see http://vireo.cs.cityu.edu.hk/research/project/sotu.htm). If visually copied faces are detected in the set, the faces ranked lower in the Google search results are deleted according to the ranking of the face images, and this process is repeated until no copied faces remain in the set. Repeating the above process for all recognizable actors ensures that her/their respective candidate role face sets no longer contain visually copied faces;
Step 35: Based on the result of step 34, perform further face deduplication, i.e. detect whether visually copied faces exist across the candidate role face sets of different recognizable actors. Since a visually copied face can belong to only one character, if a copied face f is detected in the candidate role face sets of both actors Per_i and Per_j, compute the average visual similarity of f to the other faces in each of the two sets, and delete f from the set in which its similarity is lower. Repeat this process until no copied faces remain between different actors. Through the above steps, the set Γ of K recognizable actors and her/their respective role face sets A_i are obtained, denoted:

Γ = {A_1, A_2, ..., A_K}

where A_i is the role face set of Per_i, each of whose faces is represented by its descriptor.
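Step 35's tie-break between two sets that both contain a copied face can be sketched as follows (cosine similarity is assumed as the face similarity measure, which the text does not fix; `others_a` and `others_b` are the remaining descriptors of the two candidate sets):

```python
import numpy as np

def keep_in(face, others_a, others_b):
    """Return 'a' or 'b': the candidate role face set whose other members
    are on average more similar to the duplicated face keeps it; the other
    set deletes it."""
    def cos(u, v):
        u, v = np.asarray(u, float), np.asarray(v, float)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    mean_a = np.mean([cos(face, f) for f in others_a])
    mean_b = np.mean([cos(face, f) for f in others_b])
    return "a" if mean_a >= mean_b else "b"
```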
According to a preferred embodiment of the present invention, the specific process of performing face detection and tracking on the drama to obtain its face sequences is as follows:
Step 41: Perform shot boundary detection on the drama; suppose s-1 shot boundary points are detected. Decompose the drama into s shots according to these s-1 shot boundary points;
Step 42: Call tools such as the face detection and tracking interface of the face recognition cloud service Face++ to perform face detection and tracking within each shot, obtaining the face sequences detected in that shot. Repeating this process for all s shots yields all face sequences in the drama. Of course, other face detection and tracking methods may also be used; the present invention places no restriction on the face detection and tracking method.
According to a preferred embodiment of the present invention, based on the recognizable actors and her/their respective role face sets obtained in step 35 and the face sequences of the drama obtained in step 42, the specific process of labeling the characters of the drama based on the visual similarity between face sequences and on the analysis of visual similarity between face sequences and the actors' role faces is as follows:
Step 51: Suppose step 42 yields T face sequences in total. Extract color histogram features for all faces in each face sequence and cluster based on this feature, again using the Affinity Propagation algorithm, with the face similarity matrix computed on the same principle as in step 33. According to the clustering result, the face sequence FT_k is represented as:

FT_k = {(c_1, t_1), (c_2, t_2), ..., (c_w, t_w)}

where c_i and t_i are, respectively, the class center vector of cluster i and the number of faces in cluster i; the class center vector is represented by the feature of the face closest to the center of cluster i, and w is the number of clusters;
步骤52、由于出现在同一时刻的多个人脸序列一般不可能对应同一个人。根据人脸序列出现时间的重叠情况,生成冲突矩阵C≡[ci,j]T×T。若人脸序列FTi和FTj的出现时间有重叠,则ci,j=1,若无重叠则ci,j=0;Step 52, since multiple human face sequences appearing at the same moment generally cannot correspond to the same person. A conflict matrix C≡[ci,j ]T×T is generated according to the overlap of appearance time of face sequences. If the appearance time of the face sequence FTi and FTj overlaps, then ci,j =1, if there is no overlap, then ci,j =0;
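A minimal sketch of step 52, assuming each face sequence is summarised by its first and last frame index:

```python
def conflict_matrix(spans):
    """spans[i] = (start, end) frames of face sequence FT_i.
    C[i][j] = 1 iff the two sequences overlap in time (step 52)."""
    T = len(spans)
    C = [[0] * T for _ in range(T)]
    for i in range(T):
        for j in range(i + 1, T):
            (s1, e1), (s2, e2) = spans[i], spans[j]
            if max(s1, s2) <= min(e1, e2):  # intervals intersect
                C[i][j] = C[j][i] = 1
    return C
```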
Step 53: Based on the face sequence representation obtained in step 51, compute the visual similarity between face sequences FTi and FTj using the Earth Mover's Distance, denoted fsi,j. Repeat this computation for all pairs of face sequences, and obtain through formula (2) the probability propagation matrix P ≡ [pi,j]T×T of face sequence similarities, where:
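Formula (2) itself is not reproduced in this extract. A standard choice for building a propagation matrix from pairwise similarities, assumed here purely for illustration, is to zero the diagonal and row-normalise so that each row sums to one:

```python
import numpy as np

def propagation_matrix(fs):
    """Turn a symmetric similarity matrix fs (fs[i][j] = EMD-based
    similarity of FT_i and FT_j) into a row-stochastic propagation
    matrix P. NOTE: formula (2) is not shown in this extract; row
    normalisation is a common convention for label propagation and is
    an assumption here, not the patent's exact formula."""
    fs = np.asarray(fs, dtype=float).copy()
    np.fill_diagonal(fs, 0.0)  # no self-propagation
    row_sums = fs.sum(axis=1, keepdims=True)
    return np.divide(fs, row_sums, out=np.zeros_like(fs),
                     where=row_sums > 0)
```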
Step 54: Compute the matching confidence matrix S ≡ [si,j]T×K between characters and face sequences, where si,j is the similarity between face sequence FTi and the character face set of Perj. This similarity equals the visual similarity of the most similar pair of faces across the two sets, computed according to formula (3):
where the term inside the maximum is the similarity between the m-th class center vector of face sequence FTi and the n-th character face in the character face set of Perj;
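In words, formula (3) is a maximum over all cross-set pairs. A sketch, with cosine similarity assumed as the underlying face similarity measure (the patent's measure is not specified in this extract):

```python
import numpy as np

def match_confidence(seq_centres, role_faces):
    """Formula (3), sketched: similarity of a face sequence and a
    character's face set = the highest pairwise similarity between the
    sequence's class-centre vectors and the character's faces.
    Cosine similarity is an assumption standing in for the patent's
    face similarity measure."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(cos(c, f) for c in seq_centres for f in role_faces)
```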
Step 55: Update the matching confidence matrix S with the conflict matrix C through formula (4).
This operation prevents temporally overlapping face sequences from simultaneously receiving high matching confidence values and thus being labeled as the same character in subsequent steps;
Step 56: Using the matrix S updated in step 55, together with a similarity threshold V1 (e.g., V1 = 0.8) and a dissimilarity threshold V2 (e.g., V2 = 0.2), generate the initial labeling matrix L(0) through formula (5).
In the matrix L(0), the three possible entry values indicate, respectively, that FTi is a face of character Perj, that face sequence FTi is not a face of character Perj, or that the character corresponding to face sequence FTi cannot yet be determined from matching confidence alone. Each pair <FTi, Perj> whose entry satisfies the high-confidence condition is added to the labeled character set LFaces, thereby labeling the non-conflicting pairs <FTi, Perj> with high matching confidence values;
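Step 56's thresholding can be sketched as follows. The concrete entry values of L(0) are not shown in this extract; the +1 / −1 / 0 coding below (is / is not / undecided) is an assumption chosen to match the usual label propagation convention:

```python
import numpy as np

def initial_labels(S, v1=0.8, v2=0.2):
    """Formula (5), sketched: map the conflict-adjusted confidence
    matrix S to an initial labelling L0. Entries: +1 (FT_i is Per_j's
    face), -1 (it is not), 0 (undecided). The exact coding used by the
    patent is not reproduced here and is assumed."""
    S = np.asarray(S, dtype=float)
    L0 = np.zeros_like(S)
    L0[S >= v1] = 1.0   # high confidence: accept
    L0[S <= v2] = -1.0  # low confidence: reject
    return L0
```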
Step 57: Based on the probability propagation matrix P obtained from formula (2) and the initial labeling matrix L(0) obtained from formula (5), run the Label Propagation algorithm, i.e., iteratively execute formulas (6) and (7) to update the elements of the labeling matrix until the algorithm converges:
L(t+1) ≡ PL(t)    (6)
By executing the Label Propagation algorithm, the existing high-confidence character labels are propagated, with probabilities determined by the similarities between face sequences, to the face sequences whose characters have not yet been determined;
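The iteration of formula (6) can be sketched as below. Formula (7) is not reproduced in this extract; clamping the initially labelled entries back to their original values after each step is a common variant of label propagation and is assumed here in its place:

```python
import numpy as np

def label_propagation(P, L0, clamp=True, tol=1e-6, max_iter=1000):
    """Iterate formula (6), L(t+1) = P @ L(t), until convergence.
    Clamping the initially labelled entries (a standard variant,
    assumed here; formula (7) is not shown in this extract) keeps the
    high-confidence labels fixed while they spread to unlabelled rows."""
    P = np.asarray(P, dtype=float)
    L0 = np.asarray(L0, dtype=float)
    L = L0.copy()
    fixed = L0 != 0  # entries labelled in step 56
    for _ in range(max_iter):
        L_next = P @ L
        if clamp:
            L_next[fixed] = L0[fixed]
        if np.abs(L_next - L).max() < tol:
            return L_next
        L = L_next
    return L
```

With a two-node graph where only the first node is labelled, one propagation step carries the label to the second node.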
Step 58: Let LΔ denote the labeling matrix after the algorithm has converged. According to formula (8), update the labeling confidence of the elements of LΔ that satisfy the stated condition,
where α ∈ (0, 1) is a threshold weighting the labeling confidence against the matching confidence, set here to 0.5. Through formula (8), the similarity between face sequences and the matching confidence between face sequences and character faces are effectively fused;
Step 59: From the LΔ updated in step 58, find in turn the element with the largest value that satisfies condition (9), add the corresponding pair <FTi, Perj> to the labeled character set LFaces, and update the matrix LΔ according to formula (10). Repeat this search until no element of LΔ satisfies condition (9),
where <FTi, Perj> denotes a non-conflicting pair, composed of face sequence FTi and character Perj, with a high matching confidence value, and Tlabel is a preset discrimination threshold, set to 0.5.
According to formulas (9) and (10), the face sequence and character name combination with the highest current confidence is selected for labeling at each iteration. When no element of LΔ satisfies condition (9), the labeling process ends. The contents of the labeled character set LFaces constitute the final character labeling result.
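The greedy selection of step 59 can be sketched as follows. Formulas (9) and (10) are not reproduced in this extract; the suppression rule below (a selected sequence takes no other character, and temporally conflicting sequences cannot take the same character) is an assumption standing in for formula (10):

```python
def greedy_assign(L, C, t_label=0.5):
    """Sketch of step 59: repeatedly pick the highest-confidence
    (face sequence, character) pair above the threshold Tlabel, record
    it in the labelled set, then suppress that sequence's remaining
    candidates and, via the conflict matrix C, forbid temporally
    overlapping sequences from taking the same character."""
    L = [row[:] for row in L]  # work on a copy
    T, K = len(L), len(L[0])
    labelled = []
    while True:
        v, i, j = max((L[i][j], i, j)
                      for i in range(T) for j in range(K))
        if v < t_label:  # condition (9) no longer satisfiable
            break
        labelled.append((i, j))
        for jj in range(K):          # sequence i is now assigned
            L[i][jj] = float("-inf")
        for ii in range(T):          # conflicting sequences cannot
            if C[i][ii]:             # share character j
                L[ii][j] = float("-inf")
    return labelled
```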
The specific embodiments described above further detail the purpose, technical solutions, and beneficial effects of the present invention. It should be understood that the above descriptions are merely specific embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements, improvements, etc. made within the spirit and principles of the present invention shall fall within its protection scope.
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410218854.7A (granted as CN103984738B) | 2014-05-22 | 2014-05-22 | Role labelling method based on search matching |
| Publication Number | Publication Date |
|---|---|
| CN103984738A | 2014-08-13 |
| CN103984738B | 2017-05-24 |
| Date | Code | Title | Description |
|---|---|---|---|
| | C06 / PB01 | Publication | |
| | C10 / SE01 | Entry into substantive examination | |
| | GR01 | Patent grant | |
| | CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 2017-05-24 |