CN111079757A - Clothing attribute identification method and device and electronic equipment - Google Patents

Clothing attribute identification method and device and electronic equipment

Info

Publication number
CN111079757A
CN111079757A
Authority
CN
China
Prior art keywords
image
recognition
clothing
clothing attribute
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811223714.3A
Other languages
Chinese (zh)
Other versions
CN111079757B (en)
Inventor
王涛
李律松
陈强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd
Priority to CN201811223714.3A
Publication of CN111079757A
Application granted
Publication of CN111079757B
Status: Active
Anticipated expiration


Abstract

Translated from Chinese

The embodiments of the present application provide a clothing attribute recognition method, apparatus, and electronic device, applied in the technical field of image recognition. The method includes: determining, through a preset recognition method, that an image to be recognized includes at least one target person; and then performing clothing attribute recognition on the image to be recognized, which includes the at least one target person, through a pre-trained neural network recognition model, to obtain a clothing attribute recognition result for the at least one target person. That is, by means of the pre-trained neural network recognition model, the embodiments of the present application achieve automatic recognition of the clothing attributes of the target person included in the image to be recognized, thereby improving the efficiency of clothing attribute recognition, avoiding the error-prone nature of manual recognition, and reducing labor costs.

Figure 201811223714

Description

Clothing attribute identification method and device and electronic equipment
Technical Field
The application relates to the technical field of image recognition, in particular to a clothing attribute recognition method and device and electronic equipment.
Background
Clothing ranks first in the common Chinese saying "clothing, food, shelter, and transport," and has become indispensable to people. Obtaining the clothing attribute information of a target person or target group is therefore very important: for example, a merchant can offer targeted clothing products to customers according to the obtained clothing attribute information, and a public security department can quickly find a lost child or identify a criminal suspect according to the obtained clothing attribute information of a target person. How to obtain the clothing attribute information of a target person or target group has thus become a key problem.
At present, the prior art identifies and determines the clothing attribute information of a target person or target group by manual classification and statistics. For example, a merchant's researcher captures images of a crowd passing through an intersection and then manually counts the clothing attribute information (such as the colors and styles of clothes and accessories) of the target persons contained in the captured images, so as to determine which clothing is currently popular.
Disclosure of Invention
The application provides a clothing attribute identification method, a clothing attribute identification device and electronic equipment, which are used for improving the efficiency and accuracy of clothing attribute identification, and the technical scheme adopted by the application is as follows:
in a first aspect, a neural network-based apparel attribute identification method is provided, the method comprising,
identifying and determining at least one target person in the image to be identified in a preset identification mode;
and clothing attribute recognition is carried out on the image to be recognized comprising at least one target character through a pre-trained neural network recognition model, so that clothing attribute recognition results of the at least one target character are obtained.
Specifically, clothing attribute recognition is carried out on an image to be recognized comprising at least one target character through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target character, including,
and carrying out body region segmentation on any target person in the image to be recognized through a pre-trained neural network recognition model, and carrying out clothing attribute recognition on each body region to obtain a clothing attribute recognition result of the target person in the image to be recognized.
Specifically, clothing attribute recognition is carried out on an image to be recognized comprising at least one target character through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target character, including,
performing segmentation processing on an image to be identified to obtain at least one segmentation image comprising a single target figure;
and (3) clothing attribute recognition is carried out on any segmentation image comprising a single target character through a pre-trained neural network recognition model, so that clothing attribute recognition results of the characters in any segmentation image are obtained.
Wherein, the clothing attribute recognition result of the person comprises at least one of the following items:
a type of apparel; clothing color; the number of clothes;
the apparel includes at least one of:
clothing, hats, shoes, accessories.
Specifically, through a preset identification mode, the image to be identified is identified and determined to include at least one target person, including,
extracting at least one image frame from a video acquired by image acquisition equipment according to a preset extraction frequency, wherein the preset extraction frequency is determined according to the statistical average time length of a pedestrian passing through a control area of an image acquisition device;
and detecting and recognizing at least one image frame through a pre-trained portrait detection and recognition model, and recognizing and determining at least one to-be-recognized image comprising at least one target person.
Further, the segmentation processing performed on the image to be identified to obtain at least one segmented image including a single target person includes at least one of the following:
performing segmentation processing on an image to be identified based on a region segmentation method to obtain at least one segmentation image comprising a single target figure;
and performing segmentation processing on the image to be identified based on an edge segmentation method to obtain at least one segmentation image comprising a single target character.
Further, clothing attribute recognition is carried out on any segmentation image comprising a single target character through a pre-trained neural network recognition model to obtain clothing attribute recognition results of the characters in any segmentation image, including,
clothing feature extraction is carried out on any segmented image comprising a single target figure to obtain clothing feature information aiming at any target figure;
and inputting the clothing feature information aiming at any target person into a pre-trained neural network recognition model to obtain a clothing attribute recognition result of any person.
Further, the method may further comprise,
aiming at a current image to be recognized, performing segmentation processing on a first preset number of image frames which are in front and a second preset number of image frames which are behind on a video time axis relative to the current image to be recognized to obtain a plurality of segmented images comprising a single target figure;
extracting character features of a plurality of segmentation images comprising a single target character to obtain feature information aiming at each character;
similarity calculation is carried out on the feature information of each character, and duplication removal is carried out on a plurality of segmentation images comprising a single target character according to the similarity calculation result, so that duplicated segmentation images are obtained;
clothing attribute recognition is carried out on any segmentation image comprising a single target character through a pre-trained neural network recognition model to obtain clothing attribute recognition results of the character in any segmentation image, including,
and (4) clothing attribute recognition is carried out on the cut images after the duplication removal through a pre-trained neural network recognition model, and clothing attribute recognition results of figures included in any cut image after the duplication removal are obtained.
Further, the method further comprises:
storing the clothing attribute identification result, the image to be identified and the corresponding relation between the clothing attribute identification result and the image to be identified;
wherein, the method also comprises:
when a person query request including clothing attribute information is received, the image information of the person corresponding to the query request is queried and determined based on the corresponding relation between the clothing attribute recognition result and the image to be recognized.
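The storage and query steps above can be sketched as a minimal in-memory index mapping recognition results to images. The class name, exact-match semantics, and data layout are hypothetical; a real system would likely use a database and fuzzier matching:

```python
class ApparelIndex:
    """Minimal in-memory store of (clothing attributes, image) pairs."""

    def __init__(self):
        self.records = []  # list of (attributes dict, image identifier)

    def store(self, attributes, image_id):
        """Record a recognition result together with its source image."""
        self.records.append((dict(attributes), image_id))

    def query(self, wanted):
        """Return image ids whose stored attributes contain all wanted pairs."""
        return [img for attrs, img in self.records
                if all(attrs.get(k) == v for k, v in wanted.items())]
```
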
In a second aspect, a clothing attribute recognition device based on a neural network is provided, and the device comprises a recognition determination module and a recognition module;
the identification determining module is used for identifying and determining at least one target person in the image to be identified in a preset identification mode;
and the recognition module is used for carrying out clothing attribute recognition on the image to be recognized, which is recognized and determined by the recognition determination module and comprises at least one target character, through the pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target character.
Further, the identification module is used for performing body region segmentation on any target person in the image to be identified through the pre-trained neural network identification model, and performing clothing attribute identification on each body region to obtain a clothing attribute identification result of the target person in the image to be identified.
Further, the identification module comprises a first dividing unit and an identification unit;
the first segmentation unit is used for carrying out segmentation processing on the image to be identified to obtain at least one segmentation image comprising a single target figure;
and the identification unit is used for carrying out clothing attribute identification on any segmented image which is obtained by segmenting the first segmentation unit and comprises a single target character through a pre-trained neural network identification model to obtain clothing attribute identification results of the character in any segmented image.
Wherein, the clothing attribute recognition result of the person comprises at least one of the following items:
a type of apparel; clothing color; the number of clothes;
the apparel includes at least one of:
clothing, hats, shoes, accessories.
Further, the identification determination module comprises an extraction unit and an identification determination unit;
the image acquisition device comprises an extraction unit, a processing unit and a control unit, wherein the extraction unit is used for extracting at least one image frame from a video acquired by the image acquisition device according to a preset extraction frequency, and the preset extraction frequency is determined according to the counted average duration of a pedestrian passing through a control area of the image acquisition device;
and the recognition determining unit is used for detecting and recognizing the at least one image frame extracted by the extracting unit through a pre-trained portrait detection and recognition model, and recognizing and determining at least one image to be recognized comprising at least one target person.
Further, the first segmentation unit is used for performing segmentation processing on the image to be recognized based on a region segmentation method to obtain at least one segmentation image comprising a single target character;
and/or the method is used for performing segmentation processing on the image to be identified based on an edge segmentation method to obtain at least one segmentation image comprising a single target person.
Further, the identification unit comprises a feature extraction subunit and an input subunit;
the feature extraction subunit is used for carrying out clothing feature extraction on any segmented image comprising a single target figure to obtain clothing feature information aiming at any target figure;
and the input subunit is used for inputting the clothing feature information, which is extracted by the feature extraction subunit and aims at any target character, into the pre-trained neural network recognition model to obtain the clothing attribute recognition result of any character.
Furthermore, the identification module also comprises a second segmentation unit, a feature extraction unit and a duplication elimination unit;
the second segmentation unit is used for carrying out segmentation processing on a first preset number of image frames which are in front and a second preset number of image frames which are behind on a video time axis and are relative to the current image to be recognized according to the current image to be recognized to obtain a plurality of segmented images comprising a single target figure;
the character extraction unit is used for extracting character characteristics of a plurality of segmentation images including a single target character obtained by segmentation processing of the second segmentation unit to obtain characteristic information aiming at each character;
the duplication removing unit is used for carrying out similarity calculation on the feature information extracted by the feature extracting unit aiming at each figure and carrying out duplication removal on a plurality of split images comprising a single target figure according to the similarity calculation result to obtain duplicated split images;
and the recognition unit is used for carrying out clothing attribute recognition on the de-duplicated segmented image obtained after the de-duplication processing of the de-duplication unit through a pre-trained neural network recognition model to obtain clothing attribute recognition results of persons included in any de-duplicated segmented image.
Further, the device also comprises a storage module;
the storage module is used for storing the clothing attribute identification result, the segmentation image and the corresponding relation between the clothing attribute identification result and the segmentation image;
the apparatus also includes a query determination module;
and the query determining module is used for querying and determining the image information of the person corresponding to the query request through the storage module based on the corresponding relation between the clothing attribute identification result and the segmentation image when the person query request comprising the clothing attribute information is received.
In a third aspect, an electronic device is provided, which includes:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors to perform the neural-network-based clothing attribute identification method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, which is used for storing computer instructions, which when run on a computer, make the computer perform the neural network-based clothing attribute identification method shown in the first aspect.
Compared with the prior art, in which the clothing attributes of a target person or target group are identified manually, the clothing attribute identification method, apparatus, and electronic device of the present application first identify and determine, through a preset identification method, that the image to be identified includes a target person, and then perform clothing attribute identification on that image through a pre-trained neural network identification model to obtain the clothing attribute identification result of the target person. That is, the present application achieves automatic identification of the clothing attributes of the target person included in the image to be identified through a pre-trained neural network identification model, which improves the efficiency of clothing attribute identification, solves the problems of error-prone and inefficient manual identification, and reduces labor costs.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a clothing attribute identification method based on a neural network according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a clothing attribute identification apparatus based on a neural network according to an embodiment of the present application;
FIG. 3 is a schematic structural diagram of another neural network-based clothing attribute identification apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
The embodiment of the application provides a clothing attribute identification method based on a neural network, and as shown in fig. 1, the method can comprise the following steps:
s101, identifying and determining that an image to be identified comprises at least one target person in a preset identification mode;
for the embodiment of the application, a corresponding identification mode is preset, at least one target person is determined to be included in the image to be identified, and the image not including the target person is removed.
Step S102, clothing attribute recognition is carried out on the image to be recognized including at least one target character through a pre-trained neural network recognition model, and clothing attribute recognition results of the at least one target character are obtained.
For the embodiment of the application, clothing attribute recognition is performed on the image to be recognized through the pre-trained neural network recognition model, obtaining the clothing attribute recognition result for the target persons in an image that includes at least one target person. The pre-trained neural network recognition model may be a recognition model based on Faster R-CNN (Faster Region-based Convolutional Neural Network), whose backbone may adopt VGG16, ResNet, or GoogLeNet; alternatively, the model may be based on R-CNN (Region-based CNN), SSD (Single Shot MultiBox Detector), or YOLO. No limitation is imposed herein.
Illustratively, the training samples of the Faster R-CNN-based neural network recognition model may include a plurality of images, acquired from a video or from a camera device, that contain at least one target person together with labeled clothing attributes. Training the neural network with labeled image samples helps improve the accuracy of the network in recognizing image data. During training, the training result is compared with the manually labeled information; when the comparison result meets a predetermined accuracy requirement, training can be considered complete, and when it does not, training can continue by adjusting the corresponding parameters (such as the parameters in each convolutional layer) until the training result meets the predetermined accuracy requirement. Furthermore, the Faster R-CNN-based neural network recognition model can be obtained by fine-tuning an existing model.
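The accuracy check that decides whether training is complete can be sketched as follows; the 0.95 threshold is an illustrative assumption, as the patent only speaks of a "predetermined accuracy requirement":

```python
def training_complete(predictions, labels, required_accuracy=0.95):
    """Compare model outputs with manual annotations and report whether the
    predetermined accuracy requirement is met (threshold is hypothetical)."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy >= required_accuracy, accuracy
```

If the check fails, training would continue with adjusted parameters; if it passes, the model can be deployed for recognition.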
Compared with the prior art that the target person or the clothing attribute of the target group is identified in a manual mode, the clothing attribute identification method based on the neural network identifies and determines that the image to be identified comprises at least one target person through a preset identification mode, then clothing attribute identification is carried out on the image to be identified comprising at least one target person through a pre-trained neural network identification model, and the clothing attribute identification result of at least one target person is obtained.
In one possible implementation manner, the step S102 includes,
step 1021 (not shown in the figure), performing body region segmentation on any target person in the image to be recognized through the pre-trained neural network recognition model, and performing clothing attribute recognition on each body region to obtain a clothing attribute recognition result of the target person in the image to be recognized.
In the embodiment of the application, the person in the image to be recognized is divided into body regions, such as the upper body and the lower body, through the pre-trained neural network recognition model, and the clothing attributes of each divided body region are recognized; for example, the target person is recognized as wearing red clothes on the upper body and black trousers on the lower body, thereby obtaining the clothing attribute recognition result for the person in the segmented image. Each body region can be characterized by extracting Gabor features at M orientations and N scales to obtain a body-region feature vector, and the clothing attribute recognition result of the person included in the segmented image is then determined from the obtained feature vectors.
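The Gabor feature extraction mentioned above can be illustrated with a small pure-Python sketch of a filter bank with M orientations and N scales, using the standard Gabor formula. The kernel size and parameter defaults are illustrative assumptions; the patent does not specify them:

```python
import math

def gabor_kernel(size, theta, lam, sigma=2.0, gamma=0.5, psi=0.0):
    """Real part of a Gabor filter with orientation theta and wavelength lam.

    Standard formula: a Gaussian envelope modulated by a cosine carrier in
    rotated coordinates. Parameter defaults here are illustrative only.
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + gamma * gamma * yr * yr)
                                / (2 * sigma * sigma))
            carrier = math.cos(2 * math.pi * xr / lam + psi)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def gabor_bank(orientations, scales):
    """Build M orientations x N scales kernels, as in the description above."""
    return [gabor_kernel(9, m * math.pi / orientations, lam)
            for m in range(orientations) for lam in scales]
```

Convolving each body region with every kernel in the bank and pooling the responses would yield the body-region feature vector described above.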
Illustratively, when clothing attribute recognition is performed on an image to be recognized that includes at least one target person, a plurality of detection boxes are generated for the image by the RPN (Region Proposal Network) layer in Faster R-CNN. The Fast R-CNN detector network layer in Faster R-CNN then extracts the appearance feature information of each detection box generated by the RPN layer and processes it to determine the probability that each detection box belongs to each class. That is, after extracting the appearance feature information of each detection box, the detector network layer predicts and outputs, for each detection box, an N-dimensional vector, where the N-dimensional vector gives the probabilities that the detection box belongs to each of the N classes, and N is the total number of classes.
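The final step, in which each detection box is assigned an N-dimensional probability vector, can be sketched as a softmax over per-class scores. The raw scores below are hypothetical inputs standing in for the detector head's outputs:

```python
import math

def class_probabilities(box_scores):
    """Turn a detection box's raw class scores into the N-dimensional
    probability vector described above (softmax over N classes)."""
    m = max(box_scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in box_scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_boxes(all_scores, class_names):
    """For each detection box, pick the most probable class and its probability."""
    results = []
    for scores in all_scores:
        probs = class_probabilities(scores)
        best = max(range(len(probs)), key=probs.__getitem__)
        results.append((class_names[best], probs[best]))
    return results
```
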
According to the embodiment of the application, the figure in the image to be recognized is divided into the body type areas through the pre-trained neural network model, and the clothing attribute of each body type area is recognized and determined, so that the problem of recognizing the clothing attribute (such as the color of clothing worn by the upper half and the lower half) of different body type areas of the figure is solved.
In one possible implementation manner, the step S102 includes,
step 1022 (not shown in the figure), performing segmentation processing on the image to be identified to obtain at least one segmented image including a single target character;
For the embodiment of the application, the image to be recognized is segmented to obtain one or more segmented images each containing only a single target person. In addition, image parts that are useless for recognizing clothing attributes, such as blank background regions, can be removed, reducing the amount of data to process; the image can also be segmented in a normalized manner.
And 1023 (not shown in the figure), clothing attribute recognition is carried out on any segmented image comprising a single target character through a pre-trained neural network recognition model, so that a clothing attribute recognition result of the character in any segmented image is obtained.
For the embodiment of the application, any segmentation image comprising a single target figure is used as input and is input into the pre-trained neural network recognition model, and the pre-trained neural network recognition model outputs the clothing attribute recognition result aiming at the target figure in the input segmentation image. The pre-trained neural network recognition model is obtained by pre-training a training sample, wherein the training sample comprises a plurality of images containing target characters and clothing attributes marked by the target characters in the images, such as clothing colors.
For the embodiment of the application, at least one segmentation image comprising a single target is obtained by segmenting the image to be recognized, and then the clothing attribute recognition result of the target person in the image to be recognized is obtained by recognition of the pre-trained neural network recognition model, so that the clothing attribute of the target person in the image to be recognized is automatically recognized, and the clothing attribute recognition efficiency is improved.
Wherein, the clothing attribute recognition result of the person comprises at least one of the following items:
a type of apparel; clothing color; the number of clothes;
the apparel includes at least one of:
clothing, hats, shoes, accessories.
For the present application embodiment, the apparel attribute identification result includes, but is not limited to, the style of the apparel, the color of the apparel, and the number of the apparel, wherein the apparel includes, but is not limited to, clothing, hats, shoes, and accessories.
For the embodiment of the application, the corresponding clothing attribute information of the target person can be obtained based on different purposes or application scenes.
The embodiment of the present application provides a possible implementation manner, wherein step 101 includes,
step S1011 (not shown), extracting at least one image frame from the video acquired by the image acquisition device according to a preset extraction frequency, where the preset extraction frequency is determined according to the counted average duration of the pedestrian passing through the area controlled by the image acquisition device;
For the embodiment of the present application, any image capture device has an effective monitoring area. An extraction frequency may be set according to the average duration for a pedestrian to enter and leave the effective monitoring area, and image frames may be extracted from the captured video at that frequency. For example, if the average duration for a pedestrian to enter and leave the monitored range is 3 seconds and the captured video runs at 24 frames per second, one frame may be extracted at intervals of no more than 72 frames.
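The frame-extraction arithmetic in the example above (a 3-second average dwell at 24 frames per second gives an interval of at most 72 frames) can be sketched as:

```python
def extraction_interval(avg_dwell_seconds, fps):
    """Largest frame interval that still samples every pedestrian at least
    once: e.g. 3 s average dwell at 24 fps -> extract every 72 frames."""
    return int(avg_dwell_seconds * fps)

def frames_to_extract(total_frames, interval):
    """Indices of the frames extracted at the given interval."""
    return list(range(0, total_frames, interval))
```
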
Step S1012 (not shown in the figure), performing detection and recognition on at least one image frame through a pre-trained human image detection and recognition model, and recognizing and determining at least one image to be recognized including at least one target person.
For the embodiment of the application, a portrait detection and recognition model may be obtained by training with a plurality of positive and negative samples (images with and without a target person); portrait detection and recognition is then performed on at least one image frame through this pre-trained model to obtain at least one image to be recognized that includes at least one target person. The portrait detection and recognition model may also be based on a background modeling algorithm; commonly used background modeling approaches include the Gaussian mixture model, the frame-difference algorithm, and the histogram of oriented gradients (HOG), among others.
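Of the background modeling algorithms listed, the frame-difference approach is the simplest to illustrate. The sketch below operates on grayscale frames represented as nested lists; the change threshold and minimum pixel count are illustrative assumptions:

```python
def frame_difference(prev, curr, threshold=25):
    """Frame-difference detection on two grayscale frames (nested lists):
    mark pixels whose intensity changed by more than `threshold`."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def has_moving_object(prev, curr, min_pixels=1, threshold=25):
    """Keep a frame for portrait recognition only if enough pixels moved."""
    mask = frame_difference(prev, curr, threshold)
    return sum(sum(row) for row in mask) >= min_pixels
```
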
For the embodiment of the application, at least one image frame is extracted from the acquired video according to the preset extraction frequency, and at least one image to be recognized including at least one target person is then determined from those frames through the pre-trained portrait detection and recognition model. This solves the problem of obtaining images to be recognized that include at least one target person, and provides a basis for the subsequent clothing attribute recognition of the target person.
The embodiment of the present application provides a possible implementation manner, wherein step S1022 may include, but is not limited to, at least one of step S10221 (not shown in the figure) and step S10222 (not shown in the figure):
step S10221, performing segmentation processing on an image to be recognized based on a region segmentation method to obtain at least one segmentation image comprising a single target person;
step S10222, performing segmentation processing on the image to be recognized based on an edge segmentation method to obtain at least one segmentation image including a single target person.
For the embodiment of the application, the image to be recognized can be segmented by a region segmentation method and/or an edge segmentation method to obtain at least one segmented image including a single target person. Edge detection finds locations where the gray level or structure changes abruptly, indicating that one region ends and another begins; such a discontinuity is called an edge. Different regions of an image have different gray levels, and boundaries generally show distinct edges, so this property can be used to segment the image. Region segmentation methods come in two types: region growing, and region splitting and merging. The basic idea of region growing is to assemble pixels with similar properties into a region; region splitting and merging is almost the reverse process: starting from the whole image, each sub-region is obtained by repeated splitting, and the foreground regions are then merged to extract the target.
For the embodiment of the application, the region segmentation method and/or the edge segmentation method solve the problem of segmenting the image to be recognized, splitting an image that contains multiple target persons into segmented images that each contain only a single target person.
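The basic idea of region growing described above can be sketched as a breadth-first flood fill that collects 4-connected pixels whose intensity stays close to the seed's (a simplified illustration; the function and tolerance are assumptions, not the patent's implementation):

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=10):
    """Region growing: starting at `seed`, collect 4-connected pixels whose
    intensity is within `tol` of the seed pixel's intensity."""
    h, w = img.shape
    seed_val = int(img[seed])
    visited = np.zeros((h, w), dtype=bool)
    region = []
    queue = deque([seed])
    visited[seed] = True
    while queue:
        y, x = queue.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and not visited[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                visited[ny, nx] = True
                queue.append((ny, nx))
    return region

# Toy image: a uniform top-left region next to brighter areas.
img = np.array([[10, 10, 200],
                [10, 12, 200],
                [90, 90, 200]], dtype=np.uint8)
print(len(region_grow(img, (0, 0))))  # 4 pixels in the top-left region
```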
This embodiment of the present application provides a possible implementation manner, wherein step S1023 includes,
step S10231 (not shown in the figure), clothing feature extraction is carried out on any segmented image comprising a single target character, and clothing feature information for any target character is obtained;
step S10232 (not shown in the figure), the clothing feature information for any target person is input to the pre-trained neural network recognition model, and a clothing attribute recognition result of any person is obtained.
For the embodiment of the application, the clothing features of the target person in any segmented image can be extracted through a feature extraction model; the clothing features for that target person are then input into the pre-trained neural network recognition model, which recognizes and determines the clothing attribute recognition result of that target person.
For the embodiment of the application, the clothing features extracted from the segmented image of the target person are input to the pre-trained neural network recognition model for clothing attribute recognition. This solves the clothing attribute recognition problem for the target person and, in addition, reduces the amount of data the pre-trained neural network recognition model must process.
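As a toy stand-in for the "feature vector in, attribute result out" step described above, the sketch below replaces the pre-trained neural network recognition model with a single linear layer (the labels and weights are invented purely for illustration):

```python
import numpy as np

# Hypothetical attribute labels and linear-layer weights standing in for a
# trained network; a real model would be learned from labeled clothing images.
LABELS = ["red-top", "blue-top"]
WEIGHTS = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])

def recognize_clothing(feature_vec):
    """Map a clothing feature vector to the highest-scoring attribute label."""
    scores = WEIGHTS @ feature_vec
    return LABELS[int(np.argmax(scores))]

print(recognize_clothing(np.array([0.9, 0.1])))  # red-top
```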
In another possible implementation manner provided by the embodiment of the present application, step S102 further includes:
step S1024 (not shown in the figure), for the current image to be recognized, performing segmentation processing on a first predetermined number of preceding image frames and a second predetermined number of following image frames on the video time axis relative to the current image to be recognized to obtain a plurality of segmented images including a single target person;
for the embodiment of the application, a first predetermined number and a second predetermined number are preset, and both may be determined based on a predetermined monitoring time length. The first predetermined number of image frames before, and the second predetermined number of image frames after, the current image to be recognized are segmented, where the segmentation may be implemented by an image segmentation method such as region segmentation or edge segmentation, so as to obtain a plurality of segmented images including a single target person.
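Selecting the first predetermined number of preceding frames and the second predetermined number of following frames on the timeline can be sketched as a simple window lookup (the function name and defaults are illustrative assumptions):

```python
def neighbor_frames(frames, current_idx, before=2, after=2):
    """Return the `before` preceding and `after` following frames around the
    current image on the video timeline, clipped at the sequence boundaries."""
    start = max(0, current_idx - before)
    return frames[start:current_idx] + frames[current_idx + 1:current_idx + 1 + after]

frames = list(range(10))  # stand-in for decoded video frames
print(neighbor_frames(frames, 5, before=2, after=2))  # [3, 4, 6, 7]
```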
Step S1025 (not shown in the figure), performing person feature extraction on the plurality of segmented images including a single target person to obtain feature information for each person;
for the embodiment of the application, the person features of the target persons in the multiple segmented images including a single target person are extracted through the feature extraction model to obtain the feature information of each person, where the feature information may be represented by feature vectors.
Step S1026 (not shown in the figure), which is to perform similarity calculation on the feature information of each character, and perform deduplication on a plurality of segmented images including a single target character according to the similarity calculation result, to obtain a deduplicated segmented image;
for example, the similarity between person feature information is determined by calculating the Euclidean distance or the cosine similarity between the feature vectors; persons whose similarity reaches a certain threshold are determined to be the same person according to the similarity calculation result, and the segmented images corresponding to persons determined to be the same are deduplicated accordingly to obtain the deduplicated segmented images.
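The cosine-similarity deduplication described in this example can be sketched as follows: a segmented image is dropped whenever its feature vector is sufficiently similar to one already kept (the threshold value is an assumption for illustration):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def deduplicate(features, threshold=0.95):
    """Return indices of the segmented images to keep: drop any feature vector
    whose cosine similarity to an already-kept vector reaches `threshold`."""
    kept = []
    for i, f in enumerate(features):
        if all(cosine_similarity(f, features[j]) < threshold for j in kept):
            kept.append(i)
    return kept

feats = [np.array([1.0, 0.0]),
         np.array([0.999, 0.01]),   # near-duplicate of the first person
         np.array([0.0, 1.0])]      # a different person
print(deduplicate(feats))  # [0, 2]
```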
Wherein, the step S1023 specifically comprises,
step S10233 (not shown in the figure), performing clothing attribute recognition on the deduplicated segmented images through the pre-trained neural network recognition model to obtain a clothing attribute recognition result of the person included in any deduplicated segmented image.
For the embodiment of the application, the deduplicated segmented images are input to the pre-trained neural network recognition model to obtain a clothing attribute recognition result of the person included in any deduplicated segmented image.
For the embodiment of the application, the plurality of segmented images including a single target person are deduplicated, and clothing attribute recognition is performed on the deduplicated images. This avoids repeated recognition, improves the accuracy of subsequent clothing attribute statistics, and reduces the amount of data the pre-trained neural network recognition model must process.
The embodiment of the present application also provides another possible implementation manner, and the method further includes,
step S103 (not shown in the figure), storing the clothing attribute recognition result, the image to be recognized and the corresponding relation between the clothing attribute recognition result and the image to be recognized;
for the embodiment of the application, the clothing attribute recognition result, the image to be recognized, and the correspondence between them are stored in a corresponding storage device.
Wherein, the method also comprises:
step S104 (not shown in the figure), when a person query request including clothing attribute information is received, querying and determining image information of a person corresponding to the query request based on a correspondence between the clothing attribute identification result and the image to be identified.
For the embodiment of the application, when a person query request including clothing attribute information input by a user is received, the image information of a person corresponding to the query request is queried and determined through a corresponding storage device based on the index relationship between the clothing attribute identification result and the image to be identified.
According to the embodiment of the application, through the corresponding relation between the clothing attribute identification result and the image to be identified, the person image information corresponding to the clothing attribute information is inquired and determined when the inquiry request comprising the clothing attribute information is received.
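A minimal sketch of the store-and-query scheme above, using an in-memory mapping from clothing attribute results to image identifiers (a real system would use the storage device described in the text; all names and values here are illustrative):

```python
# attribute result -> list of identifiers of matching images to be recognized
store = {}

def save_result(image_id, attributes):
    """Store the correspondence between a clothing attribute result and an image."""
    store.setdefault(attributes, []).append(image_id)

def query(attributes):
    """Return image identifiers matching a person query with clothing attributes."""
    return store.get(attributes, [])

save_result("frame_001", ("coat", "red"))
save_result("frame_007", ("coat", "red"))
save_result("frame_002", ("hat", "blue"))
print(query(("coat", "red")))  # ['frame_001', 'frame_007']
```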
Fig. 2 shows a device for clothing attribute recognition based on a neural network provided in an embodiment of the present application, where the device 20 includes: an identification determining module 201 and a recognition module 202;
the identification determining module 201 is configured to identify and determine that the image to be recognized includes at least one target person in a preset recognition manner;
the recognition module 202 performs clothing attribute recognition on the image to be recognized, which is recognized and determined by the identification determining module 201 and includes at least one target person, through a pre-trained neural network recognition model, so as to obtain a clothing attribute recognition result of the at least one target person.
The embodiment of the application provides a clothing attribute recognition device based on a neural network. Compared with the prior art, in which the clothing attributes of a target person or target crowd are identified manually, the embodiment of the application identifies and determines, in a preset recognition manner, that the image to be recognized includes at least one target person, and then performs clothing attribute recognition on that image through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target person. That is, through the pre-trained neural network recognition model, the embodiment of the application achieves automatic recognition of the clothing attributes of the target person in the image to be recognized, thereby improving the efficiency of clothing attribute recognition, avoiding the error-prone nature of manual recognition, and reducing labor costs.
The clothing attribute recognition device based on a neural network of this embodiment can execute the clothing attribute recognition method based on a neural network provided in the above embodiments of the present application; the implementation principles are similar and are not described herein again.
As shown in fig. 3, the apparatus 30 of this embodiment may include an identification determining module 301 and a recognition module 302, wherein,
the identification determining module 301 is configured to identify and determine that the image to be identified includes at least one target person in a preset identification manner;
here, the identification determining module 301 in fig. 3 has the same or similar function as the identification determining module 201 in fig. 2.
The recognition module 302 performs clothing attribute recognition on the image to be recognized, which is recognized and determined by the identification determining module 301 and includes at least one target person, through a pre-trained neural network recognition model, so as to obtain a clothing attribute recognition result of the at least one target person.
Wherein the recognition module 302 of fig. 3 has the same or similar function as the recognition module 202 of fig. 2.
Specifically, the recognition module 302 is configured to perform body region segmentation on any target person in the image to be recognized through a pre-trained neural network recognition model, and to perform clothing attribute recognition on each body region to obtain a clothing attribute recognition result of the target person in the image to be recognized.
According to the embodiment of the application, the person in the image to be recognized is divided into body regions through the pre-trained neural network model, and the clothing attribute of each body region is recognized and determined, solving the problem of recognizing the clothing attributes of different body regions of a person (such as the color of the clothing worn on the upper body and on the lower body).
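One crude way to realize the body-region split described above is to divide a detected person bounding box at its vertical midpoint into upper-body and lower-body regions (a simplification for illustration; the patent's model may segment body regions quite differently):

```python
def split_body_regions(person_box):
    """Split a person bounding box (x, y, w, h) into upper-body and lower-body
    regions at the vertical midpoint - an assumed heuristic, not the patent's model."""
    x, y, w, h = person_box
    upper = (x, y, w, h // 2)
    lower = (x, y + h // 2, w, h - h // 2)
    return upper, lower

# A 50x100 person box split into two 50x50 regions.
print(split_body_regions((0, 0, 50, 100)))  # ((0, 0, 50, 50), (0, 50, 50, 50))
```

Each region could then be fed separately to the clothing attribute recognizer, e.g. to report a top color for the upper region and a trouser color for the lower one.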
Specifically, the recognition module 302 includes a first segmentation unit 3021 and a recognition unit 3022;
the first segmentation unit 3021 is configured to perform segmentation processing on the image to be recognized to obtain at least one segmented image including a single target person;
the recognition unit 3022 is configured to perform clothing attribute recognition, through a pre-trained neural network recognition model, on any segmented image including a single target person obtained through the segmentation processing of the first segmentation unit 3021, so as to obtain a clothing attribute recognition result of the person included in that segmented image.
For the embodiment of the application, at least one segmented image including a single target person is obtained by segmenting the image to be recognized, and the clothing attribute recognition result of the target person is then obtained through recognition by the pre-trained neural network recognition model, so that the clothing attributes of the target person in the image to be recognized are recognized automatically and the efficiency of clothing attribute recognition is improved.
Wherein, the clothing attribute recognition result of the person comprises at least one of the following items:
a type of apparel; clothing color; the number of clothes;
the apparel includes at least one of:
clothing, hats, shoes, accessories.
For the embodiment of the application, the corresponding clothing attribute information of the target person can be obtained based on different purposes or application scenes.
Specifically, the identification determining module 301 includes an extracting unit 3011 and a recognition determining unit 3012;
an extracting unit 3011, configured to extract at least one image frame from a video captured by an image capturing device according to a preset extracting frequency, where the preset extracting frequency is determined according to a counted average duration of a pedestrian passing through a region controlled by the image capturing device;
and the recognition determining unit 3012 is configured to perform detection and recognition, through a pre-trained portrait detection and recognition model, on the at least one image frame extracted by the extracting unit 3011, and to recognize and determine at least one image to be recognized including at least one target person.
For the embodiment of the application, at least one image frame is extracted from the acquired video according to the preset extraction frequency, and then at least one image to be recognized including at least one target person is determined from the at least one image frame through the pre-trained portrait detection recognition model, so that the problem of obtaining the image to be recognized including the at least one target person is solved, and a basis is provided for subsequent clothing attribute recognition of the target person.
Specifically, the first segmentation unit 3021 is configured to perform segmentation processing on the image to be recognized based on a region segmentation method to obtain at least one segmented image including a single target person;
and/or to perform segmentation processing on the image to be recognized based on an edge segmentation method to obtain at least one segmented image including a single target person.
For the embodiment of the application, the segmentation problem of the image to be identified is solved through a region segmentation method and/or an edge segmentation method, and the segmentation of the image to be identified containing a plurality of target characters into the segmented image containing only a single target character is realized.
Further, the recognition unit 3022 includes a feature extraction subunit (not shown in the figure) and an input subunit (not shown in the figure);
the feature extraction subunit is configured to perform clothing feature extraction on any segmented image including a single target person to obtain clothing feature information for that target person;
and the input subunit is configured to input the clothing feature information for that target person, extracted by the feature extraction subunit, into the pre-trained neural network recognition model to obtain a clothing attribute recognition result for that person.
For the embodiment of the application, the clothing features of the target person in the extracted segmentation image are input to the pre-trained neural network recognition model for clothing attribute recognition, so that the clothing attribute recognition problem of the target person is solved, and in addition, the data processing amount of the pre-trained neural network recognition model is reduced.
Further, the recognition module 302 further includes a second segmentation unit 3023, a feature extraction unit 3024, and a deduplication unit 3025;
a second segmentation unit 3023, configured to perform, for the current image to be recognized, segmentation processing on a first predetermined number of preceding image frames and a second predetermined number of following image frames on the video time axis relative to the current image to be recognized, to obtain a plurality of segmented images including a single target person;
a feature extraction unit 3024, configured to perform person feature extraction on the plurality of segmented images including a single target person obtained through the segmentation processing of the second segmentation unit 3023, to obtain feature information for each person;
a deduplication unit 3025, configured to perform similarity calculation on the feature information for each person extracted by the feature extraction unit 3024, and to deduplicate the plurality of segmented images including a single target person according to the similarity calculation result, to obtain deduplicated segmented images;
the recognition unit 3022 is configured to perform clothing attribute recognition, through a pre-trained neural network recognition model, on the deduplicated segmented images obtained through the deduplication processing of the deduplication unit 3025, to obtain a clothing attribute recognition result of the person included in any deduplicated segmented image.
For the embodiment of the application, the duplication removal is carried out on the plurality of the segmented images comprising the single target character, and the clothing attribute identification of the target character is carried out on the duplicated segmented images, so that the repeated identification is avoided, the accuracy of subsequent clothing attribute information statistics can be improved, and in addition, the data processing amount of the pre-trained neural network identification model is reduced.
Specifically, the device further includes a storage module 303 and a query determining module 304;
the storage module 303 is configured to store the clothing attribute recognition result, the image to be recognized, and the correspondence between the clothing attribute recognition result and the image to be recognized;
the query determining module 304 is configured to, when a person query request including clothing attribute information is received, query and determine, through the storage module, the image information of the person corresponding to the query request based on the correspondence between the clothing attribute recognition result and the image to be recognized.
According to the embodiment of the application, through the corresponding relation between the clothing attribute identification result and the image to be identified, the person image information corresponding to the clothing attribute information is inquired and determined when the inquiry request comprising the clothing attribute information is received.
The embodiment of the application provides a clothing attribute recognition device based on a neural network. Compared with the prior art, in which the clothing attributes of a target person or target crowd are identified manually, the embodiment of the application identifies and determines, in a preset recognition manner, that the image to be recognized includes at least one target person, and then performs clothing attribute recognition on that image through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target person. That is, through the pre-trained neural network recognition model, the embodiment of the application achieves automatic recognition of the clothing attributes of the target person in the image to be recognized, thereby improving the efficiency of clothing attribute recognition, avoiding the error-prone nature of manual recognition, and reducing labor costs.
The embodiment of the application provides a clothing attribute identification device based on a neural network, which is suitable for the method shown in the embodiment and is not described herein again.
An embodiment of the present application provides an electronic device. As shown in fig. 4, the electronic device 40 includes: a processor 4001 and a memory 4003. The processor 4001 is coupled to the memory 4003, for example via a bus 4002. Further, the electronic device 40 may also include a transceiver 4004. Note that in practical applications the number of transceivers 4004 is not limited to one, and the structure of the electronic device 40 does not limit the embodiment of the present application.
The processor 4001 is applied in the embodiment of the present application to implement the functions of the identification determining module and the recognition module shown in fig. 2 or fig. 3, and to implement the functions of the storage module 303 and the query determining module 304 shown in fig. 3. The transceiver 4004 includes a receiver and a transmitter.
The processor 4001 may be a CPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination that performs a computing function, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 4002 may include a path that carries information between the aforementioned components. The bus 4002 may be a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 4, but this does not indicate only one bus or one type of bus.
The memory 4003 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 4003 is used for storing application code for executing the scheme of the present application, and execution is controlled by the processor 4001. The processor 4001 is configured to execute the application code stored in the memory 4003 to implement the actions of the neural-network-based clothing attribute recognition device provided by the embodiments shown in fig. 2 or fig. 3.
The embodiment of the application provides an electronic device suitable for the method embodiment. And will not be described in detail herein.
The embodiment of the application provides an electronic device. Compared with the prior art, in which the clothing attributes of a target person or target crowd are identified manually, the embodiment of the application identifies and determines, in a preset recognition manner, that the image to be recognized includes at least one target person, and then performs clothing attribute recognition on that image through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target person. That is, through the pre-trained neural network recognition model, the embodiment of the application achieves automatic recognition of the clothing attributes of the target person in the image to be recognized, thereby improving the efficiency of clothing attribute recognition, avoiding the error-prone nature of manual recognition, and reducing labor costs.
The present application provides a computer-readable storage medium, on which a computer program is stored, and when the program is executed by a processor, the method shown in the above embodiments is implemented.
The embodiment of the application provides a computer-readable storage medium. Compared with the prior art, in which the clothing attributes of a target person or target crowd are identified manually, the embodiment of the application identifies and determines, in a preset recognition manner, that the image to be recognized includes at least one target person, and then performs clothing attribute recognition on the image to be recognized including the at least one target person through a pre-trained neural network recognition model to obtain a clothing attribute recognition result of the at least one target person.
The embodiment of the application provides a computer-readable storage medium which is suitable for the method embodiment. And will not be described in detail herein.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A clothing attribute identification method based on a neural network is characterized by comprising the following steps,
identifying and determining at least one target person in the image to be identified in a preset identification mode;
and clothing attribute recognition is carried out on the image to be recognized comprising at least one target character through a pre-trained neural network recognition model, so that clothing attribute recognition results of the at least one target character are obtained.
2. The method of claim 1, wherein the clothing attribute recognition of the image to be recognized including at least one target person is performed by a pre-trained neural network recognition model to obtain the clothing attribute recognition result of the at least one target person, including,
and carrying out body region segmentation on any target person in the image to be recognized through a pre-trained neural network recognition model, and carrying out clothing attribute recognition on each body region to obtain a clothing attribute recognition result of the target person in the image to be recognized.
3. The method of claim 1, wherein the clothing attribute recognition of the image to be recognized including at least one target person is performed by a pre-trained neural network recognition model to obtain the clothing attribute recognition result of the at least one target person, including,
performing segmentation processing on the image to be identified to obtain at least one segmentation image comprising a single target figure;
and performing clothing attribute recognition on any segmented image including a single target person through the pre-trained neural network recognition model to obtain a clothing attribute recognition result of the person included in the segmented image.
4. The method of claim 1, wherein the person's apparel attribute identification result comprises at least one of:
a type of apparel; clothing color; the number of clothes;
the apparel includes at least one of:
clothing, hats, shoes, accessories.
5. The method according to claim 1, wherein the recognition and determination of the image to be recognized including at least one target person by a preset recognition mode comprises,
extracting at least one image frame from a video acquired by image acquisition equipment according to a preset extraction frequency, wherein the preset extraction frequency is determined according to the counted average time length of a pedestrian passing through a control area of an image acquisition device;
and detecting and recognizing the at least one image frame through a pre-trained portrait detection and recognition model, and recognizing and determining at least one image to be recognized comprising at least one target person.
6. The method according to claim 3, wherein the segmenting the image to be recognized to obtain at least one segmented image including a single target person comprises at least one of,
performing segmentation processing on the image to be identified based on a region segmentation method to obtain at least one segmentation image comprising a single target figure;
and performing segmentation processing on the image to be identified based on an edge segmentation method to obtain at least one segmentation image comprising a single target figure.
7. The method of claim 3, wherein performing clothing attribute recognition on any segmented image including a single target person through the pre-trained neural network recognition model to obtain the clothing attribute recognition result of the person in that segmented image comprises:
extracting clothing features from the segmented image including the single target person to obtain clothing feature information for that target person; and
inputting the clothing feature information for the target person into the pre-trained neural network recognition model to obtain the clothing attribute recognition result of the person.
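As a sketch of claim 7's two-step flow (feature extraction, then classification), the toy below uses per-channel color means as stand-in features and a single linear layer with softmax as a stand-in for the pre-trained neural network recognition model. All names and weights are hypothetical; the patent does not disclose the features or architecture.

```python
import math

def clothing_features(pixels):
    """Stand-in feature extraction: mean R/G/B over the person crop."""
    n = len(pixels)
    return [sum(p[c] for p in pixels) / n for c in range(3)]

def softmax(z):
    """Numerically stable softmax over a score vector."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def classify(features, weights, class_labels):
    """One linear layer + softmax standing in for the recognition model."""
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    probs = softmax(scores)
    return class_labels[max(range(len(probs)), key=probs.__getitem__)]

# Hypothetical 2-class color head: "red" vs "blue" top.
weights = [[1.0, 0.0, 0.0],   # row responding to the red channel
           [0.0, 0.0, 1.0]]   # row responding to the blue channel
pixels = [(0.9, 0.1, 0.2), (0.8, 0.2, 0.1)]   # a mostly-red crop
label = classify(clothing_features(pixels), weights, ["red", "blue"])
```

In the patented method the weights would of course come from training, not be hand-set; the point here is only the feature-then-classify ordering of the claim.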
8. A neural-network-based clothing attribute recognition device, comprising a recognition determining module and a recognition module, wherein:
the recognition determining module is configured to recognize and determine, through a preset recognition mode, that an image to be recognized includes at least one target person; and
the recognition module is configured to perform clothing attribute recognition, through a pre-trained neural network recognition model, on the image to be recognized that the recognition determining module has determined to include at least one target person, to obtain a clothing attribute recognition result of the at least one target person.
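The two-module device of claim 8 might be organized along the following lines; `ApparelAttributeDevice` and its injected callables are hypothetical stand-ins for the patent's pre-trained detection and recognition models.

```python
class ApparelAttributeDevice:
    """Sketch of the claim-8 device: a recognition-determining module that
    finds target persons, and a recognition module that classifies their
    clothing. Both are injected as callables."""

    def __init__(self, detect_persons, classify_clothing):
        self.detect_persons = detect_persons        # recognition-determining module
        self.classify_clothing = classify_clothing  # recognition module

    def process(self, image):
        boxes = self.detect_persons(image)
        if not boxes:
            return []   # image contains no target person
        return [self.classify_clothing(image, box) for box in boxes]

# Toy usage with trivial stand-in callables.
device = ApparelAttributeDevice(
    detect_persons=lambda img: [(0, 0, 2, 2)],
    classify_clothing=lambda img, box: {"type": "shirt", "color": "white"},
)
results = device.process([[0, 0], [0, 0]])
```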
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the neural-network-based clothing attribute recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the neural-network-based clothing attribute recognition method of any one of claims 1 to 7.
CN201811223714.3A | 2018-10-19 (priority) | 2018-10-19 (filed) | Clothing attribute recognition method, device and electronic equipment | Active | Granted as CN111079757B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811223714.3A (CN111079757B) | 2018-10-19 | 2018-10-19 | Clothing attribute recognition method, device and electronic equipment

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201811223714.3A (CN111079757B) | 2018-10-19 | 2018-10-19 | Clothing attribute recognition method, device and electronic equipment

Publications (2)

Publication Number | Publication Date
CN111079757A | 2020-04-28
CN111079757B | 2024-09-20

Family

ID=70308555

Family Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201811223714.3A (Active, CN111079757B) | 2018-10-19 | 2018-10-19 | Clothing attribute recognition method, device and electronic equipment

Country Status (1)

Country | Link
CN | CN111079757B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication Number | Priority Date | Publication Date | Assignee | Title
CN111553327A* | 2020-05-29 | 2020-08-18 | Shanghai Yitu Network Technology Co Ltd | Clothing identification method, device, equipment and medium
CN113449714A* | 2021-09-02 | 2021-09-28 | Shenzhen Aoya Design Co Ltd | Face recognition method and system for child playground
CN113486855A* | 2021-07-30 | 2021-10-08 | Zhejiang Dahua Technology Co Ltd | Clothing identification method, device, equipment and medium
CN114332940A* | 2021-12-30 | 2022-04-12 | Beijing iQiyi Technology Co Ltd | Model training method, clothing recognition processing method, related device and terminal
CN114359793A* | 2021-12-21 | 2022-04-15 | Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd | Clothing style discrimination method and device based on few-shot metric learning
CN115129912A* | 2022-04-14 | 2022-09-30 | Tencent Technology (Shenzhen) Co Ltd | Model training data acquisition method, device, computer equipment and storage medium
CN117037053A* | 2022-04-29 | 2023-11-10 | Beijing Aibee Technology Co Ltd | Person identification method, device, computer equipment and storage medium
CN117152787A* | 2022-05-18 | 2023-12-01 | ZKTeco Co Ltd | Character clothing recognition method, device, equipment and readable storage medium

Citations (12)

Publication Number | Priority Date | Publication Date | Assignee | Title
JP2002123834A* | 2000-08-08 | 2002-04-26 | Ocean Network Co Ltd | Image recognition method and image processing device
CN101079109A* | 2007-06-26 | 2007-11-28 | Beijing Vimicro Electronics Co Ltd | Identity identification method and system based on uniform characteristic
US20110293188A1* | 2010-06-01 | 2011-12-01 | Wei Zhang | Processing image data
CN102521565A* | 2011-11-23 | 2012-06-27 | Zhejiang Chenying Technology Co Ltd | Garment identification method and system for low-resolution video
JP2012123626A* | 2010-12-08 | 2012-06-28 | Toyota Central R&D Labs Inc | Object detector and program
CN105447529A* | 2015-12-30 | 2016-03-30 | SenseTime Group Ltd | Method and system for clothing detection and attribute value recognition
CN106022343A* | 2016-05-19 | 2016-10-12 | Donghua University | Garment style recognition method based on Fourier descriptor and BP neural network
CN107563357A* | 2017-09-29 | 2018-01-09 | Beijing Qihoo Technology Co Ltd | Scene-segmentation-based live-streaming clothing and dress-up recommendation method, apparatus and computing device
CN107729935A* | 2017-10-12 | 2018-02-23 | Hangzhou Beigou Technology Co Ltd | Similar picture recognition method and device, server, storage medium
CN107766861A* | 2017-11-14 | 2018-03-06 | Shenzhen Malong Technology Co Ltd | Person image clothing color recognition method, device and electronic equipment
CN107909580A* | 2017-11-01 | 2018-04-13 | Shenzhen SenseNets Technology Co Ltd | Pedestrian clothing color recognition method, electronic device and storage medium
CN108573268A* | 2017-03-10 | 2018-09-25 | Beijing Megvii Technology Co Ltd | Image recognition method and device, image processing method and device, and storage medium


Non-Patent Citations (4)

Title
LUO, X. et al.: "Exact Clothing Retrieval Approach Based On Deep Neural Network", IEEE Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), vol. 1, 31 December 2016, pages 396-400*
LIU Li; ZHENG Yuanye; FU Xiaodong; LIU Lijun; HUANG Qingsong: "Co-segmentation method for multi-object clothing images", Journal of Chinese Computer Systems, no. 07, pages 222-227*
FAN Caixia; ZHU Hong; LIN Guangfeng; LUO Lei: "Person re-identification based on multi-feature fusion", Journal of Image and Graphics, no. 06, 16 June 2013, pages 102-108*
TAO Chen; DUAN Yafeng; YIN Meifen: "Recognition and quantification of garment silhouette", Journal of Textile Research, no. 05, pages 84-87*

Cited By (13)

Publication Number | Priority Date | Publication Date | Assignee | Title
CN111553327A* | 2020-05-29 | 2020-08-18 | Shanghai Yitu Network Technology Co Ltd | Clothing identification method, device, equipment and medium
CN111553327B* | 2020-05-29 | 2023-10-27 | Shanghai Yitu Network Technology Co Ltd | Clothing identification method, device, equipment and medium
CN113486855A* | 2021-07-30 | 2021-10-08 | Zhejiang Dahua Technology Co Ltd | Clothing identification method, device, equipment and medium
CN113486855B* | 2021-07-30 | 2025-07-04 | Zhejiang Dahua Technology Co Ltd | Clothing identification method, device, equipment and medium
CN113449714A* | 2021-09-02 | 2021-09-28 | Shenzhen Aoya Design Co Ltd | Face recognition method and system for child playground
CN113449714B* | 2021-09-02 | 2021-12-28 | Shenzhen Aoya Design Co Ltd | Identification method and system for child playground
CN114359793A* | 2021-12-21 | 2022-04-15 | Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd | Clothing style discrimination method and device based on few-shot metric learning
CN114359793B* | 2021-12-21 | 2025-05-09 | Guangzhou Power Supply Bureau of Guangdong Power Grid Co Ltd | Clothing style identification method and device based on few-sample metric learning
CN114332940B* | 2021-12-30 | 2024-06-07 | Beijing iQiyi Technology Co Ltd | Model training method, clothing recognition processing method, related device and terminal
CN114332940A* | 2021-12-30 | 2022-04-12 | Beijing iQiyi Technology Co Ltd | Model training method, clothing recognition processing method, related device and terminal
CN115129912A* | 2022-04-14 | 2022-09-30 | Tencent Technology (Shenzhen) Co Ltd | Model training data acquisition method, device, computer equipment and storage medium
CN117037053A* | 2022-04-29 | 2023-11-10 | Beijing Aibee Technology Co Ltd | Person identification method, device, computer equipment and storage medium
CN117152787A* | 2022-05-18 | 2023-12-01 | ZKTeco Co Ltd | Character clothing recognition method, device, equipment and readable storage medium

Also Published As

Publication Number | Publication Date
CN111079757B (en) | 2024-09-20

Similar Documents

Publication | Title
CN111079757A (en) | Clothing attribute identification method and device and electronic equipment
US10402627B2 (en) | Method and apparatus for determining identity identifier of face in face image, and terminal
CN110235138B (en) | System and method for appearance search
CN111626371B (en) | Image classification method, device, equipment and readable storage medium
CN109145742B (en) | Pedestrian identification method and system
CN107346409B (en) | Pedestrian re-identification method and device
CN106557728B (en) | Query image processing and image search method and device and monitoring system
WO2019042195A1 (en) | Method and device for recognizing identity of human target
WO2018121287A1 (en) | Target re-identification method and device
MX2014012866A (en) | Method for binary classification of a query image
CN112417970B (en) | Target object recognition method, device and electronic system
CN112733814B (en) | Deep-learning-based pedestrian loitering and retention detection method, system and medium
CN110691202A (en) | Video editing method, device and computer storage medium
CN101131728A (en) | Face shape matching method based on shape context
CN112016353A (en) | Video-based face image identification method and device
CN112766139A (en) | Target identification method and device, storage medium and electronic equipment
CN107315985B (en) | Iris recognition method and terminal
CN110659616A (en) | Method for automatically generating GIF from video
TW201820260A (en) | All-weather thermal-image pedestrian detection method using HOG-windowed LBP encoding as the feature representation, with SVM and AdaBoost classifier training
CN119274223A (en) | Clustering face recognition method and device based on tracking
AU2017279658A1 (en) | Pose-aligned descriptor for person re-identification with geometric and orientation information
CN111079473B (en) | Gender identification method, device, electronic device and computer-readable storage medium
WO2017101380A1 (en) | Method, system, and device for hand recognition
CN117058736A (en) | Facial false-detection recognition method, device, medium and equipment based on key point detection
CN106296704B (en) | Universal image partition method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant
