CN112199541A - Image processing method and device based on shot object - Google Patents

Image processing method and device based on shot object

Info

Publication number
CN112199541A
CN112199541A, CN201910609946.0A, CN201910609946A
Authority
CN
China
Prior art keywords
group
image
preset
group member
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910609946.0A
Other languages
Chinese (zh)
Other versions
CN112199541B (en)
Inventor
刘欣怡
张成宇
李祥
刘义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dingtalk China Information Technology Co Ltd
Original Assignee
Nail Holding Cayman Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nail Holding Cayman Co ltd
Priority to CN201910609946.0A
Priority to PCT/CN2020/099877 (WO2021004364A1)
Publication of CN112199541A
Application granted
Publication of CN112199541B
Legal status: Active
Anticipated expiration

Abstract

One or more embodiments of the present specification provide a subject-based image processing method and apparatus. The method may include: acquiring an uploaded image set related to a group; identifying the shot objects in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of the preset associated objects corresponding to the group; respectively determining, in the image set, the attention images corresponding to each group member of the group, wherein the shot objects contained in an attention image match the preset associated objects corresponding to that group member; and, when information related to the image set is pushed to each group member of the group, setting the attention images corresponding to the pushed group member to be displayed preferentially.

Description

Image processing method and device based on shot object
Technical Field
One or more embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to a method and an apparatus for image processing based on a subject.
Background
Communication applications typically support group functions. Within a group, all group members can hold topic discussions; messages can also be delivered quickly and information shared on the basis of the group, without separate conversations with each group member. In addition to text, group members may send images, such as photographs, within the chat interface of the group.
Group members may share a large number of images at once. The order in which the images are presented is set by the sender, and every recipient receives and views the images in that same order. However, different group members have different concerns, so a group member often cannot quickly find the images he or she is interested in and instead has to page through many images of no interest.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and apparatus for image processing based on a subject.
To achieve the above object, one or more embodiments of the present disclosure provide the following technical solutions:
according to a first aspect of one or more embodiments herein, there is provided a subject-based image processing method including:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
and when a shot object in any image matches any preset associated object, adding that image to the album corresponding to that preset associated object.
According to a second aspect of one or more embodiments herein, there is provided a subject-based image processing method including:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining attention images corresponding to all group members in the group in the image set, wherein shot objects contained in the attention images are matched with preset associated objects corresponding to the corresponding group members;
and returning a message related to the image set to any group member according to an image acquisition request initiated by any group member in the group, wherein the message is used for preferentially displaying a corresponding attention image to any group member.
According to a third aspect of one or more embodiments of the present specification, there is provided a subject-based image processing method including:
receiving an image set pushed by a server, wherein the image set is pushed to each group member of a group to which a home terminal user belongs; when a shot object of any image in the image set is matched with the characteristic information corresponding to the home terminal user, marking the image as a concerned image corresponding to the home terminal user;
and when the image set is displayed, preferentially displaying the concerned image corresponding to the home terminal user.
According to a fourth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing method including:
determining an image set which needs to be uploaded to a server for the group, wherein the server maintains characteristic information corresponding to each group member in the group, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is marked as a concerned image corresponding to that group member;
uploading the image set to the server, so that when the server pushes the image set to any group member, the attention image corresponding to any group member is preferentially displayed.
According to a fifth aspect of one or more embodiments herein, there is provided a subject-based image processing method including:
identifying a subject in an image contained in a local album;
determining a preset object contained in each image according to the recognition result;
and displaying the images in the local album arranged according to a predefined ordering among the preset objects.
According to a sixth aspect of one or more embodiments of the present specification, there is provided a multimedia file processing method, including:
acquiring a multimedia file set related to a group;
identifying an acquired object in the multimedia files contained in the multimedia file set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining concerned multimedia files corresponding to each group member in the group in the multimedia file set, wherein the collected objects contained in the concerned multimedia files are matched with preset associated objects corresponding to the corresponding group members;
and when information related to the multimedia file set is respectively pushed to each group member of the group, setting the concerned multimedia files corresponding to the pushed group member to be ranked first.
According to a seventh aspect of one or more embodiments of the present specification, there is provided a subject-based image processing method including:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining attention images corresponding to all group members in the group in the image set, wherein shot objects contained in the attention images are matched with preset associated objects corresponding to the corresponding group members;
and when the information related to the image set is respectively pushed to each group member of the group, setting the concerned image corresponding to the pushed group member as a priority display.
According to an eighth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing method including:
determining an image set which needs to be uploaded to a server aiming at a group, wherein the server maintains characteristic information corresponding to each group member in the group;
uploading the image set to the server, so that when the shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is added to the album corresponding to that group member.
According to a ninth aspect of one or more embodiments herein, there is provided a subject-based image processing apparatus comprising:
the acquisition unit acquires the uploaded image set related to the group;
the identification unit is used for identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
and the adding unit is used for adding any image to the album corresponding to any preset associated object when the shot object in any image is matched with any preset associated object.
According to a tenth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing apparatus including:
the acquisition unit acquires the uploaded image set related to the group;
the identification unit is used for identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
the determining unit is used for respectively determining the concerned images corresponding to all the group members in the group in the image set, and the shot objects contained in the concerned images are matched with the preset associated objects corresponding to the corresponding group members;
and the return unit is used for returning a message related to the image set to any group member according to an image acquisition request initiated by any group member in the group, wherein the message is used for preferentially displaying a corresponding attention image to any group member.
According to an eleventh aspect of one or more embodiments of the present specification, there is provided a subject-based image processing apparatus comprising:
the receiving unit is used for receiving an image set pushed by the server, and the image set is pushed to each group member of a group to which the home terminal user belongs; when a shot object of any image in the image set is matched with the characteristic information corresponding to the home terminal user, marking the image as a concerned image corresponding to the home terminal user;
and the display unit is used for preferentially displaying the concerned images corresponding to the home terminal user when the image set is displayed.
According to a twelfth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing apparatus including:
the determining unit is used for determining an image set which needs to be uploaded to the server for the group, wherein the server maintains characteristic information corresponding to each group member in the group, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is marked as a concerned image corresponding to that group member;
and the uploading unit uploads the image set to the server, so that when the server pushes the image set to any group member, the concerned image corresponding to any group member is preferentially displayed.
According to a thirteenth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing apparatus including:
an identifying unit that identifies a subject in an image included in a local album;
the determining unit is used for determining a preset object contained in each image according to the recognition result;
and the display unit is used for displaying the images in the local photo album in an arranged manner according to the predefined arrangement sequence among the preset objects.
According to a fourteenth aspect of one or more embodiments of the present specification, there is provided a multimedia file processing apparatus including:
the acquisition unit is used for acquiring a multimedia file set related to the group;
the identification unit is used for identifying the collected object in the multimedia files contained in the multimedia file set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
the determining unit is used for respectively determining concerned multimedia files corresponding to all group members in the group in the multimedia file set, and collected objects contained in the concerned multimedia files are matched with preset associated objects corresponding to the corresponding group members;
and the setting unit is used for setting the concerned multimedia files corresponding to the pushed group member to be ranked first when information related to the multimedia file set is respectively pushed to each group member of the group.
According to a fifteenth aspect of one or more embodiments herein, there is provided a subject-based image processing apparatus comprising:
the acquisition unit acquires the uploaded image set related to the group;
the identification unit is used for identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
the determining unit is used for respectively determining the concerned images corresponding to all the group members in the group in the image set, and the shot objects contained in the concerned images are matched with the preset associated objects corresponding to the corresponding group members;
and the setting unit is used for setting the attention image corresponding to the pushed group member as a priority display when the information related to the image set is respectively pushed to each group member of the group.
According to a sixteenth aspect of one or more embodiments of the present specification, there is provided a subject-based image processing apparatus including:
the system comprises a determining unit, a judging unit and a judging unit, wherein the determining unit determines an image set which needs to be uploaded to a server aiming at a group, and the server maintains characteristic information corresponding to each group member in the group;
an uploading unit for uploading the image set to the server; when the shot object of any image in the image set is matched with the characteristic information corresponding to any group member, the any image is added to the photo album corresponding to the any group member.
According to a seventeenth aspect of one or more embodiments of the present specification, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the first aspect by executing the executable instructions.
According to an eighteenth aspect of one or more embodiments of the present specification, a computer-readable storage medium is presented, having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the first aspect.
According to a nineteenth aspect of one or more embodiments of the present specification, there is provided an electronic apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the second aspect by executing the executable instructions.
According to a twentieth aspect of one or more embodiments of the present description, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the second aspect.
According to a twenty-first aspect of one or more embodiments of the present specification, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the third aspect by executing the executable instructions.
According to a twenty-second aspect of one or more embodiments of the present specification, a computer-readable storage medium is presented, having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the third aspect.
According to a twenty-third aspect of one or more embodiments of the present specification, there is provided an electronic apparatus comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the fourth aspect by executing the executable instructions.
According to a twenty-fourth aspect of one or more embodiments of the present specification, a computer-readable storage medium is presented, having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the fourth aspect.
According to a twenty-fifth aspect of one or more embodiments herein, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the fifth aspect by executing the executable instructions.
According to a twenty-sixth aspect of one or more embodiments of the present specification, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the fifth aspect.
According to a twenty-seventh aspect of one or more embodiments of the present specification, there is provided an electronic apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of the sixth aspect by executing the executable instructions.
According to a twenty-eighth aspect of one or more embodiments of the present specification, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method according to the sixth aspect.
According to a twenty-ninth aspect of one or more embodiments herein, there is provided an electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the seventh aspect by executing the executable instructions.
According to a thirtieth aspect of one or more embodiments of the present specification, a computer-readable storage medium is presented, on which computer instructions are stored, which when executed by a processor, implement the steps of the method according to the seventh aspect.
According to a thirty-first aspect of one or more embodiments of the present specification, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method according to the eighth aspect by executing the executable instructions.
According to a thirty-second aspect of one or more embodiments of the present specification, a computer-readable storage medium is presented, having stored thereon computer instructions, which when executed by a processor, implement the steps of the method according to the eighth aspect.
Drawings
Fig. 1 is a schematic diagram of an architecture of a subject-based image processing system according to an exemplary embodiment.
Fig. 2A is a flowchart of a method for server-side subject-based image processing according to an exemplary embodiment.
Fig. 2B is a flowchart of a method for processing a subject-based image on a server side according to a second exemplary embodiment.
Fig. 2C is a flowchart of a client-side subject-based image processing method according to one embodiment.
Fig. 2D is a flowchart of a client-side object-based image processing method according to a second exemplary embodiment.
Fig. 2E is a flowchart of a client-side object-based image processing method according to another exemplary embodiment.
Fig. 2F is a flowchart of a method for processing a multimedia file on a server side according to an exemplary embodiment.
Fig. 3A is a flowchart of another method for server-side subject-based image processing according to an exemplary embodiment.
Fig. 3B is a flowchart of another client-side subject-based image processing method according to an exemplary embodiment.
FIG. 4 is a diagram of a group chat interface provided by an exemplary embodiment.
FIG. 5 is a schematic diagram of an input face interface provided by an exemplary embodiment.
FIG. 6 is a schematic diagram of a publishing class dynamic provided by an exemplary embodiment.
Fig. 7-10 are schematic diagrams of tagging of a subject included in an image according to an exemplary embodiment.
Fig. 11 is a diagram illustrating a push publish message according to an exemplary embodiment.
FIG. 12 is a schematic diagram of a class band display interface provided by an exemplary embodiment.
FIG. 13 is a schematic diagram of a growing album display interface according to an exemplary embodiment.
Fig. 14 is a schematic structural diagram of an apparatus according to an exemplary embodiment.
Fig. 15 is a block diagram of a subject-based image processing apparatus according to one exemplary embodiment.
Fig. 16A is a block diagram of an image processing apparatus based on a subject according to a second exemplary embodiment.
Fig. 16B is a block diagram of an image processing apparatus based on a subject according to a third exemplary embodiment.
Fig. 17 is a block diagram of an image processing apparatus based on a subject provided in the fourth exemplary embodiment.
Fig. 18 is a block diagram of a subject-based image processing apparatus according to a fifth exemplary embodiment.
Fig. 19 is a block diagram of an image processing apparatus based on a subject according to a sixth exemplary embodiment.
Fig. 20 is a block diagram of an image processing apparatus based on a subject provided in the seventh exemplary embodiment.
Fig. 21 is a schematic diagram of another apparatus provided in an exemplary embodiment.
Fig. 22 is a block diagram of a multimedia file processing apparatus according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with certain aspects of one or more embodiments of the specification, as detailed in the claims which follow.
It should be noted that: in other embodiments, the steps of the corresponding methods are not necessarily performed in the order shown and described herein. In some other embodiments, the method may include more or fewer steps than those described herein. Moreover, a single step described in this specification may be broken down into multiple steps for description in other embodiments; multiple steps described in this specification may be combined into a single step in other embodiments.
Fig. 1 is a schematic diagram of an architecture of a subject-based image processing system according to an exemplary embodiment. As shown in fig. 1, the system may include a server 11, a network 12, and a number of electronic devices such as a cell phone 13, a cell phone 14, a cell phone 15, and the like.
The server 11 may be a physical server comprising a separate host, or the server 11 may be a virtual server carried by a cluster of hosts. During operation, the server 11 may run the server-side program of a certain communication application to implement the related service functions of the communication application. In the technical solution of one or more embodiments of the present specification, the server 11 may cooperate with the clients running on the cell phones 13-15 to reasonably distribute the images in the group to the corresponding albums, so as to facilitate efficient and continuous viewing by interested group members.
The cell phones 13-15 are just one type of electronic device that a user may use. In fact, the user may also use electronic devices such as a PC, a tablet device, a notebook computer, a PDA (Personal Digital Assistant), or a wearable device (such as smart glasses, a smart watch, etc.), which are not limited by one or more embodiments of the present disclosure. During operation, the electronic device may run the client-side program of a communication application to implement the related service functions of the communication application. In one or more embodiments of the present disclosure, the cell phones 13-15 may cooperate with the server-side program running on the server 11 to upload an image set related to a group and implement other related functions.
It should be noted that: an application program of a client of a communication application can be pre-installed on the electronic equipment, so that the client can be started and run on the electronic equipment; of course, when an online "client" such as HTML5 technology is employed, the client can be obtained and run without installing a corresponding application on the electronic device.
The network 12 used for interaction between the cell phones 13-15 and the server 11 may include various types of wired or wireless networks. In one embodiment, the network 12 may include the Public Switched Telephone Network (PSTN) and the Internet. Meanwhile, the electronic devices such as the cell phones 13-15 may also communicate with one another through the network 12, for example to implement a single chat between any two electronic devices, or to have several electronic devices participate in the same group so as to implement a group chat or other group-based operations.
Fig. 2A is a flowchart of a method for server-side subject-based image processing according to an exemplary embodiment. As shown in fig. 2A, the method applied to the server may include the following steps:
at step 202a, an uploaded group-related image set is obtained.
In one embodiment, the image collection may be uploaded by any group member within the group. Alternatively, the image set may be uploaded by a specific group member in the group, for example, the specific group member may be a group owner or an administrator, and the description does not limit this.
In one embodiment, the image set may be uploaded through a group chat interface corresponding to the group, which is similar to sending an image type group chat message. Or, the image set may be uploaded through an image uploading interface corresponding to the group, and an entry of the image uploading interface may be, for example, in a group chat interface corresponding to the group; of course, the image uploading interface may be independent of the group, and the user may select the associated group in the image uploading interface, so that the uploaded image set is related to the group. For example, the image upload interface may include a group album interface in the related art; alternatively, the image uploading interface may be a new interface different from the related art, such as an information flow display interface corresponding to the group (each piece of information may be arranged and displayed in reverse order according to the release time, and the image set is included in a piece of information), and the like.
In an embodiment, the image set may include one or more images, and this specification does not limit this.
In one embodiment, the access right of the image set is assigned to a group member of the group; in other words, only the group members of the group are able to view the set of images. The group members can share the image set, so that the non-group members can also view the images in the image set; of course, in some cases, even the sharing operation of the group member on the image set, the downloading operation of the images contained in the image set, the screen capture operation after opening the images, and the like may be limited, so as to avoid the images in the image set from being leaked as much as possible.
Step 204a, identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group.
In one embodiment, the preset correlation object includes at least one of: the group members of the group, the non-group members having a preset association with the group members, and the like, which are not limited in this specification. For example, when the group is a university class group, the group members include a tutor, students in the class, and the like, and the preset associated object may include the group members; when the group is a parent group of a primary school, the group members include a class owner, parents of students in the class, and the like, and then the preset associated object may include the students in the class; when the group is a pet communication group, the group members include a pet owner, and the like, and the preset associated object may include a pet kept by the pet owner.
In an embodiment, the preset object library includes feature information of the preset associated object, where the feature information may represent the corresponding preset associated object from one or more dimensions, and the dimension is not limited in this specification. For example, the feature information may include facial feature information or skin color feature, hair color feature, body proportion feature, and the like of a preset associated object. Of course, the above feature information should belong to visual features to ensure that feature information of the same dimension can be extracted from the shot object in the image, so as to be compared with the feature information of the preset associated object, thereby determining the preset associated object matching the shot object in the preset object library.
In an embodiment, the group member may upload the image of the preset associated object to the server, and construct or update the preset object library after extracting the corresponding feature information by the server. Or, after the group member processes the image of the preset associated object through the electronic device and extracts the corresponding feature information, the group member directly uploads the feature information to the server to construct or update the preset object library. Or, the server may obtain the feature information of the preset associated object in other manners, even directly obtain the preset object library, which is not limited in this specification; for example, when the feature information of the group member exists at a preset platform or a storage space, the group member may send the indication information to the server, so that the server obtains the feature information corresponding to the group member (the group member himself or another associated object besides the group member) from the preset platform or the storage space; or the group member does not need to send the indication information to the server, and the server can actively determine the association relationship between the group member and the feature information according to the information of the group member, the information of the personnel to which the feature information at the preset platform or the storage space belongs, and the like, so as to determine the feature information corresponding to the group member.
In an embodiment, especially in a case where the feature information of the preset associated object may relatively greatly change, such as when the age of the preset associated object is small, the accuracy of matching the photographic subject in the image by the server based on the preset object library can be ensured by updating the feature information contained in the preset object library. For example, the group members may provide a close-up of the preset associated objects periodically or non-periodically to update the corresponding feature information in the preset object library. For another example, when an image including each preset related object is frequently uploaded in the group, and when the object in the image is determined to be matched with the preset related object, the shooting time of the image is certainly later than the generation time of the feature information of the preset related object, so that the feature information of the corresponding preset related object can be updated according to the feature information of the object in the image without specially providing a close-up photograph by group members, and the feature information of the preset related object is updated in a non-perception manner in the process of continuously uploading the image.
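One possible way to realize such a non-perceptual update, offered here only as an assumption since the embodiments do not prescribe a formula, is to blend the stored feature vector with the features extracted from each newly matched, later-shot image, e.g. with an exponential moving average:

```python
import numpy as np


def update_stored_features(stored, newly_extracted, alpha=0.2):
    """Blend a preset associated object's stored feature vector with features
    extracted from a newly matched image, so the library gradually tracks
    changes in appearance. The update rate `alpha` is an assumed parameter."""
    stored = np.asarray(stored, dtype=float)
    newly_extracted = np.asarray(newly_extracted, dtype=float)
    return (1 - alpha) * stored + alpha * newly_extracted


print(update_stored_features([0.1, 0.9, 0.3], [0.2, 0.8, 0.4]))
# [0.12 0.88 0.32]
```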
In one embodiment, the feature information included in the preset object library is assumed to include facial feature information of the preset associated objects. Accordingly, the server can identify the face regions of the subjects in the images contained in the image set by a face detection technique, and then compare the facial feature information extracted from each face region with the facial feature information contained in the preset object library to determine the subjects matching the preset associated objects. When the preset associated object and the subject are users, such as the aforementioned students, the facial feature information may be human face feature information, the detection technique employed is a human face detection technique, and the technique for comparing the facial feature information is a human face recognition technique; when the preset associated object and the subject are of another type, such as the aforementioned pets, the above scheme should employ face detection and face recognition techniques corresponding to objects of that type.
Here, when the server recognizes the face area of the subject in the image by the face detection technique, it actually recognizes an area belonging to a "face" in the image based on the face detection technique, and takes the area as the above-mentioned face area, and the face area belongs to the corresponding subject, and thus is considered as equivalent to the face area in which the subject is recognized.
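As an illustration of the matching step described above, the following is a minimal sketch, not taken from the patent: it assumes facial features have already been extracted from a detected face region as a numeric vector, represents the preset object library as a mapping from preset associated object ids to stored feature vectors, and uses cosine similarity with an arbitrary threshold.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # assumed cutoff; the embodiments do not specify one


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_subject(face_features, preset_object_library):
    """Compare one subject's facial feature vector against the preset object
    library and return the best-matching preset associated object id, or None
    when no stored features are similar enough."""
    best_id, best_score = None, 0.0
    for object_id, stored in preset_object_library.items():
        score = cosine_similarity(face_features, np.asarray(stored, dtype=float))
        if score > best_score:
            best_id, best_score = object_id, score
    return best_id if best_score >= SIMILARITY_THRESHOLD else None


# usage: features extracted from a face region of an uploaded image
library = {"student_A": [0.1, 0.9, 0.3], "student_B": [0.8, 0.2, 0.1]}
print(match_subject(np.array([0.12, 0.88, 0.31]), library))  # -> student_A
```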
Step 206a, respectively determining the concerned images corresponding to each group member in the group in the image set, wherein the concerned images contain the shot objects matched with the preset associated objects corresponding to the corresponding group members.
And 208a, when the information related to the image set is respectively pushed to each group member of the group, setting the concerned image corresponding to the pushed group member as a priority display.
In an embodiment, the above-mentioned preferred display scheme may be a default display scheme, and the group members are not adjustable. Alternatively, the above-mentioned preferred display scheme is only an optional display scheme, and other one or more display schemes also exist, for example, one display scheme is that the display order of each image in the image set does not need to be adjusted, and a switch option or pop-up prompt for the display scheme can be provided to the group members at the electronic device, so that the group members can select an appropriate display scheme according to needs.
In one embodiment, the concerned images of each group member are determined, so that the group members can ensure that each group member can preferentially check the concerned images thereof only by uploading one image set, thereby meeting the individual requirements of each group member.
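A minimal sketch of determining each group member's attention images, under the assumption that the per-image matching results are available as a mapping from image id to the set of matched preset associated object ids, and that each group member is associated with one or more preset associated objects (all names are illustrative):

```python
def attention_images(image_matches, member_objects):
    """image_matches: image id -> set of preset associated object ids matched in it.
    member_objects: group member id -> set of that member's preset associated objects.
    Returns group member id -> list of image ids that are attention images for the member."""
    result = {member: [] for member in member_objects}
    for image_id, matched in image_matches.items():
        for member, objects in member_objects.items():
            if matched & objects:  # the image contains a subject this member cares about
                result[member].append(image_id)
    return result


# usage
matches = {"img1": {"student_A"}, "img2": {"student_B", "student_C"}, "img3": set()}
members = {"parent_A": {"student_A"}, "parent_B": {"student_B"}}
print(attention_images(matches, members))
# {'parent_A': ['img1'], 'parent_B': ['img2']}
```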
In one embodiment, the message related to the image collection may be a posting message for the image collection, the posting message for linking to a presentation interface of the image collection. For example, the posted message may be sent as a group chat message to a group chat interface corresponding to the group, and the sender of the posted message may be set as an uploader of the image collection (substantially automatically sent by the server), or presented as a system push message. Then, the server may set the preview image of the posting message as the attention image corresponding to the pushed group member. In other words, by determining the attention images of the group members, when each group member receives the release message, the seen preview image is the attention image corresponding to the group member, so that the preview image of the release message can be different from person to person, and the individual requirements of the group members are met.
In one embodiment, the message associated with the image collection is a publication message for the image collection. Then, the server may generate the publishing message as a presentation interface for linking to the attention image corresponding to the pushed group member in the image set, so that the group member may directly jump to the linked attention image after triggering the publishing message without sequentially reviewing the non-attention images arranged in front, thereby improving the image viewing efficiency of the group member, and the server does not need to perform personalized adjustment on the sequence of the images included in the image set. For example, when a posting message is pushed to a group member, if the images of interest corresponding to the group member are located at the 33 rd, 34 th and 35 th images of the image set, that is, a plurality of images of interest are arranged consecutively, the posting message may be linked to the 33 th image, that is, the first image of interest, so that the group member can view all the images of interest consecutively without looking over the previous 32 other images. For another example, if the images of interest corresponding to the group members are located at the 33 rd, 38 th and 49 th images of the image set, that is, the plurality of images of interest are not arranged consecutively, the posting messages may be linked to the 33 th, 38 th and 49 th images, respectively, so that the group members can directly view all the images of interest without looking over other images.
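The linking behaviour in the two examples above can be summarised as one link per consecutive run of attention images; the following sketch (an illustration, not the patent's prescribed implementation) computes those link targets from the 1-based positions of a member's attention images within the image set.

```python
def link_targets(attention_positions):
    """Return the positions the posting message should link to: the first image
    of each consecutive run of attention images, so [33, 34, 35] -> [33] and
    [33, 38, 49] -> [33, 38, 49]."""
    targets = []
    previous = None
    for pos in sorted(attention_positions):
        if previous is None or pos != previous + 1:  # start of a new consecutive run
            targets.append(pos)
        previous = pos
    return targets


print(link_targets([33, 34, 35]))  # [33]
print(link_targets([33, 38, 49]))  # [33, 38, 49]
```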
In one embodiment, the message associated with the image collection may be the image collection itself. Correspondingly, the server can set the display sequence among the images contained in the image set according to the pushed group members, so that the attention images corresponding to the pushed group members have the display sequence which is prior to the rest images, each group member can preferentially view the images which are more attention to the group members, and the same image set can realize different display effects.
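A minimal sketch of this per-member reordering, assuming the image set is an ordered list of image ids and the member's attention images are known from the earlier matching step; relative order is preserved within each part.

```python
def display_order(image_ids, member_attention):
    """Reorder the image set for one pushed group member so that the member's
    attention images are shown first; the original relative order is kept
    inside both the attention part and the remaining part."""
    attention = [i for i in image_ids if i in member_attention]
    others = [i for i in image_ids if i not in member_attention]
    return attention + others


print(display_order(["img1", "img2", "img3", "img4"], {"img3"}))
# ['img3', 'img1', 'img2', 'img4']
```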
In one embodiment, each group member may set a blacklist for itself, the blacklist including one or more blacklist objects. For example, the group member may upload an image containing a blacklist object to a server, and extract characteristic information of the blacklist object by the server; alternatively, the server may obtain the feature information of the blacklist object from another channel, and the process may refer to the obtaining manner of the feature information of the preset associated object. Then, for each group member, the server can screen out the image containing the blacklist object as the corresponding blacklist image of the corresponding group member by respectively matching the characteristic information of the blacklist object with the shot object of the images contained in the image set. Then, when the message related to the image set is pushed, the server may mask the blacklist image in the image set according to the pushed group member. For example, when the message related to the image set is a published message for the image set, and the published message is used for linking to a presentation interface of the image set, the content of the published message corresponding to different group members may be different, so as to link to different presentation interfaces, and ensure that the group members do not see the image containing the self-set blacklist object. For another example, when the message related to the image set is the image set itself, the image set including the blacklist object may be removed and then pushed to the corresponding group member.
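As a sketch of the masking step (again assuming the per-image matching results described earlier, with blacklist objects matched in the same way as preset associated objects), the images containing any of a member's blacklist objects can simply be dropped before the set is pushed to that member:

```python
def mask_blacklist(image_matches, member_blacklist):
    """Return the image ids that may be pushed to one group member, i.e. the
    images whose matched objects do not intersect the member's blacklist.
    image_matches: image id -> set of matched object ids."""
    return [image_id for image_id, matched in image_matches.items()
            if not (matched & member_blacklist)]


matches = {"img1": {"student_A"}, "img2": {"blocked_X"}, "img3": set()}
print(mask_blacklist(matches, {"blocked_X"}))  # ['img1', 'img3']
```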
In an embodiment, each group member may set a collection list, and the collection list includes a plurality of collection objects. Then, the server may identify the image set including the image of the aggregation object by obtaining the feature information of the aggregation object in advance, which is similar to the above process of obtaining the image of interest and is not described herein again. For a group member, if the subject of a certain image includes some or all of the collection objects set by the group member, the image may be set as the collection image corresponding to the group member. Then, when the message related to the image set is pushed to the group member, the set image can be preferentially displayed. Of course, if both the image of interest and the collective image are included, the image of interest may be prioritized over the collective image, so that the image of interest is presented first, the collective image is presented second, and then the other images are presented.
The plurality of collection objects contained in the collection list may be a plurality of objects of interest to the group members, and the server may determine whether one or more objects in the collection list are contained in the image by obtaining facial features of the objects, so that the server may perform facial recognition on the subject in the image contained in the image collection based on the facial features. For example, the collection object may include a group member or a group member's children, friends, colleagues, pets, etc., and this description is not intended to be limiting.
In some cases, some special images may contain both the preset association object and the collection object, and these special images may be further ordered as follows: determining the area size (whole area or face area), definition, position (located in the central area or edge area of the image) and the like of a preset associated object in each special image, so that the arrangement sequence of the collective images with relatively larger area, relatively higher definition and relatively closer position to the central area is relatively more forward; more specifically, for example, the area, the definition, the position, and the like may be used as the sorting parameters, and each sorting parameter has a corresponding preset weight value, so that a score may be calculated according to the value of the sorting parameter and the preset weight value corresponding to each image in the collection, and a plurality of special images may be sequentially sorted according to the score. For example, if the student parent a is interested in the classmate B and the cookie C of the cookie a in addition to paying attention to the child cookie a, the cookie a may be set as a preset association object of the student parent a and the cookie B and the cookie C are included in the union list of the student parent a. Then, assuming three-person-small-a, small-B, and small-C co-photos PIC1 and PIC2 are present, co-photo PIC1 may be presented to student parent-a in preference to co-photo PIC2 when the small-a's area of occupancy in co-photo PIC1 exceeds the area of occupancy in co-photo PIC 2.
In an embodiment, when a subject in any image matches any preset associated object, the server may add the any image to an album corresponding to the any preset associated object. And adding the images into the photo album corresponding to the preset associated object according to the association relationship between the shot object in the images and the preset associated object, so that the images in the photo album are all related to the preset associated object. Therefore, when a group member wants to view an image related to a certain preset related object, only the corresponding photo album needs to be accessed, and the group member does not need to respectively view and select all the images related to the group, so that the group member can efficiently, continuously and comprehensively view the image related to the preset related object. Taking the primary school parent group as an example, when the preset associated object is a student in a class, the photo album corresponding to the preset associated object can be a growth photo album of the related student, and is used for rapidly, continuously and comprehensively recording the growth process of the related student.
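A minimal sketch of distributing images into albums from the same per-image matching results; each matched preset associated object (for example, a student in the parent-group example) collects every image it appears in, which is what the growth album amounts to.

```python
def assign_to_albums(image_matches, albums=None):
    """Add each image to the album of every preset associated object matched in it.
    image_matches: image id -> iterable of matched preset associated object ids.
    Returns (and updates) albums: object id -> list of image ids."""
    albums = albums if albums is not None else {}
    for image_id, matched_objects in image_matches.items():
        for object_id in matched_objects:
            albums.setdefault(object_id, []).append(image_id)
    return albums


matches = {"img1": ["student_A"], "img2": ["student_A", "student_B"]}
print(assign_to_albums(matches))
# {'student_A': ['img1', 'img2'], 'student_B': ['img2']}
```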
In an embodiment, due to the reasons of poor shooting angle, inaccurate extracted feature information, and the like, the server may fail to successfully identify all the objects in the images included in the image set that match the preset associated object, such as object a that substantially matches the preset associated object a, and the server determines that the objects do not match; alternatively, there may be a subject that does not match all the preset associated subjects in the images included in the image set. In summary, the server may find that there are no matching preset associated objects in several subjects in the images included in the image set, and the server may label these subjects, such as showing a box or other visual information at the face region corresponding to the subject in the image, to indicate to the user that the corresponding subject is identified as not having a matching preset associated object; correspondingly, if the user thinks that the server has made a false judgment, for example, the marked shot object actually has a matched preset associated object, the user can manually associate the marked shot object with the corresponding preset associated object by sending a user instruction, and the server can establish a matching relationship between the marked shot object and the preset associated object indicated by the user instruction according to the received user instruction, so as to make up for the defect of insufficient identification accuracy of the server. Further, the server may add the image including the marked object to the album corresponding to the corresponding preset associated object according to the newly created matching relationship, which is similar to the above-mentioned solution.
In an embodiment, each preset associated object has a corresponding tag, and the tag content includes description information of the corresponding preset associated object, for example, the description information may be a name or a nickname of the corresponding preset associated object. According to the user instruction, the server may add a corresponding tag to the tagged object, where the tag content includes object description information included in the user instruction, for example, a name or a nickname of a preset associated object input by the user may be included in the user instruction. Then, the server may add the image to the album corresponding to a preset related object by comparing the object description information (actually, the object description information of the photographic subject included in the image) included in the image with the description information of the preset related object so as to confirm that the photographic subject included in the image matches with the preset related object when the object description information included in a certain image matches with the description information of a certain preset related object. And when the matching result of the tag content indicates that the tagged photographic subject does not have the matched preset associated object, indicating that the user inputs the wrong description information, the server may delete the tag added to the tagged photographic subject or generate a tag error prompt to indicate the user to re-input.
In an embodiment, for the matching relationship established according to the user instruction, the server may update the feature information of the corresponding preset associated object based on the feature information of the photographic subject, so as to improve the accuracy of subsequent identification.
In an embodiment, when the image set is uploaded by a group member, the group member may establish the matching relationship through a user instruction; for example, in a group of primary school parents, the group member may be a shift master. Alternatively, after other group members view the images in the image set, they can also view the marked objects, so that these group members issue user instructions to establish the matching relationship, for example, these group members may be parents of students in the primary school parents.
In an embodiment, the server may add visual description information for a matching relationship between a subject contained in an image and a preset associated object to the corresponding image. The matching relationship here may include a matching relationship obtained by automatic identification by the server, or may include a matching relationship generated based on a user instruction in the above scheme. By generating the visual description information, the group members can quickly determine whether the shot objects contained in the image have matched preset associated objects or not and which preset associated objects are matched, so that the shot objects which do not generate the matching relation can be conveniently subjected to the user instruction to generate the matching relation, or the wrong matching relation can be deleted or modified.
In one embodiment, the image may include objects that are not detected by the server. For example, for a group of primary school parents, a class owner may upload a photograph of a class activity, where the subject in the photograph contains students in the class; due to the shooting angle, the shooting distance and the like, when the server performs face detection on the picture, the face area of a certain student may not be detected, and therefore subsequent operations such as face recognition and the like are not performed on the student. For example, for a subject detected and successfully identified by the server in the image, the server may generate a matching relationship and show the above-described visual description information; for another example, for a subject detected by the server but not successfully identified (it is determined that there is no matching preset associated object), the server may add the above-mentioned visual label to the subject. Therefore, the group members can quickly distinguish the objects which are not detected in the image according to the visual description information and the visual labels; if the undetected object actually has a matched preset associated object, the group member may issue a corresponding user instruction to the server, so that the server adds the undetected object in the image as a shot object contained in the image according to the received user instruction. For example, the group member may click or circle a face region of an undetected object in the image, information of the face region being included in the user instruction, so that the server may perform feature extraction for the face region and match with a preset object library; if the matching is successful, the server can generate corresponding visual description information; if the matching fails, the server may label the shot object, so that the group members manually assist in establishing the corresponding matching relationship, which may refer to the foregoing content. Of course, the group members can directly implement manual establishment of the matching relationship by sending user instructions to the server, and the server is not required to execute matching operation with the preset object library.
In one embodiment, whether the server establishes the matching relationship through automatic identification or the matching relationship is established manually by the group members, there may be some deviation or error. Therefore, the group member can send a user instruction to the server, and the server can modify or delete the matching relationship between the shot object and the preset associated object according to the received user instruction.
Fig. 2B is a flowchart of a method for processing a subject-based image on a server side according to a second exemplary embodiment. As shown in fig. 2B, the method applied to the server may include the following steps:
step 202b, acquiring the uploaded group-related image set.
In an embodiment, reference may be made to the related description of step 202A shown in fig. 2A, which is not repeated herein.
And 204b, identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains the characteristic information of a preset associated object corresponding to the group.
In an embodiment, reference may be made to the related description of step 204a shown in fig. 2A, which is not described herein again.
Step 206b, respectively determining the concerned images corresponding to each group member in the group in the image set, wherein the concerned images contain the shot objects matched with the preset associated objects corresponding to the corresponding group members.
In an embodiment, reference may be made to the related description of step 206a shown in fig. 2A, which is not described herein again.
Step 208b, according to an image acquisition request initiated by any group member in the group, returning a message related to the image set to the any group member, wherein the message is used for preferentially displaying a corresponding attention image to the any group member.
In one embodiment, reference may be made to the description of step 208a shown in fig. 2A, except that the interaction manner between the server and the group members differs. In step 208a, the message related to the image set is actively pushed by the server to each group member in the group; in step 208b, instead of the server pushing actively, a group member initiates an image acquisition request to the server, and the server returns the message related to the image set accordingly, as sketched below. In either case, the attention images corresponding to that group member are ultimately shown preferentially in the image set.
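A minimal sketch of such a pull-based interaction, assuming a Flask HTTP endpoint; the route, parameter names and in-memory data structures are illustrative assumptions, not part of the described system:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# image_sets: group id -> ordered list of image ids
# attention:  (group id, member id) -> set of that member's attention image ids
image_sets, attention = {}, {}

@app.route("/groups/<group_id>/images")
def get_images(group_id):
    """Pull-based variant of step 208b: the requesting member's attention images
    are placed first in the returned list instead of being pushed proactively."""
    member_id = request.args.get("member_id", "")
    files = image_sets.get(group_id, [])
    focus = attention.get((group_id, member_id), set())
    ordered = [f for f in files if f in focus] + [f for f in files if f not in focus]
    return jsonify(images=ordered)
```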
Fig. 2C is a flowchart of a client-side subject-based image processing method according to one embodiment. As shown in fig. 2C, the method applied to an electronic device (e.g., the mobile phones 13-15 shown in fig. 1, etc.) may include the following steps:
step 202c, receiving an image set pushed by the server, wherein the image set is pushed to each group member of the group to which the home terminal user belongs; when the shot object of any image in the image set is matched with the characteristic information corresponding to the home terminal user, the image is marked as the attention image corresponding to the home terminal user.
In an embodiment, a client of a communication application may run on the electronic device and be logged in to an account corresponding to any group member in the group, so that the server, after receiving an image set uploaded for the group, can push the image set to that group member.
In one embodiment, the electronic device may upload an image to the server, so that the server extracts the feature information contained in the image and associates it with the home terminal user. Similarly, each group member in the group may upload an image to the server, so that the server extracts the feature information contained in the image and thereby maintains, on the server side, a feature library corresponding to the group, the feature library containing the feature information corresponding to each group member. The image uploaded by a group member may depict any object of interest, such as the group member himself or another object; in other words, the feature information extracted by the server from the image uploaded by a certain group member belongs to the preset associated object corresponding to that group member, the preset associated object may be the group member himself or another object, and the feature library may be regarded as the preset object library corresponding to these preset associated objects. For example, when the group is a university class group, the group members include the instructor, the students in the class, and the like, and the preset associated object may include the group members themselves; when the group is a primary school parent group, the group members include the class owner, the parents of the students in the class, and the like, and the preset associated object may include the students in the class; when the group is a pet communication group, the group members include pet owners, and the preset associated object may include the pets kept by the pet owners.
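As an illustration of how such a feature library might be maintained, the following is a minimal sketch assuming the open-source face_recognition library (128-dimension face encodings); the data structure and the function name register_member_object are illustrative and not part of the described system:

```python
import face_recognition

# group_id -> {member_id: [face encodings of that member's preset associated object]}
preset_object_library = {}

def register_member_object(group_id, member_id, image_path):
    """Extract feature information from an image uploaded by a group member and
    store it as the member's preset associated object in the group's library."""
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        raise ValueError("no face detected in the uploaded image")
    group_library = preset_object_library.setdefault(group_id, {})
    # keep one encoding per upload; repeated uploads accumulate samples
    group_library.setdefault(member_id, []).append(encodings[0])
```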
Of course, in addition to uploading the image to the server by the group member, so that the server extracts the feature information corresponding to the group member from the image, the server may also obtain the feature information of the group member in other ways, which is not limited in this specification. For example, when the feature information of the group member exists at a preset platform or a storage space, the group member may send the indication information to the server, so that the server obtains the feature information corresponding to the group member (the group member himself or another associated object besides the group member) from the preset platform or the storage space; or the group member does not need to send the indication information to the server, and the server can actively determine the association relationship between the group member and the feature information according to the information of the group member, the information of the personnel to which the feature information at the preset platform or the storage space belongs, and the like, so as to determine the feature information corresponding to the group member.
In an embodiment, the feature information may represent the corresponding preset associated object from one or more dimensions, and the specification does not limit the dimensions. For example, the feature information may include facial feature information or skin color feature, hair color feature, body proportion feature, and the like of a preset associated object. Of course, the above feature information should belong to visual features to ensure that feature information of the same dimension can be extracted from the shot object in the image, so as to be compared with the feature information of the preset associated object, thereby determining the preset associated object matching the shot object in the preset object library.
In an embodiment, especially when the feature information of a preset associated object may change relatively greatly, for example when the preset associated object is young, the accuracy with which the server matches the photographic subjects in the images against the preset object library can be maintained by updating the feature information contained in the preset object library. For example, the group members may provide close-up photographs of the preset associated objects periodically or aperiodically to update the corresponding feature information in the preset object library. For another example, images containing the preset associated objects are frequently uploaded in the group, and whenever a subject in such an image is determined to match a preset associated object, the shooting time of that image is necessarily later than the generation time of the stored feature information; the feature information of the corresponding preset associated object can therefore be updated according to the feature information of the subject in the image, without the group members specially providing close-up photographs, so that the feature information of the preset associated object is updated imperceptibly as images are continuously uploaded.
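One conceivable way to update the stored feature information imperceptibly is to blend the encoding of a newly matched subject into the stored encoding, for example with an exponential moving average; the update rule and the mixing factor below are assumptions for illustration only:

```python
import numpy as np

def update_feature(stored_encoding, new_encoding, alpha=0.2):
    """Blend the feature vector of a newly matched subject into the stored
    feature vector, so that recent photographs gradually outweigh old close-ups."""
    stored = np.asarray(stored_encoding, dtype=float)
    new = np.asarray(new_encoding, dtype=float)
    return (1.0 - alpha) * stored + alpha * new
```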
In an embodiment, the image set may include one or more images, and this specification does not limit this.
In one embodiment, the feature information contained in the preset object library is assumed to include facial feature information of the preset associated objects; accordingly, the face regions of the subjects in the images contained in the image set can be identified by a face detection technique, and the facial feature information extracted from each face region can then be compared with the facial feature information contained in the preset object library to determine the subjects matching preset associated objects. When the preset associated object and the subject are users, such as the aforementioned students, the facial feature information may be human-face feature information, the detection technique employed is a human-face detection technique, and the technique for comparing the facial feature information is a human-face recognition technique; when the preset associated object and the subject are of another type, such as the aforementioned pets, the scheme should employ the face detection technique, face recognition technique and the like corresponding to objects of that type. Here, when the face region of a subject in the image is identified by the face detection technique, what is actually identified is a region in the image belonging to a "face"; this region is taken as the above-mentioned face region, and since the face region belongs to the corresponding subject, this is regarded as equivalent to identifying the face region of the subject.
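A minimal sketch of this detection-and-comparison step, again assuming the face_recognition library; the tolerance of 0.6 is that library's customary default and is used here only as an example threshold:

```python
import face_recognition

def match_subjects(image_path, group_library, tolerance=0.6):
    """Detect faces in an image and return, per detected face, the member whose
    preset associated object it matches (or None when no match is found)."""
    image = face_recognition.load_image_file(image_path)
    locations = face_recognition.face_locations(image)           # face detection
    encodings = face_recognition.face_encodings(image, locations)
    matches = []
    for encoding in encodings:
        best_member, best_distance = None, tolerance
        for member_id, known_encodings in group_library.items():
            distance = min(face_recognition.face_distance(known_encodings, encoding))
            if distance < best_distance:
                best_member, best_distance = member_id, distance
        matches.append(best_member)
    return matches
```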
Step 204c, preferentially displaying the concerned images corresponding to the home terminal user when the image set is displayed.
In an embodiment, the above-mentioned priority display scheme may be a default display scheme that the group members cannot adjust. Alternatively, the priority display scheme may be only one optional display scheme among one or more others; for example, another display scheme may leave the display order of the images in the image set unadjusted, and a switch option or pop-up prompt for the display schemes can be provided to the group members on the electronic device, so that each group member can select an appropriate display scheme as needed.
In one embodiment, when there are multiple group members in the group, all of them can receive the image set pushed by the server. Meanwhile, because the attention images corresponding to each group member are determined in advance, only one image set needs to be uploaded while the content pushed to each group member can differ, so that each group member preferentially views the attention images he or she is interested in, and the personalized requirements of the group members are met.
In one embodiment, each group member may set a blacklist for himself or herself, the blacklist including one or more blacklist objects. For example, a group member may upload an image containing a blacklist object to the server, and the server extracts the feature information of the blacklist object; alternatively, the server may obtain the feature information of the blacklist object through another channel, and this process may refer to the way the feature information of the preset associated objects is obtained. Then, for each group member, the server can screen out the images containing that member's blacklist objects as the blacklist images corresponding to that group member, by matching the feature information of the blacklist objects against the subjects of the images contained in the image set. When pushing the message related to the image set, the server may then mask the blacklist images in the image set according to the group member being pushed to. For example, when the message related to the image set is a publication message for the image set, the publication message being used to link to a presentation interface of the image set, the content of the publication message may differ between group members so as to link to different presentation interfaces, ensuring that a group member does not see images containing the blacklist objects he or she has set. For another example, when the message related to the image set is the image set itself, the images containing the blacklist objects may be removed from the image set before it is pushed to the corresponding group member.
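A sketch of how blacklist images might be masked per group member, assuming the subjects of each image have already been resolved to a set of matched object identifiers; all names are illustrative:

```python
def filter_blacklist(image_set, matched_objects, blacklist):
    """Return the images to push to one group member, dropping any image whose
    matched subjects intersect the member's blacklist.

    image_set       -- list of image identifiers, in display order
    matched_objects -- dict: image id -> set of matched object identifiers
    blacklist       -- set of blacklisted object identifiers for this member
    """
    return [img for img in image_set
            if not (matched_objects.get(img, set()) & blacklist)]
```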
In an embodiment, each group member may set a collection list, and the collection list contains a plurality of collection objects. The server may then identify the images in the image set that contain collection objects by obtaining the feature information of the collection objects in advance, in a manner similar to the above process of determining the attention images, which is not repeated here. For a group member, if the subjects of a certain image include some or all of the collection objects set by that group member, the image may be set as a collection image corresponding to that group member; then, when the message related to the image set is pushed to that group member, the collection image can also be displayed preferentially. Of course, if both attention images and collection images exist, the attention images may take priority over the collection images, so that the attention images are presented first, the collection images second, and the other images afterwards.
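A sketch of the resulting display priority (attention images first, collection images second, other images last); the tiering is an assumption consistent with the description above:

```python
def display_order(image_set, attention_images, collection_images):
    """Sort images for one member: attention images first, collection images
    second, all other images last; sorted() is stable, so the relative order
    within each tier is preserved."""
    def tier(img):
        if img in attention_images:
            return 0
        if img in collection_images:
            return 1
        return 2
    return sorted(image_set, key=tier)
```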
The plurality of collection objects contained in the collection list may be a plurality of objects of interest to the group members, and the server may determine whether one or more objects in the collection list are contained in the image by obtaining facial features of the objects, so that the server may perform facial recognition on the subject in the image contained in the image collection based on the facial features. For example, the collection object may include a group member or a group member's children, friends, colleagues, pets, etc., and this description is not intended to be limiting.
In some cases, certain special images may contain both the preset associated object and collection objects, and these special images may be further ordered as follows: the area (of the whole body or of the face), the definition, the position (in the central area or the edge area of the image) and the like occupied by the preset associated object in each special image are determined, so that special images in which the area is relatively larger, the definition relatively higher and the position relatively closer to the central area are ranked relatively earlier. More specifically, the area, the definition, the position and the like may be used as sorting parameters, each sorting parameter having a corresponding preset weight value, so that a score can be calculated for each special image from the values of its sorting parameters and the corresponding preset weight values, and the special images can then be ordered by score. For example, if student parent A, besides paying attention to his or her own child Xiao A, is also interested in Xiao A's classmates Xiao B and Xiao C, then Xiao A may be set as the preset associated object of parent A, and Xiao B and Xiao C may be included in parent A's collection list. Then, given two group photos PIC1 and PIC2 that each contain Xiao A, Xiao B and Xiao C, PIC1 may be presented to parent A in preference to PIC2 when the area occupied by Xiao A in PIC1 exceeds the area occupied by Xiao A in PIC2.
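A sketch of the weighted-score ordering described above; the particular parameters, normalisation and weight values are assumptions chosen only for illustration:

```python
def rank_special_images(images, weights=(0.5, 0.3, 0.2)):
    """Order images that contain both the preset associated object and collection
    objects.  Each entry carries the preset associated object's area ratio,
    sharpness and distance from the image centre, all normalised to [0, 1]."""
    w_area, w_sharp, w_center = weights

    def score(entry):
        # larger area, higher sharpness and smaller centre distance score higher
        return (w_area * entry["area_ratio"]
                + w_sharp * entry["sharpness"]
                + w_center * (1.0 - entry["center_distance"]))

    return sorted(images, key=score, reverse=True)


# usage: PIC1, where Xiao A occupies more area, ranks ahead of PIC2
photos = [
    {"name": "PIC1", "area_ratio": 0.30, "sharpness": 0.8, "center_distance": 0.2},
    {"name": "PIC2", "area_ratio": 0.10, "sharpness": 0.8, "center_distance": 0.2},
]
print([p["name"] for p in rank_special_images(photos)])  # ['PIC1', 'PIC2']
```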
Fig. 2D is a flowchart of a client-side object-based image processing method according to a second exemplary embodiment. As shown in fig. 2D, the method is applied to an electronic device (e.g., the mobile phones 13-15 shown in fig. 1, etc.), and may include the following steps:
step 202d, determining an image set which needs to be uploaded to a server aiming at a group; the server maintains characteristic information corresponding to each group member in the group, so that when a shot object of any image in the image set is matched with the characteristic information corresponding to any group member, the image is marked as a concerned image corresponding to the group member.
In an embodiment, a client of a communication application may be run on an electronic device, and an account corresponding to any group member in a group may be logged in on the client, so that the group member may upload an image set in the group. Of course, the members in the group having the image uploading authority may also be limited, such as a group owner or an administrator.
In an embodiment, the image set may include one or more images, and this specification does not limit this.
In one embodiment, there is an association relationship between the group members and the feature information, so that the server can mark the attention image corresponding to each group member accordingly. Each group member can upload the image of the preset associated object to the server, so that the server extracts the characteristic information of the preset associated object from the image. And the server maintains a preset feature library corresponding to the group, wherein the preset feature library comprises feature information of preset associated objects corresponding to all group members. Wherein, the preset associated object may include at least one of the following: the group members of the group, the non-group members having a preset association with the group members, and the like, which are not limited in this specification. For example, when the group is a university class group, the group members include a tutor, students in the class, and the like, and the preset associated object may include the group members; when the group is a parent group of a primary school, the group members include a class owner, parents of students in the class, and the like, and then the preset associated object may include the students in the class; when the group is a pet communication group, the group members include a pet owner, and the like, and the preset associated object may include a pet kept by the pet owner.
In an embodiment, the feature information may characterize the corresponding preset associated object from one or more dimensions, which is not limited in this specification. For example, the feature information may include facial feature information or skin color feature, hair color feature, body proportion feature, and the like of a preset associated object. Of course, the above feature information should belong to visual features to ensure that feature information of the same dimension can be extracted from the shot object in the image, so as to be compared with the feature information of the preset associated object, thereby determining the preset associated object matching the shot object in the preset object library.
In an embodiment, besides uploading an image so that the server extracts the feature information and constructs the preset object library, a group member may process the image of a preset associated object on the electronic device, extract the corresponding feature information, and upload the feature information directly to the server to construct or update the preset object library. Alternatively, the server may obtain the feature information of the preset associated object in other manners, or even obtain the preset object library directly, which is not limited in this specification; for example, when the feature information of a group member exists on a preset platform or in a storage space, the group member may send indication information to the server, so that the server obtains the feature information corresponding to that group member (the group member himself or another associated object besides the group member) from the preset platform or storage space; or, without the group member sending indication information, the server may actively determine the association between the group member and the feature information according to the information of the group member, the information of the person to whom the feature information on the preset platform or in the storage space belongs, and the like, thereby determining the feature information corresponding to the group member.
In an embodiment, especially when the feature information of a preset associated object may change relatively greatly, for example when the preset associated object is young, the accuracy with which the server matches the photographic subjects in the images against the preset object library can be maintained by updating the feature information contained in the preset object library. For example, the group members may provide close-up photographs of the preset associated objects periodically or aperiodically to update the corresponding feature information in the preset object library. For another example, images containing the preset associated objects are frequently uploaded in the group, and whenever a subject in such an image is determined to match a preset associated object, the shooting time of that image is necessarily later than the generation time of the stored feature information; the feature information of the corresponding preset associated object can therefore be updated according to the feature information of the subject in the image, without the group members specially providing close-up photographs, so that the feature information of the preset associated object is updated imperceptibly as images are continuously uploaded.
In one embodiment, the feature information contained in the preset object library is assumed to include facial feature information of the preset associated objects; accordingly, the server can identify the face regions of the subjects in the images contained in the image set by a face detection technique, and then compare the facial feature information extracted from each face region with the facial feature information contained in the preset object library to determine the subjects matching preset associated objects. When the preset associated object and the subject are users, such as the aforementioned students, the facial feature information may be human-face feature information, the detection technique employed is a human-face detection technique, and the technique for comparing the facial feature information is a human-face recognition technique; when the preset associated object and the subject are of another type, such as the aforementioned pets, the scheme should employ the face detection technique, face recognition technique and the like corresponding to objects of that type.
Here, when the server identifies the face region of a subject in the image by the face detection technique, what is actually identified is a region in the image belonging to a "face"; this region is taken as the above-mentioned face region, and since the face region belongs to the corresponding subject, this is regarded as equivalent to identifying the face region of the subject.
Step 204d, uploading the image set to the server, so that when the server pushes the image set to any group member, the attention image corresponding to any group member is preferentially displayed.
In an embodiment, the above-mentioned priority display scheme may be a default display scheme that the group members cannot adjust. Alternatively, the priority display scheme may be only one optional display scheme among one or more others; for example, another display scheme may leave the display order of the images in the image set unadjusted, and a switch option or pop-up prompt for the display schemes can be provided to the group members on the electronic device, so that each group member can select an appropriate display scheme as needed.
In one embodiment, when there are multiple group members in the group, all of them can receive the image set pushed by the server. Meanwhile, because the attention images corresponding to each group member are determined in advance, only one image set needs to be uploaded while the content pushed to each group member can differ, so that each group member preferentially views the attention images he or she is interested in, and the personalized requirements of the group members are met.
In one embodiment, each group member may set a blacklist for himself or herself, the blacklist including one or more blacklist objects. For example, a group member may upload an image containing a blacklist object to the server, and the server extracts the feature information of the blacklist object; alternatively, the server may obtain the feature information of the blacklist object through another channel, and this process may refer to the way the feature information of the preset associated objects is obtained. Then, for each group member, the server can screen out the images containing that member's blacklist objects as the blacklist images corresponding to that group member, by matching the feature information of the blacklist objects against the subjects of the images contained in the image set. When pushing the message related to the image set, the server may then mask the blacklist images in the image set according to the group member being pushed to. For example, when the message related to the image set is a publication message for the image set, the publication message being used to link to a presentation interface of the image set, the content of the publication message may differ between group members so as to link to different presentation interfaces, ensuring that a group member does not see images containing the blacklist objects he or she has set. For another example, when the message related to the image set is the image set itself, the images containing the blacklist objects may be removed from the image set before it is pushed to the corresponding group member.
In an embodiment, each group member may set a collection list, and the collection list contains a plurality of collection objects. The server may then identify the images in the image set that contain collection objects by obtaining the feature information of the collection objects in advance, in a manner similar to the above process of determining the attention images, which is not repeated here. For a group member, if the subjects of a certain image include some or all of the collection objects set by that group member, the image may be set as a collection image corresponding to that group member; then, when the message related to the image set is pushed to that group member, the collection image can also be displayed preferentially. Of course, if both attention images and collection images exist, the attention images may take priority over the collection images, so that the attention images are presented first, the collection images second, and the other images afterwards.
The plurality of collection objects contained in the collection list may be a plurality of objects of interest to the group members, and the server may determine whether one or more objects in the collection list are contained in the image by obtaining facial features of the objects, so that the server may perform facial recognition on the subject in the image contained in the image collection based on the facial features. For example, the collection object may include a group member or a group member's children, friends, colleagues, pets, etc., and this description is not intended to be limiting.
In some cases, certain special images may contain both the preset associated object and collection objects, and these special images may be further ordered as follows: the area (of the whole body or of the face), the definition, the position (in the central area or the edge area of the image) and the like occupied by the preset associated object in each special image are determined, so that special images in which the area is relatively larger, the definition relatively higher and the position relatively closer to the central area are ranked relatively earlier. More specifically, the area, the definition, the position and the like may be used as sorting parameters, each sorting parameter having a corresponding preset weight value, so that a score can be calculated for each special image from the values of its sorting parameters and the corresponding preset weight values, and the special images can then be ordered by score. For example, if student parent A, besides paying attention to his or her own child Xiao A, is also interested in Xiao A's classmates Xiao B and Xiao C, then Xiao A may be set as the preset associated object of parent A, and Xiao B and Xiao C may be included in parent A's collection list. Then, given two group photos PIC1 and PIC2 that each contain Xiao A, Xiao B and Xiao C, PIC1 may be presented to parent A in preference to PIC2 when the area occupied by Xiao A in PIC1 exceeds the area occupied by Xiao A in PIC2.
In the above embodiment, by identifying and processing the image set uploaded to the server, the images included in the image set can be displayed in a differentiated manner, for example, the focused images corresponding to the group members are displayed preferentially. Similarly, the related technical scheme can be applied to processing the local photo album of the electronic equipment so as to reasonably display the images contained in the local photo album. For example, fig. 2E is a flowchart of a client-side object-based image processing method according to another exemplary embodiment. As shown in fig. 2E, the method applied to an electronic device (e.g., the mobile phones 13-15 shown in fig. 1, etc.) may include the following steps:
in step 202e, the subject in the image included in the local album is identified.
And step 204e, determining the preset objects contained in each image according to the recognition result.
In an embodiment, a user may preset a plurality of preset objects, where the preset objects may include the user himself, family members of the user, friends of the user, pets of the user, and the like, and this specification does not limit this.
In an embodiment, the electronic device may obtain feature information of each preset object on one hand and feature information of a subject included in the image on the other hand, and identify the preset object included in the image through comparison between the feature information. For example, when the face recognition technology is adopted, the feature information of the preset object and the subject is the face features of the corresponding objects, such as the face features of a human body, the face features of a pet, and the like.
In an embodiment, a user may provide an image of each preset object to the electronic device in advance, so that the electronic device may extract feature information of each preset object. Or, the user may set feature information of the preset object in another channel in advance, and instruct the electronic device to acquire the feature information of the preset object from the channel. Alternatively, the electronic device may also obtain the feature information of the preset object in other manners, which is not limited in this specification.
And step 206e, arranging and displaying the images in the local album according to the predefined arrangement sequence among the preset objects.
In an embodiment, a predefined arrangement order exists between the preset objects, for example, the predefined arrangement order may be predefined by the user or formed based on other manners.
In an embodiment, when the preset object UA precedes the preset object UB in the arrangement order, then for an image P1 containing the preset object UA and an image P2 containing the preset object UB in the local album, the image P1 may be arranged and displayed before the image P2, so that the arrangement order of the images is consistent with the arrangement order of the preset objects.
In one embodiment, the same image may contain multiple preset objects. In that case, the specific preset object that ranks foremost in each image can be determined, and the ranks of these specific preset objects in different images can be compared, so that an image whose specific preset object ranks earlier is arranged and displayed before an image whose specific preset object ranks later. If the specific preset objects contained in different images are the same, the number of preset objects contained in each image can be determined, and the image containing fewer preset objects is arranged and displayed before the image containing more.
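A sketch of this ordering rule for the local album; the data structures are illustrative assumptions:

```python
def order_album(images, object_rank):
    """Arrange local-album images by the predefined order of preset objects.

    images      -- dict: image id -> set of preset objects recognised in it
    object_rank -- dict: preset object -> position in the predefined order (0 first)
    Images whose foremost object ranks earlier come first; on a tie, the image
    containing fewer preset objects comes first.
    """
    def key(img):
        objects = images[img]
        foremost = min((object_rank[o] for o in objects), default=len(object_rank))
        return (foremost, len(objects))

    return sorted(images, key=key)
```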
Fig. 2F is a flowchart of a method for processing a multimedia file on a server side according to an exemplary embodiment. As shown in fig. 2F, the method applied to the server may include the following steps:
step 202f, a set of multimedia files associated with the group is obtained.
In an embodiment, the server may obtain the multimedia file set in any manner. For example, a set of multimedia files may be uploaded by any group member within the group. Alternatively, the multimedia file set may be uploaded by a specific group member in the group, for example, the specific group member may be a group owner or an administrator, and the description does not limit this. For the uploading manner of the multimedia file set, reference may be made to the uploading manner of the image set in the embodiment shown in fig. 2A, which is not described herein again.
In one embodiment, the multimedia file collection may include any type of multimedia file, such as image, audio, video, etc., and this description is not intended to limit this. The multimedia file set may include one or more multimedia files, and this specification does not limit this.
Step 204f, identifying the collected objects in the multimedia files contained in the multimedia file set according to a preset object library corresponding to the group, wherein the preset object library contains the characteristic information of a preset associated object corresponding to the group.
In one embodiment, the preset correlation object includes at least one of: the group members of the group, the non-group members having a preset association with the group members, and the like, which are not limited in this specification. For example, when the group is a university class group, the group members include a tutor, students in the class, and the like, and the preset associated object may include the group members; when the group is a parent group of a primary school, the group members include a class owner, parents of students in the class, and the like, and then the preset associated object may include the students in the class; when the group is a pet communication group, the group members include a pet owner, and the like, and the preset associated object may include a pet kept by the pet owner.
In one embodiment, the form of the characteristic information employed may vary based on the type differences of the multimedia files. For example, for images, videos, etc., the feature information may include visual features such as facial feature information, so as to perform identity recognition using techniques such as facial recognition; for example, for audio, video, etc., the feature information may include acoustic features such as voiceprint feature information, so as to perform identity recognition by using techniques such as voiceprint recognition.
In an embodiment, the group member may upload the multimedia file of the preset associated object to the server, and construct or update the preset object library after extracting the corresponding feature information from the server. Or, the group member may process the multimedia file of the preset associated object through the electronic device, extract the corresponding feature information, and directly upload the feature information to the server, so as to construct or update the preset object library. Or, the server may obtain the feature information of the preset associated object in other manners, even directly obtain the preset object library, which is not limited in this specification; for example, when the feature information of the group member exists at a preset platform or a storage space, the group member may send the indication information to the server, so that the server obtains the feature information corresponding to the group member (the group member himself or another associated object besides the group member) from the preset platform or the storage space; or the group member does not need to send the indication information to the server, and the server can actively determine the association relationship between the group member and the feature information according to the information of the group member, the information of the personnel to which the feature information at the preset platform or the storage space belongs, and the like, so as to determine the feature information corresponding to the group member.
Step 206f, respectively determining concerned multimedia files corresponding to each group member in the group in the multimedia file set, wherein the collected objects contained in the concerned multimedia files are matched with the preset associated objects corresponding to the corresponding group members.
Step 208f, when the message related to the multimedia file set is pushed to each group member of the group respectively, setting the concerned multimedia files corresponding to the pushed group member to be ranked first.
In an embodiment, the above-mentioned ranking presentation scheme may be a default presentation scheme that the group members cannot adjust. Alternatively, it may be only one optional presentation scheme among one or more others; for example, another scheme may leave the arrangement order of the multimedia files in the set unadjusted, and a switch option or pop-up prompt for the presentation schemes can be provided to the group members on the electronic device, so that each group member can select an appropriate scheme as needed.
In one embodiment, by determining the concerned multimedia files of each group member, the group members can ensure that each group member can preferentially check the concerned multimedia files by only uploading one multimedia file set, so as to meet the individual requirements of each group member.
In one embodiment, the message related to the set of multimedia files may be a publication message for the set of multimedia files, the publication message for linking to a presentation interface of the set of multimedia files. For example, the publish message may be sent as a group chat message to a group chat interface corresponding to the group, and the sender of the publish message may be set as an uploader of the collection of multimedia files (substantially automatically sent by the server), or presented as a system push message. Then, the server may set the preview multimedia file of the publication message as the focus multimedia file corresponding to the pushed group member. In other words, by determining the concerned multimedia files of each group member, when each group member receives the release message, the viewed preview multimedia file is the concerned multimedia file corresponding to the group member, so that the preview multimedia file of the release message can be different from person to person, and the individual requirements of each group member are met.
In one embodiment, the message associated with the set of multimedia files is a publish message for the set of multimedia files. Then, the server can generate the publishing message as a presentation interface for linking to the concerned multimedia files corresponding to the pushed group members in the multimedia file set, so that the group members can directly jump to the linked concerned multimedia files after triggering the publishing message without sequentially reviewing the non-concerned multimedia files arranged in the front, thereby improving the file viewing efficiency of the group members, and the server does not need to perform personalized adjustment on the sequence of the multimedia files contained in the multimedia file set. For example, when the distribution message is pushed to a group member, if the concerned multimedia files corresponding to the group member are located at the 33 rd, 34 th and 35 th of the multimedia file set, that is, a plurality of concerned multimedia files are arranged consecutively, the distribution message may be linked to the 33 th multimedia file, that is, the first multimedia file, so that the group member can view all the concerned multimedia files consecutively without looking over the 32 previous other multimedia files. For another example, if the concerned multimedia files corresponding to the group member are located at the 33 rd, 38 th and 49 th multimedia files of the image set, that is, the concerned multimedia files are not arranged consecutively, the publishing messages may be linked to the 33 th, 38 th and 49 th multimedia files, respectively, so that the group member may directly view all the concerned multimedia files without looking over other multimedia files.
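A sketch of how the link targets for the publishing message might be derived from the positions of a member's concerned multimedia files; the rule of linking to the first file of each consecutive run follows the examples above, and the function name is illustrative:

```python
def link_targets(attention_positions):
    """Given the positions of a member's concerned multimedia files in the set
    (1-based), return the positions the publishing message should link to:
    the first file of each consecutive run."""
    targets = []
    previous = None
    for pos in sorted(attention_positions):
        if previous is None or pos != previous + 1:
            targets.append(pos)
        previous = pos
    return targets

print(link_targets([33, 34, 35]))  # [33]
print(link_targets([33, 38, 49]))  # [33, 38, 49]
```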
In one embodiment, the message associated with the set of multimedia files may be the set of multimedia files itself. Correspondingly, the server can set the arrangement sequence of the multimedia files contained in the multimedia file set according to the pushed group members, so that the concerned multimedia files corresponding to the pushed group members have the arrangement sequence prior to the rest multimedia files, each group member can preferentially open the concerned multimedia files, and the same multimedia file set can be different from person to person.
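A sketch of the per-member reordering of the multimedia file set described above; the helper name is illustrative:

```python
def reorder_for_member(file_set, attention_files):
    """Return the multimedia file set reordered for one member: concerned files
    first, remaining files after, with relative order preserved in both groups.

    file_set        -- list of file identifiers in their original order
    attention_files -- set of this member's concerned file identifiers
    """
    attention = [f for f in file_set if f in attention_files]
    others = [f for f in file_set if f not in attention_files]
    return attention + others
```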
In one embodiment, each group member may set a blacklist for itself, the blacklist including one or more blacklist objects. For example, the group member may upload a multimedia file containing a blacklist object to a server, and extract characteristic information of the blacklist object by the server; alternatively, the server may obtain the feature information of the blacklist object from another channel, and the process may refer to the obtaining manner of the feature information of the preset associated object. Then, for each group member, the server can screen out the multimedia file containing the blacklist object by matching the characteristic information of the blacklist object with the collected object of the multimedia file contained in the multimedia file set, so as to serve as the corresponding blacklist multimedia file of the corresponding group member. Then, when the message related to the multimedia file set is pushed, the server may correspondingly shield the blacklisted multimedia files in the multimedia file set according to the pushed group members. For example, when the message related to the multimedia file set is a publish message for the multimedia file set, and the publish message is used for linking to a presentation interface of the multimedia file set, the content of the publish message corresponding to different group members may be different, so as to link to different presentation interfaces, and ensure that the group members do not see the multimedia files containing the self-set blacklist objects. For another example, when the message related to the multimedia file set is the multimedia file set itself, the multimedia files including the blacklist object in the multimedia file set may be removed and then pushed to the corresponding group member.
In an embodiment, each group member may set a collection list, and the collection list contains a plurality of collection objects. The server may then identify the multimedia files in the multimedia file set that contain collection objects by obtaining the feature information of the collection objects in advance, in a manner similar to the above process of determining the concerned multimedia files, which is not repeated here. For a group member, if the objects collected in a certain multimedia file include some or all of the collection objects set by that group member, the multimedia file may be set as a collection multimedia file corresponding to that group member; then, when the message related to the multimedia file set is pushed to that group member, the collection multimedia file can also be displayed preferentially. Of course, if both concerned multimedia files and collection multimedia files exist, the concerned multimedia files may take priority over the collection multimedia files, so that the concerned multimedia files are presented first, the collection multimedia files second, and the other multimedia files afterwards.
The plurality of collection objects contained in the collection list may be a plurality of objects of interest to the group members, and the server may determine whether one or more objects in the collection list are contained in the multimedia file by obtaining facial features of the objects, so that the server may perform facial recognition on the objects in the multimedia file contained in the collection of multimedia files based on the facial features. For example, the collection object may include a group member or a group member's children, friends, colleagues, pets, etc., and this description is not intended to be limiting.
In some cases, certain special multimedia files may contain both the preset associated object and collection objects, and these special multimedia files may be further ordered as follows, taking multimedia files of the image type as an example: the area (of the whole body or of the face), the definition, the position (in the central area or the edge area of the image) and the like occupied by the preset associated object in each special image are determined, so that special images in which the area is relatively larger, the definition relatively higher and the position relatively closer to the central area are ranked relatively earlier. More specifically, the area, the definition, the position and the like may be used as sorting parameters, each sorting parameter having a corresponding preset weight value, so that a score can be calculated for each special image from the values of its sorting parameters and the corresponding preset weight values, and the special images can then be ordered by score. For example, if student parent A, besides paying attention to his or her own child Xiao A, is also interested in Xiao A's classmates Xiao B and Xiao C, then Xiao A may be set as the preset associated object of parent A, and Xiao B and Xiao C may be included in parent A's collection list. Then, given two group photos PIC1 and PIC2 that each contain Xiao A, Xiao B and Xiao C, PIC1 may be presented to parent A in preference to PIC2 when the area occupied by Xiao A in PIC1 exceeds the area occupied by Xiao A in PIC2.
Fig. 3A is a flowchart of a method for processing a subject-based image on a server side according to an exemplary embodiment. As shown in fig. 3A, the method is applied to a server (e.g., theserver 11 shown in fig. 1, etc.), and may include the following steps:
step 302a, an uploaded group-related image set is obtained.
In one embodiment, the image collection may be uploaded by any group member within the group. Alternatively, the image set may be uploaded by a specific group member in the group, for example, the specific group member may be a group owner or an administrator, and the description does not limit this.
In one embodiment, the image set may be uploaded through a group chat interface corresponding to the group, which is similar to sending an image type group chat message. Or, the image set may be uploaded through an image uploading interface corresponding to the group, and an entry of the image uploading interface may be, for example, in a group chat interface corresponding to the group; of course, the image uploading interface may be independent of the group, and the user may select the associated group in the image uploading interface, so that the uploaded image set is related to the group. For example, the image upload interface may include a group album interface in the related art; alternatively, the image uploading interface may be a new interface different from the related art, such as an information flow display interface corresponding to the group (each piece of information may be arranged and displayed in reverse order according to the release time, and the image set is included in a piece of information), and the like.
In an embodiment, the image set may include one or more images, and this specification does not limit this.
In one embodiment, the access right of the image set is assigned to a group member of the group; in other words, only the group members of the group are able to view the set of images. The group members can share the image set, so that the non-group members can also view the images in the image set; of course, in some cases, even the sharing operation of the group member on the image set, the downloading operation of the images contained in the image set, the screen capture operation after opening the images, and the like may be limited, so as to avoid the images in the image set from being leaked as much as possible.
Step 304a, identifying the shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group.
In one embodiment, the preset correlation object includes at least one of: the group members of the group, the non-group members having a preset association with the group members, and the like, which are not limited in this specification. For example, when the group is a university class group, the group members include a tutor, students in the class, and the like, and the preset associated object may include the group members; when the group is a parent group of a primary school, the group members include a class owner, parents of students in the class, and the like, and then the preset associated object may include the students in the class; when the group is a pet communication group, the group members include a pet owner, and the like, and the preset associated object may include a pet kept by the pet owner.
In an embodiment, the preset object library includes feature information of the preset associated object, where the feature information may represent the corresponding preset associated object from one or more dimensions, and the dimension is not limited in this specification. For example, the feature information may include facial feature information or skin color feature, hair color feature, body proportion feature, and the like of a preset associated object. Of course, the above feature information should belong to visual features to ensure that feature information of the same dimension can be extracted from the shot object in the image, so as to be compared with the feature information of the preset associated object, thereby determining the preset associated object matching the shot object in the preset object library.
In an embodiment, the group member may upload the image of the preset associated object to the server, and construct or update the preset object library after extracting the corresponding feature information by the server. Or, after the group member processes the image of the preset associated object through the electronic device and extracts the corresponding feature information, the group member directly uploads the feature information to the server to construct or update the preset object library. Or, the server may obtain the feature information of the preset associated object in other manners, even directly obtain the preset object library, which is not limited in this specification; for example, when the feature information of the group member exists at a preset platform or a storage space, the group member may send the indication information to the server, so that the server obtains the feature information corresponding to the group member (the group member himself or another associated object besides the group member) from the preset platform or the storage space; or the group member does not need to send the indication information to the server, and the server can actively determine the association relationship between the group member and the feature information according to the information of the group member, the information of the personnel to which the feature information at the preset platform or the storage space belongs, and the like, so as to determine the feature information corresponding to the group member.
In an embodiment, especially when the feature information of a preset associated object may change relatively greatly, for example when the preset associated object is young, the accuracy with which the server matches the photographic subjects in the images against the preset object library can be maintained by updating the feature information contained in the preset object library. For example, the group members may provide close-up photographs of the preset associated objects periodically or aperiodically to update the corresponding feature information in the preset object library. For another example, images containing the preset associated objects are frequently uploaded in the group, and whenever a subject in such an image is determined to match a preset associated object, the shooting time of that image is necessarily later than the generation time of the stored feature information; the feature information of the corresponding preset associated object can therefore be updated according to the feature information of the subject in the image, without the group members specially providing close-up photographs, so that the feature information of the preset associated object is updated imperceptibly as images are continuously uploaded.
In one embodiment, the feature information contained in the preset object library is assumed to include facial feature information of the preset associated objects; accordingly, the server can identify the face regions of the subjects in the images contained in the image set by a face detection technique, and then compare the facial feature information extracted from each face region with the facial feature information contained in the preset object library to determine the subjects matching preset associated objects. When the preset associated object and the subject are users, such as the aforementioned students, the facial feature information may be human-face feature information, the detection technique employed is a human-face detection technique, and the technique for comparing the facial feature information is a human-face recognition technique; when the preset associated object and the subject are of another type, such as the aforementioned pets, the scheme should employ the face detection technique, face recognition technique and the like corresponding to objects of that type.
Here, when the server identifies the face region of a subject in the image by the face detection technique, what is actually identified is a region in the image belonging to a "face"; this region is taken as the above-mentioned face region, and since the face region belongs to the corresponding subject, this is regarded as equivalent to identifying the face region of the subject.
Step 306a, when the shot object in any image is matched with any preset associated object, adding the image into the album corresponding to the preset associated object.
In one embodiment, images are added to an album corresponding to a preset associated object according to the association relationship between a shot object in the images and the preset associated object, so that the images in the album are all related to the preset associated object. Therefore, when a group member wants to view an image related to a certain preset related object, only the corresponding photo album needs to be accessed, and the group member does not need to respectively view and select all the images related to the group, so that the group member can efficiently, continuously and comprehensively view the image related to the preset related object. Taking the primary school parent group as an example, when the preset associated object is a student in a class, the photo album corresponding to the preset associated object can be a growth photo album of the related student, and is used for rapidly, continuously and comprehensively recording the growth process of the related student.
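A sketch of grouping images into per-object albums once the matching relationships are known; all names are illustrative assumptions:

```python
def build_albums(image_set, matched_objects):
    """Group images into per-object albums: an image is added to the album of
    every preset associated object matched among its subjects.

    image_set       -- list of image identifiers
    matched_objects -- dict: image id -> set of matched preset associated objects
    """
    albums = {}
    for img in image_set:
        for obj in matched_objects.get(img, set()):
            albums.setdefault(obj, []).append(img)
    return albums
```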
In an embodiment, because of a poor shooting angle, inaccurately extracted feature information, and the like, the server may fail to identify every subject in the images of the image set that matches a preset associated object; for example, a subject A that actually corresponds to the preset associated object a may be judged by the server not to match it. Alternatively, some subjects in the images of the image set may genuinely match none of the preset associated objects. In either case, the server may find that several subjects in the images of the image set have no matching preset associated object, and the server may mark these subjects, for example by showing a box or other visual information at the face region of the subject in the image, to indicate to the user that the corresponding subject has been identified as having no matching preset associated object. Correspondingly, if the user believes that the server has made a misjudgment, for example because a marked subject actually has a matching preset associated object, the user can manually associate the marked subject with the corresponding preset associated object by sending a user instruction, and the server can establish, according to the received user instruction, a matching relationship between the marked subject and the preset associated object indicated by the instruction, thereby compensating for the limited identification accuracy of the server. Further, the server may add the image containing the marked subject to the album corresponding to that preset associated object according to the newly established matching relationship, similarly to step 306a.
In an embodiment, each preset associated object has a corresponding tag, and the tag content includes description information of the corresponding preset associated object; for example, the description information may be a name or a nickname of the corresponding preset associated object. According to the user instruction, the server may add a corresponding tag to the labeled subject, where the tag content includes the object description information contained in the user instruction, for example the name or nickname of a preset associated object input by the user. Then, the server may compare the object description information contained in an image (that is, the object description information of the subject contained in that image) with the description information of the preset associated objects; when the object description information contained in a certain image matches the description information of a certain preset associated object, the server confirms that the subject contained in that image matches that preset associated object and adds the image to the album corresponding to it. When the matching result of the tag content indicates that the labeled subject has no matching preset associated object, meaning that the user input wrong description information, the server may delete the tag added to the labeled subject or generate a tag error prompt asking the user to re-enter it.
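The tag-checking logic can be sketched as follows, assuming the description information of the preset associated objects is kept as a simple mapping and that a failed match simply removes the tag (the error-prompt path is omitted); all names here are illustrative:

def apply_user_tag(subject_id, description, preset_descriptions, matches, tags):
    # preset_descriptions: object_id -> name/nickname of the preset associated object
    # matches: subject_id -> object_id; tags: subject_id -> tag content
    tags[subject_id] = description
    for object_id, known_description in preset_descriptions.items():
        if description == known_description:
            matches[subject_id] = object_id      # establish the matching relationship
            return object_id
    del tags[subject_id]                         # no match: delete the tag (or prompt the user)
    return None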
In an embodiment, for the matching relationship established according to the user instruction, the server may update the feature information of the corresponding preset associated object based on the feature information of the photographic subject, so as to improve the accuracy of subsequent identification.
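One possible update rule (an assumption, not something this embodiment prescribes) is to blend the confirmed subject's feature vector into the stored one, so the preset object library slowly tracks appearance changes:

def update_feature(preset_object_library, object_id, new_encoding, alpha=0.2):
    # Both encodings are numeric vectors (e.g., numpy arrays) of the same dimension.
    old = preset_object_library[object_id]
    preset_object_library[object_id] = (1 - alpha) * old + alpha * new_encoding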
In an embodiment, when the image set is uploaded by a group member, that group member may establish the matching relationship through a user instruction; for example, in a primary school parent group, that group member may be the head teacher. Alternatively, after other group members view the images in the image set, they can also see the labeled subjects, so that these group members issue user instructions to establish the matching relationship; for example, these group members may be parents of students in the primary school parent group.
In an embodiment, the server may add, to the corresponding image, visual description information for the matching relationship between a subject contained in the image and a preset associated object. The matching relationship here may be one obtained through automatic identification by the server, or one generated based on a user instruction as in the above scheme. Through the visual description information, group members can quickly determine whether the subjects contained in an image have matching preset associated objects and which preset associated objects they match, which makes it convenient to issue user instructions to create matching relationships for subjects that have none, or to delete or modify wrong matching relationships.
In one embodiment, an image may contain objects that the server fails to detect. For example, in a primary school parent group, the head teacher may upload a photo of a class activity in which the subjects are students of the class; due to the shooting angle, shooting distance and so on, the server may fail to detect the face region of a certain student when performing face detection on the photo, and therefore does not perform subsequent operations such as face recognition for that student. For a subject that the server detects and successfully identifies in the image, the server may generate a matching relationship and show the above-described visual description information; for a subject that the server detects but fails to identify (it is determined that there is no matching preset associated object), the server may add the above-mentioned visual label to that subject. Therefore, group members can quickly distinguish the undetected objects in the image according to the visual description information and the visual labels. If an undetected object actually has a matching preset associated object, a group member may issue a corresponding user instruction to the server, so that the server, according to the received user instruction, adds the undetected object as a subject contained in the image. For example, the group member may click or circle the face region of the undetected object in the image, and information about that face region is included in the user instruction, so that the server can extract features from that face region and match them against the preset object library, as sketched below. If the matching succeeds, the server can generate the corresponding visual description information; if the matching fails, the server may label the subject so that group members manually assist in establishing the corresponding matching relationship, as described above. Of course, group members can also establish the matching relationship directly by sending user instructions to the server, without requiring the server to perform the matching operation against the preset object library.
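The handling of a user-indicated face region might look like the following sketch, again assuming the face_recognition package and that the click or circle has already been converted into a (top, right, bottom, left) rectangle; these details are assumptions for illustration:

import face_recognition

def match_user_selected_region(image_path, region, preset_object_library, tolerance=0.6):
    # region: (top, right, bottom, left) derived from the user's click or circle
    image = face_recognition.load_image_file(image_path)
    encodings = face_recognition.face_encodings(image, known_face_locations=[region])
    if not encodings:
        return None                              # extraction failed; fall back to manual tagging
    names = list(preset_object_library.keys())
    distances = face_recognition.face_distance(
        [preset_object_library[n] for n in names], encodings[0])
    if not len(distances):
        return None
    best = int(distances.argmin())
    return names[best] if distances[best] <= tolerance else None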
In one embodiment, whether the matching relationship is established by the server through automatic identification or manually by group members, there may be some deviations or errors. Therefore, a group member can send a user instruction to the server, and the server can modify or delete the matching relationship between a subject and a preset associated object according to the received user instruction.
In an embodiment, the server may determine, in the image set, the attention images corresponding to each group member of the group, where the subjects contained in the attention images match the preset associated object corresponding to that group member. Accordingly, the server may push a release message for the image set to each group member of the group, where the preview image included in the release message is an attention image corresponding to the pushed group member. In other words, by determining the attention images of the group members, the preview image each group member sees upon receiving the release message is an attention image corresponding to that member, so that the preview image of the release message varies from person to person and meets the individual needs of the group members.
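A sketch of such a per-member release message, with illustrative field names (the actual message format is not specified in this embodiment):

def build_release_message(member_id, image_set, attention_images):
    # attention_images: member_id -> list of that member's attention image ids
    preview = attention_images.get(member_id) or image_set[:1]   # fall back to the first image
    return {
        "type": "release",
        "to": member_id,
        "preview_image": preview[0] if preview else None,        # differs per member
        "link": "class-circle/image-set",                        # hypothetical link target
    }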
In an embodiment, the server may determine, in the image set, the attention images corresponding to each group member of the group, where the subjects contained in the attention images match the preset associated object corresponding to that group member. Correspondingly, when the server pushes the image set to each group member of the group, the display order of the attention images corresponding to the pushed group member can be set before the remaining images, so that each group member can preferentially view the images that member cares about most; the same image set can thus achieve a different display effect for each person.
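The personalized ordering can be sketched as a simple stable partition that keeps the original order within each part; the data layout is illustrative:

def order_for_member(image_set, attention_ids):
    # image_set: ordered list of image ids; attention_ids: the member's attention images
    attention = [img for img in image_set if img in attention_ids]
    others = [img for img in image_set if img not in attention_ids]
    return attention + others                    # attention images shown before the rest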
In an embodiment, the above-mentioned priority display scheme may be a default display scheme that group members cannot adjust. Alternatively, the priority display scheme is only an optional display scheme and one or more other display schemes also exist; for example, one display scheme leaves the display order of the images in the image set unadjusted, and a switch option or pop-up prompt for the display scheme can be provided to group members on the electronic device, so that group members can select an appropriate display scheme as needed.
Fig. 3B is a flowchart of a client-side subject-based image processing method according to an exemplary embodiment. As shown in fig. 3B, the method applied to the electronic device may include the following steps:
step 302b, determining an image set which needs to be uploaded to a server aiming at a group, wherein the server maintains characteristic information corresponding to each group member in the group.
In an embodiment, a client of a communication application may run on an electronic device, and an account corresponding to any group member of a group may be logged in on the client, so that the group member may upload an image set in the group. Of course, the group members having the authority to upload images may also be limited, for example to the group owner or an administrator.
In an embodiment, the image set may include one or more images, and this specification does not limit this.
In an embodiment, each group member may cause the server to extract feature information of a preset associated object by uploading an image of that preset associated object to the server, and the server maintains a preset object library corresponding to the group, where the preset object library includes the feature information of the preset associated objects corresponding to all group members. The preset associated object may include at least one of the following: a group member of the group, a non-group member having a preset association with a group member, and the like, which this specification does not limit. For example, when the group is a university class group, the group members include a tutor, the students of the class, and so on, and the preset associated objects may include the group members; when the group is a primary school parent group, the group members include the head teacher, the parents of the students of the class, and so on, and the preset associated objects may include the students of the class; when the group is a pet communication group, the group members include pet owners and so on, and the preset associated objects may include the pets kept by the pet owners.
In an embodiment, the feature information may characterize the corresponding preset associated object from one or more dimensions, which is not limited in this specification. For example, the feature information may include facial feature information or skin color feature, hair color feature, body proportion feature, and the like of a preset associated object. Of course, the above feature information should belong to visual features to ensure that feature information of the same dimension can be extracted from the shot object in the image, so as to be compared with the feature information of the preset associated object, thereby determining the preset associated object matching the shot object in the preset object library.
In an embodiment, in addition to the approach in which a group member uploads an image, the server extracts feature information and constructs the preset object library, a group member may process the image of a preset associated object on the electronic device, extract the corresponding feature information, and upload the feature information directly to the server to construct or update the preset object library. Alternatively, the server may obtain the feature information of preset associated objects in other manners, or even obtain the preset object library directly, which this specification does not limit. For example, when the feature information of a group member already exists on a preset platform or in a storage space, the group member may send indication information to the server, so that the server obtains the feature information corresponding to that group member (the group member himself or another associated object besides the group member) from the preset platform or storage space; or, without the group member sending indication information, the server may actively determine the association relationship between the group member and the feature information according to the group member's information, the information of the person to whom the feature information on the preset platform or in the storage space belongs, and so on, thereby determining the feature information corresponding to that group member.
In an embodiment, especially when the feature information of a preset associated object may change relatively greatly, such as when the preset associated object is young, the accuracy with which the server matches subjects in images based on the preset object library can be ensured by updating the feature information contained in the preset object library. For example, group members may periodically or aperiodically provide close-up photos of the preset associated objects to update the corresponding feature information in the preset object library. For another example, images containing the preset associated objects are frequently uploaded in the group, and when a subject in an image is determined to match a preset associated object, the shooting time of that image is necessarily later than the generation time of the feature information of that preset associated object; therefore, the feature information of the corresponding preset associated object can be updated according to the feature information of the subject in the image, without group members specially providing close-up photos, so that the feature information of the preset associated objects is updated imperceptibly as images continue to be uploaded.
In one embodiment, the feature information included in the preset object library is assumed to include: facial feature information of the preset associated object. Accordingly, the server can identify the face region of a subject in the images contained in the image set by a face detection technique; then, the facial feature information extracted from the face region is compared with the facial feature information contained in the preset object library to determine the subject matching a preset associated object. When the preset associated object and the subject are users, such as the aforementioned students, the facial feature information may be human-face feature information, the face detection technique used may be a human-face detection technique, and the technique used to compare the facial feature information may be a human-face recognition technique; when the preset associated object and the subject are of another type, such as the aforementioned pet, the above scheme should employ face detection, face recognition, and similar techniques corresponding to that type of object.
Here, when the server recognizes the face region of a subject in an image by the face detection technique, it actually recognizes a region of the image that belongs to a "face" and takes that region as the above-mentioned face region; since the face region belongs to the corresponding subject, this is considered equivalent to recognizing the face region of that subject.
Step 304b, uploading the image set to the server; when the subject of any image in the image set matches the feature information corresponding to any group member, that image is added to the album corresponding to that group member.
In an embodiment, an image is added to the album corresponding to a preset associated object according to the association relationship between a subject in the image and that preset associated object, so that the images in the album are all related to the preset associated object. Therefore, when a group member wants to view images related to a certain preset associated object, the member only needs to access the corresponding album and does not have to browse and pick through all images related to the group, so that the member can view the images related to that preset associated object efficiently, continuously and comprehensively. Taking the primary school parent group as an example, when the preset associated object is a student in the class, the album corresponding to the preset associated object can be a growth album of that student, used to record the student's growth quickly, continuously and comprehensively.
For ease of understanding, the technical solutions of this specification will be described in detail below with reference to the accompanying drawings, taking a primary school class parent group as an example.
Assume that there is "grade 9 shifts" of "abc elementary school of Shanghai city", the teacher and the parents of the student of this class forming the group "grade 9 shifts" in the communication application T. Taking the student "white of a class as an example, the parent" white dad "can view thegroup chat interface 40 shown in fig. 4 at the client of the communication application T, and thegroup chat interface 40 can be used for communication between the teacher and the parents of the student of the class and between the parents of different students.
In addition to group chat communication, the group chat interface 40 may also provide other functionality related to "Class 9, Grade 1". For example, function buttons 401, 402 and 403 may be shown on the group chat interface 40, and the parent "Xiaobai's dad" may go to the relevant interfaces for functions such as "enter face", "class circle" and "growth album" by triggering the function buttons 401-403.
For example, by triggering the function button 401, the parent "Xiaobai's dad" can go to the face entry interface 50 shown in fig. 5. In the face entry interface 50, the parent "Xiaobai's dad" can assist the student "Xiaobai" in entering his face, so that the facial features of the student "Xiaobai" are saved at the server of the communication application T. After the face entry interface 50 is opened, the client of the communication application T may call a camera of the electronic device and capture images of the student "Xiaobai" through the camera, so that the client extracts the facial features and uploads them to the server, or uploads the captured images to the server for the server to extract the facial features. The face entry interface 50 may include a preview area 501. When the front camera of the electronic device is called, the student "Xiaobai" can hold the electronic device facing its screen and front camera, and determine by looking at the content in the preview area 501 whether the front camera can capture his face. When the rear camera of the electronic device is called, the parent "Xiaobai's dad" can hold the electronic device, shoot the student "Xiaobai" through the rear camera, and determine through the preview area 501 whether the rear camera can capture the face of the student "Xiaobai"; the student "Xiaobai" only needs to look at the lens of the rear camera, without any extra operation. Especially when the student "Xiaobai" is young, entering the face through the rear camera reduces the operations required of the student and improves the efficiency and accuracy of face entry.
Assuming that the group "9 shifts a year" includes the executive "king teacher," he may publish the shift dynamics associated with "9 shifts a year" for review by the student parents. For example, when the class dynamics is published to "class circle", the parent "white dad" can go to the presentation interface corresponding to "class circle" by triggering thefunction button 402 shown in fig. 4 to view the published class dynamics.
As shown in fig. 6, Teacher Wang may edit the class dynamic to be published in the class dynamic publishing interface 60. For example, the class dynamic may include text content 601 and a photo collection 602, and Teacher Wang may describe event information, such as the shooting background of the photos in the photo collection 602 (e.g., a group activity of the class), through the text content 601. Of course, in some cases the class dynamic may include only the text content 601 or only the photo collection 602, which this specification does not limit. In addition, Teacher Wang can also configure geographical location information for the class dynamic to be published by triggering "add current location".
The photo collection 602 is presented in the class dynamic publishing interface 60 as thumbnail previews of the individual photos. Teacher Wang may view specific photo content by selecting a thumbnail preview. For example, the photo content may be presented in a photo presentation interface 70 as shown in fig. 7, assuming that the photo includes subjects 701-704 and so on. After the photo collection 602 is uploaded to the server of the communication application, the server may perform face detection on each photo to detect the face region of each subject, and then compare the features of each subject's face region with the facial features of the students of "Class 9, Grade 1" based on face recognition technology, so as to automatically recognize the students contained in the photo.
For example, when the server recognizes that the subject 701 is the student "Xiaobai" and that the subject 702 is another student of the class as shown in fig. 7, the server may add tags 701a-702a for the subjects 701 and 702 on the photo, where the content of the tag 701a and of the tag 702a is the name of the corresponding student, so that Teacher Wang or the parents of the students can check whether the server has correctly recognized the students in the photo.
Due to the influence of factors such as the shooting angle and the shooting distance, the server may not be able to correctly identify all students contained in the photo. For example, as shown in fig. 7, although the server detects the face region of the subject 703, it does not identify which student the subject 703 is, so the server may add a face area indication frame 703a to the photo to indicate that no student matching the subject 703 was identified. For unidentified students, the teacher or a parent of a student may perform the matching manually. For example, Teacher Wang may call up an input box 80 as shown in fig. 8 by triggering the face area indication frame 703a, and enter the student name of the subject 703 in the input box 80. During input, the client may show corresponding associated content 801 for what Teacher Wang has typed, based on the names of all students of "Class 9, Grade 1"; for example, when a surname is entered, the associated content 801 may include the names of all students with that surname as shown in fig. 8, so that Teacher Wang can quickly complete the input by selection. Assuming that Teacher Wang has configured a student name for the subject 703, the above-described face area indication frame 703a may be replaced by the label 703b as shown in fig. 9.
When the server performs face detection on the subjects in a photo, detection may also fail. For example, in fig. 7-9, the server fails to detect the face region of the subject 704, so the server does not even show a corresponding face area indication frame for the subject 704. In that case, the teacher or a parent of a student can indicate the position of the face region of the subject 704 to the server by clicking or circling that face region in the photo, so that the server can perform face recognition on it; if the recognition succeeds, the server generates a tag corresponding to the subject 704, otherwise the server may show a face area indication frame corresponding to the subject 704 so that the teacher or a parent of a student can configure a name for the subject 704, for which reference may be made to the relevant operation on the subject 703. Of course, the teacher or a parent of a student can also directly call up the input box 80 shown in fig. 8 by clicking or circling the face region of the subject 704 in the photo, so as to configure a name for the subject 704 directly and form the corresponding tag, without the server performing face recognition.
In addition, the server may add an erroneous tag due to a face recognition error, and in some cases the teacher or a parent of a student may also configure an erroneous tag. The teacher or the parent can call up the input box 80 to modify the tag by triggering the corresponding tag, or delete the erroneous tag by triggering the deletion mark corresponding to the tag.
For the photo shown in fig. 9, the tags 701a, 702a, 703b and so on may be used when the photo is subsequently added to the corresponding growth album, without being displayed in the class circle; alternatively, the tags may also be displayed in the corresponding photo when it is published in the class circle, so that the students can still be accurately identified after a long interval.
Other ways of tagging may be used if the mapping between subject and name need not be indicated. For example, as shown in fig. 10, each tag may be presented separately under the photo instead of near the subject within the photo. The teacher or the parents of the students can modify or delete the generated tags and can add new tags.
It should be noted that face recognition does not necessarily have to be adopted in the technical solutions of this specification. For example, the tags in fig. 9-10 may be added entirely manually by the teacher or the parents of the students.
Before the class dynamic is published, Teacher Wang can check how tags have been added to the subjects in each photo, add corresponding tags to subjects without tags, and then publish the class dynamic. Alternatively, Teacher Wang may check and adjust the tagging of the subjects in the photos after the class dynamic is published; at that point the parents of the students can also view the photos and their tag content, so that the parents can likewise check and adjust the tags contained in the photos.
By triggering the "publish" option as shown in fig. 6, the king teacher may publish the corresponding class dynamics at class "grade one 9. While in thegroup chat interface 1100 shown in fig. 11, the parent "dad can receive acorresponding push message 1101. Although the sender of thepush message 1101 is shown as a king teacher, thepush message 1101 may actually be automatically generated and sent by the server according to the class dynamics issued by the king teacher, without the need for manual sending by the king teacher. Similarly, other group members (teachers or parents of students) within the group may each receive a corresponding push message.
Take the push message 1101 received by the parent "Xiaobai's dad" as an example. The push message 1101 may include a preview image 1101a, preview text 1101b, and so on. Naturally, the parent "Xiaobai's dad" is concerned with the student "Xiaobai" rather than the other students of the class, so when generating the push message 1101 the server can, according to the relationship between the parent "Xiaobai's dad" and the student "Xiaobai", specifically set the preview image 1101a to a photo containing the student "Xiaobai" (such as the photo shown in fig. 9 above), so that the parent "Xiaobai's dad" sees a photo containing the student "Xiaobai" at the first moment. Similarly, when the parents of other students receive their corresponding push messages, the preview image of each push message is a photo containing the corresponding student, so that different parents receive different push messages for the same class dynamic.
Still taking the push message 1101 received by the parent "Xiaobai's dad" as an example, by triggering the push message 1101 (such as the "view class circle" option corresponding to the push message 1101 in fig. 11) or the "class circle" function button included in the group chat interface 1100, the parent "Xiaobai's dad" can switch from the group chat interface 1100 to the class circle interface 1200 shown in fig. 12 to view the above-mentioned class dynamic or other historical class dynamics published by Teacher Wang.
As shown in fig. 12, the text content 1201 and the image collection 1202 contained in the class dynamic may be shown in the class circle interface 1200. Assume that the image collection 1202 contains 50 photos in total, only some of which contain the student "Xiaobai". Since the parent "Xiaobai's dad" is most concerned with the student "Xiaobai", in the class circle interface 1200 the photos containing the student "Xiaobai" can be arranged before the remaining photos in the image collection 1202, for example by showing the photos 1202a-1202c containing the student "Xiaobai" first, starting from the left of the first row, so that the parent "Xiaobai's dad" sees the photos containing the student "Xiaobai" at the first moment. Similarly, for the other parents, the image collection 1202 pushed to each parent may be arranged according to the relationship between that parent and the students, so that each parent can preferentially view the photos containing his or her own child.
Because each photo uploaded by Teacher Wang has tags corresponding to its subjects, the server can determine which students each photo contains according to the student names contained in the tags. Meanwhile, the parents of each student pay attention to the growth process of their own children. Thus, the communication application T may provide the "growth album" function described above. Still taking the parent "Xiaobai's dad" as an example: after receiving the photos uploaded by Teacher Wang, the server can determine all photos containing the student "Xiaobai" and add them to the growth album corresponding to the student "Xiaobai"; accordingly, the parent "Xiaobai's dad" can view all photos containing the student "Xiaobai" in the growth album presentation interface 1300 shown in fig. 13, without having to pick them out from all the photos in the class circle.
In the growth album presentation interface 1300, the photos containing the student "Xiaobai" may be arranged and displayed in reverse chronological order in the form of an information flow. For example, after Teacher Wang publishes a class dynamic "today", the growth album presentation interface 1300 may display the 3 photos of that class dynamic containing the student "Xiaobai" at the top of the interface, marked "today", and display below them, in turn, photos published at historical times such as "last week", "June 1" and "May 28", so that the parent "Xiaobai's dad" can browse the photos along the timeline, which reflects the growth course of the student "Xiaobai", without actively distinguishing the shooting time or shooting order of the photos. Similarly, every other parent can view the growth album corresponding to his or her own child and view in one place the photos containing that child. Of course, besides the information-flow presentation, the growth album may adopt other presentation manners, for example arranging the photos in chronological order of shooting time, which this specification does not limit.
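The reverse-chronological, grouped feed of the growth album could be assembled as in the following sketch, assuming each photo record carries a comparable publish date (the grouping granularity and labels such as "today" are presentation details left to the client):

from itertools import groupby

def growth_album_feed(photos):
    # photos: list of (publish_date, photo_id), publish_date comparable (e.g., datetime.date)
    ordered = sorted(photos, key=lambda p: p[0], reverse=True)   # newest first
    return [(date, [photo_id for _, photo_id in group])
            for date, group in groupby(ordered, key=lambda p: p[0])]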
FIG. 14 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to FIG. 14, at the hardware level, the apparatus includes a processor 1402, an internal bus 1404, a network interface 1406, a memory 1408, and a non-volatile storage 1410, and may of course also include hardware required for other services. The processor 1402 reads a corresponding computer program from the non-volatile storage 1410 into the memory 1408 and then runs it, forming a subject-based image processing apparatus at the logical level. Of course, besides software implementations, the one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Referring to fig. 15, in one software implementation, the subject-based image processing apparatus may include:
an acquisition unit 1501 which acquires an uploaded image set related to a group;
an identifying unit 1502 configured to identify a subject in an image included in the image set according to a preset object library corresponding to the group, where the preset object library includes feature information of a preset associated object corresponding to the group;
the adding unit 1503 is configured to add any image to an album corresponding to any preset associated object when the subject in the any image matches any preset associated object.
Optionally, the feature information included in the preset object library includes: facial feature information of the preset associated object; the identifying unit 1502 is specifically configured to:
identifying a face region of a subject in images contained in the image set by a face detection technique;
and comparing the facial feature information extracted from the facial region with facial feature information contained in the preset object library to determine the shot object matched with the preset associated object.
Optionally, the method further includes:
an annotation unit 1504 that annotates a photographic subject for which no matching preset associated object exists in the images included in the image set;
the establishing unit 1505 establishes a matching relationship between the marked shot object and a preset associated object indicated by the user instruction according to the received user instruction.
Optionally,
each preset associated object has a corresponding label, and the label content comprises the description information of the corresponding preset associated object;
the establishing unit 1505 is specifically configured to: adding a corresponding label for the marked shot object according to the user instruction, wherein the label content comprises object description information contained in the user instruction;
the device further comprises: the deleting unit 1506 is configured to delete the tag added to the tagged subject or generate a tag error prompt when the matching result of the tag content indicates that the tagged subject does not have a preset associated object matching with the tagged subject.
Optionally, the method further includes:
an information adding unit 1507 adds visual description information for a matching relationship between a subject included in an image and a preset associated object to the corresponding image.
Optionally, the method further includes:
an object adding unit 1508 adds an object not detected in the image as a subject included in the image, in accordance with the received user instruction.
Optionally, the method further includes:
the relationship adjustment unit 1509 modifies or deletes the matching relationship between the photographic subject and the preset associated object according to the received user instruction.
Optionally, the method further includes:
a determining unit 1510, configured to determine an attention image corresponding to each group member in the group in the image set, respectively, where a subject contained in the attention image matches a preset associated object corresponding to the corresponding group member;
the message pushing unit 1511 is configured to push a release message for the image set to each group member of the group, where the preview image included in the release message is a focused image corresponding to the pushed group member.
Optionally, the method further includes:
a determining unit 1510, configured to determine an attention image corresponding to each group member in the group in the image set, respectively, where a subject contained in the attention image matches a preset associated object corresponding to the corresponding group member;
the image pushing unit 1512 sets the display order of the attention images corresponding to the pushed group members before the rest of the images when the image sets are pushed to each group member of the group.
Optionally, the preset associated object includes at least one of: the group member of the group and the non-group member which has preset association with the group member.
Optionally, the access right of the image set belongs to a group member of the group.
Referring to fig. 16A, in one software implementation, the subject-based image processing apparatus may include:
an obtaining unit 1601a, configured to obtain an uploaded group-related image set;
an identifying unit 1602a, configured to identify a subject in an image included in the image set according to a preset object library corresponding to the group, where the preset object library includes feature information of a preset associated object corresponding to the group;
a determining unit 1603a, which respectively determines the attention images corresponding to the group members in the group in the image set, wherein the attention images contain the shot objects matched with the preset associated objects corresponding to the corresponding group members;
the setting unit 1604a sets the attention image corresponding to the pushed group member as a priority display when pushing the message related to the image set to each group member of the group.
Optionally, the message related to the image set is a release message for the image set, and the release message is used for linking to a presentation interface of the image set; the setting unit 1604a is specifically configured to:
and setting the preview image of the release message as the attention image corresponding to the pushed group member.
Optionally, the message related to the image set is a release message for the image set; the setting unit 1604a is specifically configured to:
generating the publication message as a presentation interface for linking to an image of interest in the image collection corresponding to the pushed group member.
Optionally, the message related to the image set is the image set itself; the setting unit 1604a is specifically configured to:
and setting the display sequence among the images contained in the image set according to the pushed group members, so that the attention images corresponding to the pushed group members have the display sequence prior to the display sequence of the rest images.
Referring to fig. 16B, in one software implementation, the subject-based image processing apparatus may include:
an acquiring unit 1601b, configured to acquire an uploaded group-related image set;
an identifying unit 1602b, configured to identify a subject in an image included in the image set according to a preset object library corresponding to the group, where the preset object library includes feature information of a preset associated object corresponding to the group;
a determining unit 1603b, which respectively determines the attention images corresponding to the group members in the group in the image set, wherein the attention images contain the shot objects matched with the preset associated objects corresponding to the corresponding group members;
a returning unit 1604b, returns a message related to the image set to any group member according to the image acquisition request initiated by any group member in the group, where the message is used to preferentially display a corresponding attention image to any group member.
Optionally, the message related to the image set is a release message for the image set, and the release message is used for linking to a presentation interface of the image set; the return unit 1604b is specifically configured to:
and setting the preview image of the release message as the attention image corresponding to the pushed group member.
Optionally, the message related to the image set is a release message for the image set; the return unit 1604b is specifically configured to:
generating the publication message as a presentation interface for linking to an image of interest in the image collection corresponding to the pushed group member.
Optionally, the message related to the image set is the image set itself; the return unit 1604b is specifically configured to:
and setting the display sequence among the images contained in the image set according to the pushed group members, so that the attention images corresponding to the pushed group members have the display sequence prior to the display sequence of the rest images.
Referring to fig. 17, in one software implementation, the subject-based image processing apparatus may include:
a receiving unit 1701 that receives an image set pushed by a server, wherein the image set is pushed to each group member of a group to which a home terminal user belongs; when a shot object of any image in the image set is matched with the characteristic information corresponding to the home terminal user, marking the image as a concerned image corresponding to the home terminal user;
the display unit 1702 preferentially displays the attention image corresponding to the home terminal user when displaying the image set.
Optionally, the method further includes: an upload unit 1703 uploads an image to a server, so that the server extracts feature information included in the image and associates the feature information to a home terminal user.
Referring to fig. 18, in one software implementation, the subject-based image processing apparatus may include:
a determining unit 1801, configured to determine an image set that needs to be uploaded to a server in a group-by-group manner; the server maintains characteristic information corresponding to each group member in the group, so that when a shot object of any image in the image set is matched with the characteristic information corresponding to any group member, the any image is marked as a concerned image corresponding to any group member;
an upload unit 1802 uploads the image set to the server, so that when the server pushes the image set to any group member, the attention image corresponding to any group member is preferentially displayed.
Referring to fig. 19, in one software implementation, the subject-based image processing apparatus may include:
a determining unit 1901, configured to determine an image set that needs to be uploaded to a server in a group, where the server maintains feature information corresponding to each group member in the group;
an upload unit 1902 that uploads the image collection to the server; when the shot object of any image in the image set is matched with the characteristic information corresponding to any group member, the any image is added to the photo album corresponding to the any group member.
Referring to fig. 20, in one software implementation, the subject-based image processing apparatus may include:
an identifying unit 2001 that identifies a subject in an image included in a local album;
a determining unit 2002 that determines a preset object included in each image according to the recognition result;
the display unit 2003 displays images in the local album in an arranged manner according to a predefined arrangement sequence among the preset objects.
FIG. 21 is a schematic block diagram of an apparatus provided in an exemplary embodiment. Referring to fig. 21, at the hardware level, the apparatus includes a processor 2102, an internal bus 2104, a network interface 2106, a memory 2108 and a non-volatile memory 2110, and may of course also include hardware required for other services. The processor 2102 reads a corresponding computer program from the non-volatile memory 2110 into the memory 2108 and runs it, thereby forming a processing apparatus of multimedia files at the logical level. Of course, besides software implementations, the one or more embodiments of this specification do not exclude other implementations, such as logic devices or combinations of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Referring to fig. 22, in a software implementation, the processing device of the multimedia file may include:
an acquiring unit 2201, acquiring a multimedia file set related to the group;
an identifying unit 2202, configured to identify an acquired object in the multimedia files included in the multimedia file set according to a preset object library corresponding to the group, where the preset object library includes feature information of a preset associated object corresponding to the group;
a determining unit 2203, configured to determine an attention multimedia file corresponding to each group member in the group in the multimedia file set, where an object to be acquired in the attention multimedia file matches a preset associated object corresponding to the corresponding group member;
the setting unit 2204 is configured to set the concerned multimedia files corresponding to the pushed group members as the ranking first when the group members of the group are pushed messages related to the multimedia file set.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email messaging device, game console, tablet computer, wearable device, or a combination of any of these devices.
In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage media or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer readable medium does not include a transitory computer readable medium such as a modulated data signal and a carrier wave.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The terminology used in the description of the one or more embodiments is for the purpose of describing the particular embodiments only and is not intended to be limiting of the description of the one or more embodiments. As used in one or more embodiments of the present specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in one or more embodiments of the present description to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of one or more embodiments herein. The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination", depending on the context.
The above description is only for the purpose of illustrating the preferred embodiments of the one or more embodiments of the present disclosure, and is not intended to limit the scope of the one or more embodiments of the present disclosure, and any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the one or more embodiments of the present disclosure should be included in the scope of the one or more embodiments of the present disclosure.

Claims (33)

1. An image processing method based on a subject, comprising:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining attention images corresponding to all group members in the group in the image set, wherein shot objects contained in the attention images are matched with preset associated objects corresponding to the corresponding group members;
and when the information related to the image set is respectively pushed to each group member of the group, setting the concerned image corresponding to the pushed group member as a priority display.
2. The method of claim 1, wherein the message related to the image collection is a posting message for the image collection, the posting message for linking to a presentation interface of the image collection; the setting of the attention image corresponding to the pushed group member as a priority display includes:
and setting the preview image of the release message as the attention image corresponding to the pushed group member.
3. The method of claim 1, wherein the message related to the image collection is a posting message for the image collection; the setting of the attention image corresponding to the pushed group member as a priority display includes:
generating the publication message as a presentation interface for linking to an image of interest in the image collection corresponding to the pushed group member.
4. The method of claim 1, wherein the message related to the image collection is the image collection itself; the setting of the attention image corresponding to the pushed group member as a priority display includes:
and setting the display sequence among the images contained in the image set according to the pushed group members, so that the attention images corresponding to the pushed group members have the display sequence prior to the display sequence of the rest images.
5. An image processing method based on a subject, comprising:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining attention images corresponding to all group members in the group in the image set, wherein shot objects contained in the attention images are matched with preset associated objects corresponding to the corresponding group members;
and returning a message related to the image set to any group member according to an image acquisition request initiated by any group member in the group, wherein the message is used for preferentially displaying a corresponding attention image to any group member.
6. A method for processing a multimedia file, comprising:
acquiring a multimedia file set related to a group;
identifying an acquired object in the multimedia files contained in the multimedia file set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
respectively determining concerned multimedia files corresponding to each group member in the group in the multimedia file set, wherein the collected objects contained in the concerned multimedia files are matched with preset associated objects corresponding to the corresponding group members;
and when the information related to the multimedia file set is respectively pushed to each group member of the group, setting the concerned multimedia files corresponding to the pushed group members as the first ranking.
7. An image processing method based on a subject, comprising:
acquiring an uploaded image set related to the group;
identifying a shot object in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of a preset associated object corresponding to the group;
when a shot object in any image is matched with any preset associated object, adding the image into the album corresponding to the preset associated object.
8. The method according to claim 7, wherein the feature information contained in the preset object library comprises: facial feature information of the preset associated object; the identifying the shot object in the image contained in the image set according to the preset object library corresponding to the group comprises:
identifying a face region of a subject in images contained in the image set by a face detection technique;
and comparing the facial feature information extracted from the facial region with facial feature information contained in the preset object library to determine the shot object matched with the preset associated object.
9. The method of claim 7, further comprising:
marking shot objects which do not have matched preset associated objects in the images contained in the image set;
and according to the received user instruction, establishing a matching relation between the marked shot object and a preset associated object indicated by the user instruction.
10. The method of claim 9, wherein:
each preset associated object has a corresponding label, the label content comprising description information of the corresponding preset associated object;
establishing, according to the received user instruction, the matching relation between the marked shot object and the preset associated object indicated by the user instruction comprises: adding a corresponding label to the marked shot object according to the user instruction, the label content comprising the object description information contained in the user instruction;
and the method further comprises: when matching of the label content indicates that the labeled shot object has no matching preset associated object, deleting the label added to the labeled shot object, or generating a labeling error prompt.
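A small sketch of the labelling flow of claims 9 and 10, assuming labels are plain description strings: a user-supplied label is attached to a marked shot object, its content is matched against the labels of the preset associated objects, and on a miss the label is deleted and an error prompt is produced. All identifiers below are illustrative.

```python
from typing import Dict, Optional

# Preset associated objects: object ID -> label content (description text), assumed layout.
preset_labels: Dict[str, str] = {"member_alice": "Alice (design team)", "member_bob": "Bob (QA)"}

# Marked shot objects without a matched preset associated object: marker ID -> added label.
pending_labels: Dict[str, str] = {}


def label_marked_object(marker_id: str, label_content: str) -> Optional[str]:
    """Attach a label to a marked shot object and try to match its content to a preset object.
    Returns the matched object ID, or None after deleting the label and emitting a prompt."""
    pending_labels[marker_id] = label_content
    for object_id, preset_label in preset_labels.items():
        if label_content.strip().lower() in preset_label.lower():
            return object_id                  # matching relation established
    del pending_labels[marker_id]             # no match: delete the added label
    print(f"Label error: '{label_content}' matches no preset associated object")
    return None


print(label_marked_object("face_07", "Alice"))    # member_alice
print(label_marked_object("face_08", "Charlie"))  # None, label removed
```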
11. The method of claim 7 or 9, further comprising:
adding, in the corresponding image, visual description information for the matching relation between a shot object contained in the image and a preset associated object.
12. The method of claim 7, further comprising:
adding, according to a received user instruction, an object that was not detected in an image as a shot object contained in that image.
13. The method of claim 7, further comprising:
modifying or deleting, according to a received user instruction, the matching relation between a shot object and a preset associated object.
14. The method of claim 7, further comprising:
determining, in the image set, attention images corresponding to each group member of the group, wherein the shot objects contained in the attention images match the preset associated objects corresponding to the respective group members;
and pushing, to each group member of the group, a release message for the image set, wherein the preview image contained in the release message is an attention image corresponding to the pushed group member.
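For the release message of claim 14, one straightforward choice, sketched below, is to use the pushed member's first attention image as the preview and fall back to the first image of the set when that member has none; the message layout and field names are assumptions.

```python
from typing import Dict, List, Set


def build_release_message(image_ids: List[str],
                          attention: Dict[str, Set[str]],
                          member_id: str) -> Dict[str, str]:
    """Build a per-member release message whose preview is that member's attention image,
    falling back to the first image of the (non-empty) set."""
    preferred = [i for i in image_ids if i in attention.get(member_id, set())]
    preview = preferred[0] if preferred else image_ids[0]
    return {"type": "album_published", "preview_image": preview, "member": member_id}


images = ["IMG_1", "IMG_2", "IMG_3"]
attention_map = {"bob": {"IMG_3"}}
print(build_release_message(images, attention_map, "bob"))    # preview is IMG_3
print(build_release_message(images, attention_map, "carol"))  # falls back to IMG_1
```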
15. The method of claim 7, further comprising:
determining, in the image set, attention images corresponding to each group member of the group, wherein the shot objects contained in the attention images match the preset associated objects corresponding to the respective group members;
and when the image set is pushed to each group member of the group, placing the attention images corresponding to the pushed group member before the remaining images in the display order.
16. The method of claim 7, wherein the preset associated objects comprise at least one of: a group member of the group, and a non-group member having a preset association with a group member.
17. The method of claim 7, wherein access rights to the image set belong to the group members of the group.
18. An image processing method based on a subject, comprising:
receiving an image set pushed by a server, wherein the image set is pushed to each group member of the group to which the home terminal user belongs, and when a shot object of any image in the image set matches the characteristic information corresponding to the home terminal user, that image is marked as an attention image corresponding to the home terminal user;
and, when the image set is displayed, preferentially displaying the attention images corresponding to the home terminal user.
19. An image processing method based on a subject, comprising:
determining an image set to be uploaded to a server for a group, wherein the server maintains characteristic information corresponding to each group member of the group, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is marked as an attention image corresponding to that group member;
and uploading the image set to the server, so that when the server pushes the image set to any group member, the attention images corresponding to that group member are preferentially displayed.
20. An image processing method based on a subject, comprising:
identifying subjects in the images contained in a local album;
determining, according to the recognition result, the preset objects contained in each image;
and displaying the images in the local album arranged according to a predefined arrangement sequence among the preset objects.
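Claim 20 orders a local album by a predefined arrangement sequence among preset objects. The sketch below sorts images by the earliest-ranked preset object recognized in them, placing unrecognized images last; the recognition step is abstracted away and the data shapes are assumptions.

```python
from typing import Dict, List, Set

# Predefined arrangement sequence among preset objects (earlier = shown first), assumed.
arrangement = ["daughter", "son", "grandma"]
rank = {obj: i for i, obj in enumerate(arrangement)}


def arrange_album(images: Dict[str, Set[str]]) -> List[str]:
    """images maps image ID -> preset objects recognized in it; returns the display order."""
    def best_rank(image_id: str) -> int:
        recognized = images[image_id]
        return min((rank[o] for o in recognized if o in rank), default=len(arrangement))
    return sorted(images, key=best_rank)


album = {"p1.jpg": {"grandma"}, "p2.jpg": {"son"}, "p3.jpg": set(), "p4.jpg": {"daughter", "son"}}
print(arrange_album(album))  # ['p4.jpg', 'p2.jpg', 'p1.jpg', 'p3.jpg']
```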
21. An image processing method based on a subject, comprising:
determining an image set to be uploaded to a server for a group, wherein the server maintains characteristic information corresponding to each group member of the group;
and uploading the image set to the server, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is added to the album corresponding to that group member.
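A client-side upload along the lines of claim 21 might look like the following sketch; the endpoint URL, group identifier and response format are hypothetical, and the server-side matching and album filing are assumed to happen after the upload completes.

```python
import requests  # pip install requests

GROUP_ID = "family_group_42"
# Hypothetical endpoint; the real service and its API are not specified by the claim.
UPLOAD_URL = "https://example.invalid/api/groups/{group_id}/albums"


def upload_image_set(image_paths):
    """Upload an image set for the group; the server is assumed to match shot objects against
    each member's characteristic information and file matching images into member albums."""
    files = [("images", (path, open(path, "rb"), "image/jpeg")) for path in image_paths]
    response = requests.post(UPLOAD_URL.format(group_id=GROUP_ID), files=files, timeout=30)
    response.raise_for_status()
    return response.json()


# upload_image_set(["IMG_0001.jpg", "IMG_0002.jpg"])
```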
22. An image processing apparatus based on a subject, comprising:
an acquiring unit, configured to acquire an uploaded image set related to a group;
an identifying unit, configured to identify shot objects in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of preset associated objects corresponding to the group;
a determining unit, configured to determine, in the image set, attention images corresponding to each group member of the group, wherein the shot objects contained in the attention images match the preset associated objects corresponding to the respective group members;
and a setting unit, configured to set, when information related to the image set is pushed to each group member of the group, the attention images corresponding to the pushed group member for priority display.
23. An image processing apparatus based on a subject, comprising:
an acquiring unit, configured to acquire an uploaded image set related to a group;
an identifying unit, configured to identify shot objects in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of preset associated objects corresponding to the group;
a determining unit, configured to determine, in the image set, attention images corresponding to each group member of the group, wherein the shot objects contained in the attention images match the preset associated objects corresponding to the respective group members;
and a returning unit, configured to return, according to an image acquisition request initiated by any group member of the group, a message related to the image set to that group member, wherein the message causes the attention image corresponding to that group member to be preferentially displayed.
24. An apparatus for processing multimedia files, comprising:
an acquiring unit, configured to acquire a multimedia file set related to a group;
an identifying unit, configured to identify collected objects in the multimedia files contained in the multimedia file set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of preset associated objects corresponding to the group;
a determining unit, configured to determine, in the multimedia file set, attention multimedia files corresponding to each group member of the group, wherein the collected objects contained in the attention multimedia files match the preset associated objects corresponding to the respective group members;
and a setting unit, configured to rank, when information related to the multimedia file set is pushed to each group member of the group, the attention multimedia files corresponding to the pushed group member first.
25. An image processing apparatus based on a subject, comprising:
an acquiring unit, configured to acquire an uploaded image set related to a group;
an identifying unit, configured to identify shot objects in the images contained in the image set according to a preset object library corresponding to the group, wherein the preset object library contains characteristic information of preset associated objects corresponding to the group;
and an adding unit, configured to add, when a shot object in any image matches any preset associated object, that image to the album corresponding to that preset associated object.
26. An image processing apparatus based on a subject, comprising:
a receiving unit, configured to receive an image set pushed by a server, wherein the image set is pushed to each group member of the group to which the home terminal user belongs, and when a shot object of any image in the image set matches the characteristic information corresponding to the home terminal user, that image is marked as an attention image corresponding to the home terminal user;
and a displaying unit, configured to preferentially display, when the image set is displayed, the attention images corresponding to the home terminal user.
27. An image processing apparatus based on a subject, comprising:
a determining unit, configured to determine an image set to be uploaded to a server for a group, wherein the server maintains characteristic information corresponding to each group member of the group, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is marked as an attention image corresponding to that group member;
and an uploading unit, configured to upload the image set to the server, so that when the server pushes the image set to any group member, the attention images corresponding to that group member are preferentially displayed.
28. An image processing apparatus based on a subject, comprising:
an identifying unit, configured to identify subjects in the images contained in a local album;
a determining unit, configured to determine, according to the recognition result, the preset objects contained in each image;
and a displaying unit, configured to display the images in the local album arranged according to a predefined arrangement sequence among the preset objects.
29. An image processing apparatus based on a subject, comprising:
a determining unit, configured to determine an image set to be uploaded to a server for a group, wherein the server maintains characteristic information corresponding to each group member of the group;
and an uploading unit, configured to upload the image set to the server, so that when a shot object of any image in the image set matches the characteristic information corresponding to any group member, that image is added to the album corresponding to that group member.
30. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-17 by executing the executable instructions.
31. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 1-17.
32. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 18-21 by executing the executable instructions.
33. A computer-readable storage medium having stored thereon computer instructions, which, when executed by a processor, carry out the steps of the method according to any one of claims 18-21.
CN201910609946.0A | Priority date: 2019-07-08 | Filing date: 2019-07-08 | Image processing method and device based on shot object | Status: Active | Grant publication: CN112199541B (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
CN201910609946.0A (CN112199541B) | 2019-07-08 | 2019-07-08 | Image processing method and device based on shot object
PCT/CN2020/099877 (WO2021004364A1) | 2019-07-08 | 2020-07-02 | Method and apparatus for processing image on basis of captured subjects

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201910609946.0A (CN112199541B) | 2019-07-08 | 2019-07-08 | Image processing method and device based on shot object

Publications (2)

Publication Number | Publication Date
CN112199541A (en) | 2021-01-08
CN112199541B (en) | 2024-07-16

Family

Family ID: 74004785

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910609946.0A (Active; granted as CN112199541B) | Image processing method and device based on shot object | 2019-07-08 | 2019-07-08

Country Status (2)

Country | Link
CN (1) | CN112199541B (en)
WO (1) | WO2021004364A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN101359334A (en)* | 2007-07-31 | 2009-02-04 | LG Electronics Inc. | Portable terminal and image information managing method therefor
US20090077186A1 (en)* | 2007-09-17 | 2009-03-19 | Inventec Corporation | Interface, system and method of providing instant messaging service
CN104281657A (en)* | 2014-09-19 | 2015-01-14 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic device
CN108228715A (en)* | 2017-12-05 | 2018-06-29 | Shenzhen Gionee Communication Equipment Co., Ltd. | A kind of method, terminal and computer readable storage medium for showing image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN109241336A (en)* | 2018-08-23 | 2019-01-18 | Gree Electric Appliances, Inc. of Zhuhai | Music recommendation method and device

Also Published As

Publication number | Publication date
CN112199541B (en) | 2024-07-16
WO2021004364A1 (en) | 2021-01-14

Similar Documents

Publication | Title
US11651619B2 (en) | Private photo sharing system, method and network
US20210227284A1 (en) | Providing visual content editing functions
US9619713B2 (en) | Techniques for grouping images
EP3713159B1 (en) | Gallery of messages with a shared interest
US9338242B1 (en) | Processes for generating content sharing recommendations
US9531823B1 (en) | Processes for generating content sharing recommendations based on user feedback data
US20170371496A1 (en) | Rapidly skimmable presentations of web meeting recordings
US8983150B2 (en) | Photo importance determination
US20160155475A1 (en) | Method And System For Capturing Video From A Plurality Of Devices And Organizing Them For Editing, Viewing, And Dissemination Based On One Or More Criteria
US20150269236A1 (en) | Systems and methods for adding descriptive metadata to digital content
CN106716393A (en) | Method and apparatus for identifying and matching objects depicted in images
WO2020187012A1 (en) | Communication method, apparatus and device, and group creation method, apparatus and device
CN105630954A (en) | Method and device for synthesizing dynamic pictures on basis of photos
CN105404696A (en) | Method, system and device for downloading photographs in photograph album
CN111480168B (en) | Context-based image selection
US11455693B2 (en) | Visual focal point composition for media capture based on a target recipient audience
US20160267068A1 (en) | System, method and process for multi-modal annotation and distribution of digital object
CN105512328A (en) | Method, system and device for realizing uploading of album photos
JP2019117553A (en) | Information presentation device, method and program
WO2015022689A1 (en) | Media object selection
CN112199541B (en) | Image processing method and device based on shot object
HK40044614A (en) | Image processing method and device based on photographed object
JP2014182650A (en) | Image sharing device, method for controlling image sharing device and program
TW202037129A (en) | Communication method, device and equipment and group establishment method, device and equipment wherein one or more message classification management functions are set according to the actual needs of the group
CN111158838B (en) | Information processing method and device

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
REG | Reference to a national code | Ref country code: HK; Ref legal event code: DE; Ref document number: 40044614; Country of ref document: HK
TA01 | Transfer of patent application right | Effective date of registration: 2024-06-05; Address after: Room 527, 5th Floor, Building 3, No. 969 Wenyi West Road, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province; Applicant after: Nail (China) Information Technology Co.,Ltd.; Country or region after: China; Address before: PO Box 31119 KY1-1205, Hongge, Furong Road, 802 West Bay Road, Grand Cayman Islands, Cayman Islands; Applicant before: Nail holding (Cayman) Co.,Ltd.; Country or region before: Britain
GR01 | Patent grant |
