CN105787976A - Method and apparatus for processing pictures - Google Patents

Method and apparatus for processing pictures

Info

Publication number
CN105787976A
CN105787976A
Authority
CN
China
Prior art keywords
synthesized
data
expression information
picture
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610101677.3A
Other languages
Chinese (zh)
Inventor
刘佳 (Liu Jia)
张建朝 (Zhang Jianchao)
刘国平 (Liu Guoping)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Jinli Communication Equipment Co Ltd
Original Assignee
Shenzhen Jinli Communication Equipment Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Jinli Communication Equipment Co Ltd
Priority to CN201610101677.3A
Publication of CN105787976A
Legal status: Pending

Abstract

The embodiments of the invention disclose a method and apparatus for processing pictures. The method comprises the following steps: obtaining expression information in a target picture; obtaining data to be synthesized that matches the expression information; and synthesizing the data to be synthesized with the target picture to obtain a synthesized picture. According to the embodiments of the invention, the synthesized picture can be obtained by synthesizing the data matched with the expression information into the target picture without professional picture-processing tools; the operation is simple and convenient, and the user experience is improved.

Description

Method and apparatus for processing pictures
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for processing a picture.
Background
With the popularization and development of electronic technology, most terminals such as mobile phones and computers have a photographing function. Taking pictures with a terminal has become a common form of entertainment, and the resulting pictures serve as carriers with which people record moments of their lives, capturing the different expressions, such as happiness, surprise, sadness, anger, fear, and disgust, on people's faces at the instant of shooting.
In the course of research and practice, the inventor found that, to enhance interest, pictures can currently be processed in several ways; for example, professional picture-editing software can be used to add text corresponding to a facial expression onto a picture. Specifically, when processing the picture, the user first identifies the expression of the person in the picture, then edits text corresponding to the identified expression, and finally chooses to synthesize the edited text with the picture. This method requires professional picture-processing software, and its operation flow is complex, which degrades the user experience.
Disclosure of Invention
The embodiment of the invention discloses a method and a device for processing pictures, which can improve user experience.
A first aspect of an embodiment of the present invention provides a method for processing an image, including:
obtaining expression information in a target picture;
acquiring data to be synthesized matched with the expression information;
and synthesizing the data to be synthesized and the target picture to obtain a synthesized picture.
The second aspect of the present invention provides an apparatus for processing pictures, comprising:
the first acquisition unit is used for acquiring expression information in the target picture;
the second acquisition unit is used for acquiring data to be synthesized matched with the expression information;
and the synthesizing unit is used for synthesizing the data to be synthesized acquired by the second acquisition unit with the target picture to obtain a synthesized picture.
With reference to the second aspect, in a first possible implementation manner of the second aspect,
the data to be synthesized includes at least one of the following information: text data, picture data, audio data, and video data.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture. The data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that a synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings based on them without creative effort.
Fig. 1 is a schematic flowchart of a method for processing pictures according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating another method for processing pictures according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating another method for processing pictures according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating another method for processing pictures according to an embodiment of the present invention;
FIG. 5 is a flowchart illustrating another method for processing pictures according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of an apparatus for processing pictures according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another apparatus for processing pictures according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of another apparatus for processing pictures according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of another apparatus for processing pictures according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method and a device for processing pictures, which can simplify the picture processing process and improve the user experience.
In order to make the technical solutions of the present invention better understood by those skilled in the art, the technical solutions in the embodiments of the present invention will be clearly described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises" and "comprising," and any variations thereof, in the description and claims of this invention and the above-described drawings are intended to cover non-exclusive inclusions. For example, a process, method, or system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for processing a picture according to an embodiment of the present invention, and as shown in fig. 1, the method for processing a picture according to an embodiment of the present invention may include:
101. the terminal obtains expression information in the target picture.
The terminal in the embodiment of the present invention may be a digital camera, a smart phone, an intelligent wearable device (such as a smart watch and a smart bracelet), or other various electronic devices with a shooting function or a picture browsing function, which is not limited in the embodiment of the present invention.
The target picture may be a picture taken in real time or a picture selected from a picture library; specifically, it may be a photographed picture or a picture containing facial expression information drawn by other means, such as with drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and they are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression determined by integrating them: if the corners of the user's mouth are raised, the expression may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised, and so on.
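The feature-integration rule described above can be sketched as a small classifier. This is an illustrative sketch, not the patent's implementation: the feature names, thresholds, and labels are all assumptions, and the numeric features are assumed to come from a separate face-landmark detector.

```python
# Hypothetical rule-based expression classifier. Feature values are
# assumed to be normalized measurements from a face-landmark detector.

def classify_expression(mouth_corner_lift: float,
                        eye_openness: float,
                        mouth_openness: float) -> str:
    """Map simple geometric features to an expression label."""
    if eye_openness > 0.8 and mouth_openness > 0.6:
        return "surprised"      # eyes wide open, mouth open
    if mouth_corner_lift > 0.3:
        return "happy"          # mouth corners raised
    if mouth_corner_lift < -0.3:
        return "sad"            # mouth corners turned down
    return "natural"

print(classify_expression(0.5, 0.4, 0.1))   # raised mouth corners -> happy
print(classify_expression(0.0, 0.9, 0.8))   # wide eyes, open mouth -> surprised
```

In practice, such hand-written rules would be replaced by a trained facial-expression model; the thresholds here only illustrate the "integrate local features" idea.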
102. And the terminal acquires the data to be synthesized matched with the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance. For example, if the expression information is happy, the matching data to be synthesized may be set as the text "happy", with an artistic font and a bright, iridescent color.
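Such a pre-set mapping can be sketched as a plain dictionary keyed by expression label. The labels, texts, and style fields below are illustrative assumptions, not values from the patent:

```python
# Hypothetical pre-stored mapping from expression labels to the data to
# be synthesized, including simple style attributes for text entries.
SYNTHESIS_TABLE = {
    "happy":     {"text": "happy!",   "font": "artistic", "color": "iridescent"},
    "surprised": {"text": "wow!",     "font": "artistic", "color": "yellow"},
    "sad":       {"text": "cheer up", "font": "plain",    "color": "blue"},
}

entry = SYNTHESIS_TABLE["happy"]
print(entry["text"])    # the text matched with the "happy" expression
```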
It should be noted that the data to be synthesized is not limited to text; it may be one, or a combination, of text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, audio, or video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored table of data to be synthesized contains no entry matching the expression information, an editing window is displayed, and the data to be synthesized matching the expression information is obtained from the user's input in that window.
It can be understood that obtaining pre-stored data to be synthesized that matches the expression information may proceed as follows: obtain the matching data to be synthesized recorded in a pre-stored data table; if the table contains several entries matching the expression information, display a preview window showing the set of all matching entries; then obtain a selection instruction and take the data to be synthesized indicated by the selection instruction as the data to be synthesized matching the expression information.
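The lookup flow just described, including the edit-window fallback from the previous paragraph, can be sketched as follows. `choose` and `edit` are hypothetical stand-ins for the preview window and the editing window:

```python
# Sketch of the lookup flow: one match is used directly, several matches
# trigger a preview/selection step, and no match falls back to user input.

def get_data_to_synthesize(expression, table, choose, edit):
    candidates = table.get(expression, [])
    if not candidates:
        return edit(expression)     # no match: user edits new data
    if len(candidates) > 1:
        return choose(candidates)   # several matches: user picks one
    return candidates[0]            # single match: use it directly

table = {"happy": ["happy!", ":-)"]}
picked = get_data_to_synthesize("happy", table,
                                choose=lambda c: c[0],
                                edit=lambda e: f"custom for {e}")
print(picked)   # first of the two "happy" candidates
print(get_data_to_synthesize("angry", table,
                             choose=lambda c: c[0],
                             edit=lambda e: f"custom for {e}"))
```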
103. And the terminal synthesizes the data to be synthesized and the target picture to obtain a synthesized picture.
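A minimal stand-in for step 103, modelling synthesis as attaching the matched data to the picture as an overlay record; a real implementation would render the text or image onto the bitmap (for example with an image library such as Pillow). The picture representation and field names are assumptions:

```python
# Hypothetical synthesis step: record the matched data as an overlay on a
# copy of the target picture, leaving the original picture unchanged.

def synthesize(picture: dict, data: str, position=(10, 10)) -> dict:
    composite = dict(picture)
    composite["overlays"] = picture.get("overlays", []) + [
        {"data": data, "position": position}
    ]
    return composite

target = {"name": "IMG_0001.jpg", "expression": "happy"}
result = synthesize(target, "happy!")
print(len(result["overlays"]))
print(result["overlays"][0]["data"])
```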
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture. The data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that a synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for processing a picture according to an embodiment of the present invention, and as shown in fig. 2, the method for processing a picture according to an embodiment of the present invention may include:
201. the terminal obtains expression information in the target picture.
The terminal in the embodiment of the present invention may be a digital camera, a smart phone, an intelligent wearable device (such as a smart watch and a smart bracelet), or other various electronic devices with a shooting function or a picture browsing function, which is not limited in the embodiment of the present invention.
The target picture can be a picture obtained by real-time shooting or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in a picture, and specifically, the facial expression information may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and they are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression determined by integrating them: if the corners of the user's mouth are raised, the expression may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised, and so on.
202. And the terminal acquires the data to be synthesized matched with the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance. For example, if the expression information is happy, the matching data to be synthesized may be set as the text "happy", with an artistic font and a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one, or a combination, of text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, audio, or video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored table of data to be synthesized contains no entry matching the expression information, an editing window is displayed, and the data to be synthesized matching the expression information is obtained from the user's input in that window.
It can be understood that obtaining pre-stored data to be synthesized that matches the expression information may proceed as follows: obtain the matching data to be synthesized recorded in a pre-stored data table; if the table contains several entries matching the expression information, display a preview window showing the set of all matching entries; then obtain a selection instruction and take the data to be synthesized indicated by the selection instruction as the data to be synthesized matching the expression information.
203. And the terminal synthesizes the data to be synthesized and the target picture to obtain a synthesized picture.
204. And the terminal saves the synthesized picture.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture, and then storing the synthesized picture. According to the embodiment, the data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that the synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for processing a picture according to an embodiment of the present invention, and as shown in fig. 3, the method for processing a picture according to an embodiment of the present invention may include:
301. the terminal obtains expression information in the target picture.
The terminal in the embodiment of the present invention may be a digital camera, a smart phone, an intelligent wearable device (such as a smart watch and a smart bracelet), or other various electronic devices with a shooting function or a picture browsing function, which is not limited in the embodiment of the present invention.
The target picture may be a picture taken in real time, a picture selected from a gallery, or a picture containing facial expression information drawn by other means, such as with drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and they are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression determined by integrating them: if the corners of the user's mouth are raised, the expression may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised, and so on.
302. And the terminal acquires the data to be synthesized matched with the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance. For example, if the expression information is happy, the matching data to be synthesized may be set as the text "happy", with an artistic font and a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one, or a combination, of text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, audio, or video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored table of data to be synthesized contains no entry matching the expression information, an editing window is displayed, and the data to be synthesized matching the expression information is obtained from the user's input in that window.
It can be understood that obtaining pre-stored data to be synthesized that matches the expression information may proceed as follows: obtain the matching data to be synthesized recorded in a pre-stored data table; if the table contains several entries matching the expression information, display a preview window showing the set of all matching entries; then obtain a selection instruction and take the data to be synthesized indicated by the selection instruction as the data to be synthesized matching the expression information.
303. And the terminal synthesizes the data to be synthesized and the target picture to obtain a synthesized picture.
304. And the terminal sends the synthesized picture to third-party software and displays the synthesized picture in the third-party software.
For example, the composite picture may be shared to WeChat Moments directly, or first saved in the terminal and then shared, and so on.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized with the target picture to obtain a synthesized picture, and then displaying the synthesized picture through third-party software. According to the embodiment, the data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, and the data to be synthesized is displayed through third-party software, so that the operation is simple and convenient, and the user experience is improved.
Referring to fig. 4, fig. 4 is a flowchart illustrating a method for processing a picture according to an embodiment of the present invention, and as shown in fig. 4, the method for processing a picture according to an embodiment of the present invention may include:
401. the terminal obtains expression information in the target picture.
The terminal in the embodiment of the present invention may be a digital camera, a smart phone, an intelligent wearable device (such as a smart watch and a smart bracelet), or other various electronic devices with a shooting function or a picture browsing function, which is not limited in the embodiment of the present invention.
The target picture may be a picture taken in real time, a picture selected from a gallery, or a picture containing facial expression information drawn by other means, such as with drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and integrated to judge the user's expression: if the corners of the user's mouth are raised, the expression may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised, and so on.
402. And the terminal acquires the data to be synthesized matched with the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance. For example, if the expression information is happy, the matching data to be synthesized may be set as the text "happy", with an artistic font and a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one, or a combination, of text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, audio, or video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored table of data to be synthesized contains no entry matching the expression information, an editing window is displayed, and the data to be synthesized matching the expression information is obtained from the user's input in that window.
It can be understood that obtaining pre-stored data to be synthesized that matches the expression information may proceed as follows: obtain the matching data to be synthesized recorded in a pre-stored data table; if the table contains several entries matching the expression information, display a preview window showing the set of all matching entries; then obtain a selection instruction and take the data to be synthesized indicated by the selection instruction as the data to be synthesized matching the expression information.
403. And the terminal synthesizes the data to be synthesized and the target picture to obtain a synthesized picture.
404. And the terminal saves the synthesized picture.
405. And the terminal sends the synthesized picture to third-party software and displays the synthesized picture in the third-party software.
For example, the composite picture may be shared to WeChat Moments directly, or first saved in the terminal and then shared, and so on.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture, storing the synthesized picture and displaying the synthesized picture through third-party software. According to the embodiment, the data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, and the data to be synthesized is displayed through third-party software, so that the operation is simple and convenient, and the user experience is improved.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for processing a picture according to an embodiment of the present invention, and as shown in fig. 5, the method for processing a picture according to an embodiment of the present invention may include:
501. and the terminal starts third-party software with a photographing function.
The terminal in the embodiment of the present invention may be a digital camera, a smart phone, an intelligent wearable device (such as a smart watch and a smart bracelet), or other various electronic devices with a shooting function or a picture browsing function, which is not limited in the embodiment of the present invention.
For example, the third-party software may be application software installed in the terminal, such as WeChat, that can be used to display pictures.
502. And the terminal calls the photographing function of the third-party software to acquire a target picture.
503. And the terminal acquires the expression information in the target picture.
The target picture may be a picture taken in real time through the photographing function started by the third-party software, or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and integrated to judge the user's expression: if the corners of the user's mouth are raised, the expression may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised, and so on.
504. And the terminal acquires the data to be synthesized matched with the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance. For example, if the expression information is happy, the matching data to be synthesized may be set as the text "happy", with an artistic font and a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one, or a combination, of text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, audio, or video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored table of data to be synthesized contains no entry matching the expression information, an editing window is displayed, and the data to be synthesized matching the expression information is obtained from the user's input in that window.
It can be understood that obtaining pre-stored data to be synthesized that matches the expression information may proceed as follows: obtain the matching data to be synthesized recorded in a pre-stored data table; if the table contains several entries matching the expression information, display a preview window showing the set of all matching entries; then obtain a selection instruction and take the data to be synthesized indicated by the selection instruction as the data to be synthesized matching the expression information.
505. And the terminal synthesizes the data to be synthesized and the target picture to obtain a synthesized picture.
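As a concrete illustration of step 505, the sketch below composites overlay pixels onto a target picture modeled as a 2D list of pixel values. An actual terminal would use an image-processing library; this simplified pixel model is an assumption made for illustration only.

```python
# Minimal sketch of synthesizing data into the target picture.
# Out-of-bounds overlay pixels are clipped rather than raising an error.
def synthesize(target, overlay, x, y):
    result = [row[:] for row in target]      # copy so the original is kept
    for dy, row in enumerate(overlay):
        for dx, pixel in enumerate(row):
            ty, tx = y + dy, x + dx
            if 0 <= ty < len(result) and 0 <= tx < len(result[0]):
                result[ty][tx] = pixel       # paste the overlay pixel
    return result
```

In practice the offset (x, y) could be chosen relative to the detected face region, but that placement policy is not specified here.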
506. And the terminal displays the synthetic picture in the third-party software.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; data to be synthesized matched with the expression information is then acquired; and finally the data to be synthesized and the target picture are synthesized to obtain a synthesized picture, which is displayed through third-party software. The data to be synthesized matched with the expression information can thus be synthesized into the target picture without a professional picture processing tool, and the synthesized picture displayed through third-party software, so the operation is simple and convenient and the user experience is improved.
Referring to fig. 6, fig. 6 is a schematic diagram of an apparatus for processing pictures according to an embodiment of the present invention, as shown in fig. 6, the apparatus includes:
a first obtaining unit 610, configured to obtain expression information in a target picture.
The target picture may be a picture obtained by real-time shooting, a picture selected from a gallery, or a picture containing facial expression information drawn by other means, such as drawing software. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, which are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to judge the user's expression: if the corners of the user's mouth are raised, the expression information may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised.
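The rule-based judgment described above can be sketched as follows. The feature names and the two rules are illustrative assumptions, not the recognition method claimed by the patent.

```python
# Hypothetical sketch: combine local facial features into an expression label.
def classify_expression(features):
    """features: flags extracted from the picture, e.g.
    {"mouth_corners_raised": True, "eyes_wide_open": False, "mouth_open": False}
    """
    if features.get("eyes_wide_open") and features.get("mouth_open"):
        return "surprised"
    if features.get("mouth_corners_raised"):
        return "happy"
    return "natural"        # fall back to the natural state
```

A real recognizer would derive such flags from detected facial landmarks; here they are simply taken as input.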
A second obtaining unit 620, configured to obtain data to be synthesized that is matched with the expression information.
For example, mapping relationships between various expressions and their corresponding data to be synthesized may be set in advance. If the expression information is "happy", the matched data to be synthesized may be set to the characters "happy", which may be rendered in an artistic font with bright, colorful hues.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, it may be a picture, text, audio, or video matched with the expression information.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored data table contains no data to be synthesized matching the expression information, an editing window may be displayed, and the data to be synthesized matching the expression information that the user inputs through the editing window may be acquired.
It can be understood that obtaining the pre-stored data to be synthesized matched with the expression information may proceed as follows: acquire the data to be synthesized that is recorded in a pre-stored data table and matched with the expression information; if the table contains multiple entries matched with the expression information, display a preview window showing the set of all matched data to be synthesized; then acquire a selection instruction and take the data to be synthesized that it indicates as the data to be synthesized matched with the expression information.
A synthesizing unit 630, configured to synthesize the data to be synthesized acquired by the second acquiring unit and the target picture to obtain a synthesized picture.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture. The data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that a synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
Referring to fig. 7, fig. 7 is a schematic diagram of an apparatus for processing pictures according to an embodiment of the present invention, as shown in fig. 7, the apparatus includes:
the first obtaining unit 710 is configured to obtain expression information in the target picture.
The target picture may be a picture taken in real time, a picture selected from a gallery, or a picture drawn by other means, such as drawing software. The expression information is facial expression information obtained by recognizing a face in the picture; specifically, it may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to judge the user's expression: if the corners of the user's mouth are raised, the expression information may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised.
A second obtaining unit 720, configured to obtain data to be synthesized that is matched with the expression information;
for example, mapping relationships between various expressions and their corresponding data to be synthesized may be set in advance. If the expression information is "happy", the matched data to be synthesized may be set to the characters "happy", which may be rendered in an artistic font with bright, colorful hues.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, it may be a picture, text, audio, or video matched with the expression information.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored data table contains no data to be synthesized matching the expression information, an editing window may be displayed, and the data to be synthesized matching the expression information that the user inputs through the editing window may be acquired.
It can be understood that obtaining the pre-stored data to be synthesized matched with the expression information may proceed as follows: acquire the data to be synthesized that is recorded in a pre-stored data table and matched with the expression information; if the table contains multiple entries matched with the expression information, display a preview window showing the set of all matched data to be synthesized; then acquire a selection instruction and take the data to be synthesized that it indicates as the data to be synthesized matched with the expression information.
A synthesizing unit 730, configured to synthesize the data to be synthesized acquired by the second acquiring unit and the target picture to obtain a synthesized picture.
A saving unit 740, configured to save the synthesized picture.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture, and then storing the synthesized picture. According to the embodiment, the data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that the synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
Referring to fig. 8, fig. 8 is a schematic diagram of an apparatus for processing pictures according to an embodiment of the present invention, as shown in fig. 8, the apparatus includes:
a first obtaining unit 810, configured to obtain expression information in the target picture.
The target picture can be a picture obtained by real-time shooting or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in a picture, and specifically, the facial expression information may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, which are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to judge the user's expression: if the corners of the user's mouth are raised, the expression information may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised.
A second obtaining unit 820, configured to obtain data to be synthesized that is matched with the expression information.
For example, mapping relationships between various expressions and their corresponding data to be synthesized may be set in advance. If the expression information is "happy", the matched data to be synthesized may be set to the characters "happy", which may be rendered in an artistic font with bright, colorful hues.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, it may be a picture, text, audio, or video matched with the expression information.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored data table contains no data to be synthesized matching the expression information, an editing window may be displayed, and the data to be synthesized matching the expression information that the user inputs through the editing window may be acquired.
It can be understood that obtaining the pre-stored data to be synthesized matched with the expression information may proceed as follows: acquire the data to be synthesized that is recorded in a pre-stored data table and matched with the expression information; if the table contains multiple entries matched with the expression information, display a preview window showing the set of all matched data to be synthesized; then acquire a selection instruction and take the data to be synthesized that it indicates as the data to be synthesized matched with the expression information.
The synthesizing unit 830 is configured to synthesize the data to be synthesized acquired by the second acquiring unit and the target picture to obtain a synthesized picture.
A sending unit 840, configured to send the composite picture to third-party software, and display the composite picture in the third-party software.
For example, the composite picture may be shared to the circle of friends (WeChat Moments) through WeChat, or first saved in the terminal and then shared to the circle of friends.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; data to be synthesized matched with the expression information is then acquired; and finally the data to be synthesized and the target picture are synthesized to obtain a synthesized picture, which is stored and displayed through third-party software. The data to be synthesized matched with the expression information can thus be synthesized into the target picture without a professional picture processing tool, and the synthesized picture displayed through third-party software, so the operation is simple and convenient and the user experience is improved.
Referring to fig. 9, fig. 9 is a schematic diagram of an apparatus for processing pictures according to an embodiment of the present invention, and as shown in fig. 9, the apparatus includes:
a first obtaining unit 910, configured to obtain expression information in a target picture.
The target picture can be a picture obtained by real-time shooting or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in a picture, and specifically, the facial expression information may include a natural state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, which are not limited herein. For example, the local positions or shape characteristics of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to judge the user's expression: if the corners of the user's mouth are raised, the expression information may be determined to be happy; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprised.
A second obtaining unit 920, configured to obtain data to be synthesized that is matched with the expression information;
for example, mapping relationships between various expressions and their corresponding data to be synthesized may be set in advance. If the expression information is "happy", the matched data to be synthesized may be set to the characters "happy", which may be rendered in an artistic font with bright, colorful hues.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, it may be a picture, text, audio, or video matched with the expression information.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or data input by the user in real time. For example, if the pre-stored data table contains no data to be synthesized matching the expression information, an editing window may be displayed, and the data to be synthesized matching the expression information that the user inputs through the editing window may be acquired.
It can be understood that obtaining the pre-stored data to be synthesized matched with the expression information may proceed as follows: acquire the data to be synthesized that is recorded in a pre-stored data table and matched with the expression information; if the table contains multiple entries matched with the expression information, display a preview window showing the set of all matched data to be synthesized; then acquire a selection instruction and take the data to be synthesized that it indicates as the data to be synthesized matched with the expression information.
A synthesizing unit 930, configured to synthesize the data to be synthesized acquired by the second acquiring unit and the target picture to obtain a synthesized picture.
A saving unit 940, configured to save the synthesized picture.
A sending unit 950, configured to send the composite picture to third-party software, and display the composite picture in the third-party software.
For example, the composite picture may be shared to the circle of friends (WeChat Moments) through WeChat, or first saved in the terminal and then shared to the circle of friends.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; data to be synthesized matched with the expression information is then acquired; and finally the data to be synthesized and the target picture are synthesized to obtain a synthesized picture, which is displayed through third-party software. The data to be synthesized matched with the expression information can thus be synthesized into the target picture without a professional picture processing tool, and the synthesized picture displayed through third-party software, so the operation is simple and convenient and the user experience is improved.
Referring to fig. 10, fig. 10 is a schematic structural diagram of a terminal according to an embodiment of the present invention. The terminal 1000 may include at least one processor 1010, a memory 1020, a user interface 1030, and at least one communication bus 1040. The communication bus 1040 is used to implement connection and communication between these components.
The user interface 1030 may include a touch screen or the like, and may be configured to receive instructions from a user and display pictures.
The memory 1020 may include read-only memory and random access memory, and may be used for storing expression information and the data to be synthesized matched with it, as well as for storing program code and providing instructions and data to the processor 1010.
In an embodiment of the present invention, by calling the program code or instructions stored in the memory 1020, the processor 1010 is configured to: obtain the expression information in the target picture; acquire data to be synthesized matched with the expression information; and synthesize the data to be synthesized and the target picture to obtain a synthesized picture.
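The processor flow just described (obtain the expression, acquire matched data, synthesize) can be tied together in a compact, self-contained sketch. The dictionary-based picture model, the table contents, and the single feature flag are assumptions made for illustration only, not the claimed implementation.

```python
# Hypothetical end-to-end sketch of the processor 1010 flow.
TABLE = {"happy": "happy", "surprised": "wow"}   # expression -> data

def process_picture(picture):
    # 1. obtain the expression information in the target picture
    expression = "happy" if picture.get("mouth_corners_raised") else "natural"
    # 2. acquire data to be synthesized matched with the expression
    data = TABLE.get(expression)
    if data is None:
        return picture                           # no match: leave unchanged
    # 3. synthesize the data and the target picture
    result = dict(picture)
    result["overlay"] = data                     # attach the matched data
    return result
```

The resulting picture could then be saved in the memory or handed to third-party software, as the following paragraphs describe.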
Optionally, in some possible embodiments of the present invention, the data to be synthesized includes at least one of the following information: text data, picture data, audio data, and video data.
Optionally, in some possible embodiments of the present invention, the obtaining of the data to be synthesized matched with the expression information includes: acquiring pre-stored data to be synthesized matched with the expression information; or, if a pre-stored data table to be synthesized contains no data to be synthesized matched with the expression information, displaying an editing window and acquiring the data to be synthesized matched with the expression information that the user inputs through the editing window.
Optionally, in some possible embodiments of the present invention, the acquiring of the pre-stored data to be synthesized matched with the expression information includes: acquiring the data to be synthesized that is recorded in a pre-stored data table to be synthesized and matched with the expression information; if the table contains multiple entries matched with the expression information, displaying a preview window showing the set of all matched data to be synthesized; and acquiring a selection instruction and taking the data to be synthesized it indicates as the data to be synthesized matched with the expression information.
Optionally, in some possible embodiments of the present invention, the method further includes: and the processor stores the synthesized picture in a memory, and/or sends the synthesized picture to third-party software to be displayed in the third-party software.
It can be seen that, with the technical solution provided by the embodiment of the present invention, when the picture needs to be processed according to the expression information in the picture, the expression information in the target picture is obtained first; then acquiring data to be synthesized matched with the expression information; and finally, synthesizing the data to be synthesized and the target picture to obtain a synthesized picture. The data to be synthesized matched with the expression information can be synthesized into the target picture without using a professional picture processing tool, so that a synthesized picture is obtained, the operation is simple and convenient, and the user experience is improved.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and the program, when executed, performs some or all of the steps of any one of the methods for processing pictures described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only one way of dividing logical functions, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, or a magnetic or optical disk.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

CN201610101677.3A | 2016-02-24 | 2016-02-24 | Method and apparatus for processing pictures | Pending | CN105787976A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201610101677.3A (CN105787976A) | 2016-02-24 | 2016-02-24 | Method and apparatus for processing pictures

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN201610101677.3A (CN105787976A) | 2016-02-24 | 2016-02-24 | Method and apparatus for processing pictures

Publications (1)

Publication Number | Publication Date
CN105787976A (en) | 2016-07-20

Family

ID=56403609

Family Applications (1)

Application Number | Priority Date | Filing Date
CN201610101677.3A (CN105787976A, Pending) | 2016-02-24 | 2016-02-24

Country Status (1)

Country | Link
CN (1) | CN105787976A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107240143A (en)* | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Expression package generation method and device
CN107578459A (en)* | 2017-08-31 | 2018-01-12 | 北京麒麟合盛网络技术有限公司 | Method and device for embedding expressions in input-method candidates
WO2019015522A1 (en)* | 2017-07-18 | 2019-01-24 | 腾讯科技(深圳)有限公司 | Emoticon image generation method and device, electronic device, and storage medium
CN110019883A (en)* | 2017-07-18 | 2019-07-16 | 腾讯科技(深圳)有限公司 | Method and device for obtaining expression pictures
CN110675433A (en)* | 2019-10-31 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium
CN110807408A (en)* | 2019-10-30 | 2020-02-18 | 百度在线网络技术(北京)有限公司 | Person attribute recognition method and device
CN112036247A (en)* | 2020-08-03 | 2020-12-04 | 北京小米松果电子有限公司 | Expression package text generation method, device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2006185393A (en)* | 2004-12-28 | 2006-07-13 | Oki Electric Ind Co Ltd | Information terminal device
CN102662961A (en)* | 2012-03-08 | 2012-09-12 | 北京百舜华年文化传播有限公司 | Method, apparatus and terminal unit for matching semantics with image
CN104616330A (en)* | 2015-02-10 | 2015-05-13 | 广州视源电子科技股份有限公司 | Picture generation method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
JP2006185393A (en)* | 2004-12-28 | 2006-07-13 | Oki Electric Ind Co Ltd | Information terminal device
CN102662961A (en)* | 2012-03-08 | 2012-09-12 | 北京百舜华年文化传播有限公司 | Method, apparatus and terminal unit for matching semantics with image
CN104616330A (en)* | 2015-02-10 | 2015-05-13 | 广州视源电子科技股份有限公司 | Picture generation method and device

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107240143A (en)* | 2017-05-09 | 2017-10-10 | 北京小米移动软件有限公司 | Expression package generation method and device
WO2019015522A1 (en)* | 2017-07-18 | 2019-01-24 | 腾讯科技(深圳)有限公司 | Emoticon image generation method and device, electronic device, and storage medium
CN110019883A (en)* | 2017-07-18 | 2019-07-16 | 腾讯科技(深圳)有限公司 | Method and device for obtaining expression pictures
CN107578459A (en)* | 2017-08-31 | 2018-01-12 | 北京麒麟合盛网络技术有限公司 | Method and device for embedding expressions in input-method candidates
CN110807408A (en)* | 2019-10-30 | 2020-02-18 | 百度在线网络技术(北京)有限公司 | Person attribute recognition method and device
CN110807408B (en)* | 2019-10-30 | 2022-08-19 | 百度在线网络技术(北京)有限公司 | Character attribute identification method and device
CN115082984A (en)* | 2019-10-30 | 2022-09-20 | 百度在线网络技术(北京)有限公司 | Character attribute identification method and device
CN110675433A (en)* | 2019-10-31 | 2020-01-10 | 北京达佳互联信息技术有限公司 | Video processing method and device, electronic equipment and storage medium
US11450027B2 | 2019-10-31 | 2022-09-20 | Beijing Dajia Internet Information Technology Co., Ltd. | Method and electronic device for processing videos
CN112036247A (en)* | 2020-08-03 | 2020-12-04 | 北京小米松果电子有限公司 | Expression package text generation method, device and storage medium

Similar Documents

Publication | Title
CN105787976A (en) | Method and apparatus for processing pictures
JP7112508B2 (en) | Animation stamp generation method, its computer program and computer device
CN105493512B (en) | A video processing method, video processing device and display device
CN113302622B (en) | System and method for providing personalized video
CN108280166B (en) | Method and device for making expression, terminal and computer-readable storage medium
CN109948093B (en) | Expression picture generation method and device and electronic equipment
WO2015001437A1 (en) | Image processing method and apparatus, and electronic device
CN105302315A (en) | Image processing method and device
CN105279203B (en) | Method, device and system for generating jigsaw puzzle
CN113330453B (en) | System and method for providing personalized video for multiple persons
CN110677734B (en) | Video synthesis method and device, electronic equipment and storage medium
CN108038892A (en) | Expression package making method, apparatus, electronic equipment and computer-readable recording medium
CN107240143A (en) | Expression package generation method and device
CN107213642A (en) | Virtual character appearance changing method and device
CN105809618A (en) | Picture processing method and device
CN110019897B (en) | Method and device for displaying picture
CN111831615B (en) | Method, device and system for generating video file
CN110830845A (en) | Video generation method and device and terminal equipment
CN105204718B (en) | Information processing method and electronic equipment
CN106919943A (en) | Data processing method and device
CN114222995A (en) | Image processing method, device and electronic device
CN113012040B (en) | Image processing method, image processing device, electronic equipment and storage medium
CN111462279B (en) | Image display method, device, equipment and readable storage medium
KR20150135591A | Method for capturing two or more faces with a face capture tool on a smartphone, combining them with an animated avatar image, and editing the photo-animation avatar; server system, avatar database interworking and transmission method, and method for displaying the caller's photo-animation avatar on a smartphone
JP2014110469A (en) | Electronic device, image processing method, and program

Legal Events

Code | Title | Description
C06 | Publication
PB01 | Publication
C10 | Entry into substantive examination
SE01 | Entry into force of request for substantive examination
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 2016-07-20
