The terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and in the above-described drawings are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may further include other steps or elements that are not listed or that are inherent to such a process, method, article, or apparatus. Furthermore, the terms "first," "second," "third," and the like are used to distinguish between different objects, not to describe a particular order.
Referring to FIG. 1, which is a flowchart of a picture processing method according to an embodiment of the present invention, the method may include the following steps:
101. The terminal obtains expression information from a target picture.
The terminal in this embodiment of the present invention may be a digital camera, a smartphone, a smart wearable device (such as a smart watch or a smart band), or any other electronic device with a shooting function or a picture-browsing function; this is not limited in the embodiments of the present invention.
The target picture may be a picture captured in real time or a picture selected from a gallery; it may also be a picture containing facial expression information that was created by other means, such as drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
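Purely as an illustration, such a rule-based decision over extracted facial features might look like the following sketch. The feature names, normalization, and thresholds are assumptions made for the example, not part of the disclosure.

```python
# Hypothetical rule-based expression classifier over simple geometric
# features extracted from a face (all normalized to the range 0..1).
def classify_expression(mouth_corner_lift: float,
                        eye_openness: float,
                        mouth_openness: float) -> str:
    if eye_openness > 0.8 and mouth_openness > 0.6:
        return "surprise"      # wide-open eyes plus an open mouth
    if mouth_corner_lift > 0.5:
        return "happiness"     # raised mouth corners
    return "neutral"           # default: the natural state

print(classify_expression(0.7, 0.4, 0.1))  # -> happiness
```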
102. The terminal acquires data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
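A minimal sketch of such a pre-set mapping table follows; the table structure, field names, and entries are assumptions for illustration, and any of text, picture, audio, or video data could be stored as values.

```python
# Hypothetical pre-set mapping from expression labels to data to be
# synthesized; entries and field names are illustrative only.
EXPRESSION_TO_DATA = {
    "happiness": {"type": "text", "content": "happy",
                  "font": "artistic", "color": "iridescent"},
    "surprise":  {"type": "picture", "content": "exclamation.png"},
    "sadness":   {"type": "audio",   "content": "soft_violin.mp3"},
}

def match_data(expression: str):
    """Return the pre-stored data matched with an expression, or None."""
    return EXPRESSION_TO_DATA.get(expression)
```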
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
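The two fallbacks just described (an editing window when there is no match, a preview window with a selection instruction when there are several) might be sketched as below; the console prompts merely stand in for a real UI, and all names are hypothetical.

```python
# Hypothetical lookup flow over a table whose values are candidate lists.
SYNTHESIS_TABLE = {
    "happiness": ["text:happy", "sticker:sun.png"],
    "surprise":  ["sticker:exclamation.png"],
}

def acquire_synthesis_data(expression: str) -> str:
    candidates = SYNTHESIS_TABLE.get(expression, [])
    if not candidates:
        # No match: show an "editing window" and take the user's input.
        return input(f"No preset for '{expression}'; enter data to synthesize: ")
    if len(candidates) == 1:
        return candidates[0]
    # Several matches: show a "preview window" and take a selection instruction.
    for i, candidate in enumerate(candidates):
        print(f"[{i}] {candidate}")
    return candidates[int(input("Select an entry: "))]
```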
103. The terminal synthesizes the data to be synthesized with the target picture to obtain a synthesized picture.
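For the case where the matched data is text, one possible synthesis step is sketched below using the Pillow imaging library; the file names, overlay position, and styling are assumptions for the example, not the disclosed implementation.

```python
# Hypothetical synthesis step: overlay matched text onto the target
# picture with Pillow, then save the synthesized picture.
from PIL import Image, ImageDraw

def synthesize(target_path: str, text: str, out_path: str) -> None:
    picture = Image.open(target_path).convert("RGBA")
    draw = ImageDraw.Draw(picture)
    # Draw the matched text near the bottom-left corner of the picture.
    draw.text((10, picture.height - 40), text, fill=(255, 215, 0, 255))
    picture.convert("RGB").save(out_path)

synthesize("target.jpg", "happy", "synthesized.jpg")
```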
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; and finally the data to be synthesized is synthesized with the target picture to obtain a synthesized picture. Data matched with the expression information can thus be synthesized into the target picture without using a professional picture-processing tool; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 2, which is a flowchart of a picture processing method according to an embodiment of the present invention, the method may include the following steps:
201. The terminal obtains expression information from a target picture.
The terminal in this embodiment of the present invention may be a digital camera, a smartphone, a smart wearable device (such as a smart watch or a smart band), or any other electronic device with a shooting function or a picture-browsing function; this is not limited in the embodiments of the present invention.
The target picture may be a picture captured in real time or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
202. The terminal acquires data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
203. The terminal synthesizes the data to be synthesized with the target picture to obtain a synthesized picture.
204. The terminal saves the synthesized picture.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then saved. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 3, which is a flowchart of a picture processing method according to an embodiment of the present invention, the method may include the following steps:
301. The terminal obtains expression information from a target picture.
The terminal in this embodiment of the present invention may be a digital camera, a smartphone, a smart wearable device (such as a smart watch or a smart band), or any other electronic device with a shooting function or a picture-browsing function; this is not limited in the embodiments of the present invention.
The target picture may be a picture captured in real time or a picture selected from a gallery; it may also be a picture containing facial expression information that was created by other means, such as drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
302. The terminal acquires data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
303. The terminal synthesizes the data to be synthesized with the target picture to obtain a synthesized picture.
304. The terminal sends the synthesized picture to third-party software, and the synthesized picture is displayed in the third-party software.
For example, the synthesized picture may be shared to WeChat Moments through WeChat, or it may first be saved in the terminal and then shared to Moments, and so on.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then displayed through third-party software. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool, and the result can be displayed through third-party software; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 4, which is a flowchart of a picture processing method according to an embodiment of the present invention, the method may include the following steps:
401. The terminal obtains expression information from a target picture.
The terminal in this embodiment of the present invention may be a digital camera, a smartphone, a smart wearable device (such as a smart watch or a smart band), or any other electronic device with a shooting function or a picture-browsing function; this is not limited in the embodiments of the present invention.
The target picture may be a picture captured in real time or a picture selected from a gallery; it may also be a picture containing facial expression information that was created by other means, such as drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to determine the user's expression: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
402. The terminal acquires data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
403. The terminal synthesizes the data to be synthesized with the target picture to obtain a synthesized picture.
404. The terminal saves the synthesized picture.
405. The terminal sends the synthesized picture to third-party software, and the synthesized picture is displayed in the third-party software.
For example, the synthesized picture may be shared to WeChat Moments through WeChat, or it may first be saved in the terminal and then shared to Moments, and so on.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then saved and displayed through third-party software. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool, and the result can be displayed through third-party software; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 5, which is a flowchart of a picture processing method according to an embodiment of the present invention, the method may include the following steps:
501. The terminal starts third-party software with a photographing function.
The terminal in this embodiment of the present invention may be a digital camera, a smartphone, a smart wearable device (such as a smart watch or a smart band), or any other electronic device with a shooting function or a picture-browsing function; this is not limited in the embodiments of the present invention.
For example, the third-party software may be application software installed in the terminal, such as WeChat, that can be used to display a photograph.
502. The terminal calls the photographing function of the third-party software to acquire a target picture.
503. The terminal obtains expression information from the target picture.
The target picture may be a picture photographed in real time through the photographing function started by the third-party software, or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to determine the user's expression: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
504. The terminal acquires data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
505. The terminal synthesizes the data to be synthesized with the target picture to obtain a synthesized picture.
506. The terminal displays the synthesized picture in the third-party software.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then displayed through third-party software. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool, and the result can be displayed through third-party software; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 6, which is a schematic diagram of a picture processing apparatus according to an embodiment of the present invention, the apparatus includes:
A first obtaining unit 610, configured to obtain expression information from a target picture.
The target picture may be a picture captured in real time or a picture selected from a gallery; it may also be a picture containing facial expression information that was created by other means, such as drawing software or a drawing tool. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
A second obtaining unit 620, configured to obtain data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
A synthesizing unit 630, configured to synthesize the data to be synthesized acquired by the second obtaining unit with the target picture to obtain a synthesized picture.
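Purely as an illustration, the three units of FIG. 6 might be composed as in the sketch below; the class, the stubbed recognition step, and the inlined table lookup are assumptions that reuse the earlier sketches, not the disclosed implementation.

```python
# Hypothetical composition of the apparatus units of FIG. 6.
class PictureProcessingApparatus:
    def first_obtaining_unit(self, target_path: str) -> str:
        # Recognize a face in the picture and return an expression label
        # (stubbed here; see the classifier sketch above).
        return "happiness"

    def second_obtaining_unit(self, expression: str) -> str:
        # Look up the pre-set mapping table (see the lookup sketch above).
        return {"happiness": "happy"}.get(expression, "")

    def synthesizing_unit(self, target_path: str, data: str, out_path: str) -> None:
        # Overlay the matched data onto the target picture (see the
        # Pillow-based synthesis sketch above).
        print(f"synthesizing '{data}' onto {target_path} -> {out_path}")

    def process(self, target_path: str, out_path: str) -> None:
        expression = self.first_obtaining_unit(target_path)
        data = self.second_obtaining_unit(expression)
        self.synthesizing_unit(target_path, data, out_path)

PictureProcessingApparatus().process("target.jpg", "synthesized.jpg")
```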
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; and finally the data to be synthesized is synthesized with the target picture to obtain a synthesized picture. Data matched with the expression information can thus be synthesized into the target picture without using a professional picture-processing tool; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 7, which is a schematic diagram of a picture processing apparatus according to an embodiment of the present invention, the apparatus includes:
A first obtaining unit 710, configured to obtain expression information from a target picture.
The target picture may be a picture captured in real time, a picture selected from a gallery, a picture created by other means such as drawing software or a drawing tool, and the like. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture and combined to determine the user's expression: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
A second obtaining unit 720, configured to obtain data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
A synthesizing unit 730, configured to synthesize the data to be synthesized acquired by the second obtaining unit with the target picture to obtain a synthesized picture.
A saving unit 740, configured to save the synthesized picture.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then saved. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 8, which is a schematic diagram of a picture processing apparatus according to an embodiment of the present invention, the apparatus includes:
A first obtaining unit 810, configured to obtain expression information from a target picture.
The target picture may be a picture captured in real time or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
A second obtaining unit 820, configured to obtain data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
A synthesizing unit 830, configured to synthesize the data to be synthesized acquired by the second obtaining unit with the target picture to obtain a synthesized picture.
A sending unit 840, configured to send the synthesized picture to third-party software, where the synthesized picture is displayed in the third-party software.
For example, the synthesized picture may be shared to WeChat Moments through WeChat, or it may first be saved in the terminal and then shared to Moments, and so on.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then sent to and displayed through third-party software. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool, and the result can be displayed through third-party software; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 9, which is a schematic diagram of a picture processing apparatus according to an embodiment of the present invention, the apparatus includes:
A first obtaining unit 910, configured to obtain expression information from a target picture.
The target picture may be a picture captured in real time or a picture selected from a gallery. The expression information is facial expression information obtained by recognizing a face in the picture, and may specifically include a natural (neutral) state, happiness, surprise, sadness, anger, fear, disgust, and the like.
It should be noted that there are various methods for obtaining the expression information, and no limitation is imposed here. For example, the local positions or shape features of the eyes, nose, mouth, eyebrows, and the like may be extracted from the picture, and the user's expression may be determined by combining these features: if the corners of the user's mouth are turned up, the expression information may be determined to be happiness; if the user's eyes are wide open and the mouth is open, the expression may be determined to be surprise; and so on.
A second obtaining unit 920, configured to obtain data to be synthesized that matches the expression information.
For example, mapping relationships between various expressions and the corresponding data to be synthesized may be set in advance: if the expression information is happiness, the matching data to be synthesized may be set as the text "happy", rendered in an artistic font and in a bright, iridescent color.
It should be noted that the data to be synthesized is not limited to text; it may be one of, or a combination of, text data, picture data, audio data, and video data. For example, the data matched with the expression information may be a picture, text, an audio clip, or a video.
It can be understood that the data to be synthesized may be pre-stored data corresponding to the expression information, or may be data input by the user in real time. For example, if the pre-stored synthesis data table contains no data matching the expression information, an editing window may be displayed, and the data to be synthesized that matches the expression information may be acquired from the user's input in the editing window.
It can be understood that obtaining the pre-stored data to be synthesized that matches the expression information may proceed as follows: acquire the matching data recorded in the pre-stored synthesis data table; if the table contains a plurality of entries matching the expression information, display a preview window presenting the set of all matching candidates; then acquire a selection instruction, and use the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
A synthesizing unit 930, configured to synthesize the data to be synthesized acquired by the second obtaining unit with the target picture to obtain a synthesized picture.
A saving unit 940, configured to save the synthesized picture.
A sending unit 950, configured to send the synthesized picture to third-party software, where the synthesized picture is displayed in the third-party software.
For example, the synthesized picture may be shared to WeChat Moments through WeChat, or it may first be saved in the terminal and then shared to Moments, and so on.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; the data to be synthesized is synthesized with the target picture to obtain a synthesized picture; and the synthesized picture is then saved and displayed through third-party software. In this embodiment, data matched with the expression information can be synthesized into the target picture without using a professional picture-processing tool, and the result can be displayed through third-party software; the operation is simple and convenient, and user experience is improved.
Referring to FIG. 10, which is a schematic structural diagram of a terminal according to an embodiment of the present invention, the terminal 1000 may include: at least one processor 1010, a memory 1020, a user interface 1030, and at least one communication bus 1040. The communication bus 1040 is used to implement connection and communication between these components.
The user interface 1030 may include a touch screen and the like, and may be configured to receive instructions from a user and to display pictures.
The memory 1020 may include a read-only memory and a random access memory, and may be used to store expression information and the data to be synthesized matched with it, as well as to store program code and to provide instructions and data to the processor 1010.
In this embodiment of the present invention, by calling the program code or instructions stored in the memory 1020, the processor 1010 is configured to: obtain expression information from a target picture; acquire data to be synthesized that matches the expression information; and synthesize the data to be synthesized with the target picture to obtain a synthesized picture.
Optionally, in some possible embodiments of the present invention, the data to be synthesized includes at least one of the following: text data, picture data, audio data, and video data.
Optionally, in some possible embodiments of the present invention, acquiring the data to be synthesized that matches the expression information includes: acquiring pre-stored data to be synthesized that matches the expression information; or, if the pre-stored synthesis data table contains no data matching the expression information, displaying an editing window and acquiring the data to be synthesized that matches the expression information as input by the user through the editing window.
Optionally, in some possible embodiments of the present invention, acquiring the pre-stored data to be synthesized that matches the expression information includes: acquiring the matching data recorded in the pre-stored synthesis data table; or, if a plurality of entries matching the expression information exist in the synthesis data table, displaying a preview window presenting the set of all matching candidates, acquiring a selection instruction, and using the data indicated by the selection instruction as the data to be synthesized that matches the expression information.
Optionally, in some possible embodiments of the present invention, the processor further stores the synthesized picture in the memory, and/or sends the synthesized picture to third-party software so that it is displayed in the third-party software.
It can be seen that, with the technical solution provided by this embodiment of the present invention, when a picture needs to be processed according to the expression information it contains, the expression information in the target picture is obtained first; the data to be synthesized that matches the expression information is then acquired; and finally the data to be synthesized is synthesized with the target picture to obtain a synthesized picture. Data matched with the expression information can thus be synthesized into the target picture without using a professional picture-processing tool; the operation is simple and convenient, and user experience is improved.
An embodiment of the present invention further provides a computer storage medium. The computer storage medium may store a program that, when executed, performs some or all of the steps of any one of the picture processing methods described in the above method embodiments.
It should be noted that, for simplicity of description, the above method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the present invention is not limited by the described order of actions, as some steps may be performed in other orders or concurrently according to the present invention. Furthermore, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and that the actions and modules involved are not necessarily required by the present invention.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division into units is only one kind of logical-function division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be implemented in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.