CN110297928A - Recommendation method, apparatus, device, and storage medium for expression pictures - Google Patents


Info

Publication number
CN110297928A
CN110297928A
Authority
CN
China
Prior art keywords
picture
personage
expression
tag
expression picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910588794.0A
Other languages
Chinese (zh)
Inventor
Wang Qun (王群)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baidu Online Network Technology Beijing Co Ltd
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910588794.0A
Publication of CN110297928A
Legal status: Pending

Abstract

An embodiment of the present invention provides a recommendation method, apparatus, device, and storage medium for expression pictures. The method comprises: determining classification labels corresponding to a target voice, the classification labels including a character feature label and an emotion label; obtaining, from a picture database, an expression picture matching the character feature label and the emotion label; and recommending the expression picture. The method of the embodiment can establish multiple classification labels for the target voice and match expression pictures along multiple dimensions, so that the recommended expression pictures better reflect the user's needs, improving the reasonableness of the recommendation and optimizing the user experience.

Description

Recommendation method, apparatus, device, and storage medium for expression pictures
Technical field
The present invention relates to the technical field of data processing, and in particular, to a recommendation method, apparatus, device, and storage medium for expression pictures.
Background
With the development of Internet technology, expression pictures (stickers) have increasingly become a common way for Internet users to express themselves. Expression pictures not only fully convey a user's emotions but also enrich chat content and extend the time users spend in chat tools. For users accustomed to text input methods who collect expression pictures, using them is very convenient. However, for users who chat by voice because they have difficulty with conventional text input, flexibly using expression pictures that fit the chat scenario is rather difficult.
Summary of the invention
An embodiment of the present invention provides a recommendation method, apparatus, device, and storage medium for expression pictures, to solve one or more technical problems in the prior art.
In a first aspect, an embodiment of the present invention provides a recommendation method for expression pictures, comprising:
determining classification labels corresponding to a target voice, the classification labels including a character feature label and an emotion label;
obtaining, from a picture database, an expression picture matching the character feature label and the emotion label;
recommending the expression picture.
In one embodiment, obtaining, from the picture database, an expression picture matching the character feature label and the emotion label comprises:
determining a matching order of the classification labels;
obtaining the expression picture from the picture database according to the matching order.
In one embodiment, obtaining the expression picture from the picture database according to the matching order comprises:
obtaining, from the picture database, at least one first pre-selected picture, the first pre-selected picture matching the classification label ranked first in the matching order;
selecting, from each first pre-selected picture, at least one expression picture, the expression picture matching the classification label ranked second in the matching order.
In one embodiment, the classification labels further include a semantic label, and obtaining the expression picture from the picture database according to the matching order comprises:
obtaining, from the picture database, at least one first pre-selected picture, the first pre-selected picture matching the classification label ranked first in the matching order;
selecting, from each first pre-selected picture, at least one second pre-selected picture, the second pre-selected picture matching the classification label ranked second in the matching order;
selecting, from each second pre-selected picture, at least one expression picture, the expression picture matching the classification label ranked third in the matching order.
In one embodiment, obtaining, from the picture database, an expression picture matching the character feature label and the emotion label comprises:
judging whether an expression picture matching the character feature label and the emotion label can be obtained from the picture database;
if no expression picture matching the character feature label and the emotion label is obtained, retrieving, according to the character feature label and/or the emotion label, at least one plain picture for generating the expression picture;
adding text to the plain picture to generate the expression picture for recommendation, the text being obtained according to at least one of the classification labels.
In one embodiment, determining the classification labels corresponding to the target voice comprises:
converting the target voice into corresponding text information;
obtaining, from a preset expression vocabulary, the classification labels corresponding to the text information.
In one embodiment, the character feature label includes at least one of personality information, age information, and gender information; the emotion label includes emotion level information and emotion category information.
In one embodiment, determining the classification labels corresponding to the target voice comprises:
inputting the target voice into a character feature classification model, the character feature classification model being obtained by training a neural network model with sample voices and their corresponding annotation data;
determining the character feature label of the target voice according to the probabilities of multiple candidate character feature labels output by the character feature classification model.
In a second aspect, an embodiment of the present invention provides a recommendation apparatus for expression pictures, comprising:
a classification label determining module, configured to determine classification labels corresponding to a target voice, the classification labels including a character feature label and an emotion label;
an expression picture obtaining module, configured to obtain, from a picture database, an expression picture matching the character feature label and the emotion label;
a recommending module, configured to recommend the expression picture.
In one embodiment, the expression picture obtaining module includes:
a matching order determining submodule, configured to determine a matching order of the classification labels;
an expression picture obtaining submodule, configured to obtain the expression picture from the picture database according to the matching order.
In one embodiment, the expression picture obtaining submodule includes:
a first obtaining unit, configured to obtain, from the picture database, at least one first pre-selected picture that matches the classification label ranked first in the matching order;
a first screening unit, configured to select, from the first pre-selected pictures, at least one expression picture that matches the classification label ranked second in the matching order.
In one embodiment, the classification labels further include a semantic label, and the expression picture obtaining submodule includes:
a first obtaining unit, configured to obtain, from the picture database, at least one first pre-selected picture that matches the classification label ranked first in the matching order;
a second obtaining unit, configured to select, from the first pre-selected pictures, at least one second pre-selected picture that matches the classification label ranked second in the matching order;
a second screening unit, configured to select, from the second pre-selected pictures, at least one expression picture that matches the classification label ranked third in the matching order.
In one embodiment, the expression picture obtaining module includes:
a judging submodule, configured to judge whether an expression picture matching the character feature label and the emotion label can be obtained from the picture database;
a plain picture obtaining submodule, configured to retrieve, according to the character feature label and/or the emotion label, at least one plain picture for generating the expression picture when no expression picture matching the character feature label and the emotion label is obtained;
a generating submodule, configured to add text to the plain picture to generate the expression picture for recommendation, the text being obtained according to at least one of the classification labels.
In one embodiment, the classification label determining module includes:
a converting submodule, configured to convert the target voice into corresponding text information;
a classification label obtaining submodule, configured to obtain, from a preset expression vocabulary, the classification labels corresponding to the text information.
In one embodiment, the character feature label includes at least one of personality information, age information, and gender information; the emotion label includes emotion level information and emotion category information.
In one embodiment, the classification label determining module includes:
an input submodule, configured to input the target voice into a character feature classification model, the character feature classification model being obtained by training a neural network model with sample voices and their corresponding annotation data;
a character feature label determining submodule, configured to determine the character feature label of the target voice according to the probabilities of multiple candidate character feature labels output by the character feature classification model.
In a third aspect, an embodiment of the present invention provides a recommendation device for expression pictures. The functions of the device may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions.
In one possible design, the structure of the device includes a processor and a memory, the memory being configured to store a program that supports the device in executing the above recommendation method for expression pictures, and the processor being configured to execute the program stored in the memory. The device may further include a communication interface for communicating with other devices or a communication network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer software instructions used by the recommendation apparatus for expression pictures, including a program for executing the above recommendation method for expression pictures.
The recommendation method for expression pictures of the embodiment of the present invention can establish multiple classification labels for the target voice and match expression pictures along multiple dimensions, so that the recommended expression pictures better reflect the user's needs. The character feature label allows the recommended expression pictures to fit the user's character traits, and the emotion label allows them to fit the user's emotional expression. Therefore, the recommendation method for expression pictures of the embodiment of the present invention can improve the reasonableness of the recommendation and optimize the user experience.
The above summary is provided merely for the purpose of description and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features of the present invention will be readily apparent by reference to the accompanying drawings and the following detailed description.
Brief description of the drawings
In the accompanying drawings, unless otherwise specified, identical reference numerals denote the same or similar parts or elements throughout the several figures. These figures are not necessarily drawn to scale. It should be understood that these figures depict only some embodiments disclosed according to the present invention and should not be regarded as limiting the scope of the present invention.
Fig. 1 shows a flowchart of a recommendation method for expression pictures according to an embodiment of the present invention.
Fig. 2-1 shows an application scenario diagram of the recommendation method for expression pictures according to an embodiment of the present invention.
Fig. 2-2 shows an application scenario diagram of the recommendation method for expression pictures according to an embodiment of the present invention.
Fig. 2-3 shows an application scenario diagram of the recommendation method for expression pictures according to an embodiment of the present invention.
Fig. 3 shows a flowchart of a recommendation method for expression pictures in one embodiment of the present invention.
Fig. 4 shows a flowchart of a recommendation method for expression pictures in another embodiment of the present invention.
Fig. 5 shows a structural block diagram of a recommendation apparatus for expression pictures according to an embodiment of the present invention.
Fig. 6 shows a structural block diagram of the recommendation apparatus for expression pictures in one embodiment of the present invention.
Fig. 7 shows a structural block diagram of the recommendation apparatus for expression pictures in another embodiment of the present invention.
Fig. 8 shows a structural block diagram of a recommendation device for expression pictures according to an embodiment of the present invention.
Detailed description of the embodiments
Hereinafter, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
Fig. 1 shows a flowchart of a recommendation method for expression pictures according to an embodiment of the present invention. As shown in Fig. 1, the recommendation method for expression pictures of the embodiment of the present invention may include:
Step S101: determining classification labels corresponding to a target voice, the classification labels including a character feature label and an emotion label.
In one example, the target voice may be a segment of voice content input by the user, such as the target voice "I am actually speechless". For example, when the user inputs the target voice through a window or application program (APP) provided on a terminal device, the target voice input by the user is collected by a recording module integrated in the terminal device. Terminal devices include, but are not limited to, mobile phones, tablet computers, personal computers, and the like. Fig. 2-1 and Fig. 2-2 show application scenario diagrams of an embodiment of the present invention. In Fig. 2-1, the user may use the voice input method provided by a chat APP, trigger the corresponding recording function (for example, by long-pressing the "hold to talk" button), and input the target voice. In Fig. 2-2, while recording the target voice, the user may tap the emoticon icon to trigger the chat APP to recommend expression pictures.
After the target voice is obtained, speech analysis may be performed on it to determine the corresponding classification labels. The classification labels include a character feature label and an emotion label.
The character feature label may include personality information, such as "lively" or "steady"; it may also include age information, such as "child", "youth", "middle-aged", and "elderly"; and it may also include gender information, i.e., "male" and "female".
In one example, scores of the target voice in dimensions such as volume, pitch, duration, and speaking rate may be used to determine a total score of the target voice under each character feature label, and the character feature label with the highest total score is determined as the character feature label of the target voice. In another example, the character feature label of the target voice may be determined by a character feature classification model. For example: multiple character feature labels are preset, and each sample voice is manually annotated to determine its corresponding character feature label. Based on a large number of sample voices and their manually annotated character feature labels, a neural network model is trained to obtain the character feature classification model. Further, the target voice is input into the character feature classification model to obtain its probability under each candidate character feature label, and the candidate character feature labels whose probabilities exceed a preset value may be taken as the character feature labels of the target voice.
For example, with a preset value of 0.6, inputting the target voice "I like it so much" into the character feature classification model may yield probabilities of 0.9, 0.1, 0.2, 0.7, 0.15, 0.05, 0, and 1 under the candidate character feature labels "lively", "steady", "child", "youth", "middle-aged", "elderly", "male", and "female", respectively, so its character feature label is determined to be "lively young female".
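The thresholding in this example can be written out as a small sketch. The probabilities and the preset value are the example values from the paragraph above; the classification model that produces them is not implemented here, so this only shows the label-selection step:

```python
# Candidate character feature labels, in the order used in the example above.
LABELS = ["lively", "steady", "child", "youth", "middle-aged",
          "elderly", "male", "female"]

def select_character_labels(probs, preset_value=0.6):
    """Keep every candidate label whose probability exceeds the preset value."""
    return [label for label, p in zip(LABELS, probs) if p > preset_value]

# Example model output for the target voice "I like it so much".
probs = [0.9, 0.1, 0.2, 0.7, 0.15, 0.05, 0, 1]
# select_character_labels(probs) keeps "lively", "youth", and "female",
# which together read as the character feature label "lively young female".
```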
The emotion label may include emotion category information. For example: "joy" may include emotions such as liking, happiness, fondness, gladness, and delight; "anger" may include emotions such as rage, fury, resentment, and loathing; "sorrow" may include emotions such as sadness, grief, pity, sympathy, and mourning; "pleasure" may include emotions such as cheerfulness and delight of body and mind; "surprise" may include emotions such as astonishment, shock, alarm, amazement, and admiration; "fear" may include emotions such as dread, terror, worry, and anxiety; and "longing" may include emotions such as missing and yearning. The emotion label may also include emotion level information, i.e., the degree associated with the emotion category information. For example: the emotion label corresponding to the target voice "I like it so much" is "like very much", where "very" is the emotion level information. For another example: the emotion label corresponding to the target voice "Can you even talk to me properly" is "very angry".
In one example, the emotion label of the target voice may be determined by an emotion classification model. The emotion label of each sample voice is manually annotated; then, based on a large number of sample voices and their manually annotated emotion labels, a neural network model is trained to obtain the emotion classification model. The target voice is input into the emotion classification model to obtain the emotion label of the target voice.
In one embodiment, step S101 may include: converting the target voice into corresponding text information; and obtaining, from a preset expression vocabulary, the classification labels corresponding to the text information. In this embodiment, the picture database may include pictures stored by the user for use as expressions, or the various pictures in expression packs provided by an application service provider. Classification labels may be annotated for each picture in the picture database, and index relations between each classification label and each picture may be established in the expression vocabulary. Further, the expression vocabulary may be updated regularly. The target voice can be converted into corresponding text by speech-to-text technology, and the corresponding classification labels are matched from the expression vocabulary according to the text, thereby obtaining the classification labels of the target voice.
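A minimal sketch of this vocabulary lookup, under the assumption that the speech-to-text step has already produced a transcript; the vocabulary entries and phrase strings below are invented for illustration and are not from the patent:

```python
# Hypothetical preset expression vocabulary: phrase -> classification labels.
EXPRESSION_VOCAB = {
    "speechless": ["awkward", "surprise"],
    "so cute":    ["joy", "like very much"],
}

def speech_to_text(target_voice_audio):
    """Placeholder for a real speech-to-text engine; assumes the input
    is already a transcript string."""
    return target_voice_audio

def labels_from_vocab(text):
    """Match classification labels from the preset expression vocabulary
    by scanning the transcript for known phrases."""
    found = []
    for phrase, labels in EXPRESSION_VOCAB.items():
        if phrase in text:
            found.extend(labels)
    return found
```

In a real system the vocabulary would be indexed against the picture database and refreshed regularly, as the embodiment describes.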
Step S102: obtaining, from the picture database, an expression picture matching the character feature label and the emotion label.
After the character feature label and emotion label corresponding to the target voice are determined, the corresponding expression picture may be matched from the picture database. For example: according to the character feature label "lively young female" and the emotion label "joy" (or "like very much") of the target voice "I like it so much", the corresponding expression picture may be matched from the picture database.
Step S103: recommending the expression picture.
For example: the expression picture may be displayed to the user in a user interface of the chat APP on the user's terminal device. The user may choose a recommended expression picture and send it to the chat partner. Fig. 2-3 shows a schematic diagram of an example in which expression pictures are determined based on the method of the embodiment of the present invention. In Fig. 2-3, the chat APP recommends multiple expression pictures (expression picture 1, expression picture 2, expression picture 3, and expression picture 4) to the user, and the user may select one of them and send it to the chat partner.
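Purely as an illustration (not as part of the claimed invention), the flow of steps S101 to S103 can be sketched as follows. Every name here (`PICTURE_DB`, `determine_labels`, the label strings) is a hypothetical stand-in, and the label-determination step is stubbed with a fixed lookup rather than a real speech model:

```python
# A toy picture database: each entry carries the labels it was annotated with.
PICTURE_DB = [
    {"file": "sticker_01.png", "labels": {"lively young female", "joy"}},
    {"file": "sticker_02.png", "labels": {"steady middle-aged male", "anger"}},
    {"file": "sticker_03.png", "labels": {"lively young female", "anger"}},
]

def determine_labels(target_voice):
    """Step S101: determine classification labels (stubbed with a lookup)."""
    known = {"I like it so much": {"character": "lively young female",
                                   "emotion": "joy"}}
    return known.get(target_voice, {"character": "unknown", "emotion": "neutral"})

def match_pictures(character_label, emotion_label):
    """Step S102: obtain pictures matching both the character feature label
    and the emotion label."""
    return [p["file"] for p in PICTURE_DB
            if character_label in p["labels"] and emotion_label in p["labels"]]

def recommend(target_voice):
    """Step S103: recommend the matched expression pictures."""
    labels = determine_labels(target_voice)
    return match_pictures(labels["character"], labels["emotion"])
```

In a real system, `determine_labels` would be backed by the classification models described above, and the match would run against an indexed picture store rather than a list scan.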
In one embodiment, as shown in Fig. 3, step S102 may include:
Step S301: determining a matching order of the classification labels;
Step S302: obtaining the expression picture from the picture database according to the matching order.
Because a matching order can be determined first when matching pictures from the picture database, the multiple classification labels can then be matched and screened in sequence, so that the recommended expression pictures fit the target voice more closely and are more expressive. The matching order may be set manually in advance based on experience, or obtained by training a model on a large amount of sample data. For example: in a voice chat scenario, the need to express emotion with expression pictures is usually greater than the need to express semantics, so the matching order of the emotion label may be set before those of the semantic label and the character feature label. For another example: in a chat scenario between lovers, the matching order of the character feature label may be set before that of the semantic label. Alternatively, considering that the semantic label can be obtained directly from the text converted from the target voice and can match expression pictures more precisely, the matching order of the semantic label may be set before that of the character feature label.
In one embodiment, step S302 may include: obtaining, from the picture database, at least one first pre-selected picture that matches the classification label ranked first in the matching order; and selecting, from the first pre-selected pictures, at least one expression picture that matches the classification label ranked second in the matching order.
For example: the emotion label is first in the matching order, and the character feature label is second. Then, multiple pictures matching the emotion label may be obtained from the picture database as first pre-selected pictures; then, from the multiple first pre-selected pictures, one or more pictures matching the character feature label are obtained as the expression pictures for recommendation.
In one embodiment, where the classification labels further include a semantic label, step S302 may include: obtaining, from the picture database, at least one first pre-selected picture that matches the classification label ranked first in the matching order; selecting, from the first pre-selected pictures, at least one second pre-selected picture that matches the classification label ranked second in the matching order; and selecting, from the second pre-selected pictures, at least one expression picture that matches the classification label ranked third in the matching order.
For example: the emotion label is first in the matching order, the character feature label is second, and the semantic label is third. Then, multiple pictures matching the emotion label may be obtained from the picture database as first pre-selected pictures; then, from the multiple first pre-selected pictures, one or more pictures matching the character feature label are obtained as second pre-selected pictures; then, from the multiple second pre-selected pictures, the pictures matching the semantic label are obtained as the expression pictures for recommendation.
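The staged screening in this example can be sketched as a single loop over the matching order. The picture metadata and label strings are illustrative, and the choice to keep the previous stage's candidates when a label would eliminate everything is an assumption, not something the text specifies:

```python
def filter_by_order(pictures, ordered_labels):
    """Narrow the candidate set one label at a time, following the
    matching order (first label screens first, and so on)."""
    candidates = pictures
    for label in ordered_labels:
        narrowed = [p for p in candidates if label in p["labels"]]
        if not narrowed:   # assumption: fall back to the previous stage's
            break          # result if a label eliminates every candidate
        candidates = narrowed
    return [p["file"] for p in candidates]

pictures = [
    {"file": "a.png", "labels": {"joy"}},
    {"file": "b.png", "labels": {"joy", "lively young female"}},
    {"file": "c.png", "labels": {"joy", "lively young female", "speechless"}},
]
# Matching order: emotion label first, character feature label second,
# semantic label third.
order = ["joy", "lively young female", "speechless"]
```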
In one embodiment, as shown in Fig. 4, step S102 may include:
Step S401: judging whether an expression picture matching the character feature label and the emotion label can be obtained from the picture database; if no expression picture matching the character feature label and the emotion label is obtained, proceeding to step S402;
Step S402: retrieving, according to the character feature label and/or the emotion label, at least one plain picture for generating the expression picture;
Step S403: adding text to the plain picture to generate the expression picture for recommendation, the text being obtained according to at least one of the classification labels.
In the process of obtaining an expression picture, matching from the preset picture database takes priority. If an expression picture is matched, it is recommended. When no expression picture can be found in the picture database, a plain picture for generating an expression picture may be retrieved from the Internet or another database, text is added to the retrieved plain picture to generate an expression picture, and the generated expression picture is recommended. In one example, according to one or more classification labels of the target voice and the description information of plain pictures, the plain picture whose description information has the highest similarity to the one or more classification labels of the target voice may be found, and then text is added to that plain picture to synthesize a picture as the expression picture.
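The similarity-based fallback in this example can be sketched with a simple word-overlap measure. The descriptions, the overlap measure, and the metadata-only "synthesis" are all illustrative assumptions; a real implementation would compute a proper text similarity and actually draw the text onto the image (e.g., with an image library):

```python
def similarity(description, labels):
    """Count label words that appear in the picture's description
    (a crude stand-in for a real text-similarity measure)."""
    desc_words = set(description.lower().split())
    label_words = set(" ".join(labels).lower().split())
    return len(desc_words & label_words)

def best_plain_picture(plain_pictures, labels):
    """Pick the plain picture whose description best matches the labels."""
    return max(plain_pictures, key=lambda p: similarity(p["description"], labels))

def make_expression_picture(plain_picture, labels):
    """Synthesize an expression picture by attaching the label text
    (metadata only here; no actual image rendering)."""
    return {"base": plain_picture["file"], "text": " ".join(labels)}
```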
In one example, step S102 may include: step S401, judging whether an expression picture matching the character feature label and the emotion label can be obtained from the picture database; if no matching expression picture is obtained, randomly obtaining at least one plain picture for generating the expression picture; and step S403, adding text to the plain picture to generate the expression picture for recommendation, the text being obtained according to at least one of the classification labels. That is, a plain picture for generating the expression picture may be obtained at random, and text is added to it to synthesize a picture as the expression picture.
In this embodiment, the added text may be all or part of the information of the classification labels. If there are multiple classification labels, the choice may be preset. For example: the emotion level information and emotion category information of the emotion label may be set as the added text.
The method of the embodiment of the present invention can establish multiple classification labels for the target voice and match expression pictures along multiple dimensions, so that the recommended expression pictures better reflect the user's needs. The character feature label allows the recommended expression pictures to fit the user's character traits, and the emotion label allows them to fit the user's emotional expression. Therefore, the recommendation method for expression pictures of the embodiment of the present invention can improve the reasonableness of the recommendation and optimize the user experience. Further, when no expression picture corresponding to the classification labels can be obtained from the picture database, text can be added to a plain picture to automatically generate an expression picture, thereby improving the success rate of obtaining expression pictures.
Fig. 5 shows a structural block diagram of a recommendation apparatus for expression pictures according to an embodiment of the present invention. As shown in Fig. 5, the apparatus may include:
a classification label determining module 501, configured to determine classification labels corresponding to a target voice, the classification labels including a character feature label and an emotion label;
an expression picture obtaining module 502, configured to obtain, from a picture database, an expression picture matching the character feature label and the emotion label;
a recommending module 503, configured to recommend the expression picture.
In one embodiment, as shown in Fig. 6, the expression picture obtaining module 502 includes:
a matching order determining submodule 601, configured to determine the matching order of the classification labels;
an expression picture obtaining submodule 602, configured to obtain the expression picture from the picture database according to the matching order.
In one embodiment, the expression picture obtaining submodule 602 includes:
a first obtaining unit, configured to obtain, from the picture database, at least one first pre-selected picture that matches the classification label ranked first in the matching order;
a first screening unit, configured to select, from the first pre-selected pictures, at least one expression picture that matches the classification label ranked second in the matching order.
In one embodiment, the tag along sort further includes a semantic label, and the expression picture acquisition submodule 602 includes:
a first acquisition unit, configured to acquire at least one first pre-selection picture from the picture database, the first pre-selection picture matching the tag along sort ranked first in the matching order;
a second acquisition unit, configured to screen out at least one second pre-selection picture from the first pre-selection pictures, the second pre-selection picture matching the tag along sort ranked second in the matching order;
a second screening unit, configured to screen out at least one expression picture from the second pre-selection pictures, the expression picture matching the tag along sort ranked third in the matching order.
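Both variants above describe the same cascade: filter the candidate set by one tag at a time, in matching-order sequence, so the survivors of each round become the pre-selection pictures for the next. A compact sketch of that cascade, where the label names and the chosen order are illustrative assumptions:

```python
def filter_by_matching_order(pictures, tags, matching_order):
    """Filter pictures tag by tag: the survivors of round k (the k-th
    pre-selection pictures) are filtered again by the (k+1)-th tag along
    sort in the matching order."""
    candidates = pictures
    for tag_name in matching_order:
        candidates = [p for p in candidates if p.get(tag_name) == tags[tag_name]]
        if not candidates:
            break  # nothing matches this tag; the caller may fall back
    return candidates

pictures = [
    {"id": 1, "personage_feature": "young", "affective": "happy", "semantic": "greeting"},
    {"id": 2, "personage_feature": "young", "affective": "happy", "semantic": "farewell"},
    {"id": 3, "personage_feature": "old", "affective": "happy", "semantic": "greeting"},
]
tags = {"personage_feature": "young", "affective": "happy", "semantic": "greeting"}
order = ["personage_feature", "affective", "semantic"]  # assumed matching order
print([p["id"] for p in filter_by_matching_order(pictures, tags, order)])  # [1]
```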
In one embodiment, as shown in Fig. 7, the expression picture obtaining module 502 includes:
a judging submodule 701, configured to judge whether an expression picture matching the personage's feature label and the affective tag is obtained from the picture database;
a conventional picture acquisition submodule 702, configured to obtain a conventional picture in the case where no expression picture matching the personage's feature label and the affective tag has been obtained;
a generating submodule 703, configured to add text to the conventional picture to generate the expression picture for recommendation, the text being obtained according to at least one tag along sort.
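A plausible rendering of the fallback in submodules 702 and 703 is sketched below using the third-party Pillow library; the patent does not name an imaging toolkit, and the file names and caption layout are invented for illustration:

```python
from PIL import Image, ImageDraw  # third-party: pip install Pillow

def generate_expression_picture(conventional_path, text, output_path):
    """Fallback of submodule 703: caption a conventional picture with text
    derived from a tag along sort (e.g. the affective tag), producing an
    expression picture when no database match exists."""
    image = Image.open(conventional_path).convert("RGB")
    draw = ImageDraw.Draw(image)
    # Draw the caption near the bottom edge, a common meme-style layout.
    draw.text((10, image.height - 20), text, fill="white")
    image.save(output_path)

# Example: turn a plain picture into a captioned expression picture.
# generate_expression_picture("conventional_cat.png", "so happy!", "expression.png")
```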
In one embodiment, the tag along sort determining module 501 may include:
a converting submodule, configured to convert the target voice into corresponding text information;
a tag along sort acquisition submodule, configured to obtain, from a preset expression vocabulary, the tag along sort corresponding to the text information.
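The preset expression vocabulary can be thought of as a keyword-to-label mapping consulted after speech-to-text conversion. A minimal sketch, in which the vocabulary contents and label values are invented for illustration:

```python
# Hypothetical preset expression vocabulary: keyword -> tag along sorts.
EXPRESSION_VOCABULARY = {
    "haha": {"affective": "happy"},
    "ugh": {"affective": "annoyed"},
    "grandpa": {"personage_feature": "elderly_male"},
}

def tags_from_text(text_information):
    """Stand-in for the tag along sort acquisition submodule: collect every
    label whose keyword appears in the recognized text."""
    tags = {}
    for keyword, labels in EXPRESSION_VOCABULARY.items():
        if keyword in text_information.lower():
            tags.update(labels)
    return tags

print(tags_from_text("Haha, tell grandpa I said hi"))
# finds both an affective tag and a personage's feature label via keyword match
```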
In one embodiment, the personage's feature label includes at least one of personality information, age information, and gender information; the affective tag includes emotion class information and emotional category information.
In one embodiment, the tag along sort determining module 501 may include:
an input submodule, configured to input the target voice into a personage's feature classification model, the personage's feature classification model being obtained by training a neural network model on sample voices and their corresponding labeled data;
a personage's feature label determining submodule, configured to determine the personage's feature label of the target voice according to the probabilities of multiple pre-selection personage's feature labels output by the personage's feature classification model.
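In the simplest reading, choosing the personage's feature label from the model's per-label probabilities is an argmax, possibly gated by a confidence threshold; the threshold and label names below are assumptions, not taken from the patent:

```python
def select_personage_label(probabilities, threshold=0.5):
    """Pick the pre-selection personage's feature label with the highest
    probability, keeping it only if the model is confident enough."""
    label, prob = max(probabilities.items(), key=lambda item: item[1])
    return label if prob >= threshold else None

model_output = {"young_female": 0.72, "young_male": 0.18, "elderly_female": 0.10}
print(select_personage_label(model_output))  # young_female
```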
For the functions of the modules in the devices of the embodiments of the present invention, reference may be made to the corresponding descriptions in the above method, which will not be repeated here.
Fig. 8 shows a structural block diagram of the recommendation equipment for expression pictures according to an embodiment of the present invention. As shown in Fig. 8, the equipment may include a memory 801 and a processor 802, where the memory 801 stores a computer program executable on the processor 802. The processor 802 implements the recommendation method of expression pictures in the above embodiments when executing the computer program. There may be one or more memories 801 and processors 802.
The equipment may further include:
a communication interface 803, configured to communicate with external devices for data interaction.
The memory 801 may include a high-speed RAM memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage.
If the memory 801, the processor 802, and the communication interface 803 are implemented independently, they may be connected to each other via a bus to complete mutual communication. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in Fig. 8, but this does not mean that there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 801, the processor 802, and the communication interface 803 are integrated on one chip, they may complete mutual communication through an internal interface.
An embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements any of the methods in the above embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, without contradiction, those skilled in the art may combine different embodiments or examples described in this specification, as well as features of different embodiments or examples.
In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance, or as implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically limited.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention includes other implementations in which functions may be performed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in combination with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in combination with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wirings, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and so on.
Those skilled in the art will understand that all or part of the steps carried in the methods of the above embodiments can be implemented by instructing relevant hardware through a program, which may be stored in a computer-readable storage medium; when executed, the program performs one of, or a combination of, the steps of the method embodiments.
In addition, each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of various changes or replacements within the technical scope disclosed by the present invention, and these should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (14)

CN201910588794.0A | 2019-07-02 | 2019-07-02 | Recommended method, device, equipment and the storage medium of expression picture | Pending | CN110297928A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201910588794.0A | 2019-07-02 | 2019-07-02 | Recommended method, device, equipment and the storage medium of expression picture


Publications (1)

Publication Number | Publication Date
CN110297928A | 2019-10-01

Family

ID=68029819

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201910588794.0A (Pending; CN110297928A) | Recommended method, device, equipment and the storage medium of expression picture | 2019-07-02 | 2019-07-02

Country Status (1)

Country | Link
CN (1) | CN110297928A (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN104021373A (en)* | 2014-05-27 | 2014-09-03 | Jiangsu University | Semi-supervised speech feature variable factor decomposition method
CN104063427A (en)* | 2014-06-06 | 2014-09-24 | Beijing Sogou Technology Development Co., Ltd. | Expression input method and device based on semantic understanding
CN106297826A (en)* | 2016-08-18 | 2017-01-04 | Zhujian Intelligent Technology (Shanghai) Co., Ltd. | Speech emotion recognition system and method
CN106933561A (en)* | 2015-12-31 | 2017-07-07 | Beijing Sogou Technology Development Co., Ltd. | Voice input method and terminal device
CN107977928A (en)* | 2017-12-21 | 2018-05-01 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Expression generation method, apparatus, terminal and storage medium
US20180130459A1 (en)* | 2016-11-09 | 2018-05-10 | Microsoft Technology Licensing, LLC | User interface for generating expressive content
CN108701125A (en)* | 2015-12-29 | 2018-10-23 | MZ IP Holdings, LLC | System and method for suggesting emoticons
CN109302339A (en)* | 2018-09-10 | 2019-02-01 | Guo Suying | Personalized speech implementation method and platform
CN109460485A (en)* | 2018-10-12 | 2019-03-12 | Migu Culture Technology Co., Ltd. | Image library establishing method and device and storage medium


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN110798397A (en)* | 2019-10-25 | 2020-02-14 | Vivo Mobile Communication Co., Ltd. | File transmission method, device and electronic device
CN112783332A (en)* | 2019-11-04 | 2021-05-11 | Beijing Sogou Technology Development Co., Ltd. | Information recommendation method and device and electronic equipment
CN113051427A (en)* | 2019-12-10 | 2021-06-29 | Huawei Technologies Co., Ltd. | Expression making method and device
US11941323B2 | 2019-12-10 | 2024-03-26 | Huawei Technologies Co., Ltd. | Meme creation method and apparatus
CN111291184A (en)* | 2020-01-20 | 2020-06-16 | Baidu Online Network Technology (Beijing) Co., Ltd. | Expression recommendation method, device, equipment and storage medium
CN111368127A (en)* | 2020-03-06 | 2020-07-03 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, image processing device, computer equipment and storage medium
CN111368127B (en)* | 2020-03-06 | 2023-03-24 | Tencent Technology (Shenzhen) Co., Ltd. | Image processing method, image processing device, computer equipment and storage medium
CN113434717A (en)* | 2020-03-20 | 2021-09-24 | Huawei Technologies Co., Ltd. | Content recommendation method and device, electronic equipment and storage medium
CN111460177A (en)* | 2020-03-27 | 2020-07-28 | Beijing QIYI Century Science & Technology Co., Ltd. | Method and device for searching film and television expression, storage medium and computer equipment
CN111460177B (en)* | 2020-03-27 | 2023-12-15 | Beijing QIYI Century Science & Technology Co., Ltd. | Video expression search method and device, storage medium and computer equipment
CN112383666A (en)* | 2020-11-09 | 2021-02-19 | Vivo Mobile Communication Co., Ltd. | Content sending method and device and electronic equipment
CN112650399A (en)* | 2020-12-22 | 2021-04-13 | iFlytek Co., Ltd. | Expression recommendation method and device
CN112650399B (en)* | 2020-12-22 | 2023-12-01 | iFlytek Co., Ltd. | Expression recommendation method and device
CN112634407A (en)* | 2020-12-31 | 2021-04-09 | Beijing Jietong Huasheng Technology Co., Ltd. | Method and device for drawing image
CN114553810A (en)* | 2022-02-22 | 2022-05-27 | Guangzhou Boguan Information Technology Co., Ltd. | Expression picture synthesis method and device, electronic equipment

Similar Documents

Publication | Title
CN110297928A (en) | Recommended method, device, equipment and the storage medium of expression picture
Mazzei | Following the contour of concepts toward a minor inquiry
Saxton | A true story: defining accuracy and authenticity in historical fiction
Boxenbaum et al. | Towards an articulation of the material and visual turn in organization studies
Pennington | Coding of non-text data
Hancock et al. | Influence of communication partner's gender on language
KR101160193B1 (en) | Affect and Voice Compounding Apparatus and Method therefor
KR20160089152A (en) | Method and computer system of analyzing communication situation based on dialogue act information
Stamou | Synthesizing critical discourse analysis with language ideologies: The example of fictional discourse
Freeman et al. | Thematic analysis
CN110442867B (en) | Image processing method, device, terminal and computer storage medium
CN114037946A (en) | Video classification method and device, electronic equipment and medium
CN115982400A (en) | Multi-mode-based emotion image generation method and server
Tomassini et al. | The case for linguistic narrative analysis, illustrated studying small firms in tourism
CN115186148A (en) | Man-machine interaction system and method for realizing digital immortal
Thomson | Indexing and interpreting emotion: Joy and shame in oral history
Hermes et al. | Hating Skyler White: audience engagement, gender politics and celebrity culture
Rovisco et al. | Introduction: Performance and citizenship
Vitevitch et al. | The influence of known-word frequency on the acquisition of new neighbours in adults: Evidence for exemplar representations in word learning
KR102388465B1 (en) | Virtual contents creation method
Blomberg et al. | Children helping to co-construct a digital tool that is designed to increase children's participation in child welfare investigations in Sweden
Badley | Autoethnography as practice and process: Toward an honest appraisal?
Leigh et al. | Making visible an invisible trade: Exploring the everyday experiences of doing social work and being a social worker
Donohue et al. | Understanding the role of language in conflict
Li | Sensitizing social interaction with a mode-enhanced transcribing process

Legal Events

Code | Title | Description
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-10-01

