CN111639162A - Information interaction method and device, electronic equipment and storage medium - Google Patents

Information interaction method and device, electronic equipment and storage medium

Info

Publication number
CN111639162A
Authority
CN
China
Prior art keywords
user
corpus
feature
sentence
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010496181.7A
Other languages
Chinese (zh)
Inventor
陈迪
朱坤广
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seashell Housing Beijing Technology Co Ltd
Original Assignee
Beike Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beike Technology Co Ltd
Priority to CN202010496181.7A
Publication of CN111639162A
Legal status: Pending

Abstract

Embodiments of the present disclosure disclose an information interaction method and apparatus, an electronic device, and a storage medium. The method includes: in response to receiving a sentence sent by a user, generating a user sentence feature based on the sentence; acquiring a user intention feature corresponding to the user sentence feature; and generating, using a first neural network model, a reply sentence to the sentence sent by the user based on the user sentence feature, the user intention feature, and a user preference feature generated from a user portrait of the user. Embodiments of the present disclosure can automatically generate accurate reply sentences, improving the reply speed and the efficiency of communication between users and customer service personnel. Because the reply sentences better match user requirements, user experience and the user reply rate are improved, which helps provide users with products and services that better meet their needs.

Description

Information interaction method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to internet technologies, and in particular, to an information interaction method and apparatus, an electronic device, and a storage medium.
Background
With the popularization of the internet and electronic commerce, more and more online services such as commodity transactions, project consultation, and professional services are provided through the internet, and online communication between users and customer service personnel is increasingly required. In practice, customer service personnel often lack professional, systematic training in communication skills and reply to users entirely from personal experience. The resulting low quality of dialogue fails to meet user needs, reduces users' willingness to communicate, and loses a large number of potential users. The quality of customer service dialogue therefore directly affects the user reply rate, determines whether the entire online process can be completed, and directly affects enterprise performance.
Intelligent customer service is an industry application of artificial intelligence built on large-scale knowledge processing. It collects a large number of dialogue records between users and customer service personnel in advance, extracts and classifies the questions and answers in those records, and stores them in a knowledge base for later use. At run time, the intelligent customer service analyzes the major category of question the user may want to consult, obtains a list of sub-questions under that category, and feeds the list back to the user, who then selects the sub-question of interest. The user may need to select sub-questions step by step until reaching a final-level question, at which point the intelligent customer service retrieves the existing knowledge from the knowledge base and returns it to the user.
In the course of implementing the present disclosure, the inventors found through research that existing intelligent customer service has at least the following problems: after asking a question, the user must further select the sub-question to be consulted, level by level, from the question lists fed back by the intelligent customer service. The operation is cumbersome, the user cannot directly obtain the desired answer, and the user experience is poor. Moreover, the knowledge base may not record questions comprehensively or classify them accurately, and a preset question list may not contain the question the user wants to consult, so the user cannot ask the question or obtain feedback, and the user is lost.
Disclosure of Invention
Embodiments of the present disclosure provide an information interaction method and apparatus, an electronic device, and a storage medium, to assist customer service personnel in providing high-quality online communication services to users.
In one aspect of the embodiments of the present disclosure, an information interaction method is provided, including:
in response to receiving a sentence sent by a user, generating a user sentence feature based on the sentence sent by the user;
acquiring a user intention feature corresponding to the user sentence feature;
generating, using a first neural network model, a reply sentence to the sentence sent by the user based on the user sentence feature, the user intention feature, and a user preference feature generated from a user portrait of the user.
In a further embodiment based on any one of the above method embodiments of the present disclosure, the generating a user sentence feature based on the sentence sent by the user includes:
performing word segmentation on the sentence sent by the user to obtain at least one word, each a minimum semantic unit;
and performing word-to-vector conversion on the at least one word by using a word-to-vector (word2vec) technique to obtain the user sentence feature.
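The segmentation-plus-word2vec pipeline above can be sketched as follows. This is a minimal stand-in, not the patented implementation: whitespace splitting substitutes for a real Chinese word segmenter, a deterministic hash substitutes for a trained word2vec lookup table, and averaging the word vectors into one sentence feature is an assumption, since the patent does not specify how word vectors are combined.

```python
import hashlib

DIM = 8  # toy embedding dimension; real word2vec models use e.g. 100-300

def word_vector(word):
    """Deterministic toy embedding. In practice this lookup would come from
    a trained word2vec model; a hash stands in to keep the sketch runnable."""
    h = hashlib.md5(word.encode("utf-8")).digest()
    return [b / 255.0 for b in h[:DIM]]

def sentence_feature(sentence):
    # Step 1: word segmentation into minimum semantic units
    # (whitespace split stands in for a Chinese segmenter such as jieba).
    words = sentence.split()
    # Step 2: word-to-vector conversion, then average into one fixed-length
    # user sentence feature.
    vecs = [word_vector(w) for w in words]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

feat = sentence_feature("which apartment fits a family of three")
print(len(feat))  # DIM-dimensional user sentence feature
```

Because the hash is deterministic, the same sentence always yields the same feature, which is the property the later retrieval and training steps rely on.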
In a further embodiment based on any one of the above method embodiments of the present disclosure, the obtaining a user intention feature corresponding to the user sentence feature includes:
acquiring the user intention feature based on the user sentence feature by using a natural language understanding technique.
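As one hedged illustration of mapping a sentence feature to an intention feature, the sketch below uses nearest-centroid matching by cosine similarity. The intent labels and centroid vectors are invented for the example; a production natural language understanding component would be a trained classifier, which the patent does not detail.

```python
import math

def cosine(a, b):
    # cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical intent centroids (in practice produced by an NLU model)
INTENTS = {
    "ask_price": [0.9, 0.1, 0.0, 0.0],
    "ask_location": [0.1, 0.9, 0.0, 0.0],
}

def intent_feature(sent_feat):
    """Return the best-matching intent label and a one-hot intention feature."""
    best = max(INTENTS, key=lambda k: cosine(sent_feat, INTENTS[k]))
    return best, [1.0 if k == best else 0.0 for k in INTENTS]

label, feat = intent_feature([0.8, 0.2, 0.0, 0.0])
print(label)  # ask_price
```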
In a further embodiment based on any one of the above method embodiments of the present disclosure, before generating a reply sentence to the sentence sent by the user based on the user sentence feature, the user intention feature, and a user preference feature generated from the user portrait of the user, the method further includes:
obtaining the user portrait of the user from a user database based on a user identification (ID) of the user;
performing word segmentation on the user portrait to obtain at least one word, each a minimum semantic unit;
and performing word-to-vector conversion, using a word-to-vector technique, on the at least one word obtained by segmenting the user portrait, to obtain the user preference feature.
In a further embodiment based on any one of the above method embodiments of the present disclosure, the generating a reply sentence to the sentence sent by the user based on the user sentence feature, the user intention feature, and a user preference feature generated from a user portrait of the user includes:
inputting the user sentence feature, the user intention feature, and the user preference feature into the first neural network model, and outputting a reply sentence feature through the first neural network model;
and acquiring the reply sentence corresponding to the reply sentence feature.
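The forward pass of this embodiment can be sketched as follows, assuming the three features are fixed-length vectors that are concatenated and mapped through one dense layer. A single tanh layer is a deliberate simplification standing in for the first neural network model, and the weights are hand-picked for illustration only.

```python
import math

def linear(x, w, b):
    # y[i] = sum_j w[i][j] * x[j] + b[i]
    return [sum(wi[j] * x[j] for j in range(len(x))) + bi
            for wi, bi in zip(w, b)]

def reply_feature(sent_feat, intent_feat, pref_feat, w, b):
    """Concatenate the user sentence, intention, and preference features and
    map them to a reply sentence feature. One tanh layer stands in for the
    first neural network model, which in practice would be a deeper
    sequence-to-sequence generator."""
    x = sent_feat + intent_feat + pref_feat
    return [math.tanh(v) for v in linear(x, w, b)]

# toy 2+2+2 -> 2 mapping with hypothetical weights
w = [[0.1, 0.2, 0.3, 0.0, 0.1, 0.0],
     [0.0, 0.1, 0.0, 0.3, 0.2, 0.1]]
b = [0.0, 0.0]
out = reply_feature([0.5, 0.5], [1.0, 0.0], [0.2, 0.8], w, b)
print(len(out))  # 2
```

The reply sentence feature would then be decoded back into text, a step the patent describes only as "acquiring the reply sentence corresponding to the reply sentence feature".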
In a further embodiment of any of the above method embodiments based on the present disclosure, the training of the first neural network model comprises:
respectively inputting each group of first sample corpus features among at least one group of first sample corpus features into the first neural network model, and outputting at least one corresponding reply corpus feature through the first neural network model; wherein each group of first sample corpus features includes: a first corpus feature generated based on a user corpus in a group of first sample dialogue corpora, a first intention feature obtained based on the first corpus feature, and a sample user preference feature generated based on a user portrait of the user corresponding to the user corpus; the first sample corpus features carry labeling information in the form of a second corpus feature generated based on the customer service staff corpus in the first sample dialogue corpora;
training the first neural network model based on the difference between the at least one reply corpus feature and the second corpus feature of the corresponding first sample corpus features.
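The training step above, shrinking the difference between the model's reply corpus feature and the labeled second corpus feature, can be sketched with a linear model and mean squared error. Both choices are assumptions: the patent specifies neither the network architecture nor the loss function.

```python
def linear(x, w, b):
    return [sum(wi[j] * x[j] for j in range(len(x))) + bi
            for wi, bi in zip(w, b)]

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def train_step(x, target, w, b, lr=0.1):
    """One SGD step reducing the difference between the predicted reply
    corpus feature and the labeled second corpus feature (a linear model
    stands in for the first neural network)."""
    pred = linear(x, w, b)
    for i in range(len(w)):
        g = 2.0 * (pred[i] - target[i]) / len(pred)  # dMSE/dpred[i]
        for j in range(len(x)):
            w[i][j] -= lr * g * x[j]
        b[i] -= lr * g
    return mse(linear(x, w, b), target)

w, b = [[0.0, 0.0], [0.0, 0.0]], [0.0, 0.0]
x, target = [1.0, 0.5], [0.4, -0.2]  # sample input feature and label
losses = [train_step(x, target, w, b) for _ in range(20)]
print(losses[0] > losses[-1])  # loss decreases as training proceeds
```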
In a further embodiment of any one of the above method embodiments based on the present disclosure, before the respectively inputting each of the at least one group of first sample corpus features into the first neural network model, the method further includes:
for the first sample dialogue corpora of each group of users and customer service staff, respectively generating the first corpus feature based on the user corpus in the first sample dialogue corpora, the sample user preference feature based on the user portrait of the corresponding user, and the second corpus feature based on the customer service staff corpus in the first sample dialogue corpora;
and acquiring a first intention characteristic corresponding to the first corpus characteristic.
In a further embodiment of any one of the method embodiments described above based on the present disclosure, the generating the first corpus feature based on the user corpus in the first sample dialog corpus includes: performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics;
the generating the sample user preference feature based on the user portrait of the corresponding user includes: segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the sample user preference feature;
the generating the second corpus feature based on the customer service person corpus in the first sample dialogue corpus comprises: performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature;
the obtaining of the first intention feature corresponding to the first corpus feature includes: and acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology.
In a further embodiment based on any of the above method embodiments of the present disclosure, further comprising:
selecting at least one group of dialogue corpora meeting a preset quality requirement from a dialogue corpus;
and removing invalid characters from the at least one group of dialogue corpora to obtain at least one group of first sample dialogue corpora.
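Removing invalid characters from the selected dialogue corpora might look like the following. The set of "invalid" characters is an assumption, since the patent does not enumerate it; control characters, residual markup, and redundant whitespace are plausible candidates.

```python
import re

def clean_dialogue(turns):
    """Clean each dialogue turn; empty turns are dropped. The notion of
    'invalid character' is assumed, not taken from the patent."""
    cleaned = []
    for t in turns:
        t = re.sub(r"<[^>]+>", "", t)          # strip residual markup tags
        t = re.sub(r"[\x00-\x1f\x7f]", "", t)  # strip control characters
        t = re.sub(r"\s+", " ", t).strip()     # collapse whitespace
        if t:
            cleaned.append(t)
    return cleaned

print(clean_dialogue(["hello<br>there\x07 ", "  ", "price?\n"]))
# ['hellothere', 'price?']
```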
In a further embodiment of any of the above method embodiments based on the present disclosure, after outputting the reply sentence feature via the first neural network model, the method further comprises:
acquiring quality evaluation information of the reply sentence based on the user preference feature, the user sentence feature, the user intention feature and the reply sentence feature by using a second neural network model;
based on the quality evaluation information of the reply sentence, if the quality of the reply sentence reaches a preset quality standard, acquiring the reply sentence corresponding to the reply sentence feature and outputting the reply sentence;
and if the quality of the reply sentence does not reach the preset quality standard, outputting a reply sentence based on a preset mode.
In a further embodiment of any of the above method embodiments based on the present disclosure, the obtaining, by using the second neural network model, quality assessment information of the reply sentence based on the user preference feature, the user sentence feature, the user intention feature, and the reply sentence feature includes:
acquiring the intention characteristic of the reply sentence based on the reply sentence characteristic by utilizing a natural language understanding technology;
inputting, into the second neural network model, a first spliced feature obtained by splicing the user preference feature, the user sentence feature, and the user intention feature, together with a second spliced feature obtained by splicing the reply sentence feature and the intention feature of the reply sentence, and outputting quality evaluation information of the reply sentence through the second neural network model.
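The splicing and quality-gating logic of this embodiment can be sketched as follows. The sigmoid scorer is a toy stand-in for the second neural network model, the 0.5 threshold is an assumed value for the preset quality standard, and the fallback string illustrates one possible "preset mode" (the patent leaves the fallback behavior open).

```python
import math

def splice(*feats):
    # "splicing" in the patent text means feature concatenation
    out = []
    for f in feats:
        out.extend(f)
    return out

def quality_score(first_spliced, second_spliced, w):
    """Toy stand-in for the second neural network model: a weighted sum of
    the two spliced features squashed to (0, 1). The weights w are
    hypothetical."""
    x = splice(first_spliced, second_spliced)
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

QUALITY_STANDARD = 0.5  # preset quality standard; exact value is assumed

def select_reply(generated_reply, score, fallback="transfer to a human agent"):
    # Output the generated reply only if it meets the preset quality
    # standard; otherwise fall back to a preset mode.
    return generated_reply if score >= QUALITY_STANDARD else fallback

s = quality_score([0.5, 0.5], [0.8], [1.0, 1.0, 1.0])
print(select_reply("the apartment is 80 sqm", s))
```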
In a further embodiment of any of the above method embodiments based on the present disclosure, the training of the second neural network model comprises:
respectively inputting each group of second sample corpus features among at least one group of second sample corpus features into the second neural network model, and outputting quality evaluation information of the second corpus features in the at least one group of second sample corpus features through the second neural network model; wherein each group of second sample corpus features includes a first sample spliced feature and a second sample spliced feature; the first sample spliced feature is obtained by splicing a sample user preference feature generated based on a user portrait of the user corresponding to a user corpus, a first corpus feature generated based on the user corpus in a group of second sample dialogue corpora, and a first intention feature obtained based on the first corpus feature; the second sample spliced feature is obtained by splicing a second corpus feature generated based on the customer service staff corpus in the second sample dialogue corpora and a second intention feature obtained based on the second corpus feature; the second corpus features carry quality labeling information;
and training the second neural network model based on the difference between the quality evaluation information of the second corpus features in the at least one group of second sample corpus features and the corresponding quality labeling information.
In a further embodiment based on any one of the above method embodiments of the present disclosure, before the respectively inputting each of the at least one set of second sample corpus features into the second neural network model, the method further includes:
for the second sample dialogue corpora of each group of users and customer service staff, respectively generating the first corpus feature based on the user corpus in the second sample dialogue corpora, the sample user preference feature based on the user portrait of the corresponding user, and the second corpus feature based on the customer service staff corpus in the second sample dialogue corpora;
acquiring a first intention feature corresponding to the first corpus feature, and acquiring a second intention feature corresponding to the second corpus feature;
and splicing the sample user preference feature, the first corpus feature, and the first intention feature to obtain the first sample spliced feature, and splicing the second corpus feature and the second intention feature to obtain the second sample spliced feature.
In a further embodiment of any one of the above method embodiments based on the present disclosure, the generating the first corpus feature based on the user corpus in the second sample dialog corpus includes: performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics;
the generating the sample user preference feature based on the user portrait of the corresponding user includes: segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the sample user preference feature;
generating the second corpus features based on the customer service person corpus in the second sample dialog corpus comprises: performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature;
the obtaining of the first intention feature corresponding to the first corpus feature includes: acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology;
the obtaining of the second intention characteristic corresponding to the second corpus characteristic includes: and acquiring the second intention characteristic based on the second corpus characteristic by utilizing a natural language understanding technology.
In a further embodiment based on any of the above method embodiments of the present disclosure, further comprising:
selecting at least one group of dialogue corpora from a dialogue corpus;
and removing invalid characters from the at least one group of dialogue corpora to obtain at least one group of second sample dialogue corpora.
In another aspect of the disclosed embodiments, an information interaction apparatus is provided, which includes:
a first generation module, configured to generate, in response to receiving a sentence sent by a user, a user sentence feature based on the sentence sent by the user;
a first acquisition module, configured to acquire a user intention feature corresponding to the user sentence feature;
and a second generation module, configured to generate, using the first neural network model, a reply sentence to the sentence sent by the user based on the user sentence feature, the user intention feature, and a user preference feature generated from the user portrait of the user.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, the first generating module comprises:
the word segmentation unit is used for performing word segmentation on the sentences sent by the user to obtain at least one word with the minimum semantic unit;
and the conversion unit is used for performing word-to-vector conversion on the at least one word by using a word-to-vector technology to obtain the user sentence characteristics.
In a further embodiment based on any one of the apparatus embodiments of the present disclosure, the first obtaining module is specifically configured to: and acquiring the user intention characteristic based on the user sentence characteristic by utilizing a natural language understanding technology.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, further comprising:
a second obtaining module, configured to obtain a user portrait of the user from a user database based on a user identification (ID) of the user;
the first generation module is further configured to perform word segmentation on the user portrait to obtain at least one word, each a minimum semantic unit, and to perform word-to-vector conversion, using a word-to-vector technology, on the at least one word obtained by segmenting the user portrait, to obtain the user preference feature.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, the second generating module comprises:
a first neural network model for inputting the user sentence feature, the user intention feature and the user preference feature into the first neural network model and outputting a reply sentence feature via the first neural network model;
and the acquisition unit is used for acquiring the reply sentences corresponding to the reply sentence characteristics.
In a further embodiment based on any one of the above apparatus embodiments of the present disclosure, the first neural network model is further configured to receive each group of input first sample corpus features among at least one group of first sample corpus features, and output at least one corresponding reply corpus feature; wherein each group of first sample corpus features includes: a first corpus feature generated based on a user corpus in a group of first sample dialogue corpora, a first intention feature obtained based on the first corpus feature, and a sample user preference feature generated based on a user portrait of the user corresponding to the user corpus; the first sample corpus features carry labeling information in the form of a second corpus feature generated based on the customer service staff corpus in the first sample dialogue corpora;
the device further comprises:
a first training module, configured to train the first neural network model based on a difference between at least one of the reply corpus features and the second corpus feature of the corresponding first sample corpus feature.
In a further embodiment based on any one of the above apparatus embodiments of the present disclosure, the first generating module is further configured to generate, for the first sample dialogue corpora of each group of users and customer service staff, the first corpus feature based on the user corpus in the first sample dialogue corpora, the sample user preference feature based on the user portrait of the corresponding user, and the second corpus feature based on the customer service staff corpus in the first sample dialogue corpora, respectively;
the first obtaining module is further configured to obtain a first intention feature corresponding to the first corpus feature.
In a further embodiment based on any one of the apparatus embodiments of the present disclosure, the first generating module is specifically configured to:
performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics;
segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the preference characteristics of the sample user;
performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature;
the first obtaining module is specifically configured to: and acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, further comprising:
a selection module, configured to select at least one group of dialogue corpora meeting the preset quality requirement from the dialogue corpus;
and a removal module, configured to remove invalid characters from the at least one group of dialogue corpora to obtain at least one group of first sample dialogue corpora.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, further comprising:
a third obtaining module, configured to obtain, using a second neural network model, quality evaluation information of the reply sentence based on the user preference feature, the user sentence feature, the user intention feature, and the reply sentence feature after the first neural network model outputs the reply sentence feature;
the acquiring unit is configured to, based on the quality evaluation information of the reply sentence, acquire and output the reply sentence corresponding to the reply sentence feature if the quality of the reply sentence reaches a preset quality standard;
and a third generation module, configured to output a reply sentence based on a preset mode if, based on the quality evaluation information of the reply sentence, the quality of the reply sentence does not reach the preset quality standard.
In a further embodiment based on any one of the apparatus embodiments described above, the first obtaining module is further configured to obtain, by using a natural language understanding technology, an intention feature of the reply sentence based on the reply sentence feature;
the device further comprises:
a splicing module, configured to splice the user preference feature, the user sentence feature, and the user intention feature to obtain a first spliced feature, and to splice the reply sentence feature and the intention feature of the reply sentence to obtain a second spliced feature;
the third obtaining module is specifically configured to input the first spliced feature and the second spliced feature into the second neural network model, and output the quality evaluation information of the reply sentence through the second neural network model.
In a further embodiment based on any one of the above apparatus embodiments of the present disclosure, the second neural network model is further configured to: respectively receive each group of input second sample corpus features among at least one group of second sample corpus features, and output quality evaluation information of the second corpus features in the at least one group of second sample corpus features; wherein each group of second sample corpus features includes a first sample spliced feature and a second sample spliced feature; the first sample spliced feature is obtained by splicing a sample user preference feature generated based on a user portrait of the user corresponding to a user corpus, a first corpus feature generated based on the user corpus in a group of second sample dialogue corpora, and a first intention feature obtained based on the first corpus feature; the second sample spliced feature is obtained by splicing a second corpus feature generated based on the customer service staff corpus in the second sample dialogue corpora and a second intention feature obtained based on the second corpus feature; the second corpus features carry quality labeling information;
the device further comprises:
and a second training module, configured to train the second neural network model based on the difference between the quality evaluation information of the second corpus features in the at least one group of second sample corpus features and the corresponding quality labeling information.
In a further embodiment based on any one of the above apparatus embodiments of the present disclosure, the first generating module is further configured to generate, for the second sample dialogue corpora of each group of users and customer service staff, the first corpus feature based on the user corpus in the second sample dialogue corpora, the sample user preference feature based on the user portrait of the corresponding user, and the second corpus feature based on the customer service staff corpus in the second sample dialogue corpora, respectively;
the first obtaining module is further configured to obtain a first intention feature corresponding to the first corpus feature, and obtain a second intention feature corresponding to the second corpus feature;
the splicing module is further configured to splice the sample user preference feature, the first corpus feature, and the first intention feature to obtain the first sample spliced feature; and to splice the second corpus feature and the second intention feature to obtain the second sample spliced feature.
In a further embodiment based on any one of the apparatus embodiments of the present disclosure, the first generating module is specifically configured to: performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics;
segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the preference characteristics of the sample user;
performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature;
the first obtaining module is specifically configured to: acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology;
and acquiring the second intention characteristic based on the second corpus characteristic by utilizing a natural language understanding technology.
In a further embodiment of any of the apparatus embodiments described above based on the present disclosure, further comprising:
the selection module is used for selecting at least one group of dialogue corpora from the dialogue corpus;
and the removing module is used for removing invalid characters in the at least one group of dialogue corpora to obtain at least one group of second sample dialogue corpora.
In another aspect of the disclosed embodiments, an electronic device is provided, including:
a memory for storing a computer program;
and a processor, configured to execute the computer program stored in the memory, and when the computer program is executed, implement the information interaction method according to any of the above embodiments of the present disclosure.
In a further aspect of the embodiments of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored, and the computer program, when executed by a processor, implements the information interaction method according to any of the above embodiments of the present disclosure.
Based on the information interaction method and device, the electronic device, and the storage medium provided by the embodiments of the present disclosure, after receiving a sentence sent by a user, a user sentence feature may be generated based on the sentence sent by the user, a user intention feature corresponding to the user sentence feature is obtained, and then, a reply sentence for the sentence sent by the user is generated based on the user sentence feature, the user intention feature, and a user preference feature generated from a user portrait of the user by using a first neural network model, so that an accurate reply sentence may be automatically generated for the sentence sent by the user, and a reply speed and an efficiency of communication between the user and a customer service person are improved; in addition, user portrait and user intention are combined when the reply sentences are generated, so that the reply sentences can better meet the user requirements, the user experience and the user reply rate are improved, products and services which better meet the user requirements can be provided for the user, and the success rate of the user for continuing subsequent business links is improved through an online communication link.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description, taken with reference to the accompanying drawings, in which:
fig. 1 is a flowchart of an embodiment of an information interaction method according to the present disclosure.
Fig. 2 is a flowchart of another embodiment of the information interaction method of the present disclosure.
Fig. 3 is a flowchart of another embodiment of the information interaction method of the present disclosure.
FIG. 4 is a flow chart of one embodiment of training a first neural network model in an embodiment of the present disclosure.
FIG. 5 is a flow chart of one embodiment of training a second neural network model in an embodiment of the present disclosure.
Fig. 6 is a schematic structural diagram of an embodiment of an information interaction apparatus according to the present disclosure.
Fig. 7 is a schematic structural diagram of another embodiment of an information interaction device according to the present disclosure.
Fig. 8 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and do not imply any particular technical meaning or a necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with electronic devices, such as terminal devices, computer systems, servers, and the like, include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Fig. 1 is a flowchart of an embodiment of an information interaction method according to the present disclosure. The embodiment of the disclosure can be used in conversation scenes between users and customer service staff, such as various Instant Messaging (IM) conversation scenes, online services, and the like. As shown in fig. 1, the information interaction method of this embodiment includes:
and 102, responding to the received sentence sent by the user, and generating a user sentence characteristic based on the sentence sent by the user.
And 104, acquiring a user intention characteristic corresponding to the user statement characteristic.
And 106, generating a reply sentence aiming at the sentence sent by the user based on the user sentence characteristic, the user intention characteristic and the user preference characteristic generated by the user portrait of the user by utilizing a first neural network model.
The user portrait includes personalized user information; based on the user preference feature generated from the user portrait, the user's preferences can be determined, so that information the user cares about can be introduced to the user in a targeted manner. The user intention feature is used to express the explicit intention conveyed in the sentence the user sent, such as buying a house, the down payment, the surrounding area, and the like.
Based on the information interaction method provided by the embodiment of the disclosure, after receiving a statement sent by a user, a user statement feature can be generated based on the statement sent by the user, a user intention feature corresponding to the user statement feature is acquired, and then, a reply statement for the statement sent by the user is generated based on the user statement feature, the user intention feature and a user preference feature generated by a user portrait of the user by using a first neural network model, so that an accurate reply statement can be automatically generated for the statement sent by the user, and the reply speed and the communication efficiency between the user and customer service staff are improved; in addition, user portrait and user intention are combined when the reply sentences are generated, so that the reply sentences can better meet the user requirements, the user experience and the user reply rate are improved, products and services which better meet the user requirements can be provided for the user, and the success rate of the user for continuing subsequent business links is improved through an online communication link.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 102, the sentence sent by the user may be segmented to obtain at least one word serving as a minimum semantic unit, and word-to-vector conversion may be performed on the at least one word by using a word-to-vector (word2vec) technique to obtain the user sentence feature.
In a Chinese sentence, although the character is the smallest combination of pronunciation and meaning, it is the word that expresses a complete semantic unit, and most words are composed of multiple characters. In this embodiment, the sentence sent by the user is segmented, i.e., split into at least one independent word serving as a minimum semantic unit; a word2vec technique is then used to map these independent words into a K-dimensional vector space while preserving the semantic relationships among the split words, yielding a K-dimensional language vector, i.e., the user sentence feature. K is an integer greater than 0.
For example, in some alternative examples, the sentence sent by the user may be segmented using the Jieba Chinese word segmentation component of the cross-platform programming language Python, breaking the sentence into words with independent meaning. Jieba supports three segmentation modes: precise mode, full mode, and search engine mode. The precise mode performs the most accurate segmentation of a sentence, produces no redundant data, and is suitable for text analysis; the full mode segments out every possible word in a sentence, which is fast but yields redundant data; the search engine mode re-segments long words on the basis of the precise mode. This embodiment may adopt any of the above three segmentation modes, alone or in combination, without particular limitation.
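For illustration only, a dictionary-based forward-maximum-matching segmenter can serve as a simplified stand-in for the precise mode described above (the toy dictionary is hypothetical; real Jieba uses a prefix dictionary plus an HMM for unknown words):

```python
def segment(sentence, dictionary, max_word_len=4):
    """Forward maximum matching: at each position, greedily take the longest
    dictionary word; fall back to a single character when nothing matches."""
    words, i = [], 0
    while i < len(sentence):
        for j in range(min(len(sentence), i + max_word_len), i, -1):
            if sentence[i:j] in dictionary or j == i + 1:
                words.append(sentence[i:j])
                i = j
                break
    return words
```

With a dictionary such as {"卧室", "朝南"}, the sentence "卧室朝南" would split into ["卧室", "朝南"], i.e., independent words with minimum semantic units.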
In some optional examples, word2vec may be implemented by a neural network: it maps sparse one-hot word vectors to K-dimensional dense vectors, i.e., it is a text feature extraction method. In data mining and data analysis, a neural network can only process vectors and cannot process language directly, and one-hot encoding loses the relationships between words. In this embodiment, the word2vec technique maps multiple independent words of minimum semantic units into a K-dimensional vector space while preserving the semantic relationships among the split words, so that the meaning of the sentence sent by the user can be understood.
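As a minimal sketch of this mapping, assume a word2vec embedding table has already been trained; pooling the per-word vectors by averaging (an illustrative choice, not specified by this disclosure) yields one K-dimensional sentence feature:

```python
def sentence_feature(words, embeddings, k):
    """Look up each word's K-dimensional vector in a (pretrained) embedding
    table and average them into one K-dimensional sentence feature;
    words missing from the table are skipped."""
    vecs = [embeddings[w] for w in words if w in embeddings]
    if not vecs:
        return [0.0] * k
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]
```

The same routine applies unchanged to the user portrait (operations 204-206) and to sample corpora during training.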
Optionally, in some possible implementations of any embodiment of the present disclosure, in operation 104, the user intention feature may be obtained based on the user sentence feature by using Natural Language Understanding (NLU) technology.
In some alternative examples, the user sentence feature is a K-dimensional language vector, which may be input into a user intent extraction model, and a user intention feature representing the user intent is output via the model. The user intent extraction model may be, for example, a pre-trained neural network: the K-dimensional language vector representing the user sentence feature is input into the neural network and classified, and the classification result representing the user intent is the user intention feature. When training the neural network, the K-dimensional language vector samples input into the network may be labeled with user intention features, and the network is trained on the difference between the user intention feature it outputs and the labeled user intention feature until the two are the same or similar (i.e., the difference between them is smaller than a preset threshold).
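The classification step can be sketched as a single linear layer followed by a softmax over the intent classes (the intent names and weights below are hypothetical; the disclosure leaves the network architecture open):

```python
import math

def classify_intent(feature, weights, intents):
    """Score each intent with a dot product against its weight vector,
    normalize the scores with softmax, and return the most probable
    intent together with its probability."""
    logits = [sum(w * x for w, x in zip(weights[name], feature))
              for name in intents]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(intents)), key=probs.__getitem__)
    return intents[best], probs[best]
```

A trained model would learn `weights` from the labeled K-dimensional vector samples described above.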
Based on the embodiment, the user intention characteristics can be accurately determined by utilizing the natural language understanding technology, so that the user intention is clear, and accurate reply information is provided for the user.
Fig. 2 is a flowchart of another embodiment of the information interaction method of the present disclosure. As shown in fig. 2, on the basis of the embodiment shown in fig. 1, before operation 106, the method further includes:
a user representation of the user is obtained from a user database based on a user Identification (ID) of theuser 202.
The user ID uniquely identifies a user, is obtained when the user registers, and after the user logs in, the user ID is used to identify the user identity for online information browsing, communication (for example, sending a statement to a customer service person), and the like, and the user ID may be, for example, a user name, a user number, a user nickname, an identity card number, a mobile phone number, and the like.
The user portrait includes personalized user information and can be used to determine the user's current situation (such as gender, age, industry, interests and hobbies, marital status, and whether there are elderly people or children at home) and item preference information (such as preferred item characteristics and item points of interest). The user portrait may be determined from the user's conversations, search records, item click logs, article browsing history, question-and-answer browsing history, and the like, thereby determining the user's status, behavior preferences, item preferences, and other information; based on the user portrait, partial or all content in a preset text template, and the slot values of slots in the selected content corresponding to the current object, may be selected to obtain text introduction information of that object. The items in the embodiments of the present disclosure may include, for example, any objects for which a user has a need, such as goods, products, and services.
At 204, the user portrait is segmented to obtain at least one word serving as a minimum semantic unit.
For example, in some alternative examples, the user portrait may be tokenized using the Jieba Chinese word segmentation component and broken down into words having independent meaning.
At 206, word-to-vector conversion is performed, using the word-to-vector technique, on the at least one word obtained by segmenting the user portrait, to obtain the user preference feature.
Operations 202-206 and operations 102-104 have no required execution time or order; they may be executed after user login is detected, or after the sentence sent by the user is received, as long as they are executed before operation 106.
Optionally, in the above embodiment, after operation 202, invalid characters in the user portrait may be removed, and the user preference feature may then be generated based on the user portrait with the invalid characters removed.
In the embodiment, the user portrait is acquired from the user database based on the user ID, and the user preference characteristics can be acquired based on the user portrait, so that the concerned information is replied to the user in a targeted manner, personalized services are provided for the user, the reply sentences better meet the user requirements, the condition that the known information or the unconcerned information is replied to the user is avoided, the communication efficiency is reduced, the user attention is further improved, the user experience and the user reply rate are improved, and the success rate of the user in continuing the subsequent business links is improved.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 106, the user sentence feature, the user intention feature, and the user preference feature may be input into the first neural network model, a reply sentence feature is output through the first neural network model, and then a reply sentence corresponding to the reply sentence feature is obtained. For example, the reverse process of the word-to-vector technique is used to perform vector-to-word conversion on the reply sentence feature, and the converted words are then spliced in order to obtain the corresponding reply sentence.
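The reverse conversion can be sketched as a nearest-neighbor lookup over the embedding table; the distance metric and joining the words without spaces (as in Chinese text) are illustrative choices:

```python
def vectors_to_sentence(reply_features, embeddings):
    """Map each reply-sentence vector to the vocabulary word whose
    embedding is nearest (squared Euclidean distance), then splice
    the recovered words in order into a reply sentence."""
    def nearest(vec):
        return min(embeddings,
                   key=lambda w: sum((a - b) ** 2
                                     for a, b in zip(embeddings[w], vec)))
    return "".join(nearest(v) for v in reply_features)
```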
Fig. 3 is a flowchart of another embodiment of the information interaction method of the present disclosure. As shown in fig. 3, on the basis of the above embodiment, the embodiment includes:
At 302, in response to receiving the sentence sent by the user, a user sentence feature is generated based on the sentence sent by the user.
At 304, a user intention feature corresponding to the user sentence feature is acquired.
Thereafter, operation 310 is performed.
At 306, a user portrait of the user is obtained from a user database based on the user ID of the user.
At 308, a user preference feature is generated based on the user portrait.
Operation 308 may be implemented, for example, by operations 204-206 in the embodiment shown in fig. 2. There is no required execution order or timing between operations 306-308 and operations 302-304; they may be executed in any order or simultaneously.
At 310, the user sentence feature, the user intention feature, and the user preference feature are input into the first neural network model, and a reply sentence feature is output through the first neural network model.
At 312, quality evaluation information of the reply sentence is obtained, using a second neural network model, based on the user preference feature, the user sentence feature, the user intention feature, and the reply sentence feature.
At 314, whether the quality of the reply sentence reaches a preset quality standard is determined based on the quality evaluation information of the reply sentence.
If the quality of the reply sentence reaches the preset quality standard, operation 316 is performed. Otherwise, if the quality of the reply sentence does not reach the preset quality standard, operation 318 is performed.
At 316, the reply sentence corresponding to the reply sentence feature is obtained and output.
Thereafter, the subsequent flow of the present embodiment is not executed.
At 318, a reply sentence is output based on a preset mode.
For example, in some possible implementation manners, a reply sentence corresponding to a question in a preset dialogue template whose semantics are the same as or similar to those of the sentence sent by the user may be used as the reply sentence returned to the user this time, where the preset dialogue template includes reply sentences corresponding to various questions; or the customer service staff may compose the reply sentence returned to the user themselves; alternatively, the reply sentence may be output to the user in other manners, which is not limited by the present disclosure.
Based on the embodiment, after the reply sentence characteristics are output by the first neural network model, the second neural network model can be used for objectively judging whether the quality of the corresponding reply sentence reaches the preset quality standard, the reply sentence is output to the user when the quality of the reply sentence reaches the preset quality standard, otherwise, if the quality of the reply sentence does not reach the preset quality standard, the reply sentence to the user can be acquired in other modes, the quality of the reply sentence output to the user can be effectively ensured, the communication efficiency between the user and customer service staff is further improved, and the user experience is improved.
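The flow of operations 310-318 can be sketched as follows, with `generate`, `evaluate`, and `fallback` as stand-ins for the first neural network model, the second neural network model, and the preset reply mode (template lookup or human agent); the threshold value is illustrative:

```python
def reply(user_feat, intent_feat, pref_feat,
          generate, evaluate, fallback, threshold=0.5):
    """Generate a reply-sentence feature, score its quality with the
    second model, and fall back to the preset mode when the score
    misses the preset quality standard."""
    reply_feat = generate(user_feat, intent_feat, pref_feat)
    score = evaluate(pref_feat, user_feat, intent_feat, reply_feat)
    return reply_feat if score >= threshold else fallback()
```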
In addition, in the information interaction method according to any embodiment of the present disclosure, after receiving a sentence sent by a user, invalid characters in the sentence sent by the user may be removed, and then a user sentence feature may be generated based on the sentence from which the invalid characters are removed.
For example, in some possible implementations, the invalid characters in the sentence sent by the user may be removed by using a regular matching technique, so as to prevent invalid characters from affecting accurate recognition of the sentence sent by the user and to improve the quality of the generated user sentence feature. The invalid characters may be set according to actual requirements and may include, but are not limited to, punctuation, emoticons, network addresses such as uniform resource locators (URLs), tab characters, and the like, which are not limited in this disclosure.
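A regular-matching cleanup step might look like the following; the exact character classes are an assumption, since what counts as invalid is set per actual requirements:

```python
import re

URL_RE = re.compile(r"https?://\S+")              # network addresses
INVALID_RE = re.compile(r"[^\w\u4e00-\u9fff]+")   # drop all but word chars and CJK

def clean_sentence(sentence):
    """Strip URLs first, then punctuation, emoticons, tabs, and other
    non-word, non-Chinese characters from the sentence."""
    return INVALID_RE.sub("", URL_RE.sub("", sentence))
```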
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 312, an intention feature of the reply sentence may be obtained based on the reply sentence feature by using natural language understanding technology; then, a first splicing feature obtained by splicing the user preference feature, the user sentence feature, and the user intention feature, and a second splicing feature obtained by splicing the reply sentence feature and the intention feature of the reply sentence, are input into the second neural network model, and the quality evaluation information of the reply sentence is output through the second neural network model.
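The splicing referred to here is plain end-to-end concatenation of feature vectors; a minimal sketch (feature values are illustrative):

```python
def splice(*features):
    """Concatenate feature vectors end to end, e.g. the user preference,
    user sentence, and user intention features into the first splicing
    feature fed to the second neural network model."""
    spliced = []
    for feature in features:
        spliced.extend(feature)
    return spliced
```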
In addition, in any of the above embodiments of the present disclosure, the first neural network model and/or the second neural network model may also be trained in advance.
For example, in some possible implementations, the first neural network model may be trained by:
inputting each group of first sample corpus features in at least one group of first sample corpus features into the first neural network model, and outputting at least one corresponding reply corpus feature through the first neural network model, where each group of first sample corpus features includes: a first corpus feature generated based on a user corpus in a group of first sample dialogue corpora, a first intention feature obtained based on the first corpus feature, and a sample user preference feature generated based on a user portrait of the user corresponding to the user corpus; and each group of first sample corpus features carries labeling information of a second corpus feature generated based on the customer service staff corpus in the group of first sample dialogue corpora;
and training the first neural network model based on the difference between the at least one reply corpus feature and the second corpus feature of the corresponding sample corpus feature until a first preset training completion condition is met.
The first preset training completion condition may be, for example, that a difference between at least one reply corpus feature and a second corpus feature of the corresponding sample corpus feature is smaller than a first preset difference threshold, and/or the number of times of training of the first neural network model reaches a first preset number of times, and the like, which is not limited in the embodiment of the present disclosure.
FIG. 4 is a flow chart of one embodiment of training a first neural network model in an embodiment of the present disclosure.
As shown in fig. 4, this embodiment includes:
At 402, for each group of first sample dialogue corpora between a user and customer service staff, a first corpus feature is generated based on the user corpus in the first sample dialogue corpora, a sample user preference feature is generated based on the user portrait of the corresponding user, and a second corpus feature is generated based on the customer service staff corpus in the first sample dialogue corpora.
At 404, a first intention feature corresponding to the first corpus feature is obtained.
At 406, each group of first sample corpus features in the at least one group of first sample corpus features is input into the first neural network model, and at least one corresponding reply corpus feature is output through the first neural network model.
Each group of first sample corpus features includes: a first corpus feature generated based on a user corpus in a group of first sample dialogue corpora, a first intention feature obtained based on the first corpus feature, and a sample user preference feature generated based on a user portrait of the user corresponding to the user corpus.
Each group of first sample corpus features carries labeling information of a second corpus feature generated based on the customer service staff corpus in the group of first sample dialogue corpora.
At 408, the first neural network model is trained based on a difference between the at least one reply corpus feature and the second corpus feature of the corresponding sample corpus features.
For example, in some alternative examples, the first neural network model may be trained using a gradient descent method with cross-entropy loss (i.e., the difference between the at least one reply corpus feature and the second corpus feature of the corresponding sample corpus feature) as a loss function.
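For a single predicted distribution against a one-hot label, the cross-entropy loss mentioned above reduces to the standard formula, sketched here:

```python
import math

def cross_entropy(predicted, target, eps=1e-12):
    """Cross-entropy -sum(t * log(p)) between a predicted probability
    distribution and a (typically one-hot) target distribution; eps
    guards against log(0)."""
    return -sum(t * math.log(max(p, eps))
                for p, t in zip(predicted, target))
```

A gradient-descent step would then reduce this loss over the sample corpus features.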
Based on the embodiment, the first neural network model is trained based on the first sample dialogue corpus of at least one group of users and customer service staff, so that the first neural network model can effectively learn the customer service staff corpus for various user corpora, and after the training of the first neural network model is completed, corresponding reply sentence characteristics can be output for various sentences sent by the users so as to obtain corresponding reply sentences.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 402, for each group of first sample dialogue corpora between a user and customer service staff, the user corpus may be segmented to obtain at least one word serving as a minimum semantic unit, and word-to-vector conversion is performed on the at least one word obtained by segmenting the user corpus, using the word-to-vector technique, to obtain the first corpus feature; the user portrait of the corresponding user is segmented to obtain at least one word serving as a minimum semantic unit, and word-to-vector conversion is performed on the at least one word obtained by segmenting the user portrait, using the word-to-vector technique, to obtain the sample user preference feature; and the customer service staff corpus is segmented to obtain at least one word serving as a minimum semantic unit, and word-to-vector conversion is performed on the at least one word obtained by segmenting the customer service staff corpus, using the word-to-vector technique, to obtain the second corpus feature. The Jieba Chinese word segmentation component may be used to segment the user corpus, the user portrait, and the customer service staff corpus; for specific implementations, refer to the descriptions of the above embodiments, which are not repeated here.
Optionally, in some possible implementations of any embodiment of the present disclosure, in operation 404, the first intention feature may be obtained based on the first corpus feature by using natural language understanding technology.
Optionally, referring back to fig. 4, in a further embodiment, before operation 402, the method may further include:
At 400, at least one group of dialogue corpora meeting the preset quality requirement is selected from the corpus database.
Dialogue corpora exchanged between users and customer service staff through IM, online service, and similar channels are an important component of business data; they can be applied in a wide range of scenarios and provide basic data support in applications such as user portraits, user preference analysis, and dialogue system construction. The historical dialogue corpora are stored in a corpus database, which may be any type of database, such as a Hadoop Distributed File System (HDFS) database. A group of question-and-answer exchanges between a user and customer service staff constitutes a group of dialogue corpora, which may be one question and one answer, consecutive multiple questions with one answer, or consecutive multiple questions and multiple answers; the embodiments of the present disclosure do not limit this.
In some possible implementation manners, whether a user reply is received after the customer service staff sends a message may be used as the preset quality requirement, and at least one group of dialogue corpora is selected from the corpus database accordingly.
Or, in other possible implementation manners, the quality of each group of dialogue corpora may be manually labeled, and at least one group of dialogue corpora meeting the preset quality requirement is selected according to the manual labels. For example, according to whether the user replies after the customer service staff sends a message, the quality of a group of dialogue corpora may be classified into two levels, qualified and unqualified, and the groups labeled qualified are those meeting the preset quality requirement; or, according to whether the user replies after the customer service staff sends a message and how positive the user's reply sentence is (e.g., a positive or negative reply), the quality of a group of dialogue corpora may be classified into three levels, high, medium, and low, and the groups labeled high are those meeting the preset quality requirement. Alternatively, the embodiments of the present disclosure may adopt other quality labeling manners and selection criteria to select at least one group of dialogue corpora meeting the preset quality requirement, which is not limited by the embodiments of the present disclosure.
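Selection against such manual quality labels might be sketched as follows (the label names and record layout are hypothetical):

```python
def select_corpora(dialogue_groups, min_level="qualified",
                   levels=("unqualified", "qualified")):
    """Keep only dialogue groups whose manual quality label ranks at or
    above the required level under the chosen labeling scheme."""
    rank = {label: i for i, label in enumerate(levels)}
    return [g for g in dialogue_groups
            if rank[g["quality"]] >= rank[min_level]]
```

For the three-level scheme, `levels=("low", "medium", "high")` with `min_level="high"` would apply the same filter.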
401, removing the invalid characters in the at least one group of dialogue corpus to obtain at least one group of first sample dialogue corpus.
In some possible implementations, the invalid characters in the first sample dialogue corpora may be removed by using a regular expression matching technique, so as to prevent invalid characters from affecting subsequent effective recognition of the first sample dialogue corpora. The invalid characters may be set according to actual requirements, and may include, but are not limited to, punctuation, emoticons, network addresses such as URLs, tab characters, and the like, which is not limited by the embodiment of the present disclosure.
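Regular-expression cleaning of this kind can be sketched as follows. The specific patterns below are illustrative assumptions; an actual implementation would tailor them to the character classes the business treats as invalid.

```python
import re

# Sketch of invalid-character removal via regular expression matching:
# strip URLs, tab characters, and runs of punctuation, then collapse
# the remaining whitespace. The patterns are illustrative assumptions.

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")
TAB_PATTERN = re.compile(r"[\t\r]")
PUNCT_PATTERN = re.compile(r"[!?,.;:~]+")

def remove_invalid_characters(text):
    text = URL_PATTERN.sub("", text)    # drop network addresses
    text = TAB_PATTERN.sub(" ", text)   # drop tab characters
    text = PUNCT_PATTERN.sub(" ", text) # drop punctuation runs
    return re.sub(r"\s+", " ", text).strip()

cleaned = remove_invalid_characters("Hello!!!\tSee www.example.com ok.")
```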
In some possible implementations, the second neural network model may be trained by:
and respectively inputting each group of second sample corpus features in at least one group of second sample corpus features into the second neural network model, and outputting quality evaluation information of the second corpus feature in the at least one group of second sample corpus features via the second neural network model. Each group of second sample corpus features includes a first sample splicing feature and a second sample splicing feature. The first sample splicing feature is obtained by splicing a sample user preference feature generated based on the user portrait of the user corresponding to the user corpus, a first corpus feature generated based on the user corpus in a group of second sample dialogue corpora, and a first intention feature obtained based on the first corpus feature. The second sample splicing feature is obtained by splicing a second corpus feature generated based on the customer service person corpus in the second sample dialogue corpora and a second intention feature obtained based on the second corpus feature. The second corpus feature has quality labeling information;
and training a second neural network model based on the difference between the quality evaluation information of the second corpus features in the at least one group of second sample corpus features and the corresponding quality marking information until a second preset training completion condition is met.
The second preset training completion condition may be, for example, that the difference between the quality evaluation information of the second corpus feature in the at least one group of second sample corpus features and the corresponding quality labeling information is smaller than a second preset difference threshold, and/or that the number of training iterations of the second neural network model reaches a second preset number, and so on, which is not limited by the embodiment of the present disclosure.
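The completion condition just described can be expressed as a simple predicate. This is a minimal sketch; the threshold and iteration-count values are assumptions chosen for illustration.

```python
# Sketch of the "second preset training completion condition": stop when
# the loss (difference between quality evaluation and quality labels)
# falls below a preset threshold, and/or when the number of training
# iterations reaches a preset count. Values are illustrative assumptions.

def training_complete(loss, iteration, loss_threshold=0.05, max_iterations=100):
    return loss < loss_threshold or iteration >= max_iterations
```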
FIG. 5 is a flow chart of one embodiment of training a second neural network model in an embodiment of the present disclosure.
As shown in fig. 5, this embodiment includes:
502, for the second sample dialogue corpora of each group of users and customer service staff, respectively generating a first corpus feature based on the user corpus in the second sample dialogue corpora, generating a sample user preference feature based on the user portrait of the corresponding user, and generating a second corpus feature based on the customer service person corpus in the second sample dialogue corpora.
And 504, acquiring a first intention characteristic corresponding to the first corpus characteristic, and acquiring a second intention characteristic corresponding to the second corpus characteristic.
506, splicing the sample user preference feature, the first corpus feature and the first intention feature to obtain a first sample splicing feature; and splicing the second corpus characteristic and the second intention characteristic to obtain the second sample splicing characteristic.
And 508, respectively inputting each group of second sample corpus features in the at least one group of second sample corpus features into a second neural network model, and outputting quality evaluation information of the second corpus features in the at least one group of second sample corpus features through the second neural network.
Wherein each group of second sample corpus features includes: a first sample splicing feature and a second sample splicing feature. The first sample splicing feature is obtained by splicing a sample user preference feature generated based on the user portrait of the user corresponding to the user corpus, a first corpus feature generated based on the user corpus in a group of second sample dialogue corpora, and a first intention feature obtained based on the first corpus feature; the second sample splicing feature is obtained by splicing a second corpus feature generated based on the customer service person corpus in the second sample dialogue corpora and a second intention feature obtained based on the second corpus feature. The second corpus feature has quality labeling information.
And 510, training a second neural network model based on the difference between the quality evaluation information of the second corpus feature in the at least one group of second sample corpus features and the corresponding quality marking information until a second preset training completion condition is met.
For example, in some optional examples, a Gradient Boosting Decision Tree (GBDT) may be used as the model, cross-entropy loss (i.e., the difference between the quality evaluation information of the second corpus feature in the at least one group of second sample corpus features and the corresponding quality labeling information) may be used as the loss function, and a gradient descent method may be used to train the second neural network model. GBDT is an iterative decision tree algorithm comprising multiple decision trees; the conclusions of all the trees (i.e., whether the difference between the quality evaluation information of the second corpus feature in each group of second sample corpus features and the corresponding quality labeling information is smaller than the second preset difference threshold) are accumulated as the final conclusion, for example, whether the ratio between the number of second sample corpus features whose difference is smaller than the second preset difference threshold and the total number of second sample corpus features input into the second neural network model reaches a preset ratio.
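The cross-entropy loss mentioned above, measuring the difference between predicted quality scores and the 0/1 quality labels, can be sketched in a few lines. This is a pure-Python stand-in for the loss computation only; the GBDT trees themselves would come from an actual gradient-boosting implementation.

```python
import math

# Sketch of the cross-entropy loss used as the training objective: the
# difference between the model's predicted quality scores (probabilities)
# and the binary quality labels. The sample values are illustrative.

def cross_entropy(labels, predictions, eps=1e-12):
    total = 0.0
    for y, p in zip(labels, predictions):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(labels)

loss = cross_entropy([1, 0, 1], [0.9, 0.1, 0.8])
```

Confident, correct predictions (e.g., 0.9 for label 1) contribute little to the loss, while confident wrong predictions contribute heavily, which is what drives the gradient-based training toward accurate quality evaluation.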
Based on this embodiment, the second neural network model is trained on at least one group of second sample dialogue corpora of users and customer service staff together with the quality labeling information of the second corpus features generated from the customer service person corpora, so that the model can effectively learn the quality labeling information of the customer service person corpus in each pair of dialogue corpora. After training is completed, the model can therefore perform accurate quality evaluation on reply sentences generated for sentences sent by users, so that the quality of the reply sentences can be accurately determined.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 502, for the second sample dialogue corpora of each group of users and customer service staff, the user corpus may be segmented to obtain at least one word of the minimum semantic unit; word-to-vector conversion is performed on the at least one word obtained by segmenting the user corpus by using a word-to-vector technology to obtain the first corpus feature; the user portrait of the corresponding user is segmented to obtain at least one word of the minimum semantic unit; word-to-vector conversion is performed on the at least one word obtained by segmenting the user portrait by using a word-to-vector technology to obtain the sample user preference feature; the customer service person corpus is segmented to obtain at least one word of the minimum semantic unit; and word-to-vector conversion is performed on the at least one word obtained by segmenting the customer service person corpus by using a word-to-vector technology to obtain the second corpus feature. The Jieba Chinese word segmentation component may be used to segment the user corpus, the user portrait, and the customer service person corpus; for specific implementation manners, reference may be made to the records of the above embodiments, which are not described in detail again.
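The segment-then-vectorize pipeline can be sketched as follows. In the patent, the Jieba component performs Chinese word segmentation and a word-to-vector (word2vec-style) model supplies the embeddings; both are stood in for here by a whitespace tokenizer and a toy embedding table, so the whole sketch is an illustrative assumption rather than the actual pipeline.

```python
# Sketch of converting a segmented corpus into a fixed-length feature
# vector by averaging per-word vectors. The tokenizer stands in for Jieba
# segmentation and the embedding table stands in for a trained
# word-to-vector model; both are toy assumptions.

TOY_EMBEDDINGS = {
    "house": [1.0, 0.0],
    "price": [0.0, 1.0],
    "school": [1.0, 1.0],
}

def segment(text):
    return text.split()  # stand-in for Jieba word segmentation

def corpus_feature(text, dim=2):
    words = [w for w in segment(text) if w in TOY_EMBEDDINGS]
    if not words:
        return [0.0] * dim
    sums = [0.0] * dim
    for w in words:
        for i, v in enumerate(TOY_EMBEDDINGS[w]):
            sums[i] += v
    return [s / len(words) for s in sums]

feature = corpus_feature("house price")
```

The same routine would produce the first corpus feature, the sample user preference feature, and the second corpus feature, differing only in which text (user corpus, user portrait, or customer service person corpus) is fed in.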
Optionally, in some possible implementation manners of any embodiment of the present disclosure, in operation 504, the first intention feature may be obtained based on the first corpus feature by using a natural language understanding technology; and the second intention feature may be obtained based on the second corpus feature by using a natural language understanding technology.
Based on the embodiment, the intention characteristics corresponding to the user corpus and the customer service staff corpus can be accurately determined by utilizing the natural language understanding technology, so that the user intention and the customer service staff intention are clear.
Optionally, referring back to fig. 5, in a further embodiment, before operation 502, the method may further include:
500, selecting at least one group of dialogue corpora from the corpus database.
And 501, removing invalid characters in the at least one group of dialogue corpus to obtain at least one group of second sample dialogue corpus.
In some possible implementations, the invalid characters in the second sample dialogue corpora may be removed by using a regular expression matching technique, so as to prevent invalid characters from affecting subsequent effective recognition of the second sample dialogue corpora. The invalid characters may be set according to actual requirements, and may include, but are not limited to, punctuation, emoticons, network addresses such as URLs, tab characters, and the like, which is not limited by the embodiment of the present disclosure.
In the embodiment of the present disclosure, the first neural network model and the second neural network model may adopt neural networks with the same or different structures, and each may be any multilayer neural network (i.e., a deep neural network), such as a convolutional neural network like LeNet, AlexNet, GoogLeNet, VGG, or ResNet, a recurrent neural network (RNN) such as a long short-term memory (LSTM) model, a generative adversarial network, a region-based convolutional neural network, and the like.
Any of the information interaction methods provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capability, including but not limited to: terminal equipment, a server and the like. Alternatively, any information interaction method provided by the embodiments of the present disclosure may be executed by a processor, for example, the processor may execute any information interaction method mentioned in the embodiments of the present disclosure by calling a corresponding instruction stored in a memory. And will not be described in detail below.
Those of ordinary skill in the art will understand that all or part of the steps for implementing the above method embodiments may be implemented by program instructions instructing related hardware; the program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the aforementioned storage medium includes various media that can store program code, such as ROM, RAM, and magnetic or optical disks.
Fig. 6 is a schematic structural diagram of an embodiment of an information interaction apparatus according to the present disclosure. The information interaction device of this embodiment can be used to implement the above-mentioned information interaction method embodiments of the present disclosure. As shown in fig. 6, the information interaction apparatus of this embodiment includes: the device comprises a first generation module, a first acquisition module and a second generation module. Wherein:
the first generation module is used for responding to receiving the sentences sent by the user and generating user sentence characteristics based on the sentences sent by the user.
And the first acquisition module is used for acquiring the user intention characteristics corresponding to the user sentence characteristics.
A second generation module to generate a reply sentence to the sentence sent by the user based on the user sentence feature, the user intent feature, and a user preference feature generated from the user representation of the user using the first neural network model.
Based on the information interaction apparatus provided by the above embodiment of the present disclosure, after a sentence sent by a user is received, a user sentence feature may be generated based on the sentence, a user intention feature corresponding to the user sentence feature may be acquired, and a reply sentence to the sentence may then be generated by the first neural network model based on the user sentence feature, the user intention feature, and a user preference feature generated from the user portrait of the user. In this way, an accurate reply sentence can be automatically generated for the sentence sent by the user, improving the reply speed and the efficiency of communication between the user and the customer service person. In addition, because the user portrait and the user intention are combined when the reply sentence is generated, the reply sentence better meets the user's requirements, which improves user experience and the user reply rate, helps provide products and services that better meet the user's requirements, and improves the success rate of the user continuing to subsequent business links through the online communication link.
Optionally, in some possible implementations of any embodiment of the disclosure, the first generating module includes: the word segmentation unit is used for performing word segmentation on the sentences sent by the user to obtain at least one word with the minimum semantic unit; and the conversion unit is used for performing word-to-vector conversion on the at least one word by using a word-to-vector technology to obtain the user sentence characteristics.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, the first obtaining module is specifically configured to: and acquiring the user intention characteristic based on the user sentence characteristic by utilizing a natural language understanding technology.
Fig. 7 is a schematic structural diagram of another embodiment of an information interaction device according to the present disclosure. As shown in fig. 7, compared with the embodiment shown in fig. 6, the information interaction apparatus of this embodiment further includes: a second obtaining module to obtain a user representation of the user from a user database based on the user ID of the user. Correspondingly, in this embodiment, the first generating module is further configured to perform word segmentation on the user portrait to obtain at least one word having a minimum semantic unit; and performing word-to-vector conversion on at least one word obtained by segmenting words of the user portrait by using a word-to-vector technology to obtain the user preference characteristics.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, the second generating module includes: a first neural network model for inputting the user sentence feature, the user intention feature and the user preference feature into the first neural network model and outputting a reply sentence feature via the first neural network model; and the acquisition unit is used for acquiring the reply sentences corresponding to the reply sentence characteristics.
In addition, referring to fig. 7 again, in another embodiment of the information interaction apparatus of the present disclosure, the information interaction apparatus may further include: a third obtaining module and a third generating module. Wherein:
a third obtaining module, configured to obtain, by using a second neural network model, after the first neural network model outputs a reply sentence feature, quality evaluation information of the reply sentence based on the user preference feature, the user sentence feature, the user intention feature, and the reply sentence feature.
Correspondingly, the obtaining unit is configured to obtain the reply statement corresponding to the reply statement feature and output the reply statement based on the quality evaluation information of the reply statement, if the quality of the reply statement meets a preset quality standard.
And the third generation module is used for outputting the reply sentence based on a preset mode if the quality of the reply sentence does not reach a preset quality standard based on the quality evaluation information of the reply sentence.
Optionally, in any embodiment above, the first obtaining module may be further configured to obtain, by using a natural language understanding technology, an intention characteristic of the reply sentence based on the reply sentence characteristic. Referring to fig. 7 again, the information interaction apparatus of this embodiment may further include: and the splicing module is used for splicing the user preference characteristic, the user statement characteristic and the user intention characteristic to obtain a first splicing characteristic, and splicing the reply statement characteristic and the intention characteristic of the reply statement to obtain a second splicing characteristic. Correspondingly, the third obtaining module is specifically configured to input the first splicing feature and the second splicing feature into the second neural network model, and output the quality evaluation information of the reply statement through the second neural network model.
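The "splicing" performed by the splicing module is feature concatenation, which can be sketched in a few lines. The vector values below are toy assumptions; in practice each feature would be the output of the word-to-vector and natural language understanding steps described above.

```python
# Sketch of the splicing (concatenation) of features: the first splicing
# feature joins the user preference, user sentence, and user intention
# features; the second joins the reply sentence feature with the intention
# feature of the reply sentence. All vector values are toy assumptions.

def splice(*features):
    spliced = []
    for f in features:
        spliced.extend(f)
    return spliced

user_preference = [0.2, 0.8]
user_sentence = [0.5, 0.1]
user_intent = [1.0]
first_splice = splice(user_preference, user_sentence, user_intent)

reply_sentence = [0.3, 0.7]
reply_intent = [0.0]
second_splice = splice(reply_sentence, reply_intent)
```

Both splicing features are then fed to the second neural network model, which outputs the quality evaluation information of the reply sentence.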
In addition, in another embodiment of the information interaction apparatus of the present disclosure, the first neural network model may be further configured to receive each of the input at least one group of first sample corpus features and output at least one corresponding reply corpus feature; wherein each group of first sample corpus features includes: a first corpus feature generated based on the user corpus in a group of first sample dialogue corpora, a first intention feature obtained based on the first corpus feature, and a sample user preference feature generated based on the user portrait of the user corresponding to the user corpus; the first sample corpus feature has labeling information of a second corpus feature generated based on the customer service person corpus in the first sample dialogue corpora. Accordingly, referring back to fig. 7, the information interaction apparatus of this embodiment may further include: a first training module, configured to train the first neural network model based on a difference between at least one reply corpus feature and the second corpus feature of the corresponding first sample corpus feature.
Optionally, in the above embodiment, the first generating module may be further configured to, for the first sample dialogue corpora of each group of users and customer service staff, respectively generate the first corpus feature based on the user corpus in the first sample dialogue corpora, generate the sample user preference feature based on the user portrait of the corresponding user, and generate the second corpus feature based on the customer service person corpus in the first sample dialogue corpora. Correspondingly, the first obtaining module is further configured to obtain a first intention feature corresponding to the first corpus feature.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, the first generating module is specifically configured to: performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics; segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the preference characteristics of the sample user; performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; and performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature. Correspondingly, the first obtaining module is specifically configured to: and acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology.
In addition, referring to fig. 7 again, in yet another embodiment of the information interaction apparatus of the present disclosure, the information interaction apparatus may further include: the device comprises a selecting module and a removing module.
In some possible implementations, the selecting module is configured to select at least one group of corpus of dialogues that meet a predetermined quality requirement from the corpus of dialogues. And the removing module is used for removing invalid characters in the at least one group of dialogue linguistic data to obtain at least one group of first sample dialogue linguistic data.
Additionally, in yet another embodiment of the information interaction apparatus of the present disclosure, the second neural network model is further configured to: respectively receive each group of input second sample corpus features in at least one group of second sample corpus features, and output quality evaluation information of the second corpus feature in the at least one group of second sample corpus features; wherein each group of second sample corpus features includes: a first sample splicing feature and a second sample splicing feature, wherein the first sample splicing feature is obtained by splicing a sample user preference feature generated based on the user portrait of the user corresponding to the user corpus, a first corpus feature generated based on the user corpus in a group of second sample dialogue corpora, and a first intention feature obtained based on the first corpus feature, and the second sample splicing feature is obtained by splicing a second corpus feature generated based on the customer service person corpus in the second sample dialogue corpora and a second intention feature obtained based on the second corpus feature; the second corpus feature has quality labeling information. Accordingly, referring back to fig. 7, the information interaction apparatus of this embodiment further includes: a second training module, configured to train the second neural network model based on a difference between the quality evaluation information of the second corpus feature in the at least one group of second sample corpus features and the corresponding quality labeling information.
Optionally, in a further embodiment of the information interaction apparatus of the present disclosure, the first generating module is further configured to, for the second sample dialogue corpora of each group of users and customer service staff, respectively generate the first corpus feature based on the user corpus in the second sample dialogue corpora, generate the sample user preference feature based on the user portrait of the corresponding user, and generate the second corpus feature based on the customer service person corpus in the second sample dialogue corpora. Correspondingly, the first obtaining module is further configured to obtain a first intention feature corresponding to the first corpus feature, and obtain a second intention feature corresponding to the second corpus feature. The splicing module is further configured to splice the sample user preference feature, the first corpus feature, and the first intention feature to obtain the first sample splicing feature; and splice the second corpus feature and the second intention feature to obtain the second sample splicing feature.
Optionally, in some possible implementation manners of any embodiment of the present disclosure, the first generating module is specifically configured to: performing word segmentation on the user corpus to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user corpus by using a word-to-vector technology to obtain the first corpus characteristics; segmenting the user portrait of the corresponding user to obtain at least one word with the minimum semantic unit; performing word-to-vector conversion on at least one word obtained by word segmentation of the user portrait of the corresponding user by using a word-to-vector technology to obtain the preference characteristics of the sample user; and performing word segmentation on the customer service staff corpus to obtain at least one word with the minimum semantic unit; and performing word-to-vector conversion on at least one word obtained by the customer service staff corpus word segmentation by using a word-to-vector technology to obtain the second corpus feature. Correspondingly, the first obtaining module is specifically configured to: acquiring the first intention characteristic based on the first corpus characteristic by utilizing a natural language understanding technology; and acquiring the second intention characteristic based on the second corpus characteristic by utilizing a natural language understanding technology.
In other possible implementations, the selecting module is configured to select at least one group of corpus of dialogues from the corpus of dialogues. And the removing module is used for removing invalid characters in the at least one group of dialogue linguistic data to obtain at least one group of second sample dialogue linguistic data.
Fig. 8 is a schematic structural diagram of an embodiment of an electronic device according to the present disclosure. The electronic device may be either or both of the first device and the second device, or a stand-alone device separate from them, which stand-alone device may communicate with the first device and the second device to receive the acquired input signals therefrom. As shown in fig. 8, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device to perform desired functions.
The memory may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by a processor to implement the information interaction of the various embodiments of the disclosure described above and/or other desired functionality.
In one example, the electronic device may further include: an input device and an output device, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
The input means may comprise, for example, a keyboard, a mouse, etc. The output device may output various information including the determined distance information, direction information, and the like to the outside. The output devices may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, among others.
Of course, for simplicity, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 8, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device may include any other suitable components, depending on the particular application.
In addition to the above methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the interaction of information according to the various embodiments of the present disclosure described in the above sections of this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in information interaction according to various embodiments of the present disclosure described in the above sections of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown. As those skilled in the art will appreciate, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," and "having" are open-ended terms that mean "including but not limited to" and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, "and/or," unless the context clearly dictates otherwise. The phrase "such as" is used herein to mean, and is used interchangeably with, "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in many ways, for example in software, hardware, firmware, or any combination of the three. The order described above for the steps of the methods is for illustration only; the steps of the methods of the present disclosure are not limited to that order unless specifically stated otherwise. Further, in some embodiments the present disclosure may be embodied as a program recorded on a recording medium, the program comprising machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the methods according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

CN202010496181.7A | 2020-06-03 (priority) | 2020-06-03 (filed) | Information interaction method and device, electronic equipment and storage medium | Pending | CN111639162A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010496181.7A | 2020-06-03 | 2020-06-03 | Information interaction method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202010496181.7A | 2020-06-03 | 2020-06-03 | Information interaction method and device, electronic equipment and storage medium

Publications (1)

Publication Number | Publication Date
CN111639162A (true) | 2020-09-08

Family

ID=72329803

Family Applications (1)

Application Number | Status | Publication | Priority Date | Filing Date
CN202010496181.7A | Pending | CN111639162A (en) | 2020-06-03 | 2020-06-03

Country Status (1)

Country | Link
CN (1) | CN111639162A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112527991A (en)* | 2020-12-16 | 2021-03-19 | Ping An International Smart City Technology Co., Ltd. | Information processing method, apparatus and medium
CN113656557A (en)* | 2021-08-20 | 2021-11-16 | Beijing Xiaomi Mobile Software Co., Ltd. | Message reply method, device, storage medium and electronic equipment
WO2022126963A1 (en)* | 2020-12-16 | 2022-06-23 | Ping An Technology (Shenzhen) Co., Ltd. | Customer profiling method based on customer response corpora, and device related thereto
CN114782078A (en)* | 2022-04-01 | 2022-07-22 | Chongqing University of Posts and Telecommunications | Business information evaluation method and system for high-dimensional data
CN114817514A (en)* | 2022-03-22 | 2022-07-29 | Qingdao Haier Technology Co., Ltd. | Method and device for determining reply audio, storage medium and electronic device
CN114969195A (en)* | 2022-05-27 | 2022-08-30 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Dialogue content mining method and dialogue content evaluation model generation method
CN115186092A (en)* | 2022-07-11 | 2022-10-14 | Seashell Housing (Beijing) Technology Co., Ltd. | Online interaction processing method and apparatus, storage medium, and program product
CN115424606A (en)* | 2022-09-01 | 2022-12-02 | Beijing Jietong Huasheng Technology Co., Ltd. | Voice interaction method, voice interaction device and computer readable storage medium
CN114817514B (en)* | 2022-03-22 | 2025-10-10 | Qingdao Haier Technology Co., Ltd. | Method and device for determining reply audio, storage medium, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106528530A (en)* | 2016-10-24 | 2017-03-22 | Beijing Guangnian Wuxian Technology Co., Ltd. | Method and device for determining sentence type
CN108304468A (en)* | 2017-12-27 | 2018-07-20 | China UnionPay Co., Ltd. | Text classification method and text classification apparatus
US20180240013A1 (en)* | 2017-02-17 | 2018-08-23 | Google Inc. | Cooperatively training and/or using separate input and subsequent content neural networks for information retrieval
CN109146610A (en)* | 2018-07-16 | 2019-01-04 | ZhongAn Online P&C Insurance Co., Ltd. | Intelligent insurance recommendation method, device, and intelligent insurance robot device
CN110390108A (en)* | 2019-07-29 | 2019-10-29 | Industrial and Commercial Bank of China | Task interaction method and system based on deep reinforcement learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN106528530A (en)* | 2016-10-24 | 2017-03-22 | Beijing Guangnian Wuxian Technology Co., Ltd. | Method and device for determining sentence type
US20180240013A1 (en)* | 2017-02-17 | 2018-08-23 | Google Inc. | Cooperatively training and/or using separate input and subsequent content neural networks for information retrieval
CN108304468A (en)* | 2017-12-27 | 2018-07-20 | China UnionPay Co., Ltd. | Text classification method and text classification apparatus
CN109146610A (en)* | 2018-07-16 | 2019-01-04 | ZhongAn Online P&C Insurance Co., Ltd. | Intelligent insurance recommendation method, device, and intelligent insurance robot device
CN110390108A (en)* | 2019-07-29 | 2019-10-29 | Industrial and Commercial Bank of China | Task interaction method and system based on deep reinforcement learning

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112527991A (en)* | 2020-12-16 | 2021-03-19 | Ping An International Smart City Technology Co., Ltd. | Information processing method, apparatus and medium
WO2022126963A1 (en)* | 2020-12-16 | 2022-06-23 | Ping An Technology (Shenzhen) Co., Ltd. | Customer profiling method based on customer response corpora, and device related thereto
CN112527991B (en)* | 2020-12-16 | 2025-05-23 | Ping An International Smart City Technology Co., Ltd. | Information processing method, device and medium
CN113656557A (en)* | 2021-08-20 | 2021-11-16 | Beijing Xiaomi Mobile Software Co., Ltd. | Message reply method, device, storage medium and electronic equipment
CN114817514A (en)* | 2022-03-22 | 2022-07-29 | Qingdao Haier Technology Co., Ltd. | Method and device for determining reply audio, storage medium and electronic device
CN114817514B (en)* | 2022-03-22 | 2025-10-10 | Qingdao Haier Technology Co., Ltd. | Method and device for determining reply audio, storage medium, and electronic device
CN114782078A (en)* | 2022-04-01 | 2022-07-22 | Chongqing University of Posts and Telecommunications | Business information evaluation method and system for high-dimensional data
CN114969195A (en)* | 2022-05-27 | 2022-08-30 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Dialogue content mining method and dialogue content evaluation model generation method
CN114969195B (en)* | 2022-05-27 | 2023-10-27 | Beijing Baidu Netcom Science and Technology Co., Ltd. | Dialogue content mining method and dialogue content evaluation model generation method
CN115186092A (en)* | 2022-07-11 | 2022-10-14 | Seashell Housing (Beijing) Technology Co., Ltd. | Online interaction processing method and apparatus, storage medium, and program product
CN115186092B (en)* | 2022-07-11 | 2023-06-20 | Seashell Housing (Beijing) Technology Co., Ltd. | Online interactive processing method and device, storage medium and program product
CN115424606A (en)* | 2022-09-01 | 2022-12-02 | Beijing Jietong Huasheng Technology Co., Ltd. | Voice interaction method, voice interaction device and computer readable storage medium

Similar Documents

Publication | Title
US10853577B2 (en) | Response recommendation system
US10827024B1 (en) | Realtime bandwidth-based communication for assistant systems
CN111639162A (en) | Information interaction method and device, electronic equipment and storage medium
CN111428010B (en) | Man-machine intelligent question-answering method and device
CN110019742B (en) | Method and device for processing information
US20220300716A1 (en) | System and method for designing artificial intelligence (AI) based hierarchical multi-conversation system
US11217236B2 (en) | Method and apparatus for extracting information
CN111985249A (en) | Semantic analysis method and device, computer-readable storage medium and electronic equipment
CN107846350A (en) | Method, computer-readable medium, and system for context-aware Internet chat
US11436446B2 (en) | Image analysis enhanced related item decision
CN110175323B (en) | Method and device for generating message abstract
US11924375B2 (en) | Automated response engine and flow configured to exchange responsive communication data via an omnichannel electronic communication channel independent of data source
CN108268450B (en) | Method and apparatus for generating information
CN111368066B (en) | Method, apparatus and computer readable storage medium for obtaining dialogue abstract
CN110399465A (en) | Method and device for processing information
US11423219B2 (en) | Generation and population of new application document utilizing historical application documents
CN114048319B (en) | Humor text classification method, device, equipment and medium based on attention mechanism
CN110232920B (en) | Voice processing method and device
KR20220079336A (en) | Method and apparatus for providing a chat service including an emotional expression item
US11593567B2 (en) | Intelligent conversational gateway
CN111046151A (en) | Message processing method and device
CN111555960A (en) | Method for generating information
CN113761111B (en) | Intelligent dialogue method and device
CN114610863A (en) | Dialogue text push method and device, storage medium and terminal
CN117059082B (en) | Outbound call conversation method, device, medium and computer equipment based on large model

Legal Events

Code | Event
PB01 | Publication
SE01 | Entry into force of request for substantive examination
TA01 | Transfer of patent application right
    Effective date of registration: 2020-11-04
    Address after: 100085 Floor 102-1, Building No. 35, West Second Banner Road, Haidian District, Beijing
    Applicant after: Seashell Housing (Beijing) Technology Co., Ltd.
    Address before: Room 112, Floor 1, Unit 5, Office Building C, Nangang Industrial Zone, Binhai New Area Economic and Technological Development Zone, Tianjin 300457
    Applicant before: BEIKE TECHNOLOGY Co., Ltd.
RJ01 | Rejection of invention patent application after publication
    Application publication date: 2020-09-08
