CN119513268A - Knowledge question answering method, device, computer equipment and storage medium - Google Patents

Knowledge question answering method, device, computer equipment and storage medium

Info

Publication number
CN119513268A
CN119513268A (application CN202411710491.9A)
Authority
CN
China
Prior art keywords
question
answer
retrieved
category
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411710491.9A
Other languages
Chinese (zh)
Inventor
梁华
刘文轩
黄欢
秦宗国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Life Insurance Co ltd
Original Assignee
China Life Insurance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Life Insurance Co ltd
Priority to CN202411710491.9A
Publication of CN119513268A
Legal status: Pending

Abstract

Embodiments of the present application provide a knowledge question-answering method, apparatus, computer device, storage medium and computer program product, relating to the technical field of artificial intelligence. The method comprises: receiving a question to be retrieved sent by a retrieval terminal; obtaining a plurality of question-answer categories contained in a pre-built question-answer pair library; obtaining, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories; and obtaining matching answer information corresponding to the question to be retrieved according to the target matching question. The method improves response accuracy.

Description

Knowledge question-answering method and apparatus, computer device, and storage medium
Technical Field
The present application relates to the field of artificial intelligence technology, and in particular, to a knowledge question-answering method, apparatus, computer device, storage medium and computer program product.
Background
In the field of knowledge question answering, human-machine dialogue must be supported: a knowledge question-answering system must be able to directly answer questions posed by users in natural language. The main indicator for evaluating the quality of automatic question answering is the correctness of the answers. Automatic question-answering systems draw on techniques from a number of directions, such as natural language processing, information retrieval, and knowledge bases.
In the current related art, some techniques adopt "retrieval recall + model re-ranking" while others adopt retrieval-augmented techniques; both suffer from low response accuracy.
Disclosure of Invention
Based on this, in view of the above technical problems, it is necessary to provide a knowledge question-answering method, an apparatus, a computer device, a storage medium, and a computer program product.
In a first aspect, the present application provides a knowledge question-answering method. The method comprises the following steps:
receiving a question to be retrieved sent by a retrieval terminal;
acquiring a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
acquiring, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories;
and obtaining matching answer information corresponding to the question to be retrieved according to the target matching question.
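The four claimed steps can be sketched end to end. Everything in this sketch is an illustrative assumption: the in-memory `QA_LIBRARY` and the two helper functions stand in for the question-answer pair library and the two large-language-model calls; they are not the applicant's implementation.

```python
# Minimal sketch of the claimed four-step flow. The library contents and
# the keyword-based helpers are stand-ins for the LLM calls in the claims.
QA_LIBRARY = {
    "information query": {
        "How do I check my policy information?": "Log in and open 'My Policies'.",
    },
    "claims": {
        "What materials must I submit for a claim?": "Invoice, diagnosis, policy.",
    },
}

def screen_categories(question, categories):
    """Stand-in for the LLM category screening: naive keyword overlap."""
    hits = [c for c in categories if any(w in question.lower() for w in c.split())]
    return hits or list(categories)[:1]

def select_best_match(question, candidates):
    """Stand-in for the LLM fine selection: shared-word count."""
    qwords = set(question.lower().split())
    return max(candidates, key=lambda c: len(qwords & set(c.lower().split())))

def answer(question):
    categories = list(QA_LIBRARY)                   # steps 1-2: receive question, get categories
    cats = screen_categories(question, categories)  # step 3a: candidate categories
    candidates = [q for c in cats for q in QA_LIBRARY[c]]
    best = select_best_match(question, candidates)  # step 3b: target matching question
    for c in cats:                                  # step 4: matching answer information
        if best in QA_LIBRARY[c]:
            return QA_LIBRARY[c][best]
```

Note that the model never generates the answer itself; it only selects a stored question, and the stored answer is returned, which is the accuracy mechanism the disclosure relies on.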
In one embodiment, obtaining, by using a large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories includes: obtaining, by using the large language model, at least one candidate question-answer category corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories; and obtaining, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the at least one candidate question-answer category.
In one embodiment, obtaining, by using a large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the at least one candidate question-answer category includes:
acquiring the questions contained in each of the at least one candidate question-answer category in the question-answer pair library to obtain a plurality of candidate questions;
and acquiring, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions.
In one embodiment, obtaining, by using a large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions includes: obtaining a degree of relevance between each candidate question and the question to be retrieved; determining a question evaluation score for each candidate question according to the degree of relevance; and determining the candidate question with the highest question evaluation score as the target matching question corresponding to the question to be retrieved.
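The score-and-pick embodiment above reduces to an argmax over per-candidate relevance scores. In this sketch the `jaccard` word-overlap measure is an illustrative assumption; in the disclosure the relevance judgement comes from the large language model.

```python
def pick_target_question(question, candidates, relevance):
    """Score each candidate question by its relevance to the question to
    be retrieved and return the highest-scoring one, as in this
    embodiment. `relevance` is any callable returning a number."""
    scores = {c: relevance(question, c) for c in candidates}
    return max(scores, key=scores.get)

def jaccard(a, b):
    """Toy relevance measure: word-set Jaccard overlap (an assumption
    for illustration, not the patent's LLM-based judgement)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)
```

Because only the ranking matters, any monotone rescaling of the relevance into an evaluation score leaves the selected target matching question unchanged.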
In one embodiment, obtaining, by using the large language model, at least one candidate question-answer category corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories includes: inputting the question to be retrieved and the category name information of each of the plurality of question-answer categories into the large language model; obtaining a category screening result from the large language model; and obtaining the at least one candidate question-answer category corresponding to the question to be retrieved according to the category screening result.
In one embodiment, obtaining the at least one candidate question-answer category corresponding to the question to be retrieved according to the category screening result includes: if the category screening result indicates that the large language model has screened out candidate question-answer categories, outputting the at least one candidate question-answer category; and if the category screening result indicates that the large language model has not screened out any candidate question-answer category, obtaining category association information for each of the plurality of question-answer categories, and obtaining the at least one candidate question-answer category corresponding to the question to be retrieved according to the category association information.
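The fallback branch of this embodiment can be sketched as follows. The shape of the category association information (a keyword list per category) and the keyword-containment test are assumptions for illustration; the claim only requires that association information be consulted when the model's screening result is empty.

```python
def screen_with_fallback(llm_result, categories, association_info, question):
    """Claimed fallback: if the model's category screening result is
    empty, fall back to category association information. Here the
    association info is assumed to be keyword lists per category."""
    if llm_result:                       # model screened out candidate categories
        return llm_result
    # Model screened none: match the question against each category's
    # association information instead.
    matched = [c for c in categories
               if any(kw in question for kw in association_info.get(c, []))]
    return matched or categories         # last resort: keep all categories
```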
In a second aspect, the present application provides a knowledge question-answering apparatus. The apparatus comprises:
a receiving module, configured to receive a question to be retrieved sent by a retrieval terminal;
an acquisition module, configured to acquire a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
a screening module, configured to acquire, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories;
and an answer output module, configured to acquire matching answer information corresponding to the question to be retrieved according to the target matching question.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which, when executing the computer program, implements the following steps:
receiving a question to be retrieved sent by a retrieval terminal;
acquiring a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
acquiring, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories;
and obtaining matching answer information corresponding to the question to be retrieved according to the target matching question.
In a fourth aspect, the present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the following steps:
receiving a question to be retrieved sent by a retrieval terminal;
acquiring a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
acquiring, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories;
and obtaining matching answer information corresponding to the question to be retrieved according to the target matching question.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the following steps:
receiving a question to be retrieved sent by a retrieval terminal;
acquiring a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
acquiring, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories;
and obtaining matching answer information corresponding to the question to be retrieved according to the target matching question.
The knowledge question-answering method, apparatus, computer device, storage medium and computer program product described above receive a question to be retrieved sent by a retrieval terminal, obtain a plurality of question-answer categories contained in a pre-built question-answer pair library, obtain, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories, and obtain matching answer information corresponding to the question to be retrieved according to the target matching question. The method provided by the embodiments of the present application follows the large-model technical route; however, the large model is not asked to generate an answer directly. Instead, it selects the question matching the user's intent from the pre-constructed question-answer pair library, and the answer corresponding to that question is output to the user, which improves response accuracy.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the related art more clearly, the drawings needed in the description of the embodiments or the related art are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and other related drawings may be obtained from them by those of ordinary skill in the art without inventive effort.
FIG. 1 is a schematic flow chart of a knowledge question-answering method according to an embodiment of the present application;
FIG. 2 is a flowchart of another knowledge question-answering method according to an embodiment of the present application;
FIG. 3 is a schematic flow chart of a method for obtaining a target matching problem according to an embodiment of the present application;
FIG. 4 is a flowchart illustrating another method for obtaining a target matching problem according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of screening at least one candidate question-answer category according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of a loop call large language model according to an embodiment of the present application;
FIG. 7 is a block diagram of a knowledge question-answering device according to an embodiment of the present application;
Fig. 8 is an internal structure diagram of a computer device according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The user information (including but not limited to user equipment information, user personal information, etc.) and the data (including but not limited to data for analysis, stored data, presented data, etc.) related to the present application are information and data authorized by the user or sufficiently authorized by each party.
In an exemplary embodiment, as shown in FIG. 1 and FIG. 2, a knowledge question-answering method is provided. This embodiment is illustrated by applying the method to a server; it will be understood that the method may also be applied to a terminal, or to a system comprising a terminal and a server and implemented through interaction between the terminal and the server. In this embodiment, the method includes the following steps:
Step 102: receive the question to be retrieved sent by the retrieval terminal.
The knowledge question-answering method provided by the embodiments of the present application can be applied to a knowledge question-answering system comprising a retrieval terminal and a system server. The retrieval terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer, Internet of Things device, or portable wearable device; the Internet of Things device may be a smart speaker, smart television, smart air conditioner, smart vehicle-mounted device, projection device, etc. The portable wearable device may be a smart watch, smart bracelet, head-mounted device, etc.; the head-mounted device may be a Virtual Reality (VR) device, an Augmented Reality (AR) device, smart glasses, etc. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing cloud computing services. A user can register and log in to the knowledge question-answering system on the retrieval terminal; in response to the login operation triggered by the user, the retrieval terminal displays an information query page, on which the user can trigger an answer query request for the question to be retrieved. The answer query request may include the question text of the question to be retrieved; for example, the question to be retrieved may be "how to open an account". The question text may take the form of keywords, natural language (e.g., "how to open an account"), and/or entity-relationship structures.
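A minimal sketch of receiving such an answer query request on the server side follows. The JSON field names (`user_id`, `question`) are assumptions for illustration; the description only requires that the request carry the question text.

```python
import json

def parse_answer_query(raw):
    """Parse an answer query request sent by the retrieval terminal and
    return the question text. The field names are hypothetical; the
    disclosure specifies only that question text is included."""
    request = json.loads(raw)
    question = request.get("question", "").strip()
    if not question:
        raise ValueError("answer query request carries no question text")
    return question

question = parse_answer_query(
    '{"user_id": "u42", "question": "how to open an account"}')
```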
Step 104: obtain a plurality of question-answer categories contained in a pre-constructed question-answer pair library.
In this step, as shown in FIG. 2, a question-answer pair library may be pre-constructed. The library may contain multiple sets of question-answer pairs, and these may be classified into a plurality of question-answer categories, for example new policy application, information query, online payment, and claim questions. Construction of the library may begin by mining initial question-answer pairs for a target service from its historical service data: (1) acquire information associated with the target service, for example insurance clauses, claim cases, service flow documents, and product descriptions held in the service systems associated with the knowledge question-answering system, extract question-answer information for the target service from this information, and convert it into question-answer pairs, for example (question: "How much compensation can be obtained if the insured dies of accidental injury?", answer: "The company pays the death benefit according to the insured amount agreed in the contract"); (2) question-answer pairs related to the target service may also be collected in its daily operation, for example a pair built from the question "How long is the cooling-off period of this insurance product?".
Second, preprocess the questions in the initial question-answer pairs of the target service: normalize their expression, and classify and cluster them. Specifically, the collected questions are processed uniformly and converted into a uniform format, ensuring that the language of each question is clear, concise, and easy to understand, and avoiding vague, ambiguous, or excessively professional vocabulary (unless a domain-specific term cannot be avoided). The questions are then classified and clustered according to factors such as topic, intent, and business domain. An existing business classification system can be referenced; for example, questions in the insurance business can be classified into categories such as policy application, claim settlement, and query, and questions whose similarity exceeds a preset similarity threshold are placed in the same category.
Third, preprocess the answers in the initial question-answer pairs, for example by compiling and optimizing them: write an accurate, complete, and authoritative answer for each question. Answers should be based on reliable knowledge sources such as company internal regulations, industry standards, and laws and regulations. For questions about a particular business process or operation, the answer should describe the steps and points of attention in detail. For example, for the question "How do I apply for a claim?", the answer should include the specific claim application flow (report, submit materials, review, etc.) and key information that needs attention, such as time nodes and material requirements. The expression of the answer is converted into a highly readable format: if the answer content is long, it can be organized by points and paragraphs to improve readability. For example, the answer for a complex claim settlement process can be described step by step: "1. Report: after the accident occurs, promptly call the customer service telephone [specific number] and describe the basic circumstances of the accident, such as its time, place, and cause. 2. Prepare materials: according to the accident type, prepare the corresponding claim materials, such as medical expense invoices, diagnosis certificates, and the original policy (the detailed list of materials can be found on the official website or by consulting customer service). 3. Submit: deliver the prepared materials to the designated claim service outlet, or submit them through the online claim platform. 4. Review: the submitted materials are reviewed; the review time is generally [specific duration], during which you may be contacted to verify relevant circumstances. 5. Payment: after the review is passed, payment is made according to the contract, and the amount is paid to your designated account within the specified time."
Fourth, store the question-answer pairs: (1) select a suitable storage mode according to the architecture and requirements of the knowledge question-answering system. Databases may be used for storage (e.g., the relational database MySQL or the non-relational database MongoDB), managing questions and answers as separate fields for easy query and retrieval; alternatively, file storage may be adopted (e.g., JSON or XML files).
(2) To improve the retrieval efficiency of question-answer pairs, a corresponding indexing mechanism is established; for example, a full-text index is built on the question field so that related questions can be rapidly located and matched when the question-answering system runs. At the same time, an effective question-answer management mechanism is established, including regular updates (e.g., updating answers promptly when the business or laws and regulations change), addition of new question-answer pairs (continuously expanding the library as the business develops and user demands change), and deletion or modification of invalid or inaccurate question-answer pairs, ensuring the quality and timeliness of the library. For example, when the clauses of an insurance product change, the answers of the related question-answer pairs are updated promptly; when a question is found to be no longer applicable or wrongly expressed, it is deleted or corrected promptly.
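The storage-plus-full-text-index idea can be sketched as follows. The description names MySQL and MongoDB; SQLite's FTS5 extension is used here only as an assumption to keep the example self-contained, and the table layout is illustrative.

```python
import sqlite3

# Store QA pairs with a full-text index on the question field, so related
# questions can be located quickly at answer time (SQLite FTS5 stands in
# for the MySQL/MongoDB options the description mentions).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE qa USING fts5(category, question, answer)")
db.executemany(
    "INSERT INTO qa VALUES (?, ?, ?)",
    [
        ("claims", "What materials must a claim include?",
         "Medical invoices, diagnosis certificate, original policy."),
        ("information query", "How do I check my policy information?",
         "Log in and open the policy page."),
    ],
)
# A full-text MATCH query rapidly locates the related question-answer pair.
row = db.execute(
    "SELECT answer FROM qa WHERE qa MATCH 'claim' LIMIT 1"
).fetchone()
```

Updating, adding, or deleting pairs then maps onto ordinary `UPDATE`/`INSERT`/`DELETE` statements, which is the management mechanism the paragraph above describes.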
Step 106: acquire, by using the large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories.
First, the plurality of question-answer categories contained in the question-answer pair library can be obtained, and at least one candidate question-answer category matching the question to be retrieved is screened out of them by the large language model based on the question to be retrieved and the plurality of categories. Specifically, an input data structure is constructed first: the category list formed by the extracted question-answer categories and the question to be retrieved are combined into an input format suitable for the large language model, for example a data object containing a "category set" field and a "question to be retrieved" field, represented in JSON as {"category set": ["new policy application", "claim questions", "information query", "online payment"], "question to be retrieved": "I want to query my insurance policy information"}. Second, the prompt for the large language model is designed carefully to guide the model through the category screening task. Taking the insurance business as an example, the prompt may contain key information such as: role setting, specifying the role of the large language model, e.g., "You are a professional insurance business advisor"; task description, specifying that the model must screen the categories that best match the customer question from the given category set, e.g., "Please choose the 1-3 categories from the provided category set that best match the customer question in semantics, intent, and business"; and input explanation, informing the model how the input data is composed, e.g., "The category set is [category list], and the question to be retrieved is [text of the question to be retrieved]".
The output requirement specifies the format and content of the model output, e.g., "Please output the screened category names as a list, numbering between 1 and 3". An example prompt is: "You are a professional insurance business advisor. You are given a customer question and a category set; your task is to pick the 1-3 categories from the set that best match the customer question in semantics, intent, and business. The category set is [new policy application, claim settlement, information query, online payment], and the customer question is [I want to query my insurance policy information]. Please output the screened category names as a list, numbering between 1 and 3." Third, the large language model processes the input: after receiving the constructed input data and prompt, it performs semantic understanding and analysis, carrying out deep semantic analysis of the customer question and identifying its key semantic elements, topic keywords, and intent expressions. For example, for the question "I want to query my insurance policy information", the model may extract key semantic content such as "query" and "insurance policy information". It then performs category matching evaluation: for each category in the set, the model evaluates the match according to the category name, the semantic information related to the category in its pre-training knowledge, and its understanding of the customer question. For category names, their relevance to the key semantics of the customer question is compared directly.
For example, the "information query" category is highly relevant to "query insurance policy information" because its name directly denotes query-related services, while the "new policy application" category is less relevant because the question centers on querying existing policy information rather than applying for a new policy. The model also uses its pre-training knowledge of the specific meaning and scope of each category in the insurance business domain to judge the degree of match with the customer question; for example, the model knows that the "information query" category typically covers query operations for various insurance-related information, including policy information, claim progress, and premium details, and therefore determines its high relevance to the customer question. Finally, the model screens and decides: according to the matching evaluation results, it screens out the categories that best fit the customer question. If the model determines that only one category matches very well, e.g., "information query" matches the customer question closely in semantics and business while the other categories have low relevance, it may return only that category.
If several categories have high relevance, such as "information query" and "claim questions" (assuming the question to be retrieved implies some claim-related query intent), but "information query" is slightly more relevant than "claim questions", the model may select "information query" as the main category while also returning "claim questions" to cover the possibly relevant categories more comprehensively, following a preset policy (e.g., preferring the most relevant category, or preferring the more common or more basic category when relevance is similar); the number of candidate question-answer categories finally returned is between 1 and 3. Next, the questions contained in each of the at least one candidate question-answer category in the question-answer pair library can be obtained, yielding a plurality of candidate questions, and the large language model can then be used to obtain the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions. Specifically, in a first step, the candidate questions and the question to be retrieved are organized into an input structure the large model can understand, for example by creating a data object containing a "candidate question list" field and a "question to be retrieved" field, represented in JSON as {"candidate question list": ["What materials are required to be submitted for a claim?", ...], ...}.
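Building the fine-selection input structure just described can be sketched directly; the JSON key names mirror the description's example, and the sample questions are illustrative.

```python
import json

def build_match_input(candidates, question):
    """Assemble the model input the description sketches: a JSON object
    with a 'candidate question list' field and a 'question to be
    retrieved' field."""
    return json.dumps(
        {"candidate question list": candidates,
         "question to be retrieved": question},
        ensure_ascii=False)

payload = build_match_input(
    ["What materials are required to be submitted for a claim?",
     "How long is the claim review time?"],
    "How long after I submit my claim application will I be paid?")
```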
Second, the prompt for the large language model is designed: the prompt is carefully written to guide the large model through the screening task. Taking the insurance business as an example, it comprises the following key parts: role setting, defining the role of the large model in the task, e.g., "You are a professional insurance business answering expert"; task description, stating that the model must pick the one question from the candidate list that best matches the question to be retrieved in semantics, intent, and business logic, e.g., "Please select from the provided candidate question list the one question that best matches the question to be retrieved, requiring the best match in semantic understanding, question intent, and insurance business relevance"; input explanation, informing the model how the input data is composed, e.g., "The candidate question list is [candidate question set], and the question to be retrieved is [text of the question to be retrieved]"; and output requirement, specifying the format and content of the model output, e.g., "Please directly output the screened best matching question". An example prompt follows: "You are a professional insurance business answering expert. You are given a set of candidate questions and a question to be retrieved; your task is to select from the candidate list the question that best fits the question to be retrieved, requiring an optimal match in semantic understanding, question intent, and insurance business relevance. The candidate question list is: [What materials must be submitted for a claim? How long is the claim review time?] The question to be retrieved is: [How long after I submit my insurance claim application will I receive the payment?] Please directly output the screened best matching question."
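Assembling such a prompt from its four named parts can be sketched as follows. The wording paraphrases the description's example prompt; it is not the literal prompt used by the applicant.

```python
def build_match_prompt(candidates, question):
    """Assemble the fine-selection prompt from the four parts the
    description names: role setting, task description, input
    explanation, and output requirement."""
    parts = [
        # role setting
        "You are a professional insurance business answering expert.",
        # task description
        "Select from the candidate question list the one question that "
        "best matches the question to be retrieved in semantic "
        "understanding, question intent and insurance business relevance.",
        # input explanation
        "The candidate question list is: " + "; ".join(candidates),
        "The question to be retrieved is: " + question,
        # output requirement
        "Directly output the screened best matching question.",
    ]
    return "\n".join(parts)

prompt = build_match_prompt(
    ["What materials must be submitted for a claim?",
     "How long is the claim review time?"],
    "How long after I submit my claim application will I be paid?")
```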
Third, the large language model processes the input: after receiving the constructed input data and prompt, it performs semantic understanding and analysis, carrying out deep semantic analysis of the question to be retrieved and identifying its key semantic elements, topic keywords, intent expressions, and logical relations. For the question "How long after I submit my insurance claim application will I receive the payment?", the model can extract key semantic information such as "insurance claim application submission", "receipt of payment", and "time", and understand that the customer is mainly concerned with claim settlement time. It then evaluates the match of each question in the candidate list. At the vocabulary level, it compares the proportion and importance of words shared by the candidate question and the question to be retrieved; for example, "How long is the claim review time?" shares words such as "claim" and "time" with the question to be retrieved, which increases the likelihood of a match. At the semantic level, it computes the similarity of the candidate question and the question to be retrieved using semantic understanding, considering the semantic relationships of the words, the similarity of the sentence structures, and the consistency of the semantic logic.
For example, "how long is the audit time of a claim to be verified" and the problem to be retrieved are semantically related problems in the process of the claim to be verified, the similarity is high, and "what materials are required to be submitted by the claim to be verified" although related to the claim to be verified, the semantics are focused on the material submission, and the semantic logic difference from the problem to be retrieved is large. And judging whether the intention of the candidate problem is consistent with the intention of the problem to be searched or not through intention matching. The model judges the intention behind each candidate problem according to the understanding and pre-training knowledge of insurance business. For example, the intent of a question to be retrieved is obviously the query of the time of the claim, how long is the time of the review of the claim. And screening and deciding, namely calculating a comprehensive matching score for each candidate problem according to the matching evaluation result by the model, wherein the score comprehensively considers factors such as vocabulary matching, semantic similarity, intention matching and the like. The model then selects the highest scoring question from the candidate question list as the question that best matches the customer intent. For example, by calculation, how long is the audit time of the claim "the overall match score is highest, and the model determines it as the best question to be finally screened, i.e., the target match question corresponding to the question to be retrieved.
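The matching evaluation and screening decision described above can be sketched as a composite score over the three signals. The weights and the toy lexical-overlap function below are illustrative assumptions, not the patent's actual implementation; the semantic-similarity and intent inputs would in practice come from the large model itself.

```python
# Sketch: combine vocabulary matching, semantic similarity, and intention
# matching into one comprehensive matching score, then pick the best candidate.

def lexical_overlap(candidate, query):
    """Share of query tokens that also appear in the candidate question."""
    q_tokens, c_tokens = set(query.split()), set(candidate.split())
    return len(q_tokens & c_tokens) / len(q_tokens) if q_tokens else 0.0

def composite_score(candidate, query, semantic_sim, intent_match,
                    w_lex=0.2, w_sem=0.5, w_int=0.3):
    """Weighted sum of the three matching signals (weights are assumptions)."""
    return (w_lex * lexical_overlap(candidate, query)
            + w_sem * semantic_sim
            + w_int * intent_match)

def best_match(query, candidates, sem_scores, intent_scores):
    """Return the candidate question with the highest comprehensive score."""
    scored = [(composite_score(c, query, s, i), c)
              for c, s, i in zip(candidates, sem_scores, intent_scores)]
    return max(scored)[1]
```

In this sketch the highest-scoring candidate plays the role of the target matching question.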
Step 108, obtaining matching answer information corresponding to the question to be retrieved according to the target matching question.
After the target matching question corresponding to the question to be retrieved is obtained, the answer information corresponding to the target matching question can be obtained from the question-answer pair library and used as the matching answer information corresponding to the question to be retrieved. Further, the matching answer information can be output and displayed through the retrieval terminal.
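A minimal sketch of this lookup step, assuming the question-answer pair library is organized as a category-keyed mapping from questions to answers (an illustrative structure; the patent does not specify the storage layout, and the entries below are placeholders):

```python
# Sketch: once the target matching question is known, its stored answer
# in the question-answer pair library is the matching answer information.

qa_pair_library = {
    "claims questions": {
        "how long is the audit time of the claim":
            "Claims are typically reviewed within a fixed service window.",
        "what materials must be submitted for a claim":
            "Policy number, identity document, and supporting evidence.",
    },
}

def lookup_answer(library, category, target_question):
    """Return the stored answer for the target matching question, or None."""
    return library.get(category, {}).get(target_question)
```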
In the method of this embodiment, a question to be retrieved sent by a retrieval terminal is received, a plurality of question-answer categories contained in a pre-constructed question-answer pair library are obtained, a large language model is used to obtain a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories, and matching answer information corresponding to the question to be retrieved is obtained according to the target matching question. In the method provided by the embodiment of the application, the technical route of the large model is adopted; when the large model is used, it is not made to generate an answer directly, but to select the question matching the user's intention from the pre-constructed question-answer pair library, and the answer corresponding to that question is output to the user, thereby improving response accuracy.
In an exemplary embodiment, as shown in fig. 3, step 106 may include steps 302 through 304:
Step 302, obtaining, by using the large language model, at least one candidate question-answer category corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories.
Step 304, obtaining, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the at least one candidate question-answer category.
In an exemplary embodiment, as shown in fig. 4, step 304 may include steps 402 to 404:
Step 402, obtaining, from the question-answer pair library, a plurality of candidate questions corresponding to the at least one candidate question-answer category.
Step 404, obtaining, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions.
In an exemplary embodiment, step 404 may include:
Obtaining a correlation degree between each candidate question and the question to be retrieved; determining a question evaluation score of each candidate question according to the correlation degree; and determining the candidate question with the highest question evaluation score as the target matching question corresponding to the question to be retrieved.
Specifically, in a first step, the candidate questions and the question to be retrieved are organized into an input structure that the large model can understand; for example, a data object containing a "candidate question list" field and a "question to be retrieved" field is created, such as, in JSON format, {"candidate question list": ["What materials must be submitted for a claim?", "How long is the audit time of a claim?"], "question to be retrieved": "How long after my insurance claim application is submitted can the claim payment be received?"}. In a second step, the large language model prompt is designed as described above for step 106: the prompt comprises the role setting, task description, input description, and output requirement, guiding the model to select, from the candidate question list, the one question that best matches the question to be retrieved in terms of semantic understanding, question intent, and insurance business relevance. In a third step, the large language model performs semantic understanding and analysis of the question to be retrieved, evaluates each candidate question through vocabulary-level matching, semantic similarity calculation, and intention matching, and calculates for each candidate question a comprehensive matching score, namely the question evaluation score, that comprehensively considers these factors. The model then selects the highest-scoring question from the candidate question list as the question best matching the customer intent; for example, "How long is the audit time of a claim?" obtains the highest comprehensive matching score and is determined as the finally screened best question, i.e., the target matching question corresponding to the question to be retrieved.
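The input-structure and prompt construction in the first and second steps can be sketched as follows. The field names and wording paraphrase the example prompt above; they are illustrative, not the exact production template.

```python
# Sketch: package the candidates and the question to be retrieved as JSON,
# then assemble the four-part prompt (role, task, input, output requirement).
import json

def build_input(candidates, query):
    """Serialize the candidate list and the question to be retrieved."""
    return json.dumps(
        {"candidate question list": candidates, "question to be retrieved": query},
        ensure_ascii=False,
    )

def build_prompt(candidates, query):
    """Role setting + task description + input description + output requirement."""
    return (
        "You are a professional insurance business answering expert.\n"
        "Task: select the one question from the candidate list that best matches "
        "the question to be retrieved in semantics, intent, and insurance relevance.\n"
        f"Input: {build_input(candidates, query)}\n"
        "Output: directly output the best matching question."
    )
```

The resulting string is what would be passed to the large language model alongside the constructed input data.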
In the method of this embodiment, a large language model is introduced; when the large language model is used, it is not made to generate answers directly, but to select the question matching the user's intention from the pre-constructed question-answer pair library and output the answer corresponding to that question to the user, thereby improving response accuracy.
In an exemplary embodiment, as shown in fig. 5, step 302 may include steps 502 through 504:
Step 502, inputting the question to be retrieved and the category name information of each of the plurality of question-answer categories into the large language model, and obtaining a category screening result from the large language model.
Step 504, obtaining, according to the category screening result, the at least one candidate question-answer category corresponding to the question to be retrieved.
In an exemplary embodiment, step 504 may include:
If the category screening result indicates that the large language model has screened out a candidate question-answer category, the at least one candidate question-answer category is output; if the category screening result indicates that the large language model has not screened out a candidate question-answer category, category association information of each of the plurality of question-answer categories is obtained, and the at least one candidate question-answer category corresponding to the question to be retrieved is obtained according to the category association information.
As shown in fig. 6, which is a flow chart of cyclically calling the large language model: in the first step, an input data structure is constructed, combining a category list composed of the plurality of question-answer categories with the question to be retrieved into an input format suitable for the large language model; for example, a data object containing a "category set" field and a "question to be retrieved" field is constructed, such as, in JSON format, {"category set": ["new policy application", "claims questions", "information inquiry", "online payment"], "question to be retrieved": "I want to inquire about my insurance policy information"}. In the second step, the large language model prompt is designed; the prompt is carefully written to guide the large model to execute the category screening task. In the third step, the question to be retrieved, the category list, and the prompt are input into the large language model to obtain the category screening result of the large language model; the category screening result can be used to judge whether the large language model has accurately screened a category. If the category screening result indicates that the large language model has screened out a candidate question-answer category, the at least one candidate question-answer category is output; if it indicates that the large language model has not screened out a candidate question-answer category, category association information of each of the plurality of question-answer categories is obtained. Specifically, the question to be retrieved, the category list, the prompt, and the available application programming interface (Application Programming Interface, API) types and parameters are input into the large language model, and the large language model is called again.
At this point the large language model has the ability to analyze the input information and select an API in order to obtain more information that helps it screen the best category. Based on its understanding of the question to be retrieved and the category set, the large language model selects, from the provided APIs, the one it considers able to obtain the most valuable information. For example, if the large language model determines that the specific meaning of a certain category needs to be understood more deeply in order to screen the current question to be retrieved accurately, it may select the category meaning description API; if a category needs to be judged with the aid of typical questions, the category representative questions API may be selected; and if several categories are difficult to distinguish, the meaning distinction API for different categories may be selected. Further, the knowledge question-answering system can obtain parameters according to the API selected by the large language model and the category names. Specifically, for the category meaning description API parameters: when the large language model selects the category meaning description API, the system extracts the detailed meaning description of the category from the category meaning description database according to the category name specified by the large language model, and passes it to the API as a parameter. For example, if the large language model selects the "claims questions" category, the system will obtain the detailed meaning description of "claims questions" and pass it to the API.
For the category representative questions API parameters: if the large language model selects the category representative questions API, the system finds the representative question list of the category from the category representative question data source according to the category name specified for that API, and passes the list to the API as a parameter. For example, if the large language model specifies the "claims questions" category, the system obtains the representative question list of "claims questions" and provides it to the API. For the meaning distinction API parameters of different categories: when the large language model selects the meaning distinction API, the system queries the data structure storing meaning-distinction explanations for the detailed explanation of the distinction between the two categories, according to the two category names given by the large language model, and then passes the explanation to the API as a parameter. For example, if the large language model chooses to distinguish "new policy application" from "claims questions", the system will obtain a detailed explanation of the distinction between these two categories and pass it to the API. Furthermore, the knowledge question-answering system accurately calls the corresponding API according to the API selected by the large language model and the obtained parameters. Upon receiving the parameters, the API retrieves the required information from its data source and returns it.
For example, the category meaning description API returns the detailed meaning text of the given category, the category representative questions API returns the category's representative question list, and the meaning distinction API for different categories returns explanatory text on the meaning distinction between the two categories. The returned information serves as an important input when the large language model is called again in the next cycle to screen categories, helping it screen more accurately; if the large language model still cannot screen out a category after at most 3 attempts, the task fails.
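The cyclic calling flow, with its three-attempt limit and API-based context enrichment, can be sketched as follows. The `llm` callable and the API stubs are hypothetical stand-ins for the real model service and the three category APIs; the real system would also let the model choose which API to call.

```python
# Sketch: cyclically call the LLM for category screening; on each failure,
# fetch category association information through a helper API and retry.
# The task fails after 3 attempts, matching the flow described above.

CATEGORY_APIS = {
    "meaning_description": lambda name: f"meaning of {name}",
    "representative_questions": lambda name: f"typical questions for {name}",
    "meaning_distinction": lambda pair: f"difference between {pair[0]} and {pair[1]}",
}

def screen_category(llm, query, categories, max_attempts=3):
    """Loop the screening call, enriching the prompt context on each failure."""
    context = ""
    for _ in range(max_attempts):
        result = llm(query, categories, context)  # returns a category or None
        if result is not None:
            return result
        # Stub: always fetch the meaning description of the first category;
        # the real system would use the API the model itself selected.
        context += CATEGORY_APIS["meaning_description"](categories[0]) + "\n"
    raise RuntimeError("category screening failed after 3 attempts")
```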
In the method of this embodiment, when the large language model cannot accurately screen out at least one candidate question-answer category corresponding to the question to be retrieved, category association information of each of the plurality of question-answer categories can be obtained, and the large language model can be cyclically called for screening multiple times according to the category association information, thereby improving the screening accuracy of candidate question-answer categories.
It should be understood that, although the steps in the flowcharts of the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited in execution order and may be performed in other orders. Moreover, at least some of the steps in these flowcharts may include a plurality of sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with at least some of the other steps, sub-steps, or stages.
Based on the same inventive concept, the embodiment of the application also provides a knowledge question-answering device for realizing the knowledge question-answering method. The implementation scheme of the solution provided by the device is similar to the implementation scheme recorded in the method, so the specific limitation of one or more embodiments of the knowledge question answering device provided below can be referred to the limitation of the knowledge question answering method hereinabove, and the description is omitted here.
In one embodiment, as shown in FIG. 7, a knowledge question-answering apparatus is provided, which includes a receiving module 702, an obtaining module 704, a screening module 706, and an answer output module 708, wherein:
a receiving module 702, configured to receive a question to be retrieved sent by a retrieving terminal;
an obtaining module 704, configured to obtain a plurality of question-answer categories contained in a pre-constructed question-answer pair library;
a screening module 706, configured to obtain, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories; and
an answer output module 708, configured to obtain, according to the target matching question, matching answer information corresponding to the question to be retrieved.
In one embodiment, the screening module 706 is further configured to obtain, by using the large language model, at least one candidate question-answer category corresponding to the question to be retrieved based on the question to be retrieved and the plurality of question-answer categories, and to obtain, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the at least one candidate question-answer category.
In one embodiment, the screening module 706 is further configured to obtain, from the question-answer pair library, a plurality of candidate questions corresponding to the at least one candidate question-answer category, and to obtain, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions.
In one embodiment, the screening module 706 is further configured to obtain a correlation degree between each candidate question and the question to be retrieved, determine a question evaluation score of each candidate question according to the correlation degree, and determine the candidate question with the highest question evaluation score as the target matching question corresponding to the question to be retrieved.
In one embodiment, the screening module 706 is further configured to input the question to be retrieved and the category name information of each of the question-answer categories into the large language model, obtain a category screening result from the large language model, and obtain, according to the category screening result, the at least one candidate question-answer category corresponding to the question to be retrieved.
In one embodiment, the screening module 706 is further configured to output the at least one candidate question-answer category if the category screening result indicates that the large language model has screened out a candidate question-answer category, obtain category association information of each of the plurality of question-answer categories if the category screening result indicates that the large language model has not screened out a candidate question-answer category, and obtain, according to the category association information, the at least one candidate question-answer category corresponding to the question to be retrieved.
The respective modules in the knowledge question and answer device can be realized in whole or in part by software, hardware and a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 8. The computer device includes a processor, a memory, and a network interface connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing knowledge question-answer related data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a knowledge question-answering method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In an embodiment, there is also provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium which, when executed, may comprise the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, Resistive Random Access Memory (ReRAM), Magnetoresistive Random Access Memory (MRAM), Ferroelectric Random Access Memory (FRAM), Phase Change Memory (PCM), graphene memory, and the like. The volatile memory may include Random Access Memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM may take various forms such as Static Random Access Memory (SRAM) or Dynamic Random Access Memory (DRAM). The databases referred to in the embodiments provided herein may include at least one of a relational database and a non-relational database. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, but are not limited to, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, or data processing logic devices based on quantum computing.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the application should be subject to the appended claims.

Claims (10)

Translated fromChinese
1.一种知识问答方法,其特征在于,所述方法包括:1. A knowledge question answering method, characterized in that the method comprises:接收检索终端发送的待检索问题;Receiving the questions to be searched sent by the search terminal;获取预先构建的问答对库中包含的多种问答类别;Get a variety of question and answer categories included in the pre-built question and answer library;利用大语言模型,基于所述待检索问题和多种所述问答类别,获取所述待检索问题对应的目标匹配问题;Using a large language model, based on the question to be retrieved and the multiple question and answer categories, obtaining a target matching question corresponding to the question to be retrieved;根据所述目标匹配问题,获取所述待检索问题对应的匹配答案信息。According to the target matching question, matching answer information corresponding to the question to be retrieved is obtained.2.根据权利要求1所述的方法,其特征在于,所述利用大语言模型,基于所述待检索问题和多种所述问答类别,获取所述待检索问题对应的目标匹配问题,包括:2. The method according to claim 1, characterized in that the step of using a large language model to obtain a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the multiple question and answer categories comprises:利用所述大语言模型,基于所述待检索问题和多种所述问答类别,获取所述待检索问题对应的至少一种候选问答类别;Using the large language model, based on the question to be retrieved and the multiple question and answer categories, obtaining at least one candidate question and answer category corresponding to the question to be retrieved;利用大语言模型,基于所述待检索问题和至少一种所述候选问答类别,获取所述待检索问题对应的所述目标匹配问题。The target matching question corresponding to the question to be retrieved is obtained by utilizing a large language model based on the question to be retrieved and at least one of the candidate question and answer categories.3.根据权利要求2所述的方法,其特征在于,所述利用大语言模型,基于所述待检索问题和至少一种所述候选问答类别,获取所述待检索问题对应的所述目标匹配问题,包括:3. 
3. The method according to claim 2, characterized in that the step of using the large language model to obtain the target matching question corresponding to the question to be retrieved based on the question to be retrieved and at least one of the candidate question and answer categories comprises:
obtaining, in the question-answer pair library, a plurality of candidate questions corresponding to at least one of the candidate question and answer categories; and
obtaining, by using the large language model, the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions.
4. The method according to claim 3, characterized in that the step of using the large language model to obtain the target matching question corresponding to the question to be retrieved based on the question to be retrieved and the plurality of candidate questions comprises:
obtaining a relevance of each of the candidate questions to the question to be retrieved;
determining a question evaluation score for each of the candidate questions according to the relevance; and
determining the candidate question with the highest question evaluation score as the target matching question corresponding to the question to be retrieved.
5. The method according to claim 2, characterized in that the step of using the large language model to obtain at least one candidate question and answer category corresponding to the question to be retrieved based on the question to be retrieved and the multiple question and answer categories comprises:
inputting the question to be retrieved and category name information of each of the multiple question and answer categories into the large language model, and obtaining a category screening result of the large language model; and
obtaining, according to the category screening result, at least one of the candidate question and answer categories corresponding to the question to be retrieved.
6. The method according to claim 5, characterized in that the step of obtaining, according to the category screening result, at least one of the candidate question and answer categories corresponding to the question to be retrieved comprises:
if the category screening result indicates that the large language model has screened out the candidate question and answer categories, outputting at least one of the candidate question and answer categories;
if the category screening result indicates that the large language model has not screened out the candidate question and answer categories, obtaining category association information of each of the multiple question and answer categories; and
obtaining, according to the category association information, at least one of the candidate question and answer categories corresponding to the question to be retrieved.
7. A knowledge question-answering device, characterized in that the device comprises:
a receiving module, configured to receive a question to be retrieved sent by a retrieval terminal;
an acquisition module, configured to acquire multiple question and answer categories contained in a pre-built question-answer pair library;
a screening module, configured to obtain, by using a large language model, a target matching question corresponding to the question to be retrieved based on the question to be retrieved and the multiple question and answer categories; and
an answer output module, configured to obtain, according to the target matching question, matching answer information corresponding to the question to be retrieved.
8. A computer device, comprising a memory and a processor, wherein the memory stores a computer program, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
9. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
10. A computer program product, comprising a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
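The flow claimed above (LLM-based category screening with a fallback to category association information, followed by candidate-question scoring to select a target match and return its answer) can be sketched in Python as follows. This is an illustrative sketch only, not the patent's implementation: the class and function names, the token-overlap relevance heuristic, and the keyword-based association fallback are all assumptions introduced for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class QAPairLibrary:
    # category name -> list of (question, answer) pairs (hypothetical structure)
    categories: dict = field(default_factory=dict)
    # category name -> association keywords, used as the claim-6-style fallback
    associations: dict = field(default_factory=dict)

def screen_categories(llm, query, library):
    """Ask the LLM to pick candidate categories by name; if it picks none,
    fall back to category association information (cf. claims 5 and 6)."""
    names = list(library.categories)
    picked = llm(query, names)  # the LLM may return an empty selection
    if picked:
        return picked
    # Fallback: keep categories whose association keywords appear in the query;
    # if nothing matches, keep all categories so retrieval can still proceed.
    return [c for c, kws in library.associations.items()
            if any(kw in query for kw in kws)] or names

def relevance(q1, q2):
    """Stand-in relevance score (cf. claim 4): token-overlap (Jaccard) ratio."""
    a, b = set(q1.lower().split()), set(q2.lower().split())
    return len(a & b) / max(len(a | b), 1)

def answer(llm, query, library):
    """Screen categories, gather their candidate questions, score each, and
    return the best-scoring question with its answer (cf. claims 3-4, 7)."""
    candidates = []
    for cat in screen_categories(llm, query, library):
        candidates.extend(library.categories.get(cat, []))
    if not candidates:
        return None
    best_q, best_a = max(candidates, key=lambda qa: relevance(query, qa[0]))
    return best_q, best_a
```

A real system would replace the `relevance` heuristic with the LLM's own question-evaluation scores and back `QAPairLibrary` with an actual store; the control flow, however, mirrors the claimed steps: category screening, fallback, candidate scoring, answer lookup.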
CN202411710491.9A | 2024-11-27 | 2024-11-27 | Knowledge question answering method, device, computer equipment and storage medium | Pending | CN119513268A (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202411710491.9A | 2024-11-27 | 2024-11-27 | Knowledge question answering method, device, computer equipment and storage medium (published as CN119513268A)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202411710491.9A | 2024-11-27 | 2024-11-27 | Knowledge question answering method, device, computer equipment and storage medium (published as CN119513268A)

Publications (1)

Publication Number | Publication Date
CN119513268A (en) | 2025-02-25

Family

ID=94668327

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202411710491.9A (Pending, published as CN119513268A (en)) | Knowledge question answering method, device, computer equipment and storage medium | 2024-11-27 | 2024-11-27

Country Status (1)

Country | Link
CN (1) | CN119513268A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN120493908A (en)* | 2025-07-17 | 2025-08-15 | 浪潮通用软件有限公司 | A contract automation review system and method based on multi-agent collaboration
CN120493908B (en)* | 2025-07-17 | 2025-10-03 | 浪潮通用软件有限公司 | Contract automatic auditing system and method based on multi-agent cooperation

Similar Documents

PublicationPublication DateTitle
US11681694B2 (en)Systems and methods for grouping and enriching data items accessed from one or more databases for presentation in a user interface
US11263277B1 (en)Modifying computerized searches through the generation and use of semantic graph data models
US8131684B2 (en)Adaptive archive data management
US7567906B1 (en)Systems and methods for generating an annotation guide
CN104750776B (en) Use metadata to access information content in database platforms
CN114579599B (en)Interactive question-answering method and system based on form
US20250200088A1 (en)Data source mapper for enhanced data retrieval
CN118796982B (en)Data processing method, computing device, storage medium and program product
CN119513268A (en) Knowledge question answering method, device, computer equipment and storage medium
CN118885587A (en) Question and answer processing method, device and non-volatile storage medium
CN119166740A (en) Knowledge base construction method, data processing method, device, storage medium and program product
CN118210874A (en)Data processing method, device, computer equipment and storage medium
KR102574784B1 (en)Method for recommending suitable texts to auto-complete ESG documents and ESG service providing system performing the same
CN120105736A (en) Scenario simulation generation method, device, equipment and medium
CN118820402A (en) Information extraction method, device and electronic equipment based on large model
CN118656659A (en) Job matching method, device, computer equipment, and storage medium
CN117851563A (en) Automatic question-answering method, device, electronic device, and readable storage medium
EP4587944A1 (en)Interactive tool for determining a headnote report
CN114861107A (en)Landing page display method and device and electronic equipment
CN114676237A (en) Sentence similarity determination method, device, computer equipment and storage medium
CN118981306B (en) Code interface specification generation method and device
US20250251850A1 (en)Interactive patent visualization systems and methods
US20250285155A1 (en)Systems and methods for modification of machine learning model-generated text and images based on user queries and profiles
US20240361890A1 (en)Interactive patent visualization systems and methods
US20250284755A1 (en)Systems and methods for user selection of machine learning model-based results based on user queries and profiles

Legal Events

Date | Code | Title | Description
 | PB01 | Publication | Publication
 | SE01 | Entry into force of request for substantive examination | Entry into force of request for substantive examination
