US20200257856A1 - Systems and methods for machine learning based multi intent segmentation and classification - Google Patents

Systems and methods for machine learning based multi intent segmentation and classification

Info

Publication number
US20200257856A1
Authority
US
United States
Prior art keywords
utterance
intent
domain
distinct
utterances
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/783,604
Inventor
Joseph Peper
Parker Hill
Kevin Leach
Sean Stapleton
Jonathan K. Kummerfeld
Johann Hauswald
Michael A. Laurenzano
Lingjia Tang
Jason Mars
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Clinc Inc
Original Assignee
Clinc Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Clinc Inc
Priority to US16/783,604 (published as US20200257856A1)
Assigned to Clinc, Inc. Assignment of assignors interest (see document for details). Assignors: STAPLETON, Sean; TANG, Lingjia; HILL, Parker; PEPER, Joseph; LAURENZANO, Michael; HAUSWALD, Johann; KUMMERFELD, Jonathan K.; LEACH, Kevin; MARS, Jason
Priority to US16/854,834 (published as US10824818B2)
Publication of US20200257856A1
Legal status: Abandoned

Abstract

Systems and methods for synthesizing training data for multi-intent utterance segmentation include identifying a first corpus of utterances comprising a plurality of distinct single-intent in-domain utterances; identifying a second corpus of utterances comprising a plurality of distinct single-intent out-of-domain utterances; identifying a third corpus comprising a plurality of distinct conjunction terms; forming a multi-intent training corpus comprising synthetic multi-intent utterances, wherein forming each distinct multi-intent utterance includes: selecting a first distinct in-domain utterance from the first corpus of utterances; probabilistically selecting one of a first out-of-domain utterance from the second corpus and a second in-domain utterance from the first corpus; probabilistically selecting or not selecting a distinct conjunction term from the third corpus; and forming a synthetic multi-intent utterance including appending the first in-domain utterance with one of the first out-of-domain utterance from the second corpus of utterances and the second in-domain utterance from the first corpus of utterances.
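For illustration only (not part of the patent text), the synthesis procedure described in the abstract can be sketched in Python. Function names, parameter names, and the default selection probabilities are assumptions introduced here, not values from the patent:

```python
import random

def synthesize_utterance(in_domain, out_of_domain, conjunctions,
                         p_out_of_domain=0.5, p_conjunction=0.5, rng=random):
    """Form one synthetic multi-intent utterance as the abstract describes:
    select an in-domain utterance, probabilistically select a second
    utterance (out-of-domain or in-domain), and probabilistically insert
    a conjunction term between the two."""
    first = rng.choice(in_domain)
    # Probabilistically choose the corpus for the second utterance.
    second_pool = out_of_domain if rng.random() < p_out_of_domain else in_domain
    second = rng.choice(second_pool)
    # Probabilistically select (or not select) a conjunction term.
    if rng.random() < p_conjunction:
        return " ".join([first, rng.choice(conjunctions), second])
    return " ".join([first, second])

def build_training_corpus(n, in_domain, out_of_domain, conjunctions):
    """Form a multi-intent training corpus of n synthetic utterances."""
    return [synthesize_utterance(in_domain, out_of_domain, conjunctions)
            for _ in range(n)]
```

Each synthetic utterance always begins with an in-domain utterance; the second component and the conjunction vary, which is what gives the training corpus its diversity.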

Description

Claims (19)

What is claimed is:
1. A method for synthesizing training data for multi-intent utterance segmentation within a machine learning-based dialogue system, the method comprising:
identifying a first corpus of utterances comprising a plurality of distinct single-intent in-domain utterances;
identifying a second corpus of utterances comprising a plurality of distinct single-intent out-of-domain utterances;
identifying a third corpus comprising a plurality of distinct conjunction terms;
forming, by the machine learning-based dialogue system, a multi-intent training corpus comprising synthetic multi-intent utterances, wherein forming each distinct multi-intent utterance of the multi-intent training corpus includes:
selecting a first distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances;
probabilistically selecting one of a first out-of-domain utterance from the second corpus of utterances and a second in-domain utterance from the first corpus of utterances;
probabilistically selecting or not selecting a distinct conjunction term from the third corpus of conjunction terms; and
forming a synthetic multi-intent utterance including appending the first in-domain utterance with one of the first out-of-domain utterance from the second corpus of utterances and the second in-domain utterance from the first corpus of utterances.
2. The method according to claim 1, further comprising:
identifying a conjunction-inclusion probability that a conjunction term would be appended to the first distinct in-domain utterance; and
if the conjunction-inclusion probability satisfies or exceeds a conjunction-inclusion threshold, randomly selecting a distinct conjunction term from the plurality of distinct conjunction terms of the third corpus.
3. The method according to claim 1, further comprising:
identifying an out-of-domain-inclusion probability that an out-of-domain utterance would be appended to the first distinct in-domain utterance, wherein if the out-of-domain-inclusion probability satisfies or exceeds an out-of-domain-inclusion threshold, randomly selecting a first distinct out-of-domain utterance from the plurality of distinct single-intent out-of-domain utterances of the second corpus of utterances.
4. The method according to claim 3, further comprising:
in response to selecting the first distinct out-of-domain utterance, concatenating the distinct conjunction term to a boundary of the first in-domain utterance and concatenating the first distinct out-of-domain utterance after the distinct conjunction term.
5. The method according to claim 1, further comprising:
identifying an out-of-domain-inclusion probability that an out-of-domain utterance would be appended to the first distinct in-domain utterance, wherein if the out-of-domain-inclusion probability does not satisfy an out-of-domain-inclusion threshold, randomly selecting a second distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances.
6. The method according to claim 1, further comprising:
identifying a conjunction-inclusion probability that a conjunction term would be appended to the first distinct in-domain utterance;
if the conjunction-inclusion probability satisfies or exceeds a conjunction-inclusion threshold, randomly selecting a distinct conjunction term from the plurality of distinct conjunction terms of the third corpus;
identifying an out-of-domain-inclusion probability that an out-of-domain utterance would be appended to the first distinct in-domain utterance, wherein:
(i) if the out-of-domain-inclusion probability satisfies or exceeds an out-of-domain-inclusion threshold, randomly selecting a first distinct out-of-domain utterance from the plurality of distinct single-intent out-of-domain utterances of the second corpus of utterances, or
(ii) if the out-of-domain-inclusion probability does not satisfy the out-of-domain-inclusion threshold, randomly selecting a second distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances.
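The threshold-based procedure of claims 2 through 6 can be sketched as follows (an illustration, not the patent's implementation; the threshold defaults and function name are assumptions):

```python
import random

def form_utterance_with_thresholds(in_domain, out_of_domain, conjunctions,
                                   conjunction_threshold=0.5,
                                   out_of_domain_threshold=0.5,
                                   rng=random):
    """Sketch of claims 2-6: drawn inclusion probabilities are compared
    against fixed thresholds to decide whether a conjunction is inserted
    and which corpus supplies the second utterance."""
    first = rng.choice(in_domain)
    # Conjunction-inclusion probability vs. its threshold: on a hit,
    # randomly select a conjunction term from the third corpus.
    conjunction = None
    if rng.random() >= conjunction_threshold:
        conjunction = rng.choice(conjunctions)
    # Out-of-domain-inclusion probability vs. its threshold:
    # (i) satisfied: randomly select an out-of-domain utterance;
    # (ii) not satisfied: randomly select a second in-domain utterance.
    if rng.random() >= out_of_domain_threshold:
        second = rng.choice(out_of_domain)
    else:
        second = rng.choice(in_domain)
    # Concatenate the conjunction (if selected) at the boundary of the
    # first utterance, then the second utterance after it (claim 4).
    parts = [first] + ([conjunction] if conjunction is not None else []) + [second]
    return " ".join(parts)
```

Setting a threshold to 0 forces the corresponding branch to fire on every draw, which is convenient for testing the concatenation order.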
7. The method according to claim 1, wherein:
each of the plurality of distinct single-intent in-domain utterances of the first corpus comprises a single-intent in-domain utterance, and
each of the plurality of distinct single-intent out-of-domain utterances of the second corpus comprises a single-intent out-of-domain utterance.
8. The method according to claim 1, further comprising:
training a span-predicting utterance segmentation model using the multi-intent training corpus, wherein the span-predicting utterance segmentation model classifies each distinct utterance span of a subject multi-intent utterance that forms a complete semantic expression within the subject multi-intent utterance.
9. The method according to claim 8, further comprising:
receiving an input multi-intent utterance at the machine learning-based dialogue system;
predicting two or more boundary classification labels for two or more distinct tokens of the input multi-intent utterance; and
segmenting, at two or more boundary classification labels, the input multi-intent utterance into two or more distinct single-intent utterance components.
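The segmentation step of claim 9, splitting a tokenized utterance at predicted boundary labels, can be sketched as below. The boolean-per-token label scheme is an assumption; the claim only specifies boundary classification labels for tokens:

```python
def segment_at_boundaries(tokens, boundary_labels):
    """Split a tokenized multi-intent utterance into distinct
    single-intent components, starting a new component at each token
    whose boundary label is True."""
    segments, current = [], []
    for token, is_boundary in zip(tokens, boundary_labels):
        if is_boundary and current:
            segments.append(" ".join(current))
            current = []
        current.append(token)
    if current:
        segments.append(" ".join(current))
    return segments
```

In practice the boundary labels would come from the span-predicting segmentation model of claim 8; here they are supplied directly.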
10. The method according to claim 9, further comprising:
providing each of the two or more distinct single-intent utterance components to one of a plurality of concurrently operating distinct single-intent machine learning classifiers; and
generating by each respective one of the plurality of concurrently operating distinct machine learning classifiers an intent classification label for each of the two or more distinct single-intent utterance components.
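A minimal sketch of claim 10's routing of components to concurrently operating classifiers, using a thread pool; the classifiers here are plain callables standing in for the trained single-intent machine learning classifiers (an assumption for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def classify_components(components, classifiers):
    """Provide each distinct single-intent utterance component to one of
    several concurrently operating single-intent classifiers, and collect
    an intent classification label per component."""
    with ThreadPoolExecutor(max_workers=max(1, len(components))) as pool:
        # Pair each component with one classifier and run them concurrently.
        futures = [pool.submit(clf, comp)
                   for clf, comp in zip(classifiers, components)]
        return [f.result() for f in futures]
```

Because each component is independent, the classifiers can run in parallel and the labels are collected in component order.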
11. The method according to claim 1, further comprising:
training a joint model using the multi-intent training corpus comprising synthetic multi-intent utterances, wherein the joint model performs multiple distinct machine learning tasks, the joint model comprising an intent machine learning classifier that predicts an intent label for a target utterance and a slot segment machine learning model that predicts a slot label that identifies a semantic concept of a given segment of the target utterance.
12. The method according to claim 11, further comprising:
receiving an input multi-intent utterance; and
identifying whether the input multi-intent utterance is an entangled multi-intent utterance based on an entanglement threshold, wherein an entangled multi-intent utterance relates to a subject multi-intent utterance in which two or more distinct intents cannot be easily separated, such that an entanglement measure of the subject multi-intent utterance satisfies or exceeds the entanglement threshold.
13. The method according to claim 12, wherein:
if the input multi-intent utterance comprises the entangled multi-intent utterance, providing the entangled multi-intent utterance as input into the joint model;
at the joint model, predicting an intent classification label and a slot value classification label for each identified token of the entangled multi-intent utterance.
14. The method according to claim 1, further comprising:
training a joint model with segmentation using the multi-intent training corpus comprising synthetic multi-intent utterances, wherein the joint model with segmentation performs multiple distinct machine learning tasks, the joint model with segmentation including a combination of (i) a segmentation model, (ii) an intent classification model, and (iii) a slot value classification model.
15. The method according to claim 14, further comprising:
receiving an input multi-intent utterance; and
identifying whether the input multi-intent utterance comprises a long, multi-intent utterance based on an aggregated span threshold, wherein the long, multi-intent utterance relates to a subject multi-intent utterance in which an aggregate of multiple distinct utterance spans of the subject multi-intent utterance satisfies or exceeds an aggregated span threshold.
16. The method according to claim 15, wherein:
if the input multi-intent utterance comprises the long multi-intent utterance, providing the long multi-intent utterance as input into the joint model with segmentation;
at the joint model with segmentation, (i) predicting two or more boundary classification labels for two or more distinct tokens of the long multi-intent utterance, (ii) predicting an intent classification label, and (iii) predicting a slot value classification label for each identified token of the long multi-intent utterance.
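Claims 12 through 16 together describe routing an input utterance to one of three processing paths. A hypothetical sketch of that dispatch logic follows; the threshold values, the word-count measure of span length, and the route names are all illustrative assumptions, not values from the patent:

```python
def route_multi_intent_utterance(utterance_spans, entanglement_score,
                                 entanglement_threshold=0.8,
                                 aggregated_span_threshold=12):
    """Route per claims 12-16: entangled utterances go to the joint
    model; long utterances, whose aggregate span length satisfies or
    exceeds an aggregated span threshold, go to the joint model with
    segmentation; everything else goes to the segmentation pipeline
    followed by single-intent classifiers."""
    if entanglement_score >= entanglement_threshold:
        return "joint_model"
    # Aggregate the lengths of the distinct utterance spans.
    aggregate_length = sum(len(span.split()) for span in utterance_spans)
    if aggregate_length >= aggregated_span_threshold:
        return "joint_model_with_segmentation"
    return "segmentation_then_intent_classifiers"
```

The ordering matters: entanglement is checked first, so an utterance that is both entangled and long would be handled by the joint model under this sketch.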
17. A method for synthesizing training data for multi-intent utterance segmentation within a single-intent machine learning-based dialogue system, the method comprising:
sourcing a first corpus of utterances comprising a plurality of distinct single-intent in-domain utterances;
sourcing a second corpus of utterances comprising a plurality of distinct single-intent out-of-domain utterances;
sourcing a third corpus comprising a plurality of distinct conjunction terms;
constructing, by the machine learning-based dialogue system, a multi-intent training corpus comprising synthetic multi-intent utterances, wherein forming each distinct multi-intent utterance of the multi-intent training corpus includes:
selecting a first distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances;
probabilistically selecting one of a first out-of-domain utterance from the second corpus of utterances and a second in-domain utterance from the first corpus of utterances;
probabilistically selecting or not selecting a distinct conjunction term from the third corpus of conjunction terms; and
constructing a synthetic multi-intent utterance including appending the first in-domain utterance with one of the first out-of-domain utterance from the second corpus of utterances and the second in-domain utterance from the first corpus of utterances.
18. The method according to claim 1, further comprising:
computing, by the machine learning-based system, a conjunction-inclusion probability that a conjunction term would be appended to the first distinct in-domain utterance;
if the conjunction-inclusion probability satisfies or exceeds a conjunction-inclusion threshold, randomly selecting a distinct conjunction term from the plurality of distinct conjunction terms of the third corpus;
computing, by the machine learning-based system, an out-of-domain-inclusion probability that an out-of-domain utterance would be appended to the first distinct in-domain utterance, wherein:
(i) if the out-of-domain-inclusion probability satisfies or exceeds an out-of-domain-inclusion threshold, randomly selecting a first distinct out-of-domain utterance from the plurality of distinct single-intent out-of-domain utterances of the second corpus of utterances, or
(ii) if the out-of-domain-inclusion probability does not satisfy the out-of-domain-inclusion threshold, randomly selecting a second distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances.
19. A system for intelligently synthesizing training data for multi-intent utterance segmentation within a machine learning-based dialogue system, the system comprising:
a datastore comprising:
a first corpus of utterances comprising a plurality of distinct single-intent in-domain utterances;
a second corpus of utterances comprising a plurality of distinct single-intent out-of-domain utterances;
a third corpus comprising a plurality of distinct conjunction terms;
a machine learning-based dialogue system implemented by a distributed network of computers, the machine learning-based dialogue system including:
a training data synthesis module that:
constructs a multi-intent training corpus comprising synthetic multi-intent utterances, wherein forming each distinct multi-intent utterance of the multi-intent training corpus includes:
selecting a first distinct in-domain utterance from the plurality of distinct single-intent in-domain utterances of the first corpus of utterances;
probabilistically selecting one of a first out-of-domain utterance from the second corpus of utterances and a second in-domain utterance from the first corpus of utterances;
probabilistically selecting or not selecting a distinct conjunction term from the third corpus of conjunction terms; and
constructing a synthetic multi-intent utterance including appending the first in-domain utterance with one of the first out-of-domain utterance from the second corpus of utterances and the second in-domain utterance from the first corpus of utterances.
US16/783,604 | Priority date 2019-02-07 | Filing date 2020-02-06 | Systems and methods for machine learning based multi intent segmentation and classification | Abandoned | US20200257856A1 (en)

Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
US16/783,604 (US20200257856A1) | 2019-02-07 | 2020-02-06 | Systems and methods for machine learning based multi intent segmentation and classification
US16/854,834 (US10824818B2) | 2019-02-07 | 2020-04-21 | Systems and methods for machine learning-based multi-intent segmentation and classification

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US201962802342P | 2019-02-07 | 2019-02-07 |
US201962890247P | 2019-08-22 | 2019-08-22 |
US202062969695P | 2020-02-04 | 2020-02-04 |
US16/783,604 (US20200257856A1) | 2019-02-07 | 2020-02-06 | Systems and methods for machine learning based multi intent segmentation and classification

Related Child Applications (1)

Application Number | Title | Priority Date | Filing Date
US16/854,834 (Continuation, US10824818B2) | Systems and methods for machine learning-based multi-intent segmentation and classification | 2019-02-07 | 2020-04-21

Publications (1)

Publication Number | Publication Date
US20200257856A1 | 2020-08-13

Family

ID=71945661

Family Applications (2)

Application NumberTitlePriority DateFiling Date
US16/783,604AbandonedUS20200257856A1 (en)2019-02-072020-02-06Systems and methods for machine learning based multi intent segmentation and classification
US16/854,834ActiveUS10824818B2 (en)2019-02-072020-04-21Systems and methods for machine learning-based multi-intent segmentation and classification

Family Applications After (1)

Application NumberTitlePriority DateFiling Date
US16/854,834ActiveUS10824818B2 (en)2019-02-072020-04-21Systems and methods for machine learning-based multi-intent segmentation and classification

Country Status (2)

Country | Link
US (2) | US20200257856A1 (en)
WO (1) | WO2020163627A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN112417894A (en)* | 2020-12-10 | 2021-02-26 | 上海方立数码科技有限公司 | Conversation intention identification method and system based on multi-task learning
US20210304075A1 (en)* | 2020-03-30 | 2021-09-30 | Oracle International Corporation | Batching techniques for handling unbalanced training data for a chatbot
CN114116984A (en)* | 2021-12-01 | 2022-03-01 | 杜凡凡 | A coarse-to-fine explicit memory network spoken language understanding model
US20220230000A1 (en)* | 2021-01-20 | 2022-07-21 | Oracle International Corporation | Multi-factor modelling for natural language processing
WO2022198750A1 (en)* | 2021-03-26 | 2022-09-29 | 南京邮电大学 | Semantic recognition method
CN115359786A (en)* | 2022-08-19 | 2022-11-18 | 思必驰科技股份有限公司 | Multi-intention semantic understanding model training and using method and device
CN115408509A (en)* | 2022-11-01 | 2022-11-29 | 杭州一知智能科技有限公司 | Intention identification method, system, electronic equipment and storage medium
CN116306685A (en)* | 2023-05-22 | 2023-06-23 | 国网信息通信产业集团有限公司 | Multi-intention recognition method and system for power business scene
CN119558964A (en)* | 2023-08-16 | 2025-03-04 | 中国工商银行股份有限公司 | A transaction dialogue breakpoint takeover control method and device

Families Citing this family (29)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US11210470B2 (en)* | 2019-03-28 | 2021-12-28 | Adobe Inc. | Automatic text segmentation based on relevant context
US11106874B2 (en)* | 2019-05-08 | 2021-08-31 | Sap Se | Automated chatbot linguistic expression generation
US11651768B2 (en)* | 2019-09-16 | 2023-05-16 | Oracle International Corporation | Stop word data augmentation for natural language processing
JP7448350B2 (en)* | 2019-12-18 | 2024-03-12 | トヨタ自動車株式会社 | Agent device, agent system, and agent program
WO2021188719A1 (en)* | 2020-03-17 | 2021-09-23 | MeetKai, Inc. | An intelligent layer to power cross platform, edge-cloud hybrid artificial intelligence services
US12106055B2 (en)* | 2020-08-21 | 2024-10-01 | Oracle International Corporation | Techniques for providing explanations for text classification
US11645476B2 (en)* | 2020-09-29 | 2023-05-09 | International Business Machines Corporation | Generating symbolic domain models from multimodal data
US12026468B2 (en)* | 2020-11-30 | 2024-07-02 | Oracle International Corporation | Out-of-domain data augmentation for natural language processing
US12153881B2 (en)* | 2020-11-30 | 2024-11-26 | Oracle International Corporation | Keyword data augmentation tool for natural language processing
US11854528B2 (en) | 2020-12-22 | 2023-12-26 | Samsung Electronics Co., Ltd. | Method and system for detecting unsupported utterances in natural language understanding
US12165019B2 (en) | 2020-12-23 | 2024-12-10 | International Business Machines Corporation | Symbolic model training with active learning
FR3118527A1 (en)* | 2020-12-31 | 2022-07-01 | Alten | Method for automatic understanding of expressions in natural language, and associated understanding device
EP4272108A1 (en)* | 2020-12-31 | 2023-11-08 | Alten | Method for automatically understanding multiple instructions in natural language expressions
US12032911B2 (en)* | 2021-01-08 | 2024-07-09 | Nice Ltd. | Systems and methods for structured phrase embedding and use thereof
US11922141B2 (en)* | 2021-01-29 | 2024-03-05 | Walmart Apollo, Llc | Voice and chatbot conversation builder
US11705113B2 (en)* | 2021-06-24 | 2023-07-18 | Amazon Technologies, Inc. | Priority and context-based routing of speech processing
US12211493B2 (en) | 2021-06-24 | 2025-01-28 | Amazon Technologies, Inc. | Early invocation for contextual data processing
US20230026945A1 (en)* | 2021-07-21 | 2023-01-26 | Wellspoken, Inc. | Virtual Conversational Agent
US12288194B2 (en) | 2021-09-17 | 2025-04-29 | Optum, Inc. | Computer systems and computer-based methods for automated callback scheduling utilizing call duration prediction
US12288552B2 (en)* | 2021-09-17 | 2025-04-29 | Optum, Inc. | Computer systems and computer-based methods for automated caller intent prediction
US12019984B2 (en)* | 2021-09-20 | 2024-06-25 | Salesforce, Inc. | Multi-lingual intent model with out-of-domain detection
US12141533B2 (en)* | 2021-09-23 | 2024-11-12 | International Business Machines Corporation | Leveraging knowledge records for chatbot local search
US12277394B2 (en) | 2022-03-01 | 2025-04-15 | Tata Consultancy Services Limited | Systems and methods for multi-utterance generation of data with immutability regulation and punctuation-memory
US11985097B2 (en) | 2022-05-09 | 2024-05-14 | International Business Machines Corporation | Multi-agent chatbot with multi-intent recognition
US11934794B1 (en)* | 2022-09-30 | 2024-03-19 | Knowbl Inc. | Systems and methods for algorithmically orchestrating conversational dialogue transitions within an automated conversational system
US20240169165A1 (en)* | 2022-11-17 | 2024-05-23 | Samsung Electronics Co., Ltd. | Automatically Generating Annotated Ground-Truth Corpus for Training NLU Model
US20240362529A1 (en)* | 2023-04-26 | 2024-10-31 | Capital One Services, Llc | Systems and methods for assistive document retrieval in data-sparse environments
US12412030B2 (en) | 2023-06-09 | 2025-09-09 | Bank Of America Corporation | Utterance building to convey user input to conversational agents
CN117235629B (en)* | 2023-11-15 | 2024-04-12 | 中邮消费金融有限公司 | Intention recognition method, system and computer equipment based on knowledge domain detection

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9009046B1 (en)* | 2005-09-27 | 2015-04-14 | At&T Intellectual Property Ii, L.P. | System and method for disambiguating multiple intents in a natural language dialog system
US7664644B1 (en)* | 2006-06-09 | 2010-02-16 | At&T Intellectual Property Ii, L.P. | Multitask learning for spoken language understanding
JP5218052B2 (en)* | 2006-06-26 | 2013-06-26 | 日本電気株式会社 | Language model generation system, language model generation method, and language model generation program
US8572087B1 (en)* | 2007-10-17 | 2013-10-29 | Google Inc. | Content identification
US8983840B2 (en)* | 2012-06-19 | 2015-03-17 | International Business Machines Corporation | Intent discovery in audio or text-based conversation
US10096316B2 (en)* | 2013-11-27 | 2018-10-09 | Sri International | Sharing intents to provide virtual assistance in a multi-person dialog
US9292492B2 (en)* | 2013-02-04 | 2016-03-22 | Microsoft Technology Licensing, Llc | Scaling statistical language understanding systems across domains and intents
US10079013B2 (en)* | 2013-11-27 | 2018-09-18 | Sri International | Sharing intents to provide virtual assistance in a multi-person dialog
US20160055240A1 (en)* | 2014-08-22 | 2016-02-25 | Microsoft Corporation | Orphaned utterance detection system and method
US10134389B2 (en)* | 2015-09-04 | 2018-11-20 | Microsoft Technology Licensing, Llc | Clustering user utterance intents with semantic parsing
US11783173B2 (en)* | 2016-06-23 | 2023-10-10 | Microsoft Technology Licensing, Llc | Multi-domain joint semantic frame parsing
US10679144B2 (en)* | 2016-07-12 | 2020-06-09 | International Business Machines Corporation | Generating training data for machine learning
DK201770432A1 (en)* | 2017-05-15 | 2018-12-21 | Apple Inc. | Hierarchical belief states for digital assistants
US10929754B2 (en)* | 2017-06-06 | 2021-02-23 | Google Llc | Unified endpointer using multitask and multidomain learning
US10565986B2 (en)* | 2017-07-20 | 2020-02-18 | Intuit Inc. | Extracting domain-specific actions and entities in natural language commands
US10733538B2 (en)* | 2017-09-29 | 2020-08-04 | Oracle International Corporation | Techniques for querying a hierarchical model to identify a class from multiple classes
US11030400B2 (en)* | 2018-02-22 | 2021-06-08 | Verizon Media Inc. | System and method for identifying and replacing slots with variable slots
US11163961B2 (en)* | 2018-05-02 | 2021-11-02 | Verint Americas Inc. | Detection of relational language in human-computer conversation
US11416741B2 (en)* | 2018-06-08 | 2022-08-16 | International Business Machines Corporation | Teacher and student learning for constructing mixed-domain model
US20200019641A1 (en)* | 2018-07-10 | 2020-01-16 | International Business Machines Corporation | Responding to multi-intent user input to a dialog system
US11094317B2 (en)* | 2018-07-31 | 2021-08-17 | Samsung Electronics Co., Ltd. | System and method for personalized natural language understanding
US11822888B2 (en)* | 2018-10-05 | 2023-11-21 | Verint Americas Inc. | Identifying relational segments
GB201818237D0 (en)* | 2018-11-08 | 2018-12-26 | Polyal | A dialogue system, a dialogue method, a method of generating data for training a dialogue system, a system for generating data for training a dialogue system
US10559307B1 (en)* | 2019-02-13 | 2020-02-11 | Karen Elaine Khaleghi | Impaired operator detection and interlock apparatus

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US20210304075A1 (en)* | 2020-03-30 | 2021-09-30 | Oracle International Corporation | Batching techniques for handling unbalanced training data for a chatbot
US12236321B2 (en)* | 2020-03-30 | 2025-02-25 | Oracle International Corporation | Batching techniques for handling unbalanced training data for a chatbot
CN112417894A (en)* | 2020-12-10 | 2021-02-26 | 上海方立数码科技有限公司 | Conversation intention identification method and system based on multi-task learning
US20220230000A1 (en)* | 2021-01-20 | 2022-07-21 | Oracle International Corporation | Multi-factor modelling for natural language processing
US12099816B2 (en)* | 2021-01-20 | 2024-09-24 | Oracle International Corporation | Multi-factor modelling for natural language processing
WO2022198750A1 (en)* | 2021-03-26 | 2022-09-29 | 南京邮电大学 | Semantic recognition method
CN114116984A (en)* | 2021-12-01 | 2022-03-01 | 杜凡凡 | A coarse-to-fine explicit memory network spoken language understanding model
CN115359786A (en)* | 2022-08-19 | 2022-11-18 | 思必驰科技股份有限公司 | Multi-intention semantic understanding model training and using method and device
CN115408509A (en)* | 2022-11-01 | 2022-11-29 | 杭州一知智能科技有限公司 | Intention identification method, system, electronic equipment and storage medium
CN116306685A (en)* | 2023-05-22 | 2023-06-23 | 国网信息通信产业集团有限公司 | Multi-intention recognition method and system for power business scene
CN119558964A (en)* | 2023-08-16 | 2025-03-04 | 中国工商银行股份有限公司 | A transaction dialogue breakpoint takeover control method and device

Also Published As

Publication number | Publication date
WO2020163627A1 (en) | 2020-08-13
US10824818B2 | 2020-11-03
US20200257857A1 | 2020-08-13

Similar Documents

Publication | Title
US10824818B2 | Systems and methods for machine learning-based multi-intent segmentation and classification
US10936936B2 | Systems and methods for intelligently configuring and deploying a control structure of a machine learning-based dialogue system
US10970493B1 | Systems and methods for slot relation extraction for machine learning task-oriented dialogue systems
US11042800B2 | System and method for implementing an artificially intelligent virtual assistant using machine learning
US20210256345A1 | System and method for implementing an artificially intelligent virtual assistant using machine learning
US10679150B1 | Systems and methods for automatically configuring training data for training machine learning models of a machine learning-based dialogue system including seeding training samples or curating a corpus of training data based on instances of training data identified as anomalous
US10296848B1 | Systems and method for automatically configuring machine learning models
US10796104B1 | Systems and methods for constructing an artificially diverse corpus of training data samples for training a contextually-biased model for a machine learning-based dialogue system
US10679100B2 | Systems and methods for intelligently curating machine learning training data and improving machine learning model performance
CN112955893B | Automatic hyperlinking of documents
US11043208B1 | Systems and methods for mixed setting training for slot filling machine learning tasks in a machine learning task-oriented dialogue system
US11183175B2 | Systems and methods implementing data query language and utterance corpus implements for handling slot-filling and dialogue intent classification data in a machine learning task-oriented dialogue system
US10937417B2 | Systems and methods for automatically categorizing unstructured data and improving a machine learning-based dialogue system
WO2019103738A1 | System and method for implementing an artificially intelligent virtual assistant using machine learning
CN114266252A | Named entity identification method, device, device and storage medium
US20210166138A1 | Systems and methods for automatically detecting and repairing slot errors in machine learning training data for a machine learning-based dialogue system
WO2019088969A1 | System and method for implementing an artificially intelligent virtual assistant using machine learning
WO2025071985A1 | Using large language models to generate view-based accessibility information

Legal Events

Code | Title | Description
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

