US20160379140A1 - Weight benefit evaluator for training data - Google Patents

Weight benefit evaluator for training data

Info

Publication number
US20160379140A1
Authority
US
United States
Prior art keywords
function
data
test
training
artificial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/261,390
Inventor
Yaser Said Abu-Mostafa
Carlos Roberto Gonzalez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
California Institute of Technology
Original Assignee
California Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/451,899 (external-priority patent US9858534B2)
Priority claimed from US14/451,935 (external-priority patent US10535014B2)
Priority claimed from US14/451,870 (external-priority patent US9953271B2)
Application filed by California Institute of Technology
Priority to US15/261,390
Publication of US20160379140A1
Legal status: Abandoned (current)

Abstract

Technologies are generally described for methods and systems effective to determine a weight benefit associated with application of weights to training data in a machine learning environment. In an example, a device may determine a first function based on the training data, where the training data includes training inputs and training labels. The device may determine a second function based on weighted training data, which is based on application of weights to the training data. The device may determine a third function based on target data, where the target data is generated based on a target function. The target data may include target labels different from the training labels. The device may determine a fourth function based on weighted target data, which is a result of application of weights to the target data. The device may determine the weight benefit based on the first, second, third, and fourth functions.
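The scheme in the abstract can be sketched in a few lines of Python. This is an illustrative sketch only: the patent does not fix a model class or an exact benefit formula, so a one-dimensional weighted least-squares fit stands in for each learned function, the target function is a hypothetical line, and the final comparison rule is an assumption.

```python
def fit_line(pairs, weights=None):
    """Fit y = slope*(x - mx) + my by (weighted) least squares; returns a callable."""
    if weights is None:
        weights = [1.0] * len(pairs)
    sw = sum(weights)
    mx = sum(w * x for w, (x, _) in zip(weights, pairs)) / sw
    my = sum(w * y for w, (_, y) in zip(weights, pairs)) / sw
    var = sum(w * (x - mx) ** 2 for w, (x, _) in zip(weights, pairs))
    cov = sum(w * (x - mx) * (y - my) for w, (x, y) in zip(weights, pairs))
    slope = cov / var if var else 0.0
    return lambda x: slope * (x - mx) + my

def mean_sq_gap(f, g, xs):
    """Mean squared disagreement between two functions over the test inputs."""
    return sum((f(x) - g(x)) ** 2 for x in xs) / len(xs)

# Training data (inputs, labels), a candidate set of weights, and test inputs.
training = [(0.0, 0.1), (1.0, 1.2), (2.0, 1.9), (3.0, 3.2)]
weights = [1.0, 1.0, 2.0, 2.0]
test_xs = [0.5, 1.5, 2.5]

f1 = fit_line(training)           # first function: fit to training data
f2 = fit_line(training, weights)  # second function: fit to weighted training data

# Hypothetical target function, and target data over the training inputs.
target_fn = lambda x: 1.1 * x
target = [(x, target_fn(x)) for x, _ in training]
f3 = fit_line(target)             # third function: fit to target data
f4 = fit_line(target, weights)    # fourth function: fit to weighted target data

# One plausible weight-benefit test: the weights look beneficial if they
# perturb the fit of the target data no more than they perturb the fit of
# the training data, measured on the test inputs.
evaluation_value = mean_sq_gap(f3, f4, test_xs)
expected_value = mean_sq_gap(f1, f2, test_xs)
apply_weights = evaluation_value <= expected_value
```

Here the target data lies exactly on `target_fn`, so the third and fourth functions coincide and the weights pass the test; with noisier target labels the two fits diverge and the comparison can go the other way.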

Claims (20)

What is claimed is:
1. A method to determine whether to apply a set of weights to training data in a machine learning environment, the method comprising:
applying, by a device, test inputs to a first function to generate test data, wherein the first function is based on the training data, the training data includes training inputs and training labels, and the test data includes the test inputs and test labels;
applying, by the device, the test inputs to a second function to generate weighted test data, wherein the second function is based on weighted training data, the weighted training data is based on the set of weights, and the weighted test data includes the test inputs and weighted test labels;
determining, by the device, a third function based on target data, wherein the target data is based on a target function, the target data includes the training inputs, and the target data includes target labels different from the training labels;
applying, by the device, the test inputs to the third function to generate artificial test data, wherein the artificial test data includes the test inputs and artificial test labels;
determining, by the device, a fourth function based on the set of weights and the target data;
applying, by the device, the test inputs to the fourth function to generate artificial weighted test data, wherein the artificial weighted test data includes the test inputs and artificial weighted test labels;
determining, by the device, an evaluation value based on the test data, the weighted test data, the artificial test data, and the artificial weighted test data;
determining, by the device, a weight benefit based on the evaluation value, wherein the weight benefit is associated with a benefit to apply the set of weights to the training data; and
determining, by the device, whether to apply the set of weights to the training data based on the weight benefit.
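The data flow of claim 1 can be restated as a short pipeline. All names below, and the example `evaluate` and `threshold_rule` callables, are hypothetical placeholders for illustration, not terms taken from the patent.

```python
def label_with(fn, test_inputs):
    """Apply test inputs to a function, yielding (input, label) pairs."""
    return [(x, fn(x)) for x in test_inputs]

def claim1_flow(f1, f2, f3, f4, test_inputs, evaluate, threshold_rule):
    """Mirror claim 1: label the test inputs with each of the four functions,
    reduce the four labeled sets to an evaluation value, then decide."""
    test_data = label_with(f1, test_inputs)                      # test labels
    weighted_test_data = label_with(f2, test_inputs)             # weighted test labels
    artificial_test_data = label_with(f3, test_inputs)           # artificial test labels
    artificial_weighted_test_data = label_with(f4, test_inputs)  # artificial weighted labels
    evaluation_value = evaluate(test_data, weighted_test_data,
                                artificial_test_data, artificial_weighted_test_data)
    return threshold_rule(evaluation_value)  # the weight-benefit decision

# Toy usage: identical third and fourth functions give a zero evaluation
# value, which this illustrative threshold rule treats as "apply the weights".
decision = claim1_flow(
    lambda x: x, lambda x: 2 * x, lambda x: 3 * x, lambda x: 3 * x,
    [1.0, 2.0],
    evaluate=lambda t, w, a, aw: sum((ya - yaw) ** 2
                                     for (_, ya), (_, yaw) in zip(a, aw)),
    threshold_rule=lambda v: v == 0.0,
)
```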
2. The method of claim 1, further comprising, prior to applying the test inputs to the first function:
receiving, by the device, the first function;
receiving, by the device, the second function;
receiving, by the device, the target function; and
generating, by the device, the target data based on the target function.
3. The method of claim 1, further comprising, prior to applying the test inputs to the first function:
receiving, by the device, the first function;
receiving, by the device, the second function;
generating, by the device, the target function; and
generating, by the device, the target data based on the target function.
4. The method of claim 3, wherein generating the target function comprises:
determining, by the device, a set of parameters to generate an artificial function;
generating, by the device, artificial data based on the training inputs and the artificial function, wherein the artificial data includes the training inputs and artificial labels; and
generating, by the device, the target function based on the training data and the artificial data.
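Claim 4's target-function generation can be sketched as follows. The parameter family (a random line) and the fitting method (least squares over the combined training and artificial data) are assumptions made for illustration; the claim names neither.

```python
import random

def generate_target_function(training, seed=0):
    """Claim 4 sketch: pick a set of parameters for an artificial function,
    label the training inputs with it, then fit the target function to the
    training data plus the artificial data."""
    rng = random.Random(seed)
    a, b = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)  # set of parameters
    artificial = [(x, a * x + b) for x, _ in training]      # artificial data
    combined = training + artificial                        # training + artificial
    n = len(combined)
    mx = sum(x for x, _ in combined) / n
    my = sum(y for _, y in combined) / n
    var = sum((x - mx) ** 2 for x, _ in combined)
    cov = sum((x - mx) * (y - my) for x, y in combined)
    slope = cov / var if var else 0.0
    return lambda x: slope * (x - mx) + my                  # target function

training = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
target_fn = generate_target_function(training)
# Target data: the training inputs relabeled by the target function; the
# target labels generally differ from the training labels, as the claims require.
target_data = [(x, target_fn(x)) for x, _ in training]
```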
5. The method of claim 1, wherein determining the weight benefit comprises:
determining, by the device, an expected value between the first function and the second function;
comparing, by the device, the evaluation value with the expected value;
determining, by the device, a count based on the comparison of the evaluation value with the expected value;
comparing, by the device, the count with a threshold; and
determining, by the device, the weight benefit based on the comparison of the count with the threshold.
6. The method of claim 5, wherein the expected value is a first expected value, and the method further comprises:
determining, by the device, a second expected value between the third function and the target function;
determining, by the device, a third expected value between the fourth function and the target function;
determining, by the device, a fourth expected value between the third function and the fourth function; and
determining, by the device, the evaluation value based on the second, third, and fourth expected values.
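Claim 5's count-and-threshold step reduces to a small helper. How the count accumulates is an assumption here (one comparison per evaluation value, e.g. one per generated target function); the claim only requires a count derived from the comparison.

```python
def weight_benefit(expected_value, evaluation_values, threshold):
    """Claim 5's final steps: compare each evaluation value with the expected
    value between the first and second functions, keep a count of values that
    exceed it, and report whether the count clears the threshold."""
    count = sum(1 for v in evaluation_values if v > expected_value)
    return count, count > threshold

# Two of the four evaluation values (0.9 and 0.7) exceed the expected value
# of 0.5, so the count is 2, which clears a threshold of 1.
count, benefit = weight_benefit(0.5, [0.2, 0.9, 0.7, 0.4], threshold=1)
```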
7. The method of claim 5, further comprising:
determining, by the device, that the count is greater than the threshold based on the comparison of the count with the threshold; and
deploying, by the device, the first function in the machine learning environment in response to the determination that the count is greater than the threshold.
8. The method of claim 5, further comprising:
determining, by the device, that the count is less than the threshold based on the comparison of the count with the threshold; and
applying, by the device, the set of weights to the training data in response to the determination that the count is less than the threshold.
9. The method of claim 8, further comprising deploying, by the device, the second function in the machine learning environment in response to applying the set of weights to the training data.
10. A system effective to determine whether to apply a set of weights to training data in a machine learning environment, the system comprising:
a memory configured to:
store the training data, wherein the training data includes training inputs and training labels;
store the set of weights; and
store a set of test inputs;
a machine learning module configured to be in communication with the memory, the machine learning module being configured to:
apply the test inputs to a first function to generate test data, wherein the first function is based on the training data, the training data includes training inputs and training labels, and the test data includes the test inputs and test labels;
apply the test inputs to a second function to generate weighted test data, wherein the second function is based on weighted training data, the weighted training data is based on the set of weights, and the weighted test data includes the test inputs and weighted test labels;
determine a third function based on target data, wherein the target data is based on a target function, the target data includes the training inputs, and the target data includes target labels different from the training labels;
apply the test inputs to the third function to generate artificial test data, wherein the artificial test data includes the test inputs and artificial test labels;
determine a fourth function based on the set of weights and the target data;
apply the test inputs to the fourth function to generate artificial weighted test data, wherein the artificial weighted test data includes the test inputs and artificial weighted test labels;
an evaluation module configured to be in communication with the machine learning module and the memory, the evaluation module being configured to:
determine an evaluation value based on the test data, the weighted test data, the artificial test data, and the artificial weighted test data; and
determine a weight benefit based on the evaluation value, wherein the weight benefit is associated with a benefit to apply the set of weights to the training data; and
a processing module configured to be in communication with the evaluation module, the machine learning module, and the memory, the processing module being configured to determine whether to apply the set of weights to the training data based on the weight benefit.
11. The system of claim 10, further comprising a target function generation module configured to be in communication with the processing module, the evaluation module, the machine learning module, and the memory, the target function generation module being configured to:
determine a set of parameters to generate an artificial function;
generate artificial data based on the training inputs and the artificial function, wherein the artificial data includes the training inputs and artificial labels;
generate the target function based on the training data and the artificial data;
generate the target data based on the target function; and
store the target data in the memory.
12. The system of claim 10, wherein the evaluation module is further configured to:
determine an expected value between the first function and the second function;
compare the evaluation value with the expected value;
determine a count based on the comparison of the evaluation value with the expected value;
compare the count with a threshold; and
determine the weight benefit based on the comparison of the count with the threshold.
13. The system of claim 12, wherein the processing module is further configured to:
determine that the count is greater than the threshold based on the comparison of the count with the threshold; and
deploy the first function in the machine learning environment in response to the determination that the count is greater than the threshold.
14. The system of claim 12, wherein the processing module is further configured to:
determine that the count is less than the threshold based on the comparison of the count with the threshold; and
apply the set of weights to the training data in response to the determination that the count is less than the threshold.
15. The system of claim 14, wherein the processing module is further configured to deploy the second function in the machine learning environment in response to applying the set of weights to the training data.
16. A method to determine whether to deploy a first function or a second function in a machine learning environment, the method comprising:
applying, by a device, test inputs to the first function to generate test data, wherein the first function is based on training data, the training data includes training inputs and training labels, and the test data includes the test inputs and test labels;
applying, by the device, the test inputs to the second function to generate weighted test data, wherein the second function is based on weighted training data, the weighted training data is based on a set of weights, and the weighted test data includes the test inputs and weighted test labels;
determining, by the device, a third function based on target data, wherein the target data is based on a target function, the target data includes the training inputs, and the target data includes target labels different from the training labels;
applying, by the device, the test inputs to the third function to generate artificial test data, wherein the artificial test data includes the test inputs and artificial test labels;
determining, by the device, a fourth function based on the set of weights and the target data;
applying, by the device, the test inputs to the fourth function to generate artificial weighted test data, wherein the artificial weighted test data includes the test inputs and artificial weighted test labels;
determining, by the device, an evaluation value based on the third and fourth functions;
comparing, by the device, the evaluation value with an expected value between the first function and the second function;
determining, by the device, a count based on the comparison of the evaluation value with the expected value;
comparing, by the device, the count with a threshold; and
determining, by the device, whether to deploy the first function or the second function in the machine learning environment based on the comparison of the count with the threshold.
17. The method of claim 16, wherein the expected value is a first expected value, and the method further comprises:
determining, by the device, a second expected value between the third function and the target function;
determining, by the device, a third expected value between the fourth function and the target function;
determining, by the device, a fourth expected value between the third function and the fourth function; and
determining, by the device, the evaluation value based on the second, third, and fourth expected values.
18. The method of claim 16, further comprising:
determining, by the device, that the count is greater than the threshold based on the comparison of the count with the threshold; and
deploying, by the device, the first function in the machine learning environment in response to the determination that the count is greater than the threshold.
19. The method of claim 16, further comprising:
determining, by the device, that the count is less than the threshold based on the comparison of the count with the threshold; and
applying, by the device, the set of weights to the training data in response to the determination that the count is less than the threshold.
20. The method of claim 16, further comprising deploying, by the device, the second function in the machine learning environment in response to applying the set of weights to the training data.
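The deployment branch of claims 18 through 20 (mirroring claims 7 through 9) can be summarized in a single function. The return labels are illustrative strings, not patent terminology.

```python
def deploy_decision(count, threshold):
    """Claims 18-20 sketch: a count above the threshold deploys the first
    (unweighted) function; otherwise the weights are applied and the second
    (weighted) function is deployed."""
    if count > threshold:
        return "deploy_first_function"
    return "apply_weights_and_deploy_second_function"
```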
US15/261,390 | 2013-11-22 | 2016-09-09 | Weight benefit evaluator for training data | Abandoned | US20160379140A1 (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US15/261,390 | 2013-11-22 | 2016-09-09 | Weight benefit evaluator for training data

Applications Claiming Priority (7)

Application Number | Priority Date | Filing Date | Title
US201361907504P | 2013-11-22 | 2013-11-22 |
US201462015133P | 2014-06-20 | 2014-06-20 |
US14/451,899 (US9858534B2) | 2013-11-22 | 2014-08-05 | Weight generation in machine learning
US14/451,935 (US10535014B2) | 2014-03-10 | 2014-08-05 | Alternative training distribution data in machine learning
US14/451,870 (US9953271B2) | 2013-11-22 | 2014-08-05 | Generation of weights in machine learning
US14/451,859 (US10558935B2) | 2013-11-22 | 2014-08-05 | Weight benefit evaluator for training data
US15/261,390 (US20160379140A1) | 2013-11-22 | 2016-09-09 | Weight benefit evaluator for training data

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US14/451,859 (Continuation, US10558935B2) | Weight benefit evaluator for training data | 2013-11-22 | 2014-08-05

Publications (1)

Publication Number | Publication Date
US20160379140A1 (en) | 2016-12-29

Family

ID=53180398

Family Applications (2)

Application NumberTitlePriority DateFiling Date
US14/451,859ActiveUS10558935B2 (en)2013-11-222014-08-05Weight benefit evaluator for training data
US15/261,390AbandonedUS20160379140A1 (en)2013-11-222016-09-09Weight benefit evaluator for training data

Family Applications Before (1)

Application NumberTitlePriority DateFiling Date
US14/451,859ActiveUS10558935B2 (en)2013-11-222014-08-05Weight benefit evaluator for training data

Country Status (5)

Country | Link
US (2) | US10558935B2 (en)
EP (1) | EP3072060A4 (en)
JP (1) | JP6276857B2 (en)
KR (1) | KR101889451B1 (en)
WO (1) | WO2015077555A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9858534B2 (en) | 2013-11-22 | 2018-01-02 | California Institute Of Technology | Weight generation in machine learning
US9953271B2 (en) | 2013-11-22 | 2018-04-24 | California Institute Of Technology | Generation of weights in machine learning
WO2018174873A1 (en)* | 2017-03-22 | 2018-09-27 | Visa International Service Association | Privacy-preserving machine learning
US10535014B2 (en) | 2014-03-10 | 2020-01-14 | California Institute Of Technology | Alternative training distribution data in machine learning
US10558935B2 (en) | 2013-11-22 | 2020-02-11 | California Institute Of Technology | Weight benefit evaluator for training data
US20200250270A1 (en)* | 2019-02-01 | 2020-08-06 | International Business Machines Corporation | Weighting features for an intent classification system
US20210279635A1 (en)* | 2020-03-05 | 2021-09-09 | Qualcomm Incorporated | Adaptive quantization for execution of machine learning models

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
CN107832852B (en)* | 2017-11-14 | 2021-03-02 | 深圳码隆科技有限公司 | Data processing learning method and system and electronic equipment
WO2019141905A1 (en)* | 2018-01-19 | 2019-07-25 | Nokia Technologies Oy | An apparatus, a method and a computer program for running a neural network
JP6950647B2 (en)* | 2018-08-28 | 2021-10-13 | 株式会社豊田中央研究所 | Data determination device, method, and program
KR102411885B1 (en) | 2019-08-01 | 2022-06-21 | 박상훈 | Apparatus and method for evaluating training maturity
US12405975B2 (en) | 2020-03-30 | 2025-09-02 | Oracle International Corporation | Method and system for constraint based hyperparameter tuning
US11379748B2 (en) | 2020-06-15 | 2022-07-05 | Bank Of America Corporation | System for threshold detection using learning reinforcement
KR20220037765A (en)* | 2020-09-18 | 2022-03-25 | 삼성전자주식회사 | Electronic device and operating method for the same
CN114910960B (en)* | 2021-02-09 | 2025-01-28 | 中国石油天然气股份有限公司 | Reservoir parameter prediction method and device

Citations (37)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US5819247A (en)* | 1995-02-09 | 1998-10-06 | Lucent Technologies, Inc. | Apparatus and methods for machine learning hypotheses
US6119083A (en)* | 1996-02-29 | 2000-09-12 | British Telecommunications Public Limited Company | Training process for the classification of a perceptual signal
US6594586B1 (en)* | 1997-10-28 | 2003-07-15 | California Institute Of Technology | Incorporation of contextual information in object identification
US6789069B1 (en)* | 1998-05-01 | 2004-09-07 | Biowulf Technologies Llc | Method for enhancing knowledge discovered from biological data using a learning machine
US20050071301A1 (en)* | 2003-09-29 | 2005-03-31 | Nec Corporation | Learning system and learning method
US20050216426A1 (en)* | 2001-05-18 | 2005-09-29 | Weston Jason Aaron E | Methods for feature selection in a learning machine
US20050228783A1 (en)* | 2004-04-12 | 2005-10-13 | Shanahan James G | Method and apparatus for adjusting the model threshold of a support vector machine for text classification and filtering
US20060248049A1 (en)* | 2005-04-27 | 2006-11-02 | Microsoft Corporation | Ranking and accessing definitions of terms
US20070094171A1 (en)* | 2005-07-18 | 2007-04-26 | Microsoft Corporation | Training a learning system with arbitrary cost functions
US20070203940A1 (en)* | 2006-02-27 | 2007-08-30 | Microsoft Corporation | Propagating relevance from labeled documents to unlabeled documents
US20070203908A1 (en)* | 2006-02-27 | 2007-08-30 | Microsoft Corporation | Training a ranking function using propagated document relevance
US7275018B2 (en)* | 2006-02-10 | 2007-09-25 | Alstom Technology Ltd. | Method of condition monitoring
US7426497B2 (en)* | 2004-08-31 | 2008-09-16 | Microsoft Corporation | Method and apparatus for analysis and decomposition of classifier data anomalies
US20090132515A1 (en)* | 2007-11-19 | 2009-05-21 | Yumao Lu | Method and Apparatus for Performing Multi-Phase Ranking of Web Search Results by Re-Ranking Results Using Feature and Label Calibration
US7617164B2 (en)* | 2006-03-17 | 2009-11-10 | Microsoft Corporation | Efficiency of training for ranking systems based on pairwise training with aggregated gradients
US7689520B2 (en)* | 2005-02-25 | 2010-03-30 | Microsoft Corporation | Machine learning system and method for ranking sets of data using a pairing cost function
US7720830B2 (en)* | 2006-07-31 | 2010-05-18 | Microsoft Corporation | Hierarchical conditional random fields for web extraction
US20100169243A1 (en)* | 2008-12-27 | 2010-07-01 | Kibboko, Inc. | Method and system for hybrid text classification
US20100287125A1 (en)* | 2008-05-21 | 2010-11-11 | Sony Corporation | Information processing unit, information processing method, and program
US20120078825A1 (en)* | 2010-09-28 | 2012-03-29 | Ebay Inc. | Search result ranking using machine learning
US8175384B1 (en)* | 2008-03-17 | 2012-05-08 | Adobe Systems Incorporated | Method and apparatus for discriminative alpha matting
US20120223889A1 (en)* | 2009-03-30 | 2012-09-06 | Touchtype Ltd | System and Method for Inputting Text into Small Screen Devices
US20120290319A1 (en)* | 2010-11-11 | 2012-11-15 | The Board Of Trustees Of The Leland Stanford Junior University | Automatic coding of patient outcomes
US20120330971A1 (en)* | 2011-06-26 | 2012-12-27 | Itemize Llc | Itemized receipt extraction using machine learning
US8386401B2 (en)* | 2008-09-10 | 2013-02-26 | Digital Infuzion, Inc. | Machine learning methods and systems for identifying patterns in data using a plurality of learning machines wherein the learning machine that optimizes a performance function is selected
US20140079297A1 (en)* | 2012-09-17 | 2014-03-20 | Saied Tadayon | Application of Z-Webs and Z-factors to Analytics, Search Engine, Learning, Recognition, Natural Language, and Other Utilities
US20140180738A1 (en)* | 2012-12-21 | 2014-06-26 | Cloudvu, Inc. | Machine learning for systems management
US20140180980A1 (en)* | 2011-07-25 | 2014-06-26 | International Business Machines Corporation | Information identification method, program product, and system
US20140195466A1 (en)* | 2013-01-08 | 2014-07-10 | Purepredictive, Inc. | Integrated machine learning for a data management product
US20140201126A1 (en)* | 2012-09-15 | 2014-07-17 | Lotfi A. Zadeh | Methods and Systems for Applications for Z-numbers
US8788439B2 (en)* | 2012-12-21 | 2014-07-22 | InsideSales.com, Inc. | Instance weighted learning machine learning model
US8798984B2 (en)* | 2011-04-27 | 2014-08-05 | Xerox Corporation | Method and system for confidence-weighted learning of factored discriminative language models
US20150206065A1 (en)* | 2013-11-22 | 2015-07-23 | California Institute Of Technology | Weight benefit evaluator for training data
US20150206066A1 (en)* | 2013-11-22 | 2015-07-23 | California Institute Of Technology | Generation of weights in machine learning
US20150206067A1 (en)* | 2013-11-22 | 2015-07-23 | California Institute Of Technology | Weight generation in machine learning
US20150254573A1 (en)* | 2014-03-10 | 2015-09-10 | California Institute Of Technology | Alternative training distribution data in machine learning
US20170011307A1 (en)* | 2015-07-07 | 2017-01-12 | California Institute Of Technology | Alternative training distribution based on density modification

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US6373483B1 (en) | 1997-01-13 | 2002-04-16 | Silicon Graphics, Inc. | Method, system and computer program product for visually approximating scattered data using color to represent values of a categorical variable
US6453307B1 (en) | 1998-03-03 | 2002-09-17 | At&T Corp. | Method and apparatus for multi-class, multi-label information categorization
ID28800A (en) | 1998-05-01 | 2001-07-05 | Barnhill Technologies Llc | Pre-processing and post-processing for enhancing knowledge discovery using a support vector machine
US6850873B1 (en) | 1999-09-29 | 2005-02-01 | Eric T Bax | Using validation by inference to select a hypothesis function
US7970718B2 (en) | 2001-05-18 | 2011-06-28 | Health Discovery Corporation | Method for feature selection and for evaluating features identified as significant for classifying data
US6701311B2 (en) | 2001-02-07 | 2004-03-02 | International Business Machines Corporation | Customer self service system for resource search and selection
EP1449108A4 (en) | 2001-11-07 | 2006-11-22 | Health Discovery Corp | Pre-processed feature ranking for a support vector machine
US6876955B1 (en) | 2001-12-28 | 2005-04-05 | Fannie Mae | Method and apparatus for predicting and reporting a real estate value based on a weighted average of predicted values
JP2005044330A (en) | 2003-07-24 | 2005-02-17 | Univ Of California San Diego | Weak hypothesis generation apparatus and method, learning apparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial expression recognition apparatus and method, and robot apparatus
US7480667B2 (en) | 2004-12-24 | 2009-01-20 | Microsoft Corporation | System and method for using anchor text as training data for classifier-based search systems
US7561158B2 (en) | 2006-01-11 | 2009-07-14 | International Business Machines Corporation | Method and apparatus for presenting feature importance in predictive modeling
AU2006201210A1 (en) | 2006-03-23 | 2007-10-11 | Canon Information Systems Research Australia Pty Ltd | Motion characterisation
US20080169975A1 (en) | 2007-01-12 | 2008-07-17 | Young Paul Yee | Process for generating spatially continuous wind profiles from wind profiler measurements
US8005771B2 (en) | 2007-10-04 | 2011-08-23 | Siemens Corporation | Segment-based change detection method in multivariate data stream
JP2010092266A (en) | 2008-10-08 | 2010-04-22 | Nec Corp | Learning device, learning method and program
AU2010324517A1 (en) | 2009-11-27 | 2012-06-14 | New Ideas Company Pty Ltd | Method and system for consumer centred care management
CA2805446C (en) | 2010-07-29 | 2016-08-16 | Exxonmobil Upstream Research Company | Methods and systems for machine-learning based simulation of flow
US20120271821A1 (en) | 2011-04-20 | 2012-10-25 | Microsoft Corporation | Noise Tolerant Graphical Ranking Model
US20130066452A1 (en) | 2011-09-08 | 2013-03-14 | Yoshiyuki Kobayashi | Information processing device, estimator generating method and program
US20130097103A1 (en) | 2011-10-14 | 2013-04-18 | International Business Machines Corporation | Techniques for Generating Balanced and Class-Independent Training Data From Unlabeled Data Set
US9031897B2 (en) | 2012-03-23 | 2015-05-12 | Nuance Communications, Inc. | Techniques for evaluation, building and/or retraining of a classification model
EP2883173A1 (en)* | 2013-03-15 | 2015-06-17 | The Echo Nest Corporation | Demographic and media preference prediction using media content data analysis
US9582490B2 (en) | 2013-07-12 | 2017-02-28 | Microsoft Technology Licensing, LLC | Active labeling for computer-human interactive learning
US9679258B2 (en) | 2013-10-08 | 2017-06-13 | Google Inc. | Methods and apparatus for reinforcement learning
US10262272B2 (en) | 2014-12-07 | 2019-04-16 | Microsoft Technology Licensing, LLC | Active machine learning


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Chiu et al., "Learning to generate novel views of objects for class recognition," Computer Vision and Image Understanding, vol. 113, pp. 1183–1197, 2009.*
Guo et al., "A Reformulation of Support Vector Machines for General Confidence Functions," in Z.-H. Zhou and T. Washio (Eds.): ACML 2009, LNAI 5828, pp. 109–119, 2009.*
Hachiya et al., "Importance-Weighted Least-Squares Probabilistic Classifier for Covariate Shift Adaptation with Application to Human Activity Recognition," Neurocomputing, vol. 80, pp. 93–101, 2012.*
Bickel et al., "Discriminative Learning for Differing Training and Test Distributions," Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, 2007, 8 pages.*
Bickel, "Learning under Differing Training and Test Distributions," Dissertation, Universität Potsdam, 2009.*

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number | Priority date | Publication date | Assignee | Title
US9858534B2 (en) | 2013-11-22 | 2018-01-02 | California Institute Of Technology | Weight generation in machine learning
US9953271B2 (en) | 2013-11-22 | 2018-04-24 | California Institute Of Technology | Generation of weights in machine learning
US10558935B2 (en) | 2013-11-22 | 2020-02-11 | California Institute Of Technology | Weight benefit evaluator for training data
US10535014B2 (en) | 2014-03-10 | 2020-01-14 | California Institute Of Technology | Alternative training distribution data in machine learning
WO2018174873A1 (en)* | 2017-03-22 | 2018-09-27 | Visa International Service Association | Privacy-preserving machine learning
US11562230B2 (en) | 2017-03-22 | 2023-01-24 | Visa International Service Association | Privacy-preserving machine learning
US11847564B2 (en) | 2017-03-22 | 2023-12-19 | Visa International Service Association | Privacy-preserving machine learning
US20200250270A1 (en)* | 2019-02-01 | 2020-08-06 | International Business Machines Corporation | Weighting features for an intent classification system
US10977445B2 (en)* | 2019-02-01 | 2021-04-13 | International Business Machines Corporation | Weighting features for an intent classification system
US20210279635A1 (en)* | 2020-03-05 | 2021-09-09 | Qualcomm Incorporated | Adaptive quantization for execution of machine learning models
US11861467B2 (en)* | 2020-03-05 | 2024-01-02 | Qualcomm Incorporated | Adaptive quantization for execution of machine learning models

Also Published As

Publication number | Publication date
US20150206065A1 (en) | 2015-07-23
KR20160083111A (en) | 2016-07-11
JP6276857B2 (en) | 2018-02-07
JP2017500637A (en) | 2017-01-05
WO2015077555A2 (en) | 2015-05-28
EP3072060A2 (en) | 2016-09-28
EP3072060A4 (en) | 2017-08-09
WO2015077555A3 (en) | 2015-10-29
US10558935B2 (en) | 2020-02-11
KR101889451B1 (en) | 2018-08-17

Similar Documents

Publication | Title
US10558935B2 (en) | Weight benefit evaluator for training data
US10535014B2 (en) | Alternative training distribution data in machine learning
US9858534B2 (en) | Weight generation in machine learning
US9953271B2 (en) | Generation of weights in machine learning
WO2011094934A1 (en) | Method and apparatus for modelling personalized contexts
EP3115939A1 (en) | Alternative training distribution based on density modification
US20230229896A1 (en) | Method and computing device for determining optimal parameter
CN114049530A (en) | Hybrid precision neural network quantization method, device and equipment
JP2023501103A (en) | Artificial intelligence transparency
US8478009B2 (en) | Generation and analysis of representations of skin conditions
CN116523064A (en) | Quantum genetic algorithm-based searching method and device, storage medium and electronic equipment
US9590933B2 (en) | Generation of a communication request based on visual selection
CN111461328A (en) | Training method of neural network
US20160253832A1 (en) | Scene image generator
US9740763B2 (en) | Ontology decomposer
CN118295664A (en) | Code generation method, code generation device, computer equipment, storage medium and product
US9767016B2 (en) | Generation of search index

Legal Events

Code | Description
STCB | Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

