Probabilistic task modelling for meta-learning
Cuong C. Nguyen, Thanh-Toan Do, Gustavo Carneiro
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:781-791, 2021.
Abstract
We propose probabilistic task modelling – a generative probabilistic model for collections of tasks used in meta-learning. The proposed model combines variational auto-encoding and latent Dirichlet allocation to model each task as a mixture of Gaussian distributions in an embedding space. Such modelling provides an explicit representation of a task through its task-theme mixture. We present an efficient approximate inference technique based on variational inference for empirical Bayes parameter estimation. We perform empirical evaluations to validate the task uncertainty and task distance produced by the proposed method through correlation diagrams of prediction accuracy on testing tasks. We also carry out task-selection experiments in meta-learning to demonstrate how the task relatedness inferred from the proposed model helps to facilitate meta-learning algorithms.
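To make the LDA-style generative story in the abstract concrete, below is a minimal NumPy sketch: each task draws a Dirichlet-distributed task-theme mixture, each theme is a Gaussian in an embedding space, and each data point's embedding is drawn from its assigned theme. All names, dimensions, and priors here are illustrative assumptions, not the paper's specification; in particular, the paper additionally uses a variational auto-encoder to produce the embeddings and variational inference for empirical Bayes parameter estimation, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not taken from the paper).
K = 5   # number of task themes
D = 16  # embedding dimensionality
N = 50  # data points per task

# Global parameters shared across tasks: each theme is a Gaussian
# in the embedding space, and tasks mix themes via a Dirichlet prior.
alpha = np.ones(K)                     # Dirichlet prior over theme mixtures
theme_means = rng.normal(size=(K, D))  # mean of each Gaussian theme
theme_scales = np.full(K, 0.5)         # isotropic std dev of each theme

def sample_task(n_points=N):
    """Generate one task under the LDA-over-Gaussians story:
    draw a task-theme mixture, assign each point a theme,
    then draw its embedding from that theme's Gaussian."""
    pi = rng.dirichlet(alpha)                # task-theme mixture (task representation)
    z = rng.choice(K, size=n_points, p=pi)   # per-point theme assignments
    x = theme_means[z] + theme_scales[z, None] * rng.normal(size=(n_points, D))
    return pi, z, x

pi, z, x = sample_task()
print("task-theme mixture:", np.round(pi, 3))
```

Under this structure, the mixture `pi` is the explicit task representation the abstract refers to, from which quantities such as task distances between two tasks could plausibly be compared.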
Cite this Paper
BibTeX
@InProceedings{pmlr-v161-nguyen21b,
  title = {Probabilistic task modelling for meta-learning},
  author = {Nguyen, Cuong C. and Do, Thanh-Toan and Carneiro, Gustavo},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages = {781--791},
  year = {2021},
  editor = {de Campos, Cassio and Maathuis, Marloes H.},
  volume = {161},
  series = {Proceedings of Machine Learning Research},
  month = {27--30 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v161/nguyen21b/nguyen21b.pdf},
  url = {https://proceedings.mlr.press/v161/nguyen21b.html},
  abstract = {We propose probabilistic task modelling – a generative probabilistic model for collections of tasks used in meta-learning. The proposed model combines variational auto-encoding and latent Dirichlet allocation to model each task as a mixture of Gaussian distributions in an embedding space. Such modelling provides an explicit representation of a task through its task-theme mixture. We present an efficient approximate inference technique based on variational inference for empirical Bayes parameter estimation. We perform empirical evaluations to validate the task uncertainty and task distance produced by the proposed method through correlation diagrams of prediction accuracy on testing tasks. We also carry out task-selection experiments in meta-learning to demonstrate how the task relatedness inferred from the proposed model helps to facilitate meta-learning algorithms.}
}
Endnote
%0 Conference Paper
%T Probabilistic task modelling for meta-learning
%A Cuong C. Nguyen
%A Thanh-Toan Do
%A Gustavo Carneiro
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-nguyen21b
%I PMLR
%P 781--791
%U https://proceedings.mlr.press/v161/nguyen21b.html
%V 161
%X We propose probabilistic task modelling – a generative probabilistic model for collections of tasks used in meta-learning. The proposed model combines variational auto-encoding and latent Dirichlet allocation to model each task as a mixture of Gaussian distributions in an embedding space. Such modelling provides an explicit representation of a task through its task-theme mixture. We present an efficient approximate inference technique based on variational inference for empirical Bayes parameter estimation. We perform empirical evaluations to validate the task uncertainty and task distance produced by the proposed method through correlation diagrams of prediction accuracy on testing tasks. We also carry out task-selection experiments in meta-learning to demonstrate how the task relatedness inferred from the proposed model helps to facilitate meta-learning algorithms.
APA
Nguyen, C.C., Do, T. & Carneiro, G. (2021). Probabilistic task modelling for meta-learning. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:781-791. Available from https://proceedings.mlr.press/v161/nguyen21b.html.