
MultiFiT: Efficient Multi-lingual Language Model Fine-tuning

Code to reproduce the paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning" (https://arxiv.org/abs/1909.04761).

Here is a blog post introducing our paper: http://nlp.fast.ai/classification/2019/09/10/multifit.html

This repository contains a small framework on top of fastai v1.0; the code is compatible with v1.0.47 up to v1.0.59 (the current version as of 2019-11-03). Results may differ between fastai versions due to optimizations added to fastai. Our models were trained with v1.0.47.

The framework was rewritten to make it easier to use with the newest fastai.

We released 7 language models, each trained on the corresponding Wikipedia dump:

  • de_multifit_paper_version
  • es_multifit_paper_version
  • fr_multifit_paper_version
  • it_multifit_paper_version
  • ja_multifit_paper_version
  • ru_multifit_paper_version
  • zh_multifit_paper_version

To fetch a model, use the multifit.from_pretrained function. Here are example notebooks showing how to train a classifier using the pretrained models.
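For instance, fetching the German model could look like this (a minimal sketch; the model name comes from the list above, and the version pin follows the compatibility note earlier in this README):

```python
# Minimal sketch: fetch a released MultiFiT language model by name.
# Assumes fastai 1.0.47-1.0.59 is installed (see the compatibility note above),
# e.g. via `pip install fastai==1.0.59`.
import multifit

# Any name from the list above works; the German model is used as an example.
exp = multifit.from_pretrained("de_multifit_paper_version")
```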

Results

MLDoc

Document classification results on the MLDoc dataset (Schwenk and Li, 2018):

| Model | de | es | fr | it | ja | ru | zh |
|-----------|-------|-------|-------|-------|-------|-------|-------|
| LASER | 92.70 | 88.75 | 90.80 | 85.93 | 85.15 | 84.65 | 88.98 |
| MultiBERT | 94.0 | 95.15 | 93.20 | 85.82 | 87.48 | 86.85 | 90.72 |
| MultiFiT | 95.90 | 96.07 | 94.77 | 90.25 | 90.03 | 87.65 | 92.52 |

Amazon CLS

Sentiment classification results on the CLS dataset (Prettenhofer and Stein, 2010):

| Model | DE | FR | JA |
|-----------|-----------------------|-----------------------|-----------------------|
| MultiBERT | 86.05 / 84.90 / 82.00 | 86.15 / 86.90 / 86.65 | 80.87 / 82.83 / 79.95 |
| MultiFiT | 93.19 / 90.54 / 93.00 | 91.25 / 89.55 / 93.40 | 86.29 / 85.75 / 86.59 |

How to use it with fastai v1.0

You can use the pretrained models with the fastai library as follows:

```python
from fastai.text import *
import multifit

exp = multifit.from_pretrained("name of the model")
fa_config = exp.pretrain_lm.tokenizer.get_fastai_config(add_open_file_processor=True)
data_lm = (TextList.from_folder(imdb_path, **fa_config)
           .filter_by_folder(include=['train', 'test', 'unsup'])
           .split_by_rand_pct(0.1)
           .label_for_lm()
           .databunch(bs=bs))
learn = exp.finetune_lm.get_learner(data_lm)  # learn is a preconfigured fastai learner with a pretrained model loaded
learn.fit_one_cycle(10)
learn.save_encoder("enc")
...
```
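The trailing `...` above leaves the classifier stage unspecified. For orientation, here is a rough sketch of the usual ULMFiT-style continuation in fastai v1; `exp.classifier.get_learner` is a hypothetical helper named by analogy with `exp.finetune_lm.get_learner` and is not confirmed by this README, so check the example notebooks for the actual call:

```python
# Sketch of the classifier stage: build classification data that shares the
# language model's vocabulary, then fine-tune a classifier on top of the
# encoder saved above.
data_clas = (TextList.from_folder(imdb_path, vocab=data_lm.vocab, **fa_config)
             .split_by_folder(valid='test')
             .label_from_folder(classes=['neg', 'pos'])
             .databunch(bs=bs))
# `exp.classifier.get_learner` is an assumed name, modeled on
# `exp.finetune_lm.get_learner`; it is not confirmed by this README.
learn_c = exp.classifier.get_learner(data_clas)
learn_c.load_encoder("enc")  # load the encoder saved during LM fine-tuning
learn_c.fit_one_cycle(3)
```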

Reproducing the results

This repository is a rewrite of the original training scripts, so it lacks the scripts used in the paper. We are working on a port to fastai v2.0, after which we will add scripts showing how to reproduce the results. If you need the scripts sooner, you can access the original scripts here.

Citation

```
@article{Eisenschlos2019MultiFit,
  title={MultiFiT: Efficient Multi-lingual Language Model Fine-tuning},
  author={Julian Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard},
  journal={Proceedings of EMNLP-IJCNLP 2019},
  year={2019}
}
```
