Experiments for XLM-V Transformers Integration


This repository documents the XLM-V Integration into 🤗 Transformers.

Basic steps were also documented in this issue.

Please open an issue or PR for bugs/comments - it is highly appreciated!!

Changelog

  • 08.05.2023: XLM-V model is available under the Meta AI organization and it was also added to the 🤗 Transformers documentation.
  • 06.05.2023: Mention fairseq PR for XLM-V and add results on XQuAD.
  • 05.02.2023: Initial version of this repo.

XLM-V background

XLM-V is a multilingual language model with a one million token vocabulary trained on 2.5TB of data from Common Crawl (same as XLM-R). It was introduced in the paper XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models by Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer and Madian Khabsa.

From the abstract of the XLM-V paper:

Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).

Weights conversion

At the moment, XLM-V is not officially integrated into the fairseq library, but the model itself can be loaded with it. However, there's an open merge request that adds the model and a usage readme to fairseq.

The first author of the XLM-V paper, Davis Liang, tweeted about the model weights, so they can be downloaded via:

$ wget https://dl.fbaipublicfiles.com/fairseq/xlmv/xlmv.base.tar.gz

The script convert_xlm_v_original_pytorch_checkpoint_to_pytorch.py loads these weights and converts them into a 🤗 Transformers PyTorch model. It also checks whether everything went right during weight conversion:

torch.Size([1, 11, 901629]) torch.Size([1, 11, 901629])
max_absolute_diff = 7.62939453125e-06
Do both models output the same tensors? 🔥
Saving model to /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working
Configuration saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/config.json
Model weights saved in /media/stefan/89914e9b-0644-4f79-8e65-a8c5245df168/xlmv/exported-working/pytorch_model.bin

Notice: On my laptop, 16GB of CPU RAM was not enough to convert the model weights, so I had to convert it on my server...
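
For illustration, a minimal sanity check in the spirit of the conversion script could compare the MLM outputs of the original fairseq checkpoint with the exported 🤗 Transformers model. This is only a sketch, not the repository's actual script, and the paths are placeholders:

import torch
from fairseq.models.roberta import XLMRModel
from transformers import AutoModelForMaskedLM

# Original fairseq checkpoint (placeholder path) and the exported 🤗 model
fairseq_model = XLMRModel.from_pretrained("/path/to/xlmv.base", checkpoint_file="model.pt")
fairseq_model.eval()
hf_model = AutoModelForMaskedLM.from_pretrained("/path/to/exported-working")
hf_model.eval()

# Encode an example sentence with the fairseq sentencepiece model
input_ids = fairseq_model.encode("Paris is the capital of France.").unsqueeze(0)

with torch.no_grad():
    fairseq_logits = fairseq_model.model(input_ids)[0]
    hf_logits = hf_model(input_ids).logits

print(fairseq_logits.shape, hf_logits.shape)
print("max_absolute_diff =", (fairseq_logits - hf_logits).abs().max().item())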

Tokenizer checks

Another crucial part of integrating a model into 🤗 Transformers is the tokenizer: the tokenizer in 🤗 Transformers should output the same ids/subtokens as the fairseq tokenizer.

For this reason, the xlm_v_tokenizer_comparison.py script loads all 176 languages from the WikiANN dataset, tokenizes each sentence and compares the output.

Unfortunately, some sentences are tokenized slightly differently than with the fairseq tokenizer, but this does not happen often. The output of the xlm_v_tokenizer_comparison.py script with all tokenizer differences can be viewed here.
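
As a rough illustration (not the actual comparison script), such a check could iterate over sentences and compare the ids produced by both tokenizers; the checkpoint path and the example sentences below are assumptions:

from fairseq.models.roberta import XLMRModel
from transformers import AutoTokenizer

# fairseq checkpoint (placeholder path) and the converted 🤗 tokenizer
fairseq_model = XLMRModel.from_pretrained("/path/to/xlmv.base", checkpoint_file="model.pt")
hf_tokenizer = AutoTokenizer.from_pretrained("stefan-it/xlm-v-base")

sentences = ["Paris is the capital of France.", "Das ist ein Test."]

for sentence in sentences:
    fairseq_ids = fairseq_model.encode(sentence).tolist()
    hf_ids = hf_tokenizer(sentence)["input_ids"]
    if fairseq_ids != hf_ids:
        print(f"Mismatch for: {sentence}")
        print(f"  fairseq: {fairseq_ids}")
        print(f"  hf:      {hf_ids}")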

MLM checks

After the model conversion and tokenizer checks, it is time to check the MLM performance:

from transformers import pipeline

unmasker = pipeline('fill-mask', model='stefan-it/xlm-v-base')
unmasker("Paris is the <mask> of France.")

It outputs:

[{'score': 0.9286897778511047, 'token': 133852, 'token_str': 'capital', 'sequence': 'Paris is the capital of France.'},
 {'score': 0.018073994666337967, 'token': 46562, 'token_str': 'Capital', 'sequence': 'Paris is the Capital of France.'},
 {'score': 0.013238662853837013, 'token': 8696, 'token_str': 'centre', 'sequence': 'Paris is the centre of France.'},
 {'score': 0.010450296103954315, 'token': 550136, 'token_str': 'heart', 'sequence': 'Paris is the heart of France.'},
 {'score': 0.005028395913541317, 'token': 60041, 'token_str': 'center', 'sequence': 'Paris is the center of France.'}]

Results for masked LM are pretty good!

Downstream task performance

The last part of integrating a model into 🤗 Transformers is to test the performance on downstream tasks and compare their performance with the paper results. Both QA and NER downstream tasks are covered here.

QA

A recent master version of Transformers (commit 59d5ede) is used to reproduce the XQuAD results using the PyTorch question answering example on a single A100 (40GB) GPU.

First, 5 models (with different seeds!) are fine-tuned on the English SQuAD dataset.

Fine-tuning for the first model (XLM-R):

python3 run_qa.py \
  --model_name_or_path xlm-roberta-base \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --doc_stride 128 \
  --per_device_train_batch_size 6 \
  --learning_rate 3e-5 \
  --weight_decay 0.0 \
  --warmup_steps 0 \
  --num_train_epochs 2 \
  --seed 1 \
  --output_dir xlm-r-1 \
  --fp16 \
  --save_steps 14646

For XLM-V it looks similar:

python3 run_qa.py \
  --model_name_or_path stefan-it/xlm-v-base \
  --dataset_name squad \
  --do_train \
  --do_eval \
  --max_seq_length 512 \
  --doc_stride 128 \
  --per_device_train_batch_size 6 \
  --learning_rate 3e-5 \
  --weight_decay 0.0 \
  --warmup_steps 0 \
  --num_train_epochs 2 \
  --seed 1 \
  --output_dir xlm-v-1 \
  --fp16 \
  --save_steps 14618

Then this fine-tuned model can be zero-shot evaluated on the 11 languages in XQuAD. Here's an example for Hindi (shortened):

python3 run_qa.py \
  --model_name_or_path xlm-r-1 \
  --dataset_name xquad \
  --dataset_config_name xquad.hi \
  --do_eval \
  --max_seq_length 512 \
  --doc_stride 128 \
  --output_dir xlm-r-1-hi \
  --fp16

This is done for each fine-tuned model on each language. Detailed results for all 5 different models can be seen here:

Here's the overall performance table (inspired by Table 9 in the XLM-V paper with their results):

Model | en | es | de | el | ru | tr
XLM-R (Paper) | 72.1 / 83.5 | 58.5 / 76.5 | 57.6 / 73.0 | 55.4 / 72.2 | 56.6 / 73.1 | 52.2 / 68.3
XLM-R (Reproduced) | 73.1 / 83.8 | 59.5 / 76.8 | 60.0 / 75.3 | 55.8 / 73.0 | 58.0 / 74.4 | 51.1 / 67.3
XLM-V (Paper) | 72.9 / 84.2 | 60.3 / 78.1 | 57.3 / 75.1 | 53.5 / 72.4 | 56.0 / 73.2 | 51.8 / 67.5
XLM-V (Reproduced) | 72.5 / 83.1 | 58.7 / 76.3 | 59.5 / 75.2 | 54.2 / 72.0 | 56.2 / 72.9 | 50.4 / 66.5

Model | ar | vi | th | zh | hi | Avg.
XLM-R (Paper) | 49.2 / 65.9 | 53.5 / 72.9 | 55.7 / 66.3 | 55.5 / 65.3 | 49.8 / 57.7 | 56.0 / 71.3
XLM-R (Reproduced) | 49.8 / 66.3 | 55.0 / 74.0 | 56.3 / 66.5 | 55.5 / 64.2 | 51.9 / 68.0 | 56.9 / 71.8
XLM-V (Paper) | 51.2 / 67.5 | 53.7 / 73.1 | 56.9 / 67.0 | 53.5 / 63.1 | 51.9 / 69.4 | 56.3 / 71.9
XLM-V (Reproduced) | 50.5 / 67.0 | 54.1 / 72.7 | 55.3 / 65.1 | 56.7 / 65.3 | 52.4 / 68.5 | 56.4 / 71.3

Summary: The results for XLM-V could largely be reproduced: 56.3 vs. 56.4 exact match and 71.9 vs. 71.3 F1-score on average. For the XLM-R model there's a larger difference: our XLM-R models perform better on XQuAD compared to their XLM-R reimplementation. Our XLM-R model also achieves better results than XLM-V on XQuAD.

NER

For NER, the flair-fine-tuner.py script fine-tunes a model on the English WikiANN (Rahimi et al.) split with the hyper-parameters mentioned in the paper (the only difference is that we use a sequence length of 512 instead of 128!). We fine-tune 5 models with different seeds and average the performance over these 5 models. The script expects a model configuration as its first input argument. All configuration files are located under the ./configs folder. Fine-tuning XLM-V can be started with:

$ python3 flair-fine-tuner.py ./configs/xlm_v_base.json

Fine-tuning is done on A100 (40GB) instances from Lambda Cloud using Flair. A 40GB GPU is definitely necessary to fine-tune this model with the given batch size! The latest Flair master (commit 23618cd) is also needed.
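
For orientation, a heavily simplified Flair fine-tuning sketch could look as follows. It is not the repository's flair-fine-tuner.py; the data path, output directory and hyper-parameters are placeholders (the real ones live in the ./configs folder):

from flair.datasets import ColumnCorpus
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# English WikiANN (Rahimi et al.) split in CoNLL column format (assumed local copy)
corpus = ColumnCorpus("data/wikiann-en", {0: "text", 1: "ner"})
label_dict = corpus.make_label_dictionary(label_type="ner")

# XLM-V as transformer embeddings, fine-tuned end-to-end
embeddings = TransformerWordEmbeddings(
    model="stefan-it/xlm-v-base",
    layers="-1",
    subtoken_pooling="first",
    fine_tune=True,
)

# plain linear tag head, no CRF/RNN (a typical fine-tuning setup in Flair)
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=label_dict,
    tag_type="ner",
    use_crf=False,
    use_rnn=False,
    reproject_embeddings=False,
)

trainer = ModelTrainer(tagger, corpus)
# learning rate and batch size are placeholders, not the paper's values
trainer.fine_tune("resources/xlm-v-wikiann-en", learning_rate=5e-6, mini_batch_size=16)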

MasakhaNER v1

The script masakhaner-zero-shot.py performs zero-shot evaluation on the MasakhaNER v1 dataset that is used in the XLM-V paper. One crucial part is dealing with DATE entities: they do not exist in the English WikiANN (Rahimi et al.) split, but they are annotated in MasakhaNER v1. For this reason, we convert all DATE entities into O to exclude them from evaluation. The script will output a nice results table.
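
A minimal sketch of the DATE handling described above (the helper name is made up for illustration, the BIO tag names follow MasakhaNER):

def strip_date_entities(tags):
    """Replace B-DATE / I-DATE tags with 'O' so they are ignored during evaluation."""
    return ["O" if tag.endswith("DATE") else tag for tag in tags]

print(strip_date_entities(["B-PER", "I-PER", "O", "B-DATE", "I-DATE"]))
# ['B-PER', 'I-PER', 'O', 'O', 'O']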

Detailed results for all 5 different models can be seen here:

Here's the overall performance table (inspired by Table 11 in the XLM-V paper with their results):

Model | amh | hau | ibo | kin | lug | luo | pcm | swa | wol | yor | Avg.
XLM-R (Paper) | 25.1 | 43.5 | 11.6 | 9.4 | 9.5 | 8.4 | 36.8 | 48.9 | 5.3 | 10.0 | 20.9
XLM-R (Reproduced) | 27.1 | 42.4 | 14.2 | 12.4 | 14.3 | 10.0 | 40.6 | 50.2 | 6.3 | 11.5 | 22.9
XLM-V (Paper) | 20.6 | 35.9 | 45.9 | 25.0 | 48.7 | 10.4 | 38.2 | 44.0 | 16.7 | 35.8 | 32.1
XLM-V (Reproduced) | 25.3 | 45.7 | 55.6 | 33.2 | 56.1 | 16.5 | 40.7 | 50.8 | 26.3 | 47.2 | 39.7

Diff. between XLM-V and XLM-R in the paper: (32.1 - 20.9) = 11.2%.

Diff. between reproduced XLM-V and XLM-R: (39.7 - 22.9) = 16.8%.

WikiANN (Rahimi et al.)

The script wikiann-zero-shot.py performs zero-shot evaluation on the WikiANN (Rahimi et al.) dataset and will also output a nice results table. Notice: it uses a high batch size for evaluating the model, so an A100 (40GB) GPU is definitely useful.
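
As a rough sketch (not the repository's script), zero-shot evaluation of the English-fine-tuned Flair tagger on a target-language split could look like this; the paths and batch size are assumptions:

from flair.datasets import ColumnCorpus
from flair.models import SequenceTagger

# target-language WikiANN split in CoNLL column format (assumed local copy)
corpus = ColumnCorpus("data/wikiann-de", {0: "text", 1: "ner"})

# tagger fine-tuned on the English split (see the NER section above)
tagger = SequenceTagger.load("resources/xlm-v-wikiann-en/final-model.pt")

# a high mini-batch size speeds up evaluation but needs a large GPU
result = tagger.evaluate(corpus.test, gold_label_type="ner", mini_batch_size=64)
print(result.main_score)  # micro F1 on the target language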

Detailed results for all 5 different models can be seen here:

Here's the overall performance table (inspired by Table 10 in the XLM-V paper with their results):

Model | ro | gu | pa | lt | az | uk | pl | qu | hu | fi | et | tr | kk | zh | my | yo | sw
XLM-R (Paper) | 73.5 | 62.9 | 53.6 | 72.7 | 61.0 | 72.4 | 77.5 | 60.4 | 75.8 | 74.4 | 71.2 | 75.4 | 42.2 | 25.3 | 48.9 | 33.6 | 66.3
XLM-R (Reproduced) | 73.8 | 65.5 | 50.6 | 74.3 | 64.0 | 76.5 | 78.4 | 60.8 | 77.7 | 75.9 | 73.0 | 76.4 | 45.2 | 29.8 | 52.3 | 37.6 | 67.0
XLM-V (Paper) | 73.8 | 66.4 | 48.7 | 75.6 | 66.7 | 65.7 | 79.5 | 70.0 | 79.5 | 78.7 | 75.0 | 77.3 | 50.4 | 30.2 | 61.5 | 54.2 | 72.4
XLM-V (Reproduced) | 77.2 | 65.4 | 53.6 | 74.9 | 66.0 | 69.4 | 79.8 | 66.9 | 79.0 | 77.9 | 76.2 | 76.8 | 48.5 | 28.1 | 58.4 | 62.6 | 71.6

Model | th | ko | ka | ja | ru | bg | es | pt | it | fr | fa | ur | mr | hi | bn | el | de
XLM-R (Paper) | 5.2 | 49.4 | 65.4 | 21.0 | 63.1 | 76.1 | 70.2 | 77.0 | 76.9 | 76.5 | 44.6 | 51.4 | 61.5 | 67.2 | 69.0 | 73.8 | 74.4
XLM-R (Reproduced) | 4.7 | 49.4 | 67.5 | 21.9 | 65.2 | 77.5 | 76.7 | 79.0 | 77.7 | 77.9 | 49.0 | 55.1 | 61.3 | 67.8 | 69.6 | 74.1 | 75.4
XLM-V (Paper) | 3.3 | 53.0 | 69.5 | 22.4 | 68.1 | 79.8 | 74.5 | 80.5 | 78.7 | 77.6 | 50.6 | 48.9 | 59.8 | 67.3 | 72.6 | 76.7 | 76.8
XLM-V (Reproduced) | 2.6 | 51.6 | 71.2 | 20.6 | 67.8 | 79.4 | 76.2 | 79.9 | 79.5 | 77.5 | 51.7 | 51.5 | 61.9 | 69.2 | 73.2 | 75.9 | 77.1

Model | en | nl | af | te | ta | ml | eu | tl | ms | jv | id | vi | he | ar | Avg.
XLM-R (Paper) | 83.0 | 80.0 | 75.8 | 49.2 | 56.3 | 61.9 | 57.2 | 69.8 | 68.3 | 59.4 | 48.6 | 67.7 | 53.2 | 43.8 | 61.3
XLM-R (Reproduced) | 83.4 | 80.8 | 75.8 | 49.3 | 56.8 | 62.2 | 59.1 | 72.2 | 62.3 | 58.3 | 50.0 | 67.9 | 52.6 | 47.8 | 62.6
XLM-V (Paper) | 83.4 | 81.4 | 78.3 | 51.8 | 54.9 | 63.1 | 67.1 | 75.6 | 70.0 | 67.5 | 52.6 | 67.1 | 60.1 | 45.8 | 64.7
XLM-V (Reproduced) | 84.1 | 81.3 | 78.9 | 50.9 | 55.9 | 63.0 | 65.7 | 75.9 | 70.8 | 64.8 | 53.9 | 69.6 | 61.1 | 47.2 | 65.0

Diff. between XLM-V and XLM-R in the paper: (64.7 - 61.3) = 3.4%.

Diff. between reproduced XLM-V and XLM-R: (65.0 - 62.6) = 2.4%.

🤗 Transformers Model Hub

After all checks (weights, tokenizer and downstream tasks), the model was uploaded to the 🤗 Transformers Model Hub.

XLM-V was also added to the 🤗 Transformers documentation with this PR and now lives here.
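
For completeness, the uploaded model can then be loaded directly from the Hub; the facebook/xlm-v-base model id used below is an assumption based on the Meta AI organization mentioned in the changelog:

from transformers import AutoTokenizer, AutoModelForMaskedLM

# model id assumed from the Meta AI organization upload mentioned above
tokenizer = AutoTokenizer.from_pretrained("facebook/xlm-v-base")
model = AutoModelForMaskedLM.from_pretrained("facebook/xlm-v-base")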
