ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

Note: this repository was archived by the owner on Jun 18, 2024, and is now read-only.

***************New March 28, 2020 ***************

Add a colab tutorial to run fine-tuning for GLUE datasets.

***************New January 7, 2020 ***************

v2 TF-Hub models should be working now with TF 1.15, as we removed the native Einsum op from the graph. See updated TF-Hub links below.

***************New December 30, 2019 ***************

Chinese models are released. We would like to thank the CLUE team for providing the training data.

Version 2 of ALBERT models is released.

In this version, we apply 'no dropout', 'additional training data' and 'long training time' strategies to all models. We train ALBERT-base for 10M steps and other models for 3M steps.

The comparison of results with the v1 models is as follows:

| Model (v2)     | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| ALBERT-base    | 82.3    | 90.2/83.2 | 82.1/79.3 | 84.6 | 92.9  | 66.8 |
| ALBERT-large   | 85.7    | 91.8/85.2 | 84.9/81.8 | 86.5 | 94.9  | 75.2 |
| ALBERT-xlarge  | 87.9    | 92.9/86.4 | 87.9/84.1 | 87.9 | 95.4  | 80.7 |
| ALBERT-xxlarge | 90.9    | 94.6/89.1 | 89.8/86.9 | 90.6 | 96.8  | 86.8 |

| Model (v1)     | Average | SQuAD1.1  | SQuAD2.0  | MNLI | SST-2 | RACE |
|----------------|---------|-----------|-----------|------|-------|------|
| ALBERT-base    | 80.1    | 89.3/82.3 | 80.0/77.1 | 81.6 | 90.3  | 64.0 |
| ALBERT-large   | 82.4    | 90.6/83.9 | 82.3/79.4 | 83.5 | 91.7  | 68.5 |
| ALBERT-xlarge  | 85.5    | 92.5/86.1 | 86.1/83.1 | 86.4 | 92.4  | 74.8 |
| ALBERT-xxlarge | 91.0    | 94.8/89.3 | 90.2/87.4 | 90.8 | 96.9  | 86.5 |

The comparison shows that for ALBERT-base, ALBERT-large, and ALBERT-xlarge, v2 is much better than v1, indicating the importance of applying the above three strategies.

On average, the v2 ALBERT-xxlarge is slightly worse than v1, for two reasons: 1) training an additional 1.5M steps (the only difference between these two models is training for 1.5M vs. 3M steps) did not lead to significant performance improvement; 2) for v1, we did a small hyperparameter search among the parameter sets given by BERT, RoBERTa, and XLNet, whereas for v2 we simply adopted the parameters from v1, except for RACE, where we use a learning rate of 1e-5 and an ALBERT DR (dropout rate for ALBERT in fine-tuning) of 0. The original (v1) RACE hyperparameters cause model divergence for v2 models. Given that the downstream tasks are sensitive to the fine-tuning hyperparameters, we should be careful about so-called slight improvements.

ALBERT is "A Lite" version of BERT, a popular unsupervised language representation learning algorithm. ALBERT uses parameter-reduction techniques that allow for large-scale configurations, overcome previous memory limitations, and achieve better behavior with respect to model degradation.
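As a back-of-the-envelope illustration of one of these parameter-reduction techniques, the factorized embedding parameterization from the paper (this sketch is ours, not code from this repository; the dimensions V=30000, E=128, and H=4096 match the published ALBERT-xxlarge configuration):

```python
# Factorized embedding parameterization: instead of one V x H embedding
# table, ALBERT looks up a small V x E table and projects E -> H.
V, E, H = 30000, 128, 4096        # vocab, embedding, hidden sizes (xxlarge)

full_table = V * H                # single-table lookup: 122,880,000 parameters
factorized = V * E + E * H        # factorized lookup:     4,364,288 parameters
print(f"{full_table:,} -> {factorized:,}")
```

Cross-layer parameter sharing, the other main technique from the paper, reuses one transformer block's weights across all layers, shrinking the model further without reducing its depth.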

For a technical description of the algorithm, see our paper:

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut

Release Notes

  • Initial release: 10/9/2019

Results

Performance of ALBERT on GLUE benchmark results using a single-model setup on dev:

| Models        | MNLI | QNLI | QQP  | RTE  | SST  | MRPC | CoLA | STS  |
|---------------|------|------|------|------|------|------|------|------|
| BERT-large    | 86.6 | 92.3 | 91.3 | 70.4 | 93.2 | 88.0 | 60.6 | 90.0 |
| XLNet-large   | 89.8 | 93.9 | 91.8 | 83.8 | 95.6 | 89.2 | 63.6 | 91.8 |
| RoBERTa-large | 90.2 | 94.7 | 92.2 | 86.6 | 96.4 | 90.9 | 68.0 | 92.4 |
| ALBERT (1M)   | 90.4 | 95.2 | 92.0 | 88.1 | 96.8 | 90.2 | 68.7 | 92.7 |
| ALBERT (1.5M) | 90.8 | 95.3 | 92.2 | 89.2 | 96.9 | 90.9 | 71.4 | 93.0 |

Performance of ALBERT-xxlarge on SQuAD and RACE benchmarks using a single-model setup:

| Models                    | SQuAD1.1 dev | SQuAD2.0 dev | SQuAD2.0 test | RACE test (Middle/High) |
|---------------------------|--------------|--------------|---------------|-------------------------|
| BERT-large                | 90.9/84.1    | 81.8/79.0    | 89.1/86.3     | 72.0 (76.6/70.1)        |
| XLNet                     | 94.5/89.0    | 88.8/86.1    | 89.1/86.3     | 81.8 (85.5/80.2)        |
| RoBERTa                   | 94.6/88.9    | 89.4/86.5    | 89.8/86.8     | 83.2 (86.5/81.3)        |
| UPM                       | -            | -            | 89.9/87.2     | -                       |
| XLNet + SG-Net Verifier++ | -            | -            | 90.1/87.2     | -                       |
| ALBERT (1M)               | 94.8/89.2    | 89.9/87.2    | -             | 86.0 (88.2/85.1)        |
| ALBERT (1.5M)             | 94.8/89.3    | 90.2/87.4    | 90.9/88.1     | 86.5 (89.0/85.5)        |

Pre-trained Models

TF-Hub modules are available; the example below uses the albert_base module.

Example usage of the TF-Hub module in code:

```python
import tensorflow_hub as hub  # TF-Hub (TF1-style hub.Module API)

tags = set()
if is_training:
  tags.add("train")
albert_module = hub.Module("https://tfhub.dev/google/albert_base/1", tags=tags,
                           trainable=True)
albert_inputs = dict(
    input_ids=input_ids,
    input_mask=input_mask,
    segment_ids=segment_ids)
albert_outputs = albert_module(
    inputs=albert_inputs,
    signature="tokens",
    as_dict=True)

# If you want to use the token-level output, use
# albert_outputs["sequence_output"] instead.
output_layer = albert_outputs["pooled_output"]
```

Most of the fine-tuning scripts in this repository support TF-Hub modules via the --albert_hub_module_handle flag.

Pre-training Instructions

To pretrain ALBERT, use run_pretraining.py:

```
pip install -r albert/requirements.txt
python -m albert.run_pretraining \
    --input_file=... \
    --output_dir=... \
    --init_checkpoint=... \
    --albert_config_file=... \
    --do_train \
    --do_eval \
    --train_batch_size=4096 \
    --eval_batch_size=64 \
    --max_seq_length=512 \
    --max_predictions_per_seq=20 \
    --optimizer='lamb' \
    --learning_rate=.00176 \
    --num_train_steps=125000 \
    --num_warmup_steps=3125 \
    --save_checkpoints_steps=5000
```
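The --albert_config_file flag points to a JSON file describing the model architecture. As a rough sketch (the field names and values below are our illustration, modeled on the released base configuration; consult the albert_config.json shipped with each pre-trained model for the authoritative file):

```python
import json

# Illustrative ALBERT base-style configuration (an assumption, not the
# repository's canonical file). Note the small embedding_size relative to
# hidden_size, reflecting the factorized embedding parameterization.
albert_config = {
    "hidden_size": 768,
    "embedding_size": 128,
    "num_hidden_layers": 12,
    "num_attention_heads": 12,
    "intermediate_size": 3072,
    "max_position_embeddings": 512,
    "type_vocab_size": 2,
    "vocab_size": 30000,
}
with open("albert_config.json", "w") as f:
    json.dump(albert_config, f, indent=2)
```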

Fine-tuning on GLUE

To fine-tune and evaluate a pretrained ALBERT on GLUE, please see the convenience script run_glue.sh.

Lower-level use cases may want to use the run_classifier.py script directly. The run_classifier.py script is used both for fine-tuning and evaluation of ALBERT on individual GLUE benchmark tasks, such as MNLI:

```
pip install -r albert/requirements.txt
python -m albert.run_classifier \
  --data_dir=... \
  --output_dir=... \
  --init_checkpoint=... \
  --albert_config_file=... \
  --spm_model_file=... \
  --do_train \
  --do_eval \
  --do_predict \
  --do_lower_case \
  --max_seq_length=128 \
  --optimizer=adamw \
  --task_name=MNLI \
  --warmup_step=1000 \
  --learning_rate=3e-5 \
  --train_step=10000 \
  --save_checkpoints_steps=100 \
  --train_batch_size=128
```

Good default flag values for each GLUE task can be found in run_glue.sh.

You can fine-tune the model starting from TF-Hub modules instead of raw checkpoints by setting e.g. --albert_hub_module_handle=https://tfhub.dev/google/albert_base/1 instead of --init_checkpoint.
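For example (an illustrative invocation of ours, not a command from this repository's docs; the elided flags are as in the MNLI command above):

```
python -m albert.run_classifier \
  --albert_hub_module_handle=https://tfhub.dev/google/albert_base/1 \
  --spm_model_file=... \
  --task_name=MNLI \
  --do_train \
  --do_eval \
  ...
```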

You can find the spm_model_file in the tar files or under the assets folder of the TF-Hub module. The name of the model file is "30k-clean.model".
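To sanity-check the vocabulary file, you can load it with the sentencepiece Python package (a quick sketch, assuming 30k-clean.model has been extracted into the current directory):

```python
import sentencepiece as spm

# Load the SentencePiece model that ships with the pre-trained ALBERT models.
sp = spm.SentencePieceProcessor()
sp.Load("30k-clean.model")

print(sp.GetPieceSize())                          # expected: 30000
print(sp.EncodeAsPieces("ALBERT uses a SentencePiece vocabulary."))
```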

After evaluation, the script should report some output like this:

```
***** Eval results *****
  global_step = ...
  loss = ...
  masked_lm_accuracy = ...
  masked_lm_loss = ...
  sentence_order_accuracy = ...
  sentence_order_loss = ...
```

Fine-tuning on SQuAD

To fine-tune and evaluate a pretrained model on SQuAD v1, use the run_squad_v1.py script:

```
pip install -r albert/requirements.txt
python -m albert.run_squad_v1 \
  --albert_config_file=... \
  --output_dir=... \
  --train_file=... \
  --predict_file=... \
  --train_feature_file=... \
  --predict_feature_file=... \
  --predict_feature_left_file=... \
  --init_checkpoint=... \
  --spm_model_file=... \
  --do_lower_case \
  --max_seq_length=384 \
  --doc_stride=128 \
  --max_query_length=64 \
  --do_train=true \
  --do_predict=true \
  --train_batch_size=48 \
  --predict_batch_size=8 \
  --learning_rate=5e-5 \
  --num_train_epochs=2.0 \
  --warmup_proportion=.1 \
  --save_checkpoints_steps=5000 \
  --n_best_size=20 \
  --max_answer_length=30
```

You can fine-tune the model starting from TF-Hub modules instead of raw checkpoints by setting e.g. --albert_hub_module_handle=https://tfhub.dev/google/albert_base/1 instead of --init_checkpoint.

For SQuAD v2, use the run_squad_v2.py script:

```
pip install -r albert/requirements.txt
python -m albert.run_squad_v2 \
  --albert_config_file=... \
  --output_dir=... \
  --train_file=... \
  --predict_file=... \
  --train_feature_file=... \
  --predict_feature_file=... \
  --predict_feature_left_file=... \
  --init_checkpoint=... \
  --spm_model_file=... \
  --do_lower_case \
  --max_seq_length=384 \
  --doc_stride=128 \
  --max_query_length=64 \
  --do_train \
  --do_predict \
  --train_batch_size=48 \
  --predict_batch_size=8 \
  --learning_rate=5e-5 \
  --num_train_epochs=2.0 \
  --warmup_proportion=.1 \
  --save_checkpoints_steps=5000 \
  --n_best_size=20 \
  --max_answer_length=30
```

You can fine-tune the model starting from TF-Hub modules instead of raw checkpoints by setting e.g. --albert_hub_module_handle=https://tfhub.dev/google/albert_base/1 instead of --init_checkpoint.

Fine-tuning on RACE

For RACE, use the run_race.py script:

```
pip install -r albert/requirements.txt
python -m albert.run_race \
  --albert_config_file=... \
  --output_dir=... \
  --train_file=... \
  --eval_file=... \
  --data_dir=... \
  --init_checkpoint=... \
  --spm_model_file=... \
  --max_seq_length=512 \
  --max_qa_length=128 \
  --do_train \
  --do_eval \
  --train_batch_size=32 \
  --eval_batch_size=8 \
  --learning_rate=1e-5 \
  --train_step=12000 \
  --warmup_step=1000 \
  --save_checkpoints_steps=100
```

You can fine-tune the model starting from TF-Hub modules instead of raw checkpoints by setting e.g. --albert_hub_module_handle=https://tfhub.dev/google/albert_base/1 instead of --init_checkpoint.

SentencePiece

Command for generating the sentence piece vocabulary:

```
spm_train \
  --input all.txt --model_prefix=30k-clean --vocab_size=30000 --logtostderr \
  --pad_id=0 --unk_id=1 --eos_id=-1 --bos_id=-1 \
  --control_symbols=[CLS],[SEP],[MASK] \
  --user_defined_symbols="(,),\",-,.,–,£,€" \
  --shuffle_input_sentence=true --input_sentence_size=10000000 \
  --character_coverage=0.99995 --model_type=unigram
```
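Given the id flags above, you can verify where the special symbols landed in the trained model (a hedged sketch of ours; exact ids depend on the flags used, but with pad_id=0, unk_id=1, and bos/eos disabled, the control symbols should occupy the next available ids):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("30k-clean.model")

# pad_id=0 and unk_id=1 were fixed on the command line; [CLS], [SEP], and
# [MASK] are control symbols allocated immediately after the reserved ids.
for piece in ("<pad>", "<unk>", "[CLS]", "[SEP]", "[MASK]"):
    print(piece, sp.PieceToId(piece))
```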
