
English Version | Chinese Version (中文版说明)

albert_pytorch

This repository contains a PyTorch implementation of the ALBERT model from the paper

ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations

by Zhenzhong Lan, Mingda Chen, et al.

Dependencies

  • pytorch=1.10
  • cuda=9.0
  • cudnn=7.5
  • scikit-learn
  • sentencepiece

Download Pre-trained English Models

Official download links: google albert

PyTorch models adapted to this version (download from Google Drive):

v1

v2

Fine-tuning

1. Place `config.json` and `30k-clean.model` into the `prev_trained_model/albert_base_v2` directory. Example:

```
├── prev_trained_model
|  └── albert_base_v2
|  |  └── pytorch_model.bin
|  |  └── config.json
|  |  └── 30k-clean.model
```

2. Convert the ALBERT TF checkpoint to PyTorch:

```shell
python convert_albert_tf_checkpoint_to_pytorch.py \
    --tf_checkpoint_path=./prev_trained_model/albert_base_tf_v2 \
    --bert_config_file=./prev_trained_model/albert_base_v2/config.json \
    --pytorch_dump_path=./prev_trained_model/albert_base_v2/pytorch_model.bin
```
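Under the hood, conversion scripts like this one iterate over the TF checkpoint's variables, rename each variable to the corresponding PyTorch state-dict key, and transpose dense kernels. A simplified, hypothetical sketch of the renaming step (the function and its rules are illustrative, not this repository's exact code, which handles more cases):

```python
def tf_to_pytorch_name(tf_name: str) -> str:
    """Map a TF checkpoint variable name to a PyTorch state-dict key.

    Illustrative only: real converters also split layer indices,
    skip optimizer slots, and transpose dense kernels.
    """
    # TF scopes are separated by "/", PyTorch module paths by "."
    name = tf_name.replace("/", ".")
    # TF calls a dense layer's weight matrix "kernel"; PyTorch calls it "weight"
    name = name.replace(".kernel", ".weight")
    # TF LayerNorm parameters: gamma (scale) -> weight, beta (shift) -> bias
    name = name.replace(".gamma", ".weight")
    name = name.replace(".beta", ".bias")
    return name
```

For example, `bert/encoder/layer_0/attention/self/query/kernel` becomes `bert.encoder.layer_0.attention.self.query.weight`.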

The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine sentence- or sentence-pair language understanding tasks for evaluating and analyzing natural language understanding systems.

Before running any of these GLUE tasks, you should download the GLUE data by running this script and unpack it to some directory `$DATA_DIR`.

3. Run `sh scripts/run_classifier_sst2.sh` to fine-tune the ALBERT model.

Result

Performance of ALBERT on the GLUE benchmark, single-model setup, evaluated on the dev set:

| model | Cola (matthews_corrcoef) | Sst-2 (accuracy) | Mnli (accuracy) | Sts-b (pearson) |
| --- | --- | --- | --- | --- |
| albert_base_v2 | 0.5756 | 0.926 | 0.8418 | 0.9091 |
| albert_large_v2 | 0.5851 | 0.9507 | | 0.9151 |
| albert_xlarge_v2 | 0.6023 | | | 0.9221 |
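The metrics above are the standard GLUE metrics, which the dependencies (scikit-learn) provide as `sklearn.metrics.matthews_corrcoef` and `scipy.stats.pearsonr`. As a reference for what the table reports, here is a minimal pure-Python sketch of both, assuming binary 0/1 labels for Matthews correlation:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary 0/1 labels (Cola metric)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Conventionally defined as 0 when any confusion-matrix margin is empty
    return (tp * tn - fp * fn) / denom if denom else 0.0

def pearson(x, y):
    """Pearson correlation between predicted and gold scores (Sts-b metric)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)  # assumes non-constant inputs
```

A perfect classifier scores 1.0 on both metrics; Matthews correlation, unlike accuracy, stays near 0 for a classifier that ignores a class-imbalanced input.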
