# DBMDZ BERT, DistilBERT, ELECTRA, GPT-2 and ConvBERT models
In this repository the MDZ Digital Library team (dbmdz) at the Bavarian State Library open sources more BERT models 🎉

## Changelog
- 13.12.2021: Public release of Historic Language Model for Dutch.
- 06.12.2021: Public release of smaller multilingual Historic Language Models.
- 18.11.2021: Public release of multilingual and monolingual Historic Language Models.
- 24.09.2021: Public release of cased/uncased Turkish ELECTRA and ConvBERT models, trained on mC4 corpus.
- 17.08.2021: Public release of re-trained German GPT-2 model.
- 24.06.2021: Public release of Turkish ELECTRA model, trained on Turkish part of multilingual C4 corpus.
- 16.03.2021: Public release of ConvBERT model for Turkish: ConvBERTurk.
- 06.02.2021: Public release of German Europeana DistilBERT and ConvBERT models.
- 16.11.2020: Public release of French Europeana BERT and ELECTRA models.
- 15.11.2020: Public release of a German GPT-2 model.
- 11.11.2020: Public release of Ukrainian ELECTRA model.
- 02.11.2020: Public release of Italian XXL ELECTRA model.
- 26.10.2020: In collaboration with Branden Chan and Timo Möller from deepset we've trained larger language models for German. See our paper for more information!
- 12.05.2020: Public release of small and base ELECTRA models for Turkish
- 25.03.2020: Public release of BERTurk uncased model and BERTurk models with larger vocab size (128k, cased and uncased)
- 11.03.2020: Public release of cased distilled BERT model for Turkish: DistilBERTurk
- 17.02.2020: Public release of cased BERT model for Turkish: BERTurk
- 10.02.2020: Public release of cased and uncased BERT models for Historic German: German Europeana BERT
- 20.01.2020: Public release of cased and uncased XXL BERT models for Italian. They can be downloaded from the Hugging Face model hub.
- 30.12.2019: Public release of cased and uncased BERT models for Italian.
- 08.12.2019: If you consider using our model for the upcoming GermEval 2020 shared task, please read at least this blog post by Emily Bender on ethical issues!
- 10.10.2019: Public release
- 24.09.2019: Initial version
## German BERT

In addition to the recently released German BERT model by deepset, we provide another German-language model.

The source data for the model consists of a recent Wikipedia dump, the EU Bookshop corpus, Open Subtitles, CommonCrawl, ParaCrawl and News Crawl. This results in a dataset with a size of 16GB and 2,350,234,427 tokens.

For sentence splitting, we use spaCy. Our preprocessing steps (SentencePiece model for vocab generation) follow those used for training SciBERT. Training was performed for 1.5M steps with an initial sequence length of 512 subwords.
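The snippet below is a minimal sketch of this kind of preprocessing pipeline (spaCy for sentence splitting, SentencePiece for vocab generation); the file names, spaCy pipeline and vocab size are illustrative assumptions, not the exact settings used for our corpus.

```python
# Illustrative sketch of this kind of preprocessing (not the exact dbmdz
# pipeline): sentence splitting with spaCy, then SentencePiece vocab training.
# "corpus_raw.txt", the pipeline name and the vocab size are placeholders.
import sentencepiece as spm
import spacy

nlp = spacy.load("de_core_news_sm")  # small German pipeline with a parser

# Write one sentence per line, as expected by most LM preprocessing setups.
with open("corpus_raw.txt", encoding="utf-8") as f_in, \
        open("corpus_sentences.txt", "w", encoding="utf-8") as f_out:
    for line in f_in:
        for sentence in nlp(line).sents:
            if sentence.text.strip():
                f_out.write(sentence.text.strip() + "\n")

# Train a SentencePiece model (here with a 32k vocabulary) on the result.
spm.SentencePieceTrainer.train(
    input="corpus_sentences.txt",
    model_prefix="german_bert_vocab",
    vocab_size=32000,
)
```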
This release includes both cased and uncased models.
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
Model | Downloads
---|---
bert-base-german-dbmdz-cased | config.json • pytorch_model.bin • vocab.txt
bert-base-german-dbmdz-uncased | config.json • pytorch_model.bin • vocab.txt
With Transformers >= 2.3 our German BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-cased")
```
For results on downstream tasks like NER or PoS tagging, please refer to this repository.
## Italian BERT

The source data for the Italian BERT model consists of a recent Wikipedia dump and various texts from the OPUS corpora collection. The final training corpus has a size of 13GB and 2,050,057,573 tokens.

For sentence splitting, we use NLTK (faster compared to spaCy). Our cased and uncased models were trained with an initial sequence length of 512 subwords for ~2-3M steps.

For the XXL Italian models, we use the same training data from OPUS and extend it with data from the Italian part of the OSCAR corpus. Thus, the final training corpus has a size of 81GB and 13,138,379,147 tokens.
Note: Unfortunately, a wrong vocab size was used when training the XXL models. This explains the mismatch between the "real" vocab size of 31102 and the vocab size specified in config.json. However, the models work and all evaluations were done under those circumstances. See this issue for more information.
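To see the mismatch yourself, a small snippet along these lines compares the tokenizer's real vocabulary size with the value stored in config.json (using one of the XXL models listed below):

```python
# Compare the tokenizer's real vocabulary size with the vocab_size declared in
# config.json for one of the XXL Italian models (illustrating the mismatch).
from transformers import AutoConfig, AutoTokenizer

model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

print("tokenizer vocab size:", tokenizer.vocab_size)  # real size (31102)
print("config.json vocab_size:", config.vocab_size)
```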
The Italian ELECTRA model was trained on the "XXL" corpus for 1M steps in total, using a batch size of 128. We pretty much follow the ELECTRA training procedure used for BERTurk.
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
Model | Downloads
---|---
dbmdz/bert-base-italian-cased | config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-uncased | config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-xxl-cased | config.json • pytorch_model.bin • vocab.txt
dbmdz/bert-base-italian-xxl-uncased | config.json • pytorch_model.bin • vocab.txt
dbmdz/electra-base-italian-xxl-cased-discriminator | config.json • pytorch_model.bin • vocab.txt
dbmdz/electra-base-italian-xxl-cased-generator | config.json • pytorch_model.bin • vocab.txt
For results on downstream tasks like NER or PoS tagging, please refer to this repository.
With Transformers >= 2.3 our Italian BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-cased")
```
To load the (recommended) Italian XXL BERT models, just use:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-italian-xxl-cased")
```
## German Europeana BERT

We use the open source Europeana newspapers that were provided by The European Library. The final training corpus has a size of 51GB and consists of 8,035,986,369 tokens.

Detailed information about the data and pretraining steps can be found in this repository.
The following models are available from the Hugging Face model hub:
Model | Downloads
---|---
dbmdz/bert-base-german-europeana-cased | See model hub
dbmdz/bert-base-german-europeana-uncased | See model hub
dbmdz/electra-base-german-europeana-cased-discriminator | See model hub
dbmdz/electra-base-german-europeana-cased-generator | See model hub
dbmdz/convbert-base-german-europeana-cased | See model hub
dbmdz/distilbert-base-german-europeana-cased | See model hub
For results on Historic NER, please refer to this repository.
With Transformers >= 2.3 our German Europeana BERT models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-cased")
```
The German Europeana BERT uncased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-europeana-uncased")
model = AutoModel.from_pretrained("dbmdz/bert-base-german-europeana-uncased")
```
## French Europeana BERT

We use the open source Europeana newspapers that were provided by The European Library. The final training corpus has a size of 63GB and consists of 11,052,528,456 tokens.

Detailed information about the data and pretraining steps can be found in this repository.
Model | Downloads
---|---
dbmdz/bert-base-french-europeana-cased | See model hub
dbmdz/electra-base-french-europeana-cased-discriminator | See model hub
dbmdz/electra-base-french-europeana-cased-generator | See model hub
With Transformers >= 2.3 our French Europeana BERT and ELECTRA models can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/bert-base-french-europeana-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
The ELECTRA (discriminator) model can be used with:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-french-europeana-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
## Turkish BERT, DistilBERT, ELECTRA and ConvBERT models

BERTurk is a community-driven set of cased models for Turkish.

Some datasets used for pretraining and evaluation were contributed by the awesome Turkish NLP community, which also chose the name of the BERT model: BERTurk.

The final training corpus has a size of 35GB and 4,404,976,662 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Additionally, we trained a distilled version of BERTurk: DistilBERTurk, which uses knowledge distillation from BERTurk (the teacher model). More information on distillation can be found in the excellent "DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter" paper by Sanh et al. (2019).
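As a rough illustration of how such teacher-student distillation works, the sketch below shows a Hinton-style soft-target loss in which the student matches the temperature-softened output distribution of the teacher. This is not the actual DistilBERTurk training code; shapes and temperature are illustrative only.

```python
# Minimal sketch of a soft-target distillation loss (Hinton-style), the core
# idea behind DistilBERT-like training. Shapes and temperature are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student outputs."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy example: random logits over a 32k-subword vocabulary.
student = torch.randn(2, 16, 32000)
teacher = torch.randn(2, 16, 32000)
print(distillation_loss(student, teacher))
```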
Furthermore, we provide cased and uncased models trained with a larger vocab size (128k instead of 32k).
We also trained small and base ELECTRA models. ELECTRA is a new method for self-supervised language representation learning. More details about ELECTRA can be found in the ICLR paper.
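The ELECTRA discriminator is trained to decide, for every input token, whether it was replaced by a small generator network. The hedged sketch below queries that replaced-token head via `ElectraForPreTraining`; the model name is one of the Turkish discriminators listed further below, and the example sentence is illustrative.

```python
# Hedged sketch: query the ELECTRA discriminator head for per-token
# "replaced vs. original" predictions (a positive logit means "predicted replaced").
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

model_name = "dbmdz/electra-base-turkish-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = ElectraForPreTraining.from_pretrained(model_name)

inputs = tokenizer("Bu model Türkçe metinler üzerinde eğitildi.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits.squeeze(0)

for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), logits):
    print(f"{token:>15}  replaced={score.item() > 0}")
```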
In addition to the BERT and ELECTRA based models, we also trained a ConvBERT model. The ConvBERT architecture is presented in the "ConvBERT: Improving BERT with Span-based Dynamic Convolution" paper.
Evaluation of our models can be found in this repository.
We've also trained an ELECTRA (cased) model on the recently released Turkish part of the multilingual C4 (mC4) corpus from the AI2 team.

After filtering documents with a broken encoding, the training corpus has a size of 242GB, resulting in 31,240,963,926 tokens.
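The exact filtering heuristics are documented in the linked repository; as a purely illustrative sketch, a simple filter of this kind drops documents that contain the Unicode replacement character:

```python
# Illustrative only (not the exact mC4 preprocessing code): drop documents whose
# text contains the Unicode replacement character U+FFFD, a common sign of a
# broken encoding.
def has_broken_encoding(document: str) -> bool:
    return "\ufffd" in document

documents = [
    "Temiz bir Türkçe cümle.",
    "Bozuk kodlama örne\ufffdi.",  # contains U+FFFD
]
clean_documents = [doc for doc in documents if not has_broken_encoding(doc)]
print(len(clean_documents))  # 1
```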
All trained models can be used from the DBMDZ Hugging Face model hub page using their model name. The following models are available:
- BERTurk models with 32k vocabulary: dbmdz/bert-base-turkish-cased and dbmdz/bert-base-turkish-uncased
- BERTurk models with 128k vocabulary: dbmdz/bert-base-turkish-128k-cased and dbmdz/bert-base-turkish-128k-uncased
- ELECTRA small and base cased models (discriminator): dbmdz/electra-small-turkish-cased-discriminator and dbmdz/electra-base-turkish-cased-discriminator
- ELECTRA base cased and uncased models, trained on Turkish part of mC4 corpus (discriminator): dbmdz/electra-small-turkish-mc4-cased-discriminator and dbmdz/electra-small-turkish-mc4-uncased-discriminator
- ConvBERTurk model with 32k vocabulary: dbmdz/convbert-base-turkish-cased
- ConvBERTurk base cased and uncased models, trained on Turkish part of mC4 corpus: dbmdz/convbert-base-turkish-mc4-cased and dbmdz/convbert-base-turkish-mc4-uncased
For results on PoS tagging or NER tasks, please refer to this repository.
With Transformers >= 2.3 our BERTurk cased model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/bert-base-turkish-cased")
```
The DistilBERTurk model can be loaded with:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/distilbert-base-turkish-cased")
model = AutoModel.from_pretrained("dbmdz/distilbert-base-turkish-cased")
```
Our ELECTRA models can be used with Transformers >= 2.8 and can be loaded with:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-cased-discriminator")
```
and
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
model = AutoModelWithLMHead.from_pretrained("dbmdz/electra-base-turkish-mc4-cased-discriminator")
```
Our ConvBERT model can be used with Transformers >= 4.3 and can be loaded with:
```python
from transformers import AutoModelWithLMHead, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dbmdz/convbert-base-turkish-cased")
model = AutoModelWithLMHead.from_pretrained("dbmdz/convbert-base-turkish-cased")
```
## Ukrainian ELECTRA model

The source data for the Ukrainian ELECTRA model consists of two corpora:
- Recent Wikipedia dump
- Deduplicated Ukrainian part from the OSCAR corpus

The final training corpus has a size of 30GB and consists of exactly 2,402,761,324 tokens.

Detailed information about the data and pretraining steps can be found in this repository.

Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
Model | Downloads
---|---
dbmdz/electra-base-ukrainian-cased-discriminator | See model hub
dbmdz/electra-base-ukrainian-cased-generator | See model hub
For results on PoS tagging and NER downstream tasks, please refer to this repository.
With Transformers >= 2.3 our Ukrainian ELECTRA model can be loaded like:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "dbmdz/electra-base-ukrainian-cased-discriminator"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```
## German GPT-2 model

The German GPT-2 model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model.

For training we use pretty much the same corpora as used for training the DBMDZ BERT model. We created a 50K byte-level BPE vocab based on the training corpora.
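A minimal sketch of how such a 50k byte-level BPE vocabulary can be trained with the Hugging Face `tokenizers` library follows; the corpus file, output directory and special token are placeholder assumptions, not the actual training setup.

```python
# Illustrative sketch (not the actual training code): create a 50k byte-level
# BPE vocabulary with the Hugging Face `tokenizers` library.
import os
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["german_corpus.txt"],      # placeholder for the real training corpus
    vocab_size=50000,
    min_frequency=2,
    special_tokens=["<|endoftext|>"],
)

os.makedirs("german-gpt2-tokenizer", exist_ok=True)
tokenizer.save_model("german-gpt2-tokenizer")  # writes vocab.json and merges.txt
```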
The model was trained on one v3-8 TPU over the whole training corpus for 20 epochs.
Detailed information can be found in this repository.
Note: we have released a re-trained version of this model with better results!
In addition to the German GPT-2 model, we release a GPT-2 model that was fine-tuned on a normalized version of Faust I and II.
Model | Downloads
---|---
dbmdz/german-gpt2 | See model hub
dbmdz/german-gpt2-faust (old model) | See model hub
With Transformers >= 2.3 our German GPT-2 model can be used for text generation:
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="dbmdz/german-gpt2",
                tokenizer="dbmdz/german-gpt2", config={"max_length": 800})

text = pipe("Der Sinn des Lebens ist es")[0]["generated_text"]
print(text)
```
## Historic Language Models

We release several BERT-based language models, including a multilingual Historic language model that covers German, French, English, Finnish and Swedish, as well as monolingual Historic language models for English, Finnish and Swedish. The multilingual Historic language model was trained on 130GB of text, extracted from Europeana Newspapers and the British Library corpus.

More details about our Historic Language Models can be found in this repository.
All models are available on the Hugging Face model hub:
Model identifier | Model Hub link |
---|---|
dbmdz/bert-base-historic-multilingual-cased | here |
dbmdz/bert-base-historic-english-cased | here |
dbmdz/bert-base-finnish-europeana-cased | here |
dbmdz/bert-base-swedish-europeana-cased | here |
We also released smaller Historic Language Models:
Model identifier | Model Hub link |
---|---|
dbmdz/bert-tiny-historic-multilingual-cased | here |
dbmdz/bert-mini-historic-multilingual-cased | here |
dbmdz/bert-small-historic-multilingual-cased | here |
dbmdz/bert-medium-historic-multilingual-cased | here |
## Language Model for Historic Dutch

We train a language model on the Delpher Corpus, which includes digitized texts from Dutch newspapers ranging from 1618 to 1879.

The total training corpus consists of 427,181,269 sentences and 3,509,581,683 tokens (counted via wc), resulting in a total corpus size of 21GB.

More details about the Historic Dutch language model can be found in this repository.
The following models for Historic Dutch are available on the Hugging Face Model Hub:
Model identifier | Model Hub link |
---|---|
dbmdz/bert-base-historic-dutch-cased | here |
## License

All models are licensed under MIT.
## Huggingface model hub

All models are available on the Huggingface model hub.
## Papers

Here you can find a list of papers that used one of our trained models. Feel free to open a PR/issue if you want your paper to be included!
## Contact

For questions about our BERT models just open an issue here 🤗
## Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️

Thanks to the generous support from the Hugging Face team, it is possible to download both cased and uncased models from their S3 storage 🤗