💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
huggingface/tokenizers
Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.
- Train new vocabularies and tokenize, using today's most used tokenizers.
- Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
- Easy to use, but also extremely versatile.
- Designed for research and production.
- Normalization comes with alignments tracking. It's always possible to get the part of the original sentence that corresponds to a given token (see the sketch just after this list).
- Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
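As a quick illustration of the alignment tracking and pre-processing features above, here is a minimal sketch. It assumes the Python bindings and network access to the Hugging Face Hub; the `"bert-base-uncased"` identifier is only an example:

```python
from tokenizers import Tokenizer

# Any pretrained tokenizer works here; "bert-base-uncased" is just an example
# and is downloaded from the Hugging Face Hub.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

text = "Hello, y'all! How are you?"
output = tokenizer.encode(text)

# Every token keeps its (start, end) character offsets in the original text,
# even after normalization (special tokens like [CLS] map to an empty span).
for token, (start, end) in zip(output.tokens, output.offsets):
    print(token, "->", repr(text[start:end]))

# Truncation and padding can be enabled directly on the tokenizer.
tokenizer.enable_truncation(max_length=8)
tokenizer.enable_padding(pad_token="[PAD]", pad_id=tokenizer.token_to_id("[PAD]"), length=8)
```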
Performance can vary depending on hardware, but running the ~/bindings/python/benches/test_tiktoken.py benchmark should give the following on a g6 AWS instance:
We provide bindings to the following languages (more to come!):
- Rust (original implementation)
- Python
- Node.js
- Ruby (contributed by @ankane, external repo)
You can install from source using:
```bash
pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python
```
or install the released versions with
```bash
pip install tokenizers
```
Choose your model between Byte-Pair Encoding, WordPiece or Unigram and instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
```
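Swapping in one of the other models only changes the import and the constructor. A small sketch (the `unk_token` value here is just a common convention, not a requirement):

```python
from tokenizers import Tokenizer
from tokenizers.models import WordPiece, Unigram

# WordPiece needs an unknown token; "[UNK]" is the usual choice.
wordpiece_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Unigram can start empty and be populated by training.
unigram_tokenizer = Tokenizer(Unigram())
```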
You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
```
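Pre-tokenizers can also be chained. A minimal sketch, where the particular combination (whitespace splitting plus isolating digits) is only an illustration:

```python
from tokenizers import pre_tokenizers
from tokenizers.pre_tokenizers import Digits, Whitespace

# Split on whitespace first, then split every digit into its own piece.
tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
    Whitespace(),
    Digits(individual_digits=True),
])
```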
Then training your tokenizer on a set of files just takes two lines of code:
```python
from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
```
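After training, the whole pipeline (model, normalizer, pre-tokenizer, ...) can be saved to a single JSON file and reloaded later. A short sketch; the file name is arbitrary:

```python
# Serialize the full tokenizer to one file...
tokenizer.save("tokenizer.json")

# ...and load it back in a single call.
from tokenizers import Tokenizer
reloaded_tokenizer = Tokenizer.from_file("tokenizer.json")
```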
Once your tokenizer is trained, encode any text with just one line:
```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
```
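Encoding also works on batches, which is where the Rust implementation's speed shows. A brief sketch:

```python
# Encode several texts at once; each result is a full Encoding object.
encodings = tokenizer.encode_batch(["Hello, y'all!", "How are you 😁 ?"])
print([encoding.tokens for encoding in encodings])
```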
Check the documentation or the quicktour to learn more!