💥 Fast State-of-the-Art Tokenizers optimized for Research and Production


Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignments tracking. It's always possible to get the part of the original sentence that corresponds to a given token (see the sketch after this list).
  • Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
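
As a minimal sketch of the last two points, assuming a tokenizer loaded from the Hugging Face Hub ("bert-base-uncased" is only an example identifier), alignment tracking and the built-in truncation/padding look like this:

from tokenizers import Tokenizer

# Load a pretrained tokenizer (example model id, assumed available on the Hub).
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# Alignment tracking: every token carries the (start, end) character offsets
# of the span it came from in the original sentence.
sentence = "Hello, y'all!"
output = tokenizer.encode(sentence)
for token, (start, end) in zip(output.tokens, output.offsets):
    print(token, "->", repr(sentence[start:end]))  # special tokens map to empty spans

# Pre-processing: truncation and padding are handled by the tokenizer itself.
tokenizer.enable_truncation(max_length=8)
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]")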

Performance

Performance can vary depending on hardware, but running ~/bindings/python/benches/test_tiktoken.py on a g6 AWS instance should give results like those in the benchmark figure in the repository README.
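
For a rough number on your own machine, a throughput sketch along these lines can help; this is not the repository's benchmark script, and "corpus.txt" and the model id are placeholders:

import time
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")  # placeholder model id

# "corpus.txt" is a placeholder for any large text file.
with open("corpus.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()

start = time.perf_counter()
encodings = tokenizer.encode_batch(lines)  # batched encoding runs in parallel in Rust
elapsed = time.perf_counter() - start

total_mb = sum(len(line.encode("utf-8")) for line in lines) / 1e6
print(f"{total_mb / elapsed:.1f} MB/s over {len(encodings)} lines")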

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (the original implementation)
  • Python
  • Node.js
  • Ruby (contributed by @ankane, external repo)

Installation

You can install from source using:

pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python

or install the released version with:

pip install tokenizers
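
As a quick sanity check that the package is importable (not part of the official instructions):

import tokenizers
print(tokenizers.__version__)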

Quick example using Python:

Choose your model between Byte-Pair Encoding, WordPiece or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
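
The other model classes are instantiated the same way; a brief sketch (the unk_token value here is illustrative, not required):

from tokenizers import Tokenizer
from tokenizers.models import Unigram, WordPiece

wordpiece_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
unigram_tokenizer = Tokenizer(Unigram())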

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
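
To see what a pre-tokenizer does on its own, you can call its pre_tokenize_str method, which returns the fragments together with their character offsets (the commented output is indicative):

from tokenizers.pre_tokenizers import Whitespace

pre_tokenizer = Whitespace()
print(pre_tokenizer.pre_tokenize_str("Hello, y'all!"))
# [('Hello', (0, 5)), (',', (5, 6)), ('y', (7, 8)), ("'", (8, 9)), ('all', (9, 12)), ('!', (12, 13))]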

Then training your tokenizer on a set of files just takes two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the documentation or the quick tour to learn more!

