Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.
Train new vocabularies and tokenize, using today's most used tokenizers.
Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
Easy to use, but also extremely versatile.
Designed for research and production.
Normalization comes with alignment tracking: it's always possible to get the part of the original sentence that corresponds to a given token.
Does all the pre-processing: truncate, pad, and add the special tokens your model needs (see the sketch just below).
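As a concrete illustration of the alignment tracking and the built-in pre-processing, here is a minimal sketch; "bert-base-uncased" is just an example pretrained checkpoint (loading it requires network access) and the padding length of 8 is arbitrary:

```python
from tokenizers import Tokenizer

# Any pretrained or self-trained tokenizer works; this checkpoint is only an example.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

sentence = "Tokenizers keep track of alignments."
output = tokenizer.encode(sentence)

# Each token knows which span of the original sentence it came from.
for token, (start, end) in zip(output.tokens, output.offsets):
    print(f"{token!r:15} -> {sentence[start:end]!r}")

# Truncation and padding are handled by the tokenizer itself.
tokenizer.enable_truncation(max_length=8)
tokenizer.enable_padding(length=8, pad_token="[PAD]")
padded = tokenizer.encode("Short input")
print(padded.tokens)          # includes the special and padding tokens
print(padded.attention_mask)  # 1 for real tokens, 0 for padding
```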
Performance can vary depending on hardware, but running ~/bindings/python/benches/test_tiktoken.py shows what to expect on your machine; the reference numbers were measured on an AWS g6 instance.
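If you only want a rough sanity check of throughput on your own hardware, a sketch along these lines is enough; the "gpt2" checkpoint and the corpus file name are placeholders, not part of the benchmark:

```python
import time

from tokenizers import Tokenizer

# Placeholder checkpoint and corpus file; substitute your own.
tokenizer = Tokenizer.from_pretrained("gpt2")

with open("big_corpus.txt", encoding="utf-8") as f:
    lines = f.read().splitlines()

start = time.perf_counter()
encodings = tokenizer.encode_batch(lines)  # batched encoding runs in parallel in Rust
elapsed = time.perf_counter() - start

n_bytes = sum(len(line.encode("utf-8")) for line in lines)
print(f"tokenized {n_bytes / 1e9:.2f} GB in {elapsed:.1f} s")
```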
We provide bindings to the following languages (more to come!):
Rust (Original implementation)
Python
Node.js
Ruby (Contributed by @ankane, external repo)
You can install from source using:
```bash
pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python
```

or install the released version with:

```bash
pip install tokenizers
```
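Either way, a quick check that the package is importable (the printed version will depend on what you installed):

```python
import tokenizers

print(tokenizers.__version__)
```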
Choose your model from Byte-Pair Encoding, WordPiece, or Unigram and instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
```
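If you want out-of-vocabulary characters to map to an unknown token, as in the example output further below, tell the model which token to use; [UNK] is simply the conventional choice used throughout this example:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

# With an unk_token set, characters never seen during training are encoded as [UNK].
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
```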
You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
```
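You can also run a pre-tokenizer on its own to see how it splits a string before the model is applied (the output shown in the comment is indicative):

```python
from tokenizers.pre_tokenizers import Whitespace

# Returns the pieces together with their character offsets in the original string.
print(Whitespace().pre_tokenize_str("Hello, y'all!"))
# e.g. [('Hello', (0, 5)), (',', (5, 6)), ('y', (7, 8)), ("'", (8, 9)), ('all', (9, 12)), ('!', (12, 13))]
```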
Then training your tokenizer on a set of files takes just two lines of code:
```python
from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
```
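Assuming the tokenizer trained above, you can then save it to a single JSON file and reload it later:

```python
from tokenizers import Tokenizer

# Serialize the trained tokenizer (vocabulary, merges and configuration) to one file...
tokenizer.save("tokenizer.json")

# ...and load it back in this or another process.
tokenizer = Tokenizer.from_file("tokenizer.json")
```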
Once your tokenizer is trained, encode any text with just one line:
```python
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]
```
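The returned Encoding carries more than the token strings; assuming the output from the snippet above:

```python
# Numeric ids, ready to feed to a model.
print(output.ids)

# And back from ids to text (special tokens are skipped by default).
print(tokenizer.decode(output.ids))
```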
Check the documentation or the quicktour to learn more!