Library used by Meilisearch to tokenize queries and documents
The tokenizer’s role is to take a sentence or phrase and split it into smaller units of language, called tokens. It finds and retrieves all the words in a string based on the language’s particularities.
Charabia provides a simple API to segment, normalize, or tokenize (segment + normalize) text in a specific language: it detects the text's Script/Language and chooses the specialized pipeline for it.
Charabia is multilingual, featuring optimized support for:
| Script / Language | Specialized segmentation | Specialized normalization | Segmentation performance level | Tokenization performance level |
|---|---|---|---|---|
| Latin | ✅ CamelCase segmentation | ✅ compatibility decomposition + lowercase + nonspacing-marks removal | 🟩 ~23 MiB/sec | 🟨 ~9 MiB/sec |
| Greek | ❌ | ✅ compatibility decomposition + lowercase + final sigma normalization | 🟩 ~27 MiB/sec | 🟨 ~8 MiB/sec |
| Cyrillic - Georgian | ❌ | ✅ compatibility decomposition + lowercase | 🟩 ~27 MiB/sec | 🟨 ~9 MiB/sec |
| Chinese (CMN) 🇨🇳 | ✅ jieba | ✅ compatibility decomposition + pinyin conversion | 🟨 ~10 MiB/sec | 🟧 ~5 MiB/sec |
| Hebrew 🇮🇱 | ❌ | ✅ compatibility decomposition + nonspacing-marks removal | 🟩 ~33 MiB/sec | 🟨 ~11 MiB/sec |
| Arabic | ✅ ال segmentation | ✅ compatibility decomposition + nonspacing-marks removal + Tatweel, Alef, Yeh, and Taa Marbuta normalization | 🟩 ~36 MiB/sec | 🟨 ~11 MiB/sec |
| Japanese 🇯🇵 | ✅ lindera IPA-dict | ❌ compatibility decomposition | 🟧 ~3 MiB/sec | 🟧 ~3 MiB/sec |
| Korean 🇰🇷 | ✅ lindera KO-dict | ❌ compatibility decomposition | 🟥 ~2 MiB/sec | 🟥 ~2 MiB/sec |
| Thai 🇹🇭 | ✅ dictionary based | ✅ compatibility decomposition + nonspacing-marks removal | 🟩 ~22 MiB/sec | 🟨 ~11 MiB/sec |
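As a concrete illustration of the "specialized segmentation" column, the snippet below runs the Latin pipeline on a camel-cased string using the `Segment` trait shown later in this README. The input string and the expected output are illustrative, derived from the CamelCase rule listed in the table, not copied from the crate's test suite.

```rust
use charabia::Segment;

fn main() {
    // The Latin pipeline splits on camelCase boundaries in addition to whitespace
    // and punctuation, so a compound identifier is broken into its words.
    let orig = "theQuickBrownFox";
    let segments: Vec<&str> = orig.segment_str().collect();

    // Expected to yield something like ["the", "Quick", "Brown", "Fox"].
    println!("{segments:?}");
}
```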
We aim to provide global language support, and your feedback helps us move closer to that goal. If you notice inconsistencies in your search results or the way your documents are processed, please open an issue on our GitHub repository.
If you have a particular need that charabia does not support, please share it in the product repository by creating a dedicated discussion.
Performance levels are based on the throughput (MiB/sec) of the tokenizer, computed on a scaleway Elastic Metal server EM-A410X-SSD (CPU: Intel Xeon E5 1650, RAM: 64 GB) using jemalloc (a rough measurement sketch follows the scale below):
- 0️⃣⬛️: 0 -> 1 MiB/sec
- 1️⃣🟥: 1 -> 3 MiB/sec
- 2️⃣🟧: 3 -> 8 MiB/sec
- 3️⃣🟨: 8 -> 20 MiB/sec
- 4️⃣🟩: 20 -> 50 MiB/sec
- 5️⃣🟪: 50 MiB/sec or more
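The figures in the table come from the benchmark setup described above. The snippet below is only a rough sketch of how a comparable throughput number can be measured locally with the `Tokenize` trait; the repeated sample corpus and the timing code are illustrative, not the project's actual benchmark harness.

```rust
use std::time::Instant;
use charabia::Tokenize;

fn main() {
    // Illustrative corpus: repeat a sample sentence until we have a few MiB of text.
    let corpus = "The quick (\"brown\") fox can't jump 32.3 feet, right?\n".repeat(100_000);
    let mib = corpus.len() as f64 / (1024.0 * 1024.0);

    let start = Instant::now();
    // Consume the whole token iterator so the full segment + normalize pipeline runs.
    let token_count = corpus.as_str().tokenize().count();
    let elapsed = start.elapsed().as_secs_f64();

    println!("{token_count} tokens, {:.1} MiB/sec", mib / elapsed);
}
```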
Tokenization example:

```rust
use charabia::Tokenize;

let orig = "Thé quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// tokenize the text.
let mut tokens = orig.tokenize();

let token = tokens.next().unwrap();
// the lemma of the token is normalized: `Thé` became `the`.
assert_eq!(token.lemma(), "the");
// the token is classified as a word
assert!(token.is_word());

let token = tokens.next().unwrap();
assert_eq!(token.lemma(), " ");
// the token is classified as a separator
assert!(token.is_separator());
```
Segmentation example:

```rust
use charabia::Segment;

let orig = "The quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// segment the text.
let mut segments = orig.segment_str();

assert_eq!(segments.next(), Some("The"));
assert_eq!(segments.next(), Some(" "));
assert_eq!(segments.next(), Some("quick"));
```
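The examples above call the `Segment` and `Tokenize` traits directly on a `&str`. The pipeline can also be configured explicitly before tokenizing. The sketch below is a hedged outline: it assumes charabia's `TokenizerBuilder` type with a `stop_words` setter backed by an `fst::Set`, a `build` method returning a `Tokenizer`, and a `tokenize` method on that tokenizer, and it assumes the `fst` crate as a dependency; check the crate documentation for the exact signatures.

```rust
use charabia::TokenizerBuilder;
use fst::Set; // assumed dependency for the stop-word set

fn main() {
    let orig = "The quick (\"brown\") fox can't jump 32.3 feet, right?";

    // Stop words are provided as an fst::Set; keys must be inserted in lexicographic order.
    let stop_words: Set<Vec<u8>> = Set::from_iter(["the"].iter()).unwrap();

    // Configure the builder, then build a reusable tokenizer.
    let mut builder = TokenizerBuilder::new();
    builder.stop_words(&stop_words);
    let tokenizer = builder.build();

    // Tokenize with the configured pipeline and print each normalized lemma.
    for token in tokenizer.tokenize(orig) {
        println!("{}", token.lemma());
    }
}
```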