Charabia

Library used by Meilisearch to tokenize queries and documents

Role

The tokenizer’s role is to take a sentence or phrase and split it into smaller units of language, called tokens. It finds and retrieves all the words in a string based on the language’s particularities.

Details

Charabia provides a simple API to segment, normalize, or tokenize (segment + normalize) a text of a specific language by detecting its Script/Language and choosing the specialized pipeline for it.
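As a quick illustration of that detection step, here is a minimal sketch; it assumes the `Token` struct exposes the detected script and language as the public fields `script` and `language`, as in recent charabia releases:

```rust
use charabia::Tokenize;

let orig = "The quick brown fox";

// The first token of a Latin-script text: the detected Script is what
// selects the specialized segmentation/normalization pipeline.
let token = orig.tokenize().next().unwrap();
println!("script: {:?}, language: {:?}", token.script, token.language);
```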

Supported languages

Charabia is multilingual, featuring optimized support for:

| Script / Language | Specialized segmentation | Specialized normalization | Segmentation performance level | Tokenization performance level |
|---|---|---|---|---|
| Latin | ✅ CamelCase segmentation | compatibility decomposition + lowercase + nonspacing-marks removal + Ð vs Đ spoofing normalization | 🟩 ~23MiB/sec | 🟨 ~9MiB/sec |
| Greek | | compatibility decomposition + lowercase + final sigma normalization | 🟩 ~27MiB/sec | 🟨 ~8MiB/sec |
| Cyrillic - Georgian | | compatibility decomposition + lowercase | 🟩 ~27MiB/sec | 🟨 ~9MiB/sec |
| Chinese CMN 🇨🇳 | ✅ jieba | compatibility decomposition + kvariant conversion | 🟨 ~10MiB/sec | 🟧 ~5MiB/sec |
| Hebrew 🇮🇱 | | compatibility decomposition + nonspacing-marks removal | 🟩 ~33MiB/sec | 🟨 ~11MiB/sec |
| Arabic | ✅ ال segmentation | compatibility decomposition + nonspacing-marks removal + [Tatweel, Alef, Yeh, and Taa Marbuta normalization] | 🟩 ~36MiB/sec | 🟨 ~11MiB/sec |
| Japanese 🇯🇵 | ✅ lindera IPA-dict | compatibility decomposition | 🟧 ~3MiB/sec | 🟧 ~3MiB/sec |
| Korean 🇰🇷 | ✅ lindera KO-dict | compatibility decomposition | 🟥 ~2MiB/sec | 🟥 ~2MiB/sec |
| Thai 🇹🇭 | ✅ dictionary based | compatibility decomposition + nonspacing-marks removal | 🟩 ~22MiB/sec | 🟨 ~11MiB/sec |
| Khmer 🇰🇭 | ✅ dictionary based | compatibility decomposition | 🟧 ~7MiB/sec | 🟧 ~5MiB/sec |

We aim to provide global language support, and your feedback helps us move closer to that goal. If you notice inconsistencies in your search results or the way your documents are processed, please open an issue on our GitHub repository.

If you have a particular need that charabia does not support, please share it in the product repository by creating a dedicated discussion.

About Performance level

Performance levels are based on the throughput (MiB/sec) of the tokenizer, computed on a Scaleway Elastic Metal server EM-A410X-SSD (CPU: Intel Xeon E5 1650, RAM: 64 GB) using jemalloc:

  • 0️⃣⬛️: 0 -> 1 MiB/sec
  • 1️⃣🟥: 1 -> 3 MiB/sec
  • 2️⃣🟧: 3 -> 8 MiB/sec
  • 3️⃣🟨: 8 -> 20 MiB/sec
  • 4️⃣🟩: 20 -> 50 MiB/sec
  • 5️⃣🟪: 50 MiB/sec or more
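Expressed as code, the bucketing above is a simple range match; `performance_level` below is a hypothetical helper for illustration only, not part of charabia:

```rust
/// Map a measured throughput (MiB/sec) to the level shown in the table.
/// Hypothetical helper, for illustration only.
fn performance_level(mib_per_sec: f64) -> &'static str {
    match mib_per_sec {
        t if t < 1.0 => "0️⃣⬛️",
        t if t < 3.0 => "1️⃣🟥",
        t if t < 8.0 => "2️⃣🟧",
        t if t < 20.0 => "3️⃣🟨",
        t if t < 50.0 => "4️⃣🟩",
        _ => "5️⃣🟪",
    }
}
```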

Examples

Tokenization

```rust
use charabia::Tokenize;

let orig = "Thé quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// tokenize the text.
let mut tokens = orig.tokenize();

let token = tokens.next().unwrap();
// the lemma inside the token is normalized: `Thé` became `the`.
assert_eq!(token.lemma(), "the");
// the token is classified as a word.
assert!(token.is_word());

let token = tokens.next().unwrap();
assert_eq!(token.lemma(), " ");
// the token is classified as a separator.
assert!(token.is_separator());
```
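When tokenizing many documents, it is cheaper to build one tokenizer and reuse it. A minimal sketch, assuming a default-configured `TokenizerBuilder` (check the crate docs for the available builder options):

```rust
use charabia::TokenizerBuilder;

// Build a reusable, default-configured tokenizer once...
let mut builder = TokenizerBuilder::default();
let tokenizer = builder.build();

// ...then apply it to as many texts as needed.
for token in tokenizer.tokenize("Thé quick fox") {
    println!("{}", token.lemma());
}
```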

Segmentation

```rust
use charabia::Segment;

let orig = "The quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// segment the text.
let mut segments = orig.segment_str();

assert_eq!(segments.next(), Some("The"));
assert_eq!(segments.next(), Some(" "));
assert_eq!(segments.next(), Some("quick"));
```
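Segmentation can also keep track of where each piece sits in the original text. A minimal sketch, assuming the `Segment` trait's `segment` method (which yields unnormalized `Token`s) and the public `byte_start`/`byte_end` fields:

```rust
use charabia::Segment;

let orig = "The quick fox";

// `segment` yields Tokens (before normalization) rather than &str slices,
// so each segment carries its byte range in the original text.
for token in orig.segment() {
    println!("{:?} at bytes {}..{}", token.lemma(), token.byte_start, token.byte_end);
}
```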
