# jieba-rs

The Jieba Chinese Word Segmentation Implemented in Rust
🚀 Help me to become a full-time open-source developer by sponsoring me on GitHub
## Installation

Add it to your `Cargo.toml`:

```toml
[dependencies]
jieba-rs = "0.7"
```
then you are good to go. If you are using Rust 2015, you also need to add `extern crate jieba_rs;` to your crate root.
## Example

```rust
use jieba_rs::Jieba;

fn main() {
    let jieba = Jieba::new();
    let words = jieba.cut("我们中出了一个叛徒", false);
    assert_eq!(words, vec!["我们", "中", "出", "了", "一个", "叛徒"]);
}
```
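Beyond the basic `cut`, the `Jieba` struct also offers full-mode and search-mode segmentation. A minimal sketch of those methods (results printed rather than asserted, since the exact tokens depend on the bundled dictionary):

```rust
use jieba_rs::Jieba;

fn main() {
    let jieba = Jieba::new();

    // HMM enabled: the character-level model can join characters into
    // words that are missing from the dictionary
    println!("{:?}", jieba.cut("我们中出了一个叛徒", true));

    // Full mode: emit every dictionary word found in the sentence
    println!("{:?}", jieba.cut_all("中华人民共和国"));

    // Search mode: additionally split long words into shorter tokens,
    // which is useful when building search-engine indexes
    println!("{:?}", jieba.cut_for_search("中华人民共和国", true));
}
```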
## Enabling Additional Features

- `default-dict` feature enables the embedded dictionary; this feature is enabled by default
- `tfidf` feature enables the TF-IDF keywords extractor
- `textrank` feature enables the TextRank keywords extractor
```toml
[dependencies]
jieba-rs = { version = "0.7", features = ["tfidf", "textrank"] }
```
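With those features enabled, keyword extraction goes through the `KeywordExtract` trait. A sketch assuming the 0.7-era API, where `TfIdf` and `TextRank` both implement the trait; the extractor interface has changed between releases, so check the crate docs for your version:

```rust
// NOTE: extractor API assumed from the 0.7 release; adjust for your version
use jieba_rs::{Jieba, KeywordExtract, TfIdf};

fn main() {
    let jieba = Jieba::new();
    let extractor = TfIdf::default();

    // Extract the top 3 keywords, restricted to the given POS tags
    // (ns: place name, n: noun, vn: verbal noun, v: verb)
    let top_k = extractor.extract_keywords(
        &jieba,
        "今天纽约的天气真好啊，京华大酒店的张尧经理吃了一只北京烤鸭",
        3,
        vec!["ns".to_owned(), "n".to_owned(), "vn".to_owned(), "v".to_owned()],
    );

    // Each Keyword carries the extracted term and its weight
    for kw in top_k {
        println!("{}: {}", kw.keyword, kw.weight);
    }
}
```

Under the same assumption, `TextRank::default()` can be swapped in for `TfIdf::default()` with an identical call.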
## Run benchmark

```bash
cargo bench --all-features
```
## Who is using jieba-rs?

- `@node-rs/jieba`: NodeJS binding
- `jieba-php`: PHP binding
- `rjieba-py`: Python binding
- `cang-jie`: Chinese tokenizer for tantivy
- `tantivy-jieba`: an adapter that bridges between tantivy and jieba-rs
- `jieba-wasm`: the WebAssembly binding
## License

This work is released under the MIT license. A copy of the license is provided in the LICENSE file.