📐 Compute distance between sequences. 30+ algorithms, pure python implementation, common interface, optional external libs usage.

life4/textdistance


TextDistance -- a Python library for computing the distance between two or more sequences using many algorithms.

Features:

  • 30+ algorithms
  • Pure Python implementation
  • Simple usage
  • Comparison of more than two sequences
  • Some algorithms have more than one implementation in one class
  • Optional numpy usage for maximum speed

Algorithms

Edit based

| Algorithm | Class | Functions |
| --- | --- | --- |
| Hamming | Hamming | hamming |
| MLIPNS | MLIPNS | mlipns |
| Levenshtein | Levenshtein | levenshtein |
| Damerau-Levenshtein | DamerauLevenshtein | damerau_levenshtein |
| Jaro-Winkler | JaroWinkler | jaro_winkler, jaro |
| Strcmp95 | StrCmp95 | strcmp95 |
| Needleman-Wunsch | NeedlemanWunsch | needleman_wunsch |
| Gotoh | Gotoh | gotoh |
| Smith-Waterman | SmithWaterman | smith_waterman |

Token based

| Algorithm | Class | Functions |
| --- | --- | --- |
| Jaccard index | Jaccard | jaccard |
| Sørensen–Dice coefficient | Sorensen | sorensen, sorensen_dice, dice |
| Tversky index | Tversky | tversky |
| Overlap coefficient | Overlap | overlap |
| Tanimoto distance | Tanimoto | tanimoto |
| Cosine similarity | Cosine | cosine |
| Monge-Elkan | MongeElkan | monge_elkan |
| Bag distance | Bag | bag |

Sequence based

| Algorithm | Class | Functions |
| --- | --- | --- |
| longest common subsequence similarity | LCSSeq | lcsseq |
| longest common substring similarity | LCSStr | lcsstr |
| Ratcliff-Obershelp similarity | RatcliffObershelp | ratcliff_obershelp |

Compression based

Normalized compression distance with different compression algorithms.

Classic compression algorithms:

| Algorithm | Class | Function |
| --- | --- | --- |
| Arithmetic coding | ArithNCD | arith_ncd |
| RLE | RLENCD | rle_ncd |
| BWT RLE | BWTRLENCD | bwtrle_ncd |

Normal compression algorithms:

| Algorithm | Class | Function |
| --- | --- | --- |
| Square Root | SqrtNCD | sqrt_ncd |
| Entropy | EntropyNCD | entropy_ncd |

Work-in-progress algorithms that compare two strings as arrays of bits:

| Algorithm | Class | Function |
| --- | --- | --- |
| BZ2 | BZ2NCD | bz2_ncd |
| LZMA | LZMANCD | lzma_ncd |
| ZLib | ZLIBNCD | zlib_ncd |

See the blog post for more details about NCD.
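
The idea behind normalized compression distance can be sketched with zlib, using the standard formula NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed size. This is an illustrative sketch of the general technique, not the library's exact code:

```python
import zlib

def compressed_size(data: bytes) -> int:
    # size of the zlib-compressed representation
    return len(zlib.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    # normalized compression distance: near 0 for similar inputs,
    # near 1 for unrelated inputs
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

similar = b"the quick brown fox jumps over the lazy dog" * 3
related = b"the quick brown fox jumps over the lazy cat" * 3
unrelated = b"zyxwvutsrqponmlkjihgfedcba 0123456789 !@#$%^&*" * 3

# related texts compress better together than unrelated ones,
# so their NCD is smaller
print(ncd(similar, related) < ncd(similar, unrelated))
```

The different NCD classes in the table above plug different compressors into this same scheme.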

Phonetic

| Algorithm | Class | Functions |
| --- | --- | --- |
| MRA | MRA | mra |
| Editex | Editex | editex |

Simple

| Algorithm | Class | Functions |
| --- | --- | --- |
| Prefix similarity | Prefix | prefix |
| Postfix similarity | Postfix | postfix |
| Length distance | Length | length |
| Identity similarity | Identity | identity |
| Matrix similarity | Matrix | matrix |

Installation

Stable

Only pure python implementation:

pip install textdistance

With extra libraries for maximum speed:

pip install "textdistance[extras]"

With all libraries (required for benchmarking and testing):

pip install "textdistance[benchmark]"

With algorithm specific extras:

pip install "textdistance[Hamming]"

Algorithms with available extras: DamerauLevenshtein, Hamming, Jaro, JaroWinkler, Levenshtein.

Dev

Via pip:

pip install -e git+https://github.com/life4/textdistance.git#egg=textdistance

Or clone repo and install with some extras:

git clone https://github.com/life4/textdistance.git
pip install -e ".[benchmark]"

Usage

All algorithms have 2 interfaces:

  1. Class with algorithm-specific params for customizing.
  2. Class instance with default params for quick and simple usage.

All algorithms have some common methods:

  1. .distance(*sequences) -- calculate the distance between sequences.
  2. .similarity(*sequences) -- calculate the similarity of sequences.
  3. .maximum(*sequences) -- the maximum possible value for distance and similarity. For any sequences: distance + similarity == maximum.
  4. .normalized_distance(*sequences) -- normalized distance between sequences. The return value is a float between 0 and 1, where 0 means equal and 1 means totally different.
  5. .normalized_similarity(*sequences) -- normalized similarity for sequences. The return value is a float between 0 and 1, where 0 means totally different and 1 means equal.
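
These invariants can be illustrated with a toy Hamming metric (a sketch for illustration only, not the library's actual implementation):

```python
# Toy Hamming metric demonstrating the shared interface invariants.
# Illustrative sketch only -- not textdistance's internal code.
def maximum(s1, s2):
    return max(len(s1), len(s2))

def distance(s1, s2):
    # positions that differ, plus the length difference
    return sum(a != b for a, b in zip(s1, s2)) + abs(len(s1) - len(s2))

def similarity(s1, s2):
    return maximum(s1, s2) - distance(s1, s2)

d = distance('test', 'text')    # 1
s = similarity('test', 'text')  # 3
m = maximum('test', 'text')     # 4
assert d + s == m               # distance + similarity == maximum
assert d / m + s / m == 1       # normalized values sum to 1
```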

Most common init arguments:

  1. qval -- q-value for splitting sequences into q-grams. Possible values:
    • 1 (default) -- compare sequences by chars.
    • 2 or more -- transform sequences to q-grams.
    • None -- split sequences by words.
  2. as_set -- for token-based algorithms:
    • True -- t and ttt are equal.
    • False (default) -- t and ttt are different.
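
As a sketch of how qval behaves (an illustration based on the descriptions above, assuming overlapping q-grams; not the library's internal code):

```python
# Illustrative sketch of qval semantics (assumption: overlapping q-grams).
def split_units(seq, qval=1):
    if qval is None:
        return seq.split()  # compare word by word
    if qval == 1:
        return list(seq)    # compare char by char
    # q-grams: overlapping windows of length qval
    return [seq[i:i + qval] for i in range(len(seq) - qval + 1)]

print(split_units('test'))              # ['t', 'e', 's', 't']
print(split_units('test', qval=2))      # ['te', 'es', 'st']
print(split_units('a b c', qval=None))  # ['a', 'b', 'c']

# as_set=True collapses repeated tokens, so 't' and 'ttt' compare equal:
assert set('ttt') == set('t')
```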

Examples

For example, Hamming distance:

import textdistance

textdistance.hamming('test', 'text')                        # 1
textdistance.hamming.distance('test', 'text')               # 1
textdistance.hamming.similarity('test', 'text')             # 3
textdistance.hamming.normalized_distance('test', 'text')    # 0.25
textdistance.hamming.normalized_similarity('test', 'text')  # 0.75
textdistance.Hamming(qval=2).distance('test', 'text')       # 2

All other algorithms have the same interface.

Articles

A few articles with examples of how to use textdistance in the real world:

Extra libraries

For the main algorithms, textdistance tries to call known external libraries (fastest first) when they are available (installed on your system) and applicable (the implementation can compare the given type of sequences). Install textdistance with extras for this feature.

You can disable this by passing the external=False argument on init:

import textdistance

hamming = textdistance.Hamming(external=False)
hamming('text', 'testit')  # 3

Supported libraries:

  1. jellyfish
  2. py_stringmatching
  3. pylev
  4. Levenshtein
  5. pyxDamerauLevenshtein

Algorithms:

  1. DamerauLevenshtein
  2. Hamming
  3. Jaro
  4. JaroWinkler
  5. Levenshtein

Benchmarks

Without extras installed:

| algorithm | library | time |
| --- | --- | --- |
| DamerauLevenshtein | rapidfuzz | 0.00312 |
| DamerauLevenshtein | jellyfish | 0.00591 |
| DamerauLevenshtein | pyxdameraulevenshtein | 0.03335 |
| DamerauLevenshtein | textdistance | 0.83524 |
| Hamming | Levenshtein | 0.00038 |
| Hamming | rapidfuzz | 0.00044 |
| Hamming | jellyfish | 0.00091 |
| Hamming | textdistance | 0.03531 |
| Jaro | rapidfuzz | 0.00092 |
| Jaro | jellyfish | 0.00191 |
| Jaro | textdistance | 0.07365 |
| JaroWinkler | rapidfuzz | 0.00094 |
| JaroWinkler | jellyfish | 0.00195 |
| JaroWinkler | textdistance | 0.07501 |
| Levenshtein | rapidfuzz | 0.00099 |
| Levenshtein | Levenshtein | 0.00122 |
| Levenshtein | jellyfish | 0.00254 |
| Levenshtein | pylev | 0.15688 |
| Levenshtein | textdistance | 0.53902 |

Total: 24 libs.

Yeah, so slow. Use TextDistance in production only with extras.

TextDistance uses these benchmark results to optimize algorithm dispatch and tries to call the fastest external lib first (when possible).

You can run the benchmark manually on your system:

pip install textdistance[benchmark]
python3 -m textdistance.benchmark

TextDistance shows the benchmark results table for your system and saves library priorities into the libraries.json file in TextDistance's folder. This file is then used by textdistance to call the fastest available implementation of each algorithm. A default libraries.json is already included in the package.

Running tests

All you need is task. See Taskfile.yml for the list of available commands. For example, to run tests including third-party library usage, execute task pytest-external:run.

Contributing

PRs are welcome!

  • Found a bug? Fix it!
  • Want to add more algorithms? Sure! Just make it with the same interface as other algorithms in the lib and add some tests.
  • Can make something faster? Great! Just avoid external dependencies and remember that everything should work not only with strings.
  • Something else that you think is good? Do it! Just make sure that CI passes and everything from the README is still applicable (interface, features, and so on).
  • Have no time to code? Tell your friends and subscribers about textdistance. More users, more contributions, more amazing features.

Thank you ❤️

