Fast Semantic Text Deduplication

SemHash logo

SemHash is a lightweight and flexible tool for deduplicating datasets using semantic similarity. It combines fast embedding generation from Model2Vec with efficient ANN-based similarity search through Vicinity.

SemHash supports both single-dataset deduplication (e.g., cleaning up a train set) and multi-dataset deduplication (e.g., ensuring no overlap between a test set and a train set). It works with simple datasets, such as text lists, and more complex ones, like multi-column QA datasets. Additionally, it includes functions to inspect deduplication results, making it easier to understand and refine your data cleaning process.

Quickstart

Install the package with:

pip install semhash

Deduplicate a single dataset with the following code (note: the examples assume you have datasets installed, which you can install with pip install datasets):

from datasets import load_dataset
from semhash import SemHash

# Load a dataset to deduplicate
texts = load_dataset("ag_news", split="train")["text"]

# Initialize a SemHash instance
semhash = SemHash.from_records(records=texts)

# Deduplicate the texts
deduplicated_texts = semhash.self_deduplicate().deduplicated

Or, deduplicate across two datasets with the following code (e.g., eliminating train/test leakage):

from datasets import load_dataset
from semhash import SemHash

# Load two datasets to deduplicate
train_texts = load_dataset("ag_news", split="train")["text"]
test_texts = load_dataset("ag_news", split="test")["text"]

# Initialize a SemHash instance with the training data
semhash = SemHash.from_records(records=train_texts)

# Deduplicate the test data against the training data, optionally with a specific threshold
deduplicated_test_texts = semhash.deduplicate(records=test_texts, threshold=0.9).deduplicated

Or, deduplicate multi-column datasets with the following code (e.g., deduplicating a QA dataset):

from datasets import load_dataset
from semhash import SemHash

# Load the dataset
dataset = load_dataset("squad_v2", split="train")

# Convert the dataset to a list of dictionaries
records = [dict(row) for row in dataset]

# Initialize SemHash with the columns to deduplicate
semhash = SemHash.from_records(records=records, columns=["question", "context"])

# Deduplicate the records
deduplicated_records = semhash.self_deduplicate().deduplicated

The deduplicate and self_deduplicate functions return a DeduplicationResult. This object stores the deduplicated corpus, a set of duplicate objects (along with the objects that caused the duplication), and several useful functions to further inspect the deduplication result. Examples of how these functions can be used can be found in the usage section.

Main Features

  • Fast: SemHash uses model2vec to embed texts and vicinity to perform similarity search, making it extremely fast.
  • Scalable: SemHash can deduplicate large datasets with millions of records thanks to the ANN backends in Vicinity.
  • Flexible: SemHash can be used to deduplicate a single dataset or across two datasets, and can also be used to deduplicate multi-column datasets (such as QA datasets).
  • Lightweight: SemHash is a lightweight package with minimal dependencies, making it easy to install and use.
  • Explainable: Easily inspect the duplicates and what caused them with the DeduplicationResult object. You can also view the lowest-similarity duplicates to find the right threshold for deduplication for your dataset.

Usage

The following examples show the various ways you can use SemHash to deduplicate datasets. These examples assume you have the datasets library installed, which you can install with pip install datasets.

Deduplicate a single dataset

The following code snippet shows how to deduplicate a single dataset using SemHash (in this example, the train split of the AG News dataset):

from datasets import load_dataset
from semhash import SemHash

# Load a dataset to deduplicate
texts = load_dataset("ag_news", split="train")["text"]

# Initialize a SemHash instance
semhash = SemHash.from_records(records=texts)

# Deduplicate the texts
deduplicated_texts = semhash.self_deduplicate().deduplicated
Deduplicate across two datasets

The following code snippet shows how to deduplicate across two datasets using SemHash (in this example, the train/test split of the AG News dataset):

from datasets import load_dataset
from semhash import SemHash

# Load two datasets to deduplicate
train_texts = load_dataset("ag_news", split="train")["text"]
test_texts = load_dataset("ag_news", split="test")["text"]

# Initialize a SemHash instance with the training data
semhash = SemHash.from_records(records=train_texts)

# Deduplicate the test data against the training data
deduplicated_test_texts = semhash.deduplicate(records=test_texts).deduplicated
Deduplicate multi-column datasets

The following code snippet shows how to deduplicate multi-column datasets using SemHash (in this example, the train split of the QA dataset SQuAD 2.0, which consists of questions, contexts, and answers):

from datasets import load_dataset
from semhash import SemHash

# Load the dataset
dataset = load_dataset("squad_v2", split="train")

# Convert the dataset to a list of dictionaries
records = [dict(row) for row in dataset]

# Initialize SemHash with the columns to deduplicate
semhash = SemHash.from_records(records=records, columns=["question", "context"])

# Deduplicate the records
deduplicated_records = semhash.self_deduplicate().deduplicated
DeduplicationResult functionality

The DeduplicationResult object returned by the deduplicate and self_deduplicate functions contains several useful functions to inspect the deduplication result. The following code snippet shows how to use these functions:

from datasets import load_dataset
from semhash import SemHash

# Load a dataset to deduplicate
texts = load_dataset("ag_news", split="train")["text"]

# Initialize a SemHash instance
semhash = SemHash.from_records(records=texts)

# Deduplicate the texts
deduplication_result = semhash.self_deduplicate()

# Check the deduplicated texts
deduplication_result.deduplicated

# Check the duplicates
deduplication_result.duplicates

# See what percentage of the texts were duplicates
deduplication_result.duplicate_ratio

# See what percentage of the texts were exact duplicates
deduplication_result.exact_duplicate_ratio

# Get the least similar text from the duplicates.
# This is useful for finding the right threshold for deduplication.
least_similar = deduplication_result.get_least_similar_from_duplicates()

# Rethreshold the duplicates. This allows you to instantly rethreshold the
# duplicates with a new threshold without having to re-deduplicate the texts.
deduplication_result.rethreshold(0.95)
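Each entry in deduplication_result.duplicates also records which retained records caused it to be flagged, which is what makes the results explainable. The snippet below is a minimal sketch of inspecting those entries; the attribute names record, exact, and duplicates on the duplicate entries are assumptions and may differ between versions, so check your installed version if they do:

# Minimal sketch: inspect which records caused each duplicate.
# NOTE: the attribute names `record`, `exact`, and `duplicates` are
# assumptions about the duplicate entries; verify against your version.
for duplicate in deduplication_result.duplicates[:5]:
    print("Removed record:", duplicate.record)
    print("Exact duplicate:", duplicate.exact)
    # Each entry carries the retained records that triggered the match, with scores
    for original, score in duplicate.duplicates:
        print(f"  duplicate of (score={score:.2f}): {original}")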
Using custom encoders

The following code snippet shows how to use a custom encoder with SemHash:

from datasets import load_dataset
from model2vec import StaticModel
from semhash import SemHash

# Load a dataset to deduplicate
texts = load_dataset("ag_news", split="train")["text"]

# Load an embedding model (in this example, a multilingual model)
model = StaticModel.from_pretrained("minishlab/M2V_multilingual_output")

# Initialize a SemHash instance with the custom model
semhash = SemHash.from_records(records=texts, model=model)

# Deduplicate the texts
deduplicated_texts = semhash.self_deduplicate().deduplicated

Any encoder can be used that adheres to our encoder protocol. For example, any sentence-transformers model can be used as an encoder:

from datasets import load_dataset
from semhash import SemHash
from sentence_transformers import SentenceTransformer

# Load a dataset to deduplicate
texts = load_dataset("ag_news", split="train")["text"]

# Load a sentence-transformers model
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Initialize a SemHash instance with the custom model
semhash = SemHash.from_records(records=texts, model=model)

# Deduplicate the texts
deduplicated_texts = semhash.self_deduplicate().deduplicated
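You can also write your own encoder. The sketch below assumes the encoder protocol only requires an encode method that maps a list of strings to a 2D numpy array of embeddings (check the encoder protocol definition in the package for the exact signature); CharNGramEncoder is a hypothetical toy encoder for illustration, and texts is the list loaded in the examples above:

import numpy as np
from semhash import SemHash

class CharNGramEncoder:
    """Hypothetical toy encoder: hashed character-trigram counts.

    Assumes the encoder protocol only requires an `encode` method that
    turns a list of strings into a 2D numpy array of embeddings.
    """

    def __init__(self, dim: int = 256) -> None:
        self.dim = dim

    def encode(self, sentences: list[str], **kwargs) -> np.ndarray:
        embeddings = np.zeros((len(sentences), self.dim), dtype=np.float32)
        for i, sentence in enumerate(sentences):
            # Hash each character trigram into a fixed-size bucket
            for j in range(len(sentence) - 2):
                embeddings[i, hash(sentence[j:j + 3]) % self.dim] += 1.0
        # L2-normalize so cosine similarity behaves sensibly
        norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
        return embeddings / np.clip(norms, 1e-12, None)

semhash = SemHash.from_records(records=texts, model=CharNGramEncoder())
deduplicated_texts = semhash.self_deduplicate().deduplicated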
Using Pandas DataFrames

You can easily use Pandas DataFrames with SemHash. The following code snippet shows how to deduplicate a Pandas DataFrame:

import pandas as pd
from datasets import load_dataset
from semhash import SemHash

# Load a dataset as a pandas DataFrame
dataframe = load_dataset("ag_news", split="train").to_pandas()

# Convert the DataFrame to a list of dictionaries
records = dataframe.to_dict(orient="records")

# Initialize a SemHash instance with the columns to deduplicate
semhash = SemHash.from_records(records=records, columns=["text"])

# Deduplicate the texts
deduplicated_records = semhash.self_deduplicate().deduplicated

# Convert the deduplicated records back to a pandas DataFrame
deduplicated_dataframe = pd.DataFrame(deduplicated_records)

NOTE: By default, we use the ANN (approximate nearest neighbors) backend for deduplication. We recommend keeping this setting, since recall on smaller datasets is ~100% and ANN is needed for larger datasets (>1M samples), which would take too long to deduplicate without it. If you want to use the flat/exact-matching backend instead, set use_ann=False in the SemHash constructor:

semhash = SemHash.from_records(records=texts, use_ann=False)
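If you want to sanity-check ANN recall on your own data before committing to it, one option is to run both backends on a sample and compare the duplicate ratios. This is a minimal sketch using only the API shown above, where texts is the list from the earlier examples:

# Minimal sketch: compare the ANN and exact backends on a sample
sample = texts[:10000]
ann_result = SemHash.from_records(records=sample).self_deduplicate()
flat_result = SemHash.from_records(records=sample, use_ann=False).self_deduplicate()
print("ANN duplicate ratio:  ", ann_result.duplicate_ratio)
print("Exact duplicate ratio:", flat_result.duplicate_ratio)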

Benchmarks

We've benchmarked SemHash on a variety of datasets to measure the deduplication performance and speed. The benchmarks were run with the following setup:

  • All benchmarks were run on CPU.
  • All benchmarks were run with use_ann=True.
  • The encoder is the default encoder (potion-base-8M).
  • The timings include encoding time, index building time, and deduplication time.

Train Deduplication Benchmark

Dataset | Original Train Size | Deduplicated Train Size | % Removed | Deduplication Time (s)
------- | ------------------- | ----------------------- | --------- | ----------------------
bbc | 1225 | 1144 | 6.61 | 0.57
senteval_cr | 3012 | 2990 | 0.73 | 0.14
tweet_sentiment_extraction | 27481 | 26695 | 2.86 | 1.77
emotion | 16000 | 15695 | 1.91 | 0.77
amazon_counterfactual | 5000 | 4992 | 0.16 | 0.33
ag_news | 120000 | 106921 | 10.90 | 5.20
enron_spam | 31716 | 20540 | 35.24 | 2.03
subj | 8000 | 7990 | 0.12 | 0.63
sst5 | 8544 | 8526 | 0.21 | 0.58
20_newgroups | 11314 | 10684 | 5.57 | 0.73
hatespeech_offensive | 22783 | 22090 | 3.04 | 0.92
ade | 17637 | 15718 | 10.88 | 0.73
imdb | 25000 | 24830 | 0.68 | 1.76
massive_scenario | 11514 | 9366 | 18.66 | 0.47
student | 117519 | 63856 | 45.66 | 8.80
squad_v2 | 130319 | 109698 | 15.82 | 8.81
wikitext | 1801350 | 884645 | 50.89 | 83.53

Train/Test Deduplication Benchmark

Dataset | Train Size | Test Size | Deduplicated Test Size | % Removed | Deduplication Time (s)
------- | ---------- | --------- | ---------------------- | --------- | ----------------------
bbc | 1225 | 1000 | 870 | 13.00 | 0.71
senteval_cr | 3012 | 753 | 750 | 0.40 | 0.13
tweet_sentiment_extraction | 27481 | 3534 | 3412 | 3.45 | 1.53
emotion | 16000 | 2000 | 1926 | 3.70 | 0.65
amazon_counterfactual | 5000 | 5000 | 4990 | 0.20 | 0.51
ag_news | 120000 | 7600 | 6198 | 18.45 | 3.74
enron_spam | 31716 | 2000 | 1060 | 47.00 | 1.94
subj | 8000 | 2000 | 1999 | 0.05 | 0.62
sst5 | 8544 | 2210 | 2205 | 0.23 | 0.59
20_newgroups | 11314 | 7532 | 7098 | 5.76 | 2.25
hatespeech_offensive | 22783 | 2000 | 1925 | 3.75 | 0.77
ade | 17637 | 5879 | 4952 | 15.77 | 0.81
imdb | 25000 | 25000 | 24795 | 0.82 | 2.81
massive_scenario | 11514 | 2974 | 2190 | 26.36 | 0.46
student | 117519 | 5000 | 2393 | 52.14 | 3.78
squad_v2 | 130319 | 11873 | 11863 | 0.08 | 7.13
wikitext | 1801350 | 4358 | 2139 | 50.92 | 40.32

As can be seen, SemHash is extremely fast, and scales to large datasets with millions of records. There are some notable examples of train/test leakage, such as enron_spam and student, where the test dataset contains a significant amount of semantic overlap with the training dataset.

Reproducing the Benchmarks

To run the benchmarks yourself, you can use the following command (assuming you have the datasets library installed):

python -m benchmarks.run_benchmarks

Optionally, the datasets can be updated in the datasets.py file.

License

MIT

Citing

If you use SemHash in your research, please cite the following:

@software{minishlab2025semhash,
  author = {Thomas van Dongen and Stephan Tulkens},
  title = {SemHash: Fast Semantic Text Deduplication},
  year = {2025},
  url = {https://github.com/MinishLab/semhash}
}
