Batch upsert documents#1539


Draft

levkk wants to merge 1 commit into master from levkk-sdk-batching


Conversation

@levkk (Contributor) commented Jun 19, 2024 (edited)

Features

  1. [SDK] Add support for automatic batching of document upserts. Looking for review on the API before moving forward with the PR, e.g. before adding tests.
```python
from pgml import Collection, Batch

collection = Collection("my_collection")
batch = Batch(collection, 25, {"merge": True})

await batch.upsert_documents([{"id": 1}])  # Doesn't upsert yet

for i in range(23):
    await batch.upsert_documents([{"id": i}])  # Doesn't upsert yet

# Upserts whatever is in the current batch
# and appends the document to the next batch
await batch.upsert_documents([{"id": 1}])

# Upserts the final batch
await batch.finish()
```
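A minimal, dependency-free sketch of the auto-batching behavior proposed above. Everything here is illustrative, not part of the pgml SDK: `BatchSketch` and the `flush` callback stand in for `Batch` and `Collection.upsert_documents`, and this version simply flushes as soon as the buffer fills, which may differ slightly from the exact threshold semantics in the PR:

```python
class BatchSketch:
    """Hypothetical sketch: buffers documents and flushes them in batches."""

    def __init__(self, flush, batch_size):
        self.flush = flush            # called with each full batch of documents
        self.batch_size = batch_size
        self.buffer = []

    def upsert_documents(self, documents):
        for doc in documents:
            self.buffer.append(doc)
            if len(self.buffer) == self.batch_size:
                # Buffer is full: write it out and start a new batch.
                self.flush(self.buffer)
                self.buffer = []

    def finish(self):
        # Writes the final, possibly incomplete batch.
        if self.buffer:
            self.flush(self.buffer)
            self.buffer = []
```

The point of `finish()` is exactly the incomplete-batch case discussed later in the thread: without it, any documents left in the buffer when the input stream ends would never be written.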

Bugs

  1. [SDK] Formatting for long SQL queries.
  2. [SDK] A couple of spelling typos.

@levkk requested a review from SilasMarvin June 19, 2024 14:36
@levkk marked this pull request as draft June 19, 2024 14:38
@SilasMarvin (Contributor) commented Jun 19, 2024 (edited)

I'm not sure why a user would use that, and not just:

```python
from pgml import Collection, Batch

collection = Collection("my_collection")
# batch = Batch(collection, 25, {"merge": True})
batch = []

# await batch.upsert_documents([{"id": 1}])  # Doesn't upsert yet
batch.append({"id": 1})

for i in range(23):
    # await batch.upsert_documents([{"id": i}])  # Doesn't upsert yet
    batch.append({"id": i})

# Upserts whatever is in the current batch
# and appends the document to the next batch
# await batch.upsert_documents([{"id": 1}])
await collection.upsert_documents(batch, {"merge": True})

# Upserts the final batch
# await batch.finish()
```

@SilasMarvin (Contributor)

Oh I see, the automatic handling of upserting once they hit the threshold is nice, but it is a bit confusing. I think most people in the Python world are used to batching systems already built into the dataset they are operating on. For example: https://huggingface.co/docs/datasets/en/process#batch-processing. Not saying we shouldn't add it, but maybe we should clarify the name to something like AutoBatchUpsert; I'm not sure.

@montanalow (Contributor)

Why not use the batch_size argument on Collection.upsert_documents for this functionality?

@levkk (Contributor, Author) commented Jun 19, 2024 (edited)

> Python world are used to using batching systems already built into the dataset

Datasets are one of very many sources of data. For example, the use case that triggered my desire for this feature was streaming WET files from a warcio.archiveiterator.ArchiveIterator, which seemingly doesn't have batching support built in. This is not uncommon for most non-machine-learning libraries and toolkits that people use to build regular web apps. Is it easy to write the batching logic yourself? Seemingly so, but it's really easy to forget to flush the last, often incomplete, batch when the source stream ends, especially when you have to do it yourself, over and over, for each use case in your code.

> Why not use the batch_size argument on Collection.upsert_documents for this functionality?

batch_size doesn't handle the incomplete-batch scenario, where a user inserts a number of records with len(records) % batch_size != 0, hence the need for finish(), a.k.a. flush(). You have to tell the collection when you're done writing and no more records will be added to whatever incomplete batch it's been buffering.
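The incomplete-batch pitfall described above can be shown in plain Python, with no pgml involved (the `written` list stands in for the documents actually upserted; all names here are illustrative):

```python
# Chunking 10 records with batch_size 3 leaves 10 % 3 == 1 record buffered.
records = [{"id": i} for i in range(10)]
batch_size = 3

written = []  # stands in for what upsert_documents has actually persisted
buffer = []
for record in records:
    buffer.append(record)
    if len(buffer) == batch_size:
        written.extend(buffer)  # stands in for upsert_documents(buffer)
        buffer = []

# At this point only 9 of 10 records have been written; the last one
# is silently stuck in the buffer unless we remember a final flush.
assert len(written) == 9

written.extend(buffer)  # the final flush, i.e. finish()/flush()
assert len(written) == 10
```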

@montanalow (Contributor)

Right, having to call flush/finish is only an issue because of this new API you’re introducing. The example Silas gave doesn’t have the issue.

Reviewers: awaiting requested review from @SilasMarvin
Participants: @levkk, @SilasMarvin, @montanalow
