Added embed2 which returns a table structure #1186


Open

ns1000 wants to merge 4 commits into postgresml:master from ns1000:master

Conversation

ns1000 (Contributor) commented:

This change adds a new wrapper for pgml.embed that, instead of returning a single row, returns a table structure, which is very useful for batch processing strings.

Example usage:

SELECT * FROM pgml.embed2('all-MiniLM-L6-v2',
    (SELECT array_agg(phrase) FROM (SELECT * FROM phrases LIMIT 10)));

To create the function:

CREATE OR REPLACE FUNCTION pgml."embed2"(
    transformer TEXT,
    inputs TEXT[],
    kwargs JSONB DEFAULT '{}'
) RETURNS TABLE (text TEXT, embedding real[])
LANGUAGE c IMMUTABLE STRICT PARALLEL SAFE
AS 'MODULE_PATHNAME', 'embed_batch2_wrapper';

…ould segfault after a client session which used pgml command closes. The issue can be identified in postgres log files with the line 'arrow::fs::FinalizeS3 was not called even though S3 was initialized. This could lead to a segmentation fault at exit'
montanalow (Contributor) left a comment:

You can add a migration to create the function for people who upgrade, e.g.
https://github.com/postgresml/postgresml/blob/master/pgml-extension/sql/pgml--2.7.13--2.8.0.sql
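For reference, such a migration file would simply carry the same CREATE FUNCTION statement so that users upgrading an existing install get the new function (the version numbers in the filename below are illustrative, not an actual release):

```sql
-- Illustrative migration, e.g. pgml--2.8.0--2.8.1.sql (filename assumed).
-- Re-creates the new function for users upgrading an existing install.
CREATE OR REPLACE FUNCTION pgml."embed2"(
    transformer TEXT,
    inputs TEXT[],
    kwargs JSONB DEFAULT '{}'
) RETURNS TABLE (text TEXT, embedding real[])
LANGUAGE c IMMUTABLE STRICT PARALLEL SAFE
AS 'MODULE_PATHNAME', 'embed_batch2_wrapper';
```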

@@ -558,6 +558,26 @@ pub fn embed_batch(
}
}

#[cfg(all(feature = "python", not(feature = "use_as_lib")))]
#[pg_extern(immutable, parallel_safe, name = "embed2")]
pub fn embed_batch2<'a>(
montanalow (Contributor) commented on Nov 25, 2023 (edited):

My rough thoughts, without running the code on some examples.

I think we should name this embed_3 in SQL and embed_batch_3 in Rust, with the goal of establishing this as the 3.0 embed API, as well as a pattern for releasing 3.0 APIs early while we develop them in an alpha state (with potentially breaking changes, where we completely drop them in 3.1 in favor of the newly established default behavior).

Your example convinces me that batch APIs should return a table, but I think that table's rows should be JSONB with {id, embedding} keys (at least), unless there is a significant performance implication on that front. My thinking is that embedding models are getting more complicated, and now some take JSON rather than TEXT for inputs, including a prompt. It would be nice to have an optional id in the input JSON, and if it's not present, then just return the entire input JSON as the id, which acts just like your TEXT as the key.

Final thought is that kwargs is JSONB currently, which works well with the underlying Python dependencies, but I'd like to structure it as much as possible for the final 3.0. We should find a way to flag this obviously as an alpha API that will be broken and eventually dropped when a final version is available.
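To make the proposal concrete, here is one possible shape for the suggested alpha API. This is a sketch assembled from the discussion above, not merged code; the function name, wrapper symbol, and column layout are all assumptions:

```sql
-- Hypothetical 3.0 alpha API sketched from the review discussion.
-- Each returned row would be JSONB like {"id": ..., "embedding": [...]},
-- where id falls back to the whole input JSON when not provided.
CREATE OR REPLACE FUNCTION pgml."embed_3"(
    transformer TEXT,
    inputs JSONB[],
    kwargs JSONB DEFAULT '{}'
) RETURNS TABLE (result JSONB)
LANGUAGE c IMMUTABLE STRICT PARALLEL SAFE
AS 'MODULE_PATHNAME', 'embed_batch_3_wrapper';
```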

ns1000 (Contributor, Author) commented:

So it turns out the batching is not really necessary to achieve speed. When running on CPU inside the Postgres Python VM, you really need torch.set_num_threads(1) in order to get maximum speed. Leaving it at the default value, which is the number of CPUs, was creating the slowdown for me. It will still use all the CPUs even when threads=1.

I am using a Debian system with Python 3.11 and Postgres 16 to test all this.
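As a minimal sketch of the setting being discussed (this assumes PyTorch is installed; it is not the extension's actual code):

```python
import torch  # assumes PyTorch is available in the Python environment

# Each Postgres connection runs the embedded Python interpreter in its own
# backend process. Leaving torch at its default (one intra-op thread per CPU)
# lets concurrent backends oversubscribe the cores; pinning intra-op
# parallelism to a single thread avoided the slowdown reported above.
torch.set_num_threads(1)

print(torch.get_num_threads())  # -> 1
```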

montanalow (Contributor) replied:

> So it turns out the batching is not really necessary to achieve speed. When running on CPU inside the Postgres Python VM, you really need torch.set_num_threads(1) in order to get maximum speed. Leaving it at the default value, which is the number of CPUs, was creating the slowdown for me. It will still use all the CPUs even when threads=1.
>
> I am using a Debian system with Python 3.11 and Postgres 16 to test all this.

Ah, so this is actually another hit on #1161

Reviewers

montanalow requested changes. Requested changes must be addressed to merge this pull request.

2 participants: @ns1000, @montanalow
