Separate embedding kwargs into init kwargs and encode kwargs #1555


Merged
Changes from 1 commit
Separate embedding kwargs into init kwargs and encode kwargs
@tomaarsen committed Jul 12, 2024
Commit 455861eb1f759c1ede53cc97f58291d615a0bf5f
26 changes: 21 additions & 5 deletions in pgml-extension/src/bindings/transformers/transformers.py
@@ -527,8 +527,8 @@ def rank(transformer, query, documents, kwargs):
     return rank_using(model, query, documents, kwargs)
 
 
-def create_embedding(transformer):
-    return SentenceTransformer(transformer)
+def create_embedding(transformer, kwargs):
+    return SentenceTransformer(transformer, **kwargs)
 
 
 def embed_using(model, transformer, inputs, kwargs):
@@ -545,16 +545,32 @@ def embed_using(model, transformer, inputs, kwargs):
 
 def embed(transformer, inputs, kwargs):
     kwargs = orjson.loads(kwargs)
+
     ensure_device(kwargs)
 
+    init_kwarg_keys = [
+        "device",
+        "trust_remote_code",
+        "revision",
+        "model_kwargs",
+        "tokenizer_kwargs",
+        "config_kwargs",
+        "truncate_dim",
+        "token",
+    ]
+    init_kwargs = {
+        key: value for key, value in kwargs.items() if key in init_kwarg_keys
+    }
+    encode_kwargs = {
+        key: value for key, value in kwargs.items() if key not in init_kwarg_keys
+    }
+
     if transformer not in __cache_sentence_transformer_by_name:
         __cache_sentence_transformer_by_name[transformer] = create_embedding(
-            transformer
+            transformer, init_kwargs
         )
     model = __cache_sentence_transformer_by_name[transformer]
 
-    return embed_using(model, transformer, inputs, kwargs)
+    return embed_using(model, transformer, inputs, encode_kwargs)
 
 
 def clear_gpu_cache(memory_usage: None):
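The key-based partition that this commit introduces in `embed` can be exercised on its own. The sketch below copies the `init_kwarg_keys` list from the diff; the sample payload values (`device`, `batch_size`, `normalize_embeddings`) are illustrative assumptions, not taken from the PR:

```python
# Keys consumed by the SentenceTransformer constructor, per the diff above;
# everything else is forwarded to encode().
INIT_KWARG_KEYS = [
    "device",
    "trust_remote_code",
    "revision",
    "model_kwargs",
    "tokenizer_kwargs",
    "config_kwargs",
    "truncate_dim",
    "token",
]

def split_kwargs(kwargs: dict) -> tuple[dict, dict]:
    """Partition a flat kwargs dict into constructor kwargs and encode kwargs."""
    init_kwargs = {k: v for k, v in kwargs.items() if k in INIT_KWARG_KEYS}
    encode_kwargs = {k: v for k, v in kwargs.items() if k not in INIT_KWARG_KEYS}
    return init_kwargs, encode_kwargs

# Hypothetical payload mixing both kinds of options:
init_kwargs, encode_kwargs = split_kwargs(
    {"device": "cuda", "trust_remote_code": True,
     "batch_size": 32, "normalize_embeddings": True}
)
print(init_kwargs)    # {'device': 'cuda', 'trust_remote_code': True}
print(encode_kwargs)  # {'batch_size': 32, 'normalize_embeddings': True}
```

Because the split is a whitelist on constructor keys, any option the key list does not recognize falls through to `encode()` rather than being dropped silently.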
