Embeddings

Bases: BaseEmbeddings, WMLResource
Instantiate the embeddings service.
model_id (str, optional) – the type of model to use
params (dict, optional) – parameters to use during generate requests, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames
credentials (dict, optional) – credentials for the Watson Machine Learning instance
project_id (str, optional) – ID of the Watson Studio project
space_id (str, optional) – ID of the Watson Studio space
api_client (APIClient, optional) – initialized APIClient object with a set project ID or space ID. If passed, credentials and project_id/space_id are not required.
verify (bool or str, optional) –
You can pass one of the following as verify:
the path to a CA_BUNDLE file
the path of a directory with certificates of trusted CAs
True – the default path to the truststore will be taken
False – no verification will be made
persistent_connection (bool, optional) – defines whether to keep a persistent connection when evaluating the generate, embed_query, and embed_documents methods with one prompt or batch of prompts that meets the length limit. For more details, see Generate embeddings. To close the connection, run embeddings.close_persistent_connection(), defaults to True. Added in 1.1.2.
batch_size (int, optional) – number of elements to be embedded in one call (used only for sync methods), defaults to 1000
concurrency_limit (int, optional) – number of requests to be sent in parallel when generating embedding vectors (used only for sync methods), max is 10, defaults to 5
max_retries (int, optional) – number of retries performed when a request was not successful and the status code is in retry_status_codes, defaults to 10
delay_time (float, optional) – delay time to retry a request, a factor in the exponential backoff formula: wx_delay_time * pow(2.0, attempt), defaults to 0.5s
retry_status_codes (list[int], optional) – list of status codes which will be considered for the retry mechanism, defaults to [429, 503, 504, 520]
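The retry delay above follows plain exponential backoff. A minimal sketch of the resulting schedule, using the constructor defaults (the helper name is hypothetical, not part of the SDK):

```python
def retry_delays(delay_time: float = 0.5, max_retries: int = 10) -> list[float]:
    # wx_delay_time * pow(2.0, attempt) for each retry attempt
    return [delay_time * pow(2.0, attempt) for attempt in range(max_retries)]

# First retry waits 0.5 s, the second 1.0 s, the third 2.0 s, and so on.
print(retry_delays()[:4])  # → [0.5, 1.0, 2.0, 4.0]
```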
Note
When the credentials parameter is passed, one of these parameters is required: [project_id, space_id].
Hint
You can copy the project_id from the Project’s Manage tab (Project -> Manage -> General -> Details).
Example:
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import Embeddings
from ibm_watsonx_ai.metanames import EmbedTextParamsMetaNames as EmbedParams
from ibm_watsonx_ai.foundation_models.utils.enums import EmbeddingTypes

embed_params = {
    EmbedParams.TRUNCATE_INPUT_TOKENS: 3,
    EmbedParams.RETURN_OPTIONS: {
        'input_text': True
    }
}

embedding = Embeddings(
    model_id=EmbeddingTypes.IBM_SLATE_30M_ENG,
    params=embed_params,
    credentials=Credentials(
        api_key=IAM_API_KEY,
        url="https://us-south.ml.cloud.ibm.com"
    ),
    project_id="*****"
)
Returns a list of embedding vectors for the provided texts in an asynchronous manner.
texts (list[str]) – list of texts for which embedding vectors will be generated, max length is determined by the API (for more information, refer to the documentation: https://cloud.ibm.com/apidocs/watsonx-ai#text-embeddings)
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
Returns: list of embedding vectors
Return type: list[list[float]]
Example:
q = [
    "What is a Generative AI?",
    "Generative AI refers to a type of artificial intelligence that can create original content."
]

embedding_vectors = await embedding.aembed_documents(texts=q)
print(embedding_vectors)
Returns an embedding vector for a provided text in an asynchronous manner.
text (str) – text for which embedding vector will be generated
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
Returns: embedding vector
Return type: list[float]
Example:
q = "What is a Generative AI?"

embedding_vector = await embedding.aembed_query(text=q)
print(embedding_vector)
Generate embeddings vectors for the given input with the given parameters in an asynchronous manner. Returns a REST API response.
inputs (list[str]) – list of texts for which embedding vectors will be generated, max length is determined by the API (for more information, refer to the documentation: https://cloud.ibm.com/apidocs/watsonx-ai#text-embeddings)
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
Returns: scoring results containing generated embeddings vectors
Return type: dict
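The exact layout of the returned dict is defined by the watsonx.ai text-embeddings REST API; the sample below is fabricated for illustration and only sketches how the embedding vectors might be pulled out of such a response:

```python
# Assumed response shape for a generate-style call; all values are made up.
sample_response = {
    "model_id": "ibm/slate-30m-english-rtrvr",
    "results": [
        {"embedding": [0.01, -0.02, 0.03]},
        {"embedding": [0.04, 0.05, -0.06]},
    ],
    "input_token_count": 12,
}

# Extract one vector per input text from the 'results' list.
vectors = [result["embedding"] for result in sample_response["results"]]
print(len(vectors))  # one vector per input text
```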
Only applicable if persistent_connection was set to True in the Embeddings initialization. Calling this method closes the current httpx.Client and recreates a new httpx.Client with default values:

timeout: httpx.Timeout(read=30 * 60, write=30 * 60, connect=10, pool=30 * 60)
limit: httpx.Limits(max_connections=10, max_keepalive_connections=10, keepalive_expiry=HTTPX_KEEPALIVE_EXPIRY)
Returns a list of embedding vectors for the provided texts.
texts (list[str]) – list of texts for which embedding vectors will be generated
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
concurrency_limit (int,optional) – number of requests to be sent in parallel, max is 10, defaults to 5
Returns: list of embedding vectors
Return type: list[list[float]]
Example:
q = [
    "What is a Generative AI?",
    "Generative AI refers to a type of artificial intelligence that can create original content."
]

embedding_vectors = embedding.embed_documents(texts=q)
print(embedding_vectors)
Returns an embedding vector for a provided text.
text (str) – text for which embedding vector will be generated
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
Returns: embedding vector
Return type: list[float]
Example:
q = "What is a Generative AI?"

embedding_vector = embedding.embed_query(text=q)
print(embedding_vector)
Generate embeddings vectors for the given input with the given parameters. Returns a REST API response.
inputs (list[str]) – list of texts for which embedding vectors will be generated
params (ParamsType | None, optional) – MetaProps for the embedding generation, use ibm_watsonx_ai.metanames.EmbedTextParamsMetaNames().show() to view the list of MetaNames, defaults to None
concurrency_limit (int,optional) – number of requests to be sent in parallel, max is 10, defaults to 5
Returns: scoring results containing generated embeddings vectors
Return type: dict
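The constructor's batch_size governs how the sync methods split their inputs into requests. A rough sketch of that chunking step, assuming simple fixed-size slicing (the helper name is hypothetical, not the SDK's internal API):

```python
def chunk_inputs(texts: list[str], batch_size: int = 1000) -> list[list[str]]:
    # Split the inputs into consecutive batches of at most batch_size elements;
    # each batch would correspond to one embeddings request.
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

batches = chunk_inputs([f"text {i}" for i in range(2500)], batch_size=1000)
print([len(b) for b in batches])  # → [1000, 1000, 500]
```

Up to concurrency_limit such batches would then be in flight at once.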
Bases:ABC
LangChain-like embedding function interface.
Deserialize BaseEmbeddings into a concrete one using arguments.
concrete Embeddings or None if data is incorrect
BaseEmbeddings | None
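A minimal sketch of the deserialization pattern this interface describes — returning a concrete instance, or None if the data is incorrect. This is not the library's actual implementation; every class and field name here is invented for illustration:

```python
from abc import ABC, abstractmethod

class FakeBaseEmbeddings(ABC):
    """Toy stand-in for a LangChain-like embedding function interface."""

    @abstractmethod
    def embed_query(self, text: str) -> list[float]: ...

    def to_dict(self) -> dict:
        # Record which concrete class produced the data, plus its attributes.
        return {"__class__": type(self).__name__, **vars(self)}

    @classmethod
    def from_dict(cls, data: dict):
        # Look up the concrete subclass named in the data;
        # return None when the data does not identify one.
        registry = {c.__name__: c for c in cls.__subclasses__()}
        concrete = registry.get(data.get("__class__"))
        if concrete is None:
            return None
        obj = concrete.__new__(concrete)
        obj.__dict__.update({k: v for k, v in data.items() if k != "__class__"})
        return obj

class FakeDummyEmbeddings(FakeBaseEmbeddings):
    def __init__(self, dim: int = 3):
        self.dim = dim

    def embed_query(self, text: str) -> list[float]:
        return [0.0] * self.dim

restored = FakeBaseEmbeddings.from_dict({"__class__": "FakeDummyEmbeddings", "dim": 4})
print(restored.embed_query("hi"))  # → [0.0, 0.0, 0.0, 0.0]
```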
Set of MetaNames for Foundation Model Embeddings Parameters.
Available MetaNames:
MetaName | Type | Required | Example value
TRUNCATE_INPUT_TOKENS | int | N | 3
RETURN_OPTIONS | dict | N | {'input_text': True}
Bases:StrEnum
This represents a dynamically generated Enum for Embedding Models.
Example of getting EmbeddingModels
# GET EmbeddingModels ENUM
client.foundation_models.EmbeddingModels

# PRINT dict of Enums
client.foundation_models.EmbeddingModels.show()
Example Output:
{'SLATE_125M_ENGLISH_RTRVR': 'ibm/slate-125m-english-rtrvr',
 ...
 'SLATE_30M_ENGLISH_RTRVR': 'ibm/slate-30m-english-rtrvr'}
Example of initialising Embeddings with EmbeddingModels Enum:
from ibm_watsonx_ai.foundation_models import Embeddings

embeddings = Embeddings(
    model_id=client.foundation_models.EmbeddingModels.SLATE_30M_ENGLISH_RTRVR,
    credentials=Credentials(...),
    project_id=project_id,
)
Bases:Enum
Deprecated since version 1.0.5: Use EmbeddingModels() instead.
Supported embedding models.
Note
You can check the current list of supported embeddings model types of various environments with get_embeddings_model_specs() or by referring to the watsonx.ai documentation.