ibm_watson_machine_learning
Store and manage connections.
MetaNames for connection creation.
Create a connection. Examples of PROPERTIES field input:
MySQL
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "3306",
    "host": "host url",
    "ssl": "false",
    "username": "username"
}
Google BigQuery
Method 1: Using the service account JSON. The generated service account JSON can be provided as input as-is. Provide actual values in the JSON; the example below is only indicative, to show the fields. For information on how to generate the service account JSON, refer to the Google BigQuery documentation.
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "type": "service_account",
    "project_id": "project_id",
    "private_key_id": "private_key_id",
    "private_key": "private key contents",
    "client_email": "client_email",
    "client_id": "client_id",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "client_x509_cert_url"
}
Method 2: Using the OAuth method. For information on how to generate an OAuth token, refer to the Google BigQuery documentation.
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "access_token": "access token generated for big query",
    "refresh_token": "refresh token",
    "project_id": "project_id",
    "client_secret": "client_secret",
    "client_id": "client_id"
}
MS SQL
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "1433",
    "host": "host",
    "username": "username"
}
Teradata
client.connections.ConfigurationMetaNames.PROPERTIES: {
    "database": "database",
    "password": "password",
    "port": "1025",
    "host": "host",
    "username": "username"
}
meta_props (dict) –
metadata of the connection configuration. To see available meta names, use:
client.connections.ConfigurationMetaNames.get()
metadata of the stored connection
dict
Example:
sqlserver_data_source_type_id = client.connections.get_datasource_type_id_by_name('sqlserver')
connection_details = client.connections.create({
    client.connections.ConfigurationMetaNames.NAME: "sqlserver connection",
    client.connections.ConfigurationMetaNames.DESCRIPTION: "connection description",
    client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: sqlserver_data_source_type_id,
    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1433",
        "host": "host",
        "username": "username"
    }
})
Delete a stored connection.
connection_id (str) – unique ID of the connection to be deleted
status (“SUCCESS” or “FAILED”)
str
Example:
client.connections.delete(connection_id)
Get datasource type details for the given datasource type ID.
datasource_type_id (str) – ID of the datasource type
connection_properties (bool) – if True, the connection properties are included in the returned details, defaults to False
Datasource type details
dict
Example:
client.connections.get_datasource_type_details_by_id(datasource_type_id)
Get datasource type details for the given datasource type name.
datasource_type_name (str) – NAME of the datasource type
connection_properties (bool) – if True, the connection properties are included in the returned details, defaults to False
Datasource type details
dict
Example:
client.connections.get_datasource_type_details_by_name(datasource_type_name)
Get a stored datasource type ID for the given datasource type name.
name (str) – name of datasource type
ID of datasource type
str
Example:
client.connections.get_datasource_type_id_by_name('cloudobjectstorage')
Get a stored datasource type ID for the given datasource type name.
Deprecated: Use Connections.get_datasource_type_id_by_name(name) instead.
name (str) – name of datasource type
ID of datasource type
str
Example:
client.connections.get_datasource_type_uid_by_name('cloudobjectstorage')
Get connection details for the given unique connection ID. If no connection_id is passed, details for all connections are returned.
connection_id (str) – unique ID of the connection
metadata of the stored connection
dict
Example:
connection_details = client.connections.get_details(connection_id)
connection_details = client.connections.get_details()
Get ID of a stored connection.
connection_details (dict) – metadata of the stored connection
unique ID of the stored connection
str
Example:
connection_id = client.connections.get_id(connection_details)
Get the unique ID of a stored connection.
Deprecated: Use Connections.get_id(details) instead.
connection_details (dict) – metadata of the stored connection
unique ID of the stored connection
str
Example:
connection_uid = client.connections.get_uid(connection_details)
Get uploaded db driver jar names and paths. Supported for IBM Cloud Pak® for Data, version 4.6.1 and up.
dictionary containing the name and path for each uploaded db driver jar
dict[str, str]
Example:
result = client.connections.get_uploaded_db_drivers()
Return a pandas.DataFrame with all stored connections in a table format.
pandas.DataFrame with listed connections
pandas.DataFrame
Example:
client.connections.list()
List stored datasource type assets in a table format.
pandas.DataFrame with listed datasource types
pandas.DataFrame
Example:
client.connections.list_datasource_types()
Return a pandas.DataFrame with uploaded db driver jars in a table format. Supported for IBM Cloud Pak® for Data only.
pandas.DataFrame with listed uploaded db drivers
pandas.DataFrame
Example:
client.connections.list_uploaded_db_drivers()
Get a signed db driver jar URL to be used during JDBC generic connection creation. The jar name passed as an argument needs to be uploaded to the system first. Supported for IBM Cloud Pak® for Data only, version 4.0.4 and later.
jar_name (str) – name of db driver jar
URL of signed db driver
str
Example:
jar_uri=client.connections.sign_db_driver_url('db2jcc4.jar')
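Combining get_uploaded_db_drivers with sign_db_driver_url, a small helper (a sketch; the helper name and error handling are illustrative, not part of the SDK) could verify that the jar was uploaded before signing:

```python
# Hypothetical helper: check that a driver jar has been uploaded before
# requesting a signed URL for it (IBM Cloud Pak for Data only).
def signed_driver_url(client, jar_name):
    drivers = client.connections.get_uploaded_db_drivers()  # {name: path}
    if jar_name not in drivers:
        raise ValueError(f"driver jar {jar_name!r} has not been uploaded")
    return client.connections.sign_db_driver_url(jar_name)
```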
Set of MetaNames for Connection.
Available MetaNames:
MetaName | Type | Required | Example value
NAME | str | Y |
DESCRIPTION | str | N |
DATASOURCE_TYPE | str | Y |
PROPERTIES | dict | Y |
FLAGS | list | N |
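As a hedged sketch of how these MetaNames combine, the dict below mirrors the required and optional fields. In real code the keys come from client.connections.ConfigurationMetaNames; the lowercase string keys here are assumed stand-ins, and all values (including the 'personal_credentials' flag) are placeholders:

```python
# Sketch only: keys normally come from
# client.connections.ConfigurationMetaNames.*; the lowercase strings below
# are assumed stand-ins, and every value is a placeholder.
meta_props = {
    "name": "my connection",                  # NAME (required)
    "datasource_type": "datasource_type_id",  # DATASOURCE_TYPE (required)
    "properties": {                           # PROPERTIES (required)
        "host": "host",
        "port": "1433",
        "username": "username",
        "password": "password",
    },
    "description": "demo connection",         # DESCRIPTION (optional)
    "flags": ["personal_credentials"],        # FLAGS (optional, assumed value)
}
```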
Store and manage data assets.
MetaNames for Data Assets creation.
Create a data asset and upload content to it.
name (str) – name to be given to the data asset
file_path (str) – path to the content file to be uploaded
duplicate_action (AssetDuplicateAction,optional) – determines the behaviour when an asset with the same name already exists; if not specified, the value from catalogs/projects/spaces will be used
metadata of the stored data asset
dict
Example:
asset_details=client.data_assets.create(name="sample_asset",file_path="/path/to/file")
Soft delete the stored data asset. The asset will be moved to trashed assets and will not be visible in the asset list. To permanently delete assets, set the purge_on_delete parameter to True.
asset_id (str) – unique ID of the data asset
purge_on_delete (bool,optional) – if set to True, the asset is purged
status (“SUCCESS” or “FAILED”) or dictionary, if deleted asynchronously
str or dict
Example:
client.data_assets.delete(asset_id)
Download and store the content of a data asset.
asset_id (str) – unique ID of the data asset to be downloaded
filename (str) – filename to be used for the downloaded file
normalized path to the downloaded asset content
str
Example:
client.data_assets.download(asset_id,"sample_asset.csv")
Download the content of a data asset.
asset_id (str) – unique ID of the data asset to be downloaded
the asset content
bytes
Example:
content=client.data_assets.get_content(asset_id).decode('ascii')
Get data asset details. If no asset_id is passed, details for all assets are returned.
asset_id (str) – unique ID of the asset
limit (int,optional) – limit number of fetched records
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
metadata of the stored data asset
dict
Example:
asset_details=client.data_assets.get_details(asset_id)
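Since get_details supports limit and get_all, a paging helper might look like this (a sketch; the helper name and chunk size are illustrative, and the 'resources' key is assumed from the list-style response shape):

```python
# Hypothetical helper: fetch details for all data assets in limited chunks.
def get_all_data_assets(client, chunk_size=200):
    # get_all=True pages through all entries in `limit`-sized chunks.
    details = client.data_assets.get_details(limit=chunk_size, get_all=True)
    return details.get("resources", [])
```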
Get the URL of a stored data asset.
asset_details (dict) – details of the stored data asset
href of the stored data asset
str
Example:
asset_details = client.data_assets.get_details(asset_id)
asset_href = client.data_assets.get_href(asset_details)
Get the unique ID of a stored data asset.
asset_details (dict) – details of the stored data asset
unique ID of the stored data asset
str
Example:
asset_id=client.data_assets.get_id(asset_details)
Lists stored data assets in a table format.
limit (int,optional) – limit number for fetched records
listed elements
DataFrame
Example:
client.data_assets.list()
Create a data asset and upload content to it.
meta_props (dict) –
metadata of the space configuration. To see available meta names, use:
client.data_assets.ConfigurationMetaNames.get()
Example:
Example of data asset creation for files:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 'sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)
Example of data asset creation using a connection:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '39eaa1ee-9aa4-4651-b8fe-95d3ddae',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1/sample.csv'
}
asset_details = client.data_assets.store(meta_props=metadata)
Example of data asset creation with a database sources type connection:
metadata = {
    client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '23eaf1ee-96a4-4651-b8fe-95d3dadfe',
    client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1'
}
asset_details = client.data_assets.store(meta_props=metadata)
Store and manage folder assets.
MetaNames for Folder Assets creation.
Create a folder asset.
name (str) – name to be given to the folder asset
connection_path (str) – path to the folder asset
connection_id (str,optional) – ID of the connection where the folder asset is placed
metadata of the stored folder asset
dict
Example:
folder_asset_details=client.folder_assets.create(name="sample_folder_asset",connection_id="sample_connection_id",connection_path="/bucket1/folder1/folder1.1")
Soft delete the stored folder asset. The asset will be moved to trashed assets and will not be visible in the asset list. To permanently delete assets, set the purge_on_delete parameter to True.
asset_id (str) – unique ID of the folder asset
purge_on_delete (bool,optional) – if set to True, the asset is purged
status (“SUCCESS” or “FAILED”) or dictionary, if deleted asynchronously
str or dict
Example:
client.folder_assets.delete(asset_id)
Get folder asset details. If no folder_asset_id is passed, details for all assets are returned.
folder_asset_id (str) – unique ID of the asset
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
limit (int,optional) – limit number of fetched records
metadata of the stored folder asset
dict
Example:
folder_asset_details=client.folder_assets.get_details(folder_asset_id)
Get the URL of a stored folder asset.
asset_details (dict) – details of the stored folder asset
href of the stored folder asset
str
Example:
asset_details = client.folder_assets.get_details(asset_id)
asset_href = client.folder_assets.get_href(asset_details)
Get the unique ID of a stored folder asset.
asset_details (dict) – details of the stored folder asset
unique ID of the stored folder asset
str
Example:
asset_id=client.folder_assets.get_id(asset_details)
Lists stored folder assets in a table format.
limit (int,optional) – limit number for fetched records
listed elements
DataFrame
Example:
client.folder_assets.list()
Create a folder asset.
meta_props (dict) –
metadata of the space configuration. To see available meta names, use:
client.folder_assets.ConfigurationMetaNames.get()
Example:
Example of creating a folder asset placed in a project/space container:
metadata = {
    client.folder_assets.ConfigurationMetaNames.NAME: 'my folder asset',
    client.folder_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.folder_assets.ConfigurationMetaNames.CONNECTION_PATH: '/bucket1/folder1/folder1.1'
}
asset_details = client.folder_assets.store(meta_props=metadata)
Example of creating a folder asset connected to a COS bucket folder:
metadata = {
    client.folder_assets.ConfigurationMetaNames.NAME: 'my folder asset',
    client.folder_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    client.folder_assets.ConfigurationMetaNames.CONNECTION_ID: 'f1fea17c-a7e5-49e4-9f8e-23cef3e11ed5',
    client.folder_assets.ConfigurationMetaNames.CONNECTION_PATH: '/bucket1/folder1/folder1.1'
}
asset_details = client.folder_assets.store(meta_props=metadata)
Manage trashed assets.
Delete a trashed asset.
asset_id (str) – trashed asset ID
status “SUCCESS” if deletion is successful
Literal[“SUCCESS”]
Example:
client.trashed_assets.delete(asset_id)
Get metadata of a given trashed asset. If no asset_id is specified, metadata for all trashed assets is returned.
asset_id (str,optional) – trashed asset ID
limit (int,optional) – limit number of fetched records
trashed asset metadata
dict (if asset_id is not None) or {“resources”: [dict]} (if asset_id is None)
Example:
details = client.trashed_assets.get_details(asset_id)
details = client.trashed_assets.get_details()
details = client.trashed_assets.get_details(limit=100)
Get the ID of a trashed asset.
trashed_asset_details (dict) – metadata of the trashed asset
unique ID of the trashed asset
str
Example:
asset_id=client.trashed_assets.get_id(trashed_asset_details)
List trashed assets.
limit (int,optional) – set the limit for the number of listed trashed assets, default is None (all trashed assets are fetched)
Pandas DataFrame with information about trashed assets
pandas.DataFrame
Example:
trashed_assets_list = client.trashed_assets.list()
print(trashed_assets_list)
# Result:
#        NAME   ASSET_TYPE   ASSET_ID
# 0  data.csv   data_asset   8e421c27-767d-4824-9aab-dc5c7c19ba87
Purge all trashed assets.
Note
If there are more than 20 trashed assets, they will be removed asynchronously. It may take a few seconds until all trashed assets disappear from the trashed assets list.
status “SUCCESS” if purge is successful
Literal[“SUCCESS”]
Example:
client.trashed_assets.purge_all()
Deploy and score published artifacts (models and functions).
An enum class that represents the different hardware request sizes available.
Create a deployment from an artifact. An artifact is a model or function that can be deployed.
artifact_id (str) – ID of the published artifact (the model or function ID)
meta_props (dict,optional) –
meta props. To see the available list of meta names, use:
client.deployments.ConfigurationMetaNames.get()
rev_id (str,optional) – revision ID of the deployment
metadata of the created deployment
dict
Example:
meta_props = {
    client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT NAME",
    client.deployments.ConfigurationMetaNames.ONLINE: {},
    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"id": "e7ed1d6c-2e89-42d7-aed5-8sb972c1d2b"},
    client.deployments.ConfigurationMetaNames.SERVING_NAME: 'sample_deployment'
}
deployment_details = client.deployments.create(artifact_id, meta_props)
Create an asynchronous deployment job.
deployment_id (str) – unique ID of the deployment
meta_props (dict) – metaprops. To see the available list of metanames, use client.deployments.ScoringMetaNames.get() or client.deployments.DecisionOptimizationMetaNames.get()
retention (int,optional) – how many days job metadata should be retained; takes integer values >= -1, supported only on Cloud
transaction_id (str,optional) – transaction ID to be passed with the payload
metadata of the created async deployment job
dict or str
Note
The valid payloads for scoring input are either a list of values, pandas DataFrames, or numpy arrays.
Example:
scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        'fields': ['GENDER', 'AGE', 'MARITAL_STATUS', 'PROFESSION'],
        'values': [['M', 23, 'Single', 'Student'], ['M', 55, 'Single', 'Executive']]
    }]
}
async_job = client.deployments.create_job(deployment_id, scoring_payload)
Delete a deployment.
deployment_id (str) – unique ID of the deployment
status (“SUCCESS” or “FAILED”)
str
Example:
client.deployments.delete(deployment_id)
Delete a deployment job that is running. This method can also delete metadata details of completed or canceled jobs when the hard_delete parameter is set to True.
job_id (str) – unique ID of the deployment job to be deleted
hard_delete (bool,optional) –
specify True or False:
True - To delete the completed or canceled job.
False - To cancel the currently running deployment job.
status (“SUCCESS” or “FAILED”)
str
Example:
client.deployments.delete_job(job_id)
Generate a raw response with prompt for the given deployment_id.
deployment_id (str) – unique ID of the deployment
prompt (str,optional) – prompt needed for text generation. If deployment_id points to a Prompt Template asset, then the prompt argument must be None, defaults to None
params (dict,optional) – meta props for text generation, use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
guardrails (bool,optional) – If True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict,optional) – meta props for HAP moderations, use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
concurrency_limit (int,optional) – number of requests to be sent in parallel, maximum is 10
async_mode (bool,optional) – If True, results are yielded asynchronously (using a generator). In this case both the prompt and the generated text will be concatenated in the final response under generated_text, defaults to False
validate_prompt_variables (bool) – If True, prompt variables provided in params are validated against the ones in the Prompt Template Asset. This parameter is only applicable in a Prompt Template Asset deployment scenario and should not be changed for different cases, defaults to True
guardrails_granite_guardian_params (dict,optional) – parameters for Granite Guardian moderations
scoring result containing generated content
dict
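A minimal usage sketch for generate (the deployment ID, prompt, and helper name are placeholders; decoding_method and max_new_tokens are common GenTextParamsMetaNames fields, and the response shape under results is an assumption):

```python
# Sketch: request a raw generation response from a text deployment.
params = {
    "decoding_method": "greedy",  # placeholder generation parameters
    "max_new_tokens": 100,
}

def generate_completion(client, deployment_id, prompt):
    # Returns the raw response dict; generated text is expected under
    # results[0]["generated_text"] (assumed response shape).
    return client.deployments.generate(
        deployment_id=deployment_id,
        prompt=prompt,
        params=params,
    )
```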
Given the selected deployment (deployment_id), a text prompt as input, and the parameters and concurrency_limit, the selected inference will generate a completion text as the generated_text response.
deployment_id (str) – unique ID of the deployment
prompt (str,optional) – the prompt string or list of strings. If a list of strings is passed, requests will be managed in parallel at the rate of concurrency_limit, defaults to None
params (dict,optional) – meta props for text generation, use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
raw_response (bool,optional) – returns the whole response object
guardrails (bool,optional) – If True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict,optional) – meta props for HAP moderations, use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
concurrency_limit (int,optional) – number of requests to be sent in parallel, maximum is 10
validate_prompt_variables (bool) – If True, prompt variables provided in params are validated against the ones in the Prompt Template Asset. This parameter is only applicable in a Prompt Template Asset deployment scenario and should not be changed for different cases, defaults to True
guardrails_granite_guardian_params (dict,optional) – parameters for Granite Guardian moderations
generated content
str
Note
By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:
import warnings
from ibm_watsonx_ai.foundation_models.utils import HAPDetectionWarning
warnings.filterwarnings("always", category=HAPDetectionWarning)
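Since generate_text accepts a list of prompts fanned out at the rate of concurrency_limit, a usage sketch might look like this (the deployment ID, prompts, helper name, and concurrency value are placeholders):

```python
# Sketch: generate completions for several prompts in parallel.
def generate_texts(client, deployment_id, prompts):
    # With a list of prompts, a list of generated strings is returned.
    return client.deployments.generate_text(
        deployment_id=deployment_id,
        prompt=prompts,
        concurrency_limit=5,
    )
```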
Given the selected deployment (deployment_id), a text prompt as input, and parameters, the selected inference will generate streamed text as generate_text_stream.
deployment_id (str) – unique ID of the deployment
prompt (str,optional) – the prompt string, defaults to None
params (dict,optional) – meta props for text generation, use ibm_watsonx_ai.metanames.GenTextParamsMetaNames().show() to view the list of MetaNames
raw_response (bool,optional) – yields the whole response object
guardrails (bool,optional) – If True, the hateful, abusive, and/or profane language (HAP) detection filter is toggled on for both the prompt and the generated text, defaults to False
guardrails_hap_params (dict,optional) – meta props for HAP moderations, use ibm_watsonx_ai.metanames.GenTextModerationsMetaNames().show() to view the list of MetaNames
validate_prompt_variables (bool) – If True, prompt variables provided in params are validated against the ones in the Prompt Template Asset. This parameter is only applicable in a Prompt Template Asset deployment scenario and should not be changed for different cases, defaults to True
guardrails_granite_guardian_params (dict,optional) – parameters for Granite Guardian moderations
generated content
str
Note
By default only the first occurrence of HAPDetectionWarning is displayed. To enable printing all warnings of this category, use:
import warnings
from ibm_watsonx_ai.foundation_models.utils import HAPDetectionWarning
warnings.filterwarnings("always", category=HAPDetectionWarning)
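A sketch of consuming the stream (the deployment ID and helper name are placeholders; chunks are joined into one string here, but they could equally be printed as they arrive):

```python
# Sketch: consume a streamed generation chunk by chunk.
def stream_completion(client, deployment_id, prompt):
    text = ""
    for chunk in client.deployments.generate_text_stream(
        deployment_id=deployment_id,
        prompt=prompt,
    ):
        text += chunk
    return text
```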
Get information about deployment(s). If deployment_id is not passed, all deployment details are returned.
deployment_id (str,optional) – unique ID of the deployment
serving_name (str,optional) – serving name that filters deployments
limit (int,optional) – limit number of fetched records
asynchronous (bool,optional) – if True, it will work as a generator
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
spec_state (SpecStates,optional) – software specification state, can be used only when deployment_id is None
metadata of the deployment(s)
dict (if deployment_id is not None) or {“resources”: [dict]} (if deployment_id is None)
Example:
deployment_details = client.deployments.get_details(deployment_id)
deployment_details = client.deployments.get_details(deployment_id=deployment_id)
deployments_details = client.deployments.get_details()
deployments_details = client.deployments.get_details(limit=100)
deployments_details = client.deployments.get_details(limit=100, get_all=True)
deployments_details = []
for entry in client.deployments.get_details(limit=100, asynchronous=True, get_all=True):
    deployments_details.extend(entry)
Get deployment_download_url from the deployment details.
deployment_details (dict) – created deployment details
deployment download URL that is used to get file deployment (for example: Core ML)
str
Example:
deployment_url=client.deployments.get_download_url(deployment)
Get deployment_href from the deployment details.
deployment_details (dict) – metadata of the deployment
deployment href that is used to manage the deployment
str
Example:
deployment_href=client.deployments.get_href(deployment)
Get the deployment ID from the deployment details.
deployment_details (dict) – metadata of the deployment
deployment ID that is used to manage the deployment
str
Example:
deployment_id=client.deployments.get_id(deployment)
Get information about deployment job(s). If deployment job_id is not passed, details of all deployment jobs are returned.
job_id (str,optional) – unique ID of the job
include (str, optional) – fields to be retrieved from the 'decision_optimization' and 'scoring' sections, mentioned as value(s) (comma separated) as output response fields
limit (int,optional) – limit number of fetched records
metadata of deployment job(s)
dict (if job_id is not None) or {“resources”: [dict]} (if job_id is None)
Example:
deployment_details = client.deployments.get_job_details()
deployments_details = client.deployments.get_job_details(job_id=job_id)
Get the href of a deployment job.
job_details (dict) – metadata of the deployment job
href of the deployment job
str
Example:
job_details = client.deployments.get_job_details(job_id=job_id)
job_href = client.deployments.get_job_href(job_details)
Get the unique ID of a deployment job.
job_details (dict) – metadata of the deployment job
unique ID of the deployment job
str
Example:
job_details = client.deployments.get_job_details(job_id=job_id)
job_id = client.deployments.get_job_id(job_details)
Get the status of a deployment job.
job_id (str) – unique ID of the deployment job
status of the deployment job
dict
Example:
job_status=client.deployments.get_job_status(job_id)
Get the unique ID of a deployment job.
Deprecated: Use get_job_id(job_details) instead.
job_details (dict) – metadata of the deployment job
unique ID of the deployment job
str
Example:
job_details = client.deployments.get_job_details(job_uid=job_uid)
job_uid = client.deployments.get_job_uid(job_details)
Get scoring URL from deployment details.
deployment_details (dict) – metadata of the deployment
scoring endpoint URL that is used to make scoring requests
str
Example:
scoring_href=client.deployments.get_scoring_href(deployment)
Get serving URL from the deployment details.
deployment_details (dict) – metadata of the deployment
serving endpoint URL that is used to make scoring requests
str
Example:
serving_href = client.deployments.get_serving_href(deployment)
Get deployment_uid from the deployment details.
Deprecated: Use get_id(deployment_details) instead.
deployment_details (dict) – metadata of the deployment
deployment UID that is used to manage the deployment
str
Example:
deployment_uid=client.deployments.get_uid(deployment)
Check if the serving name is available for use.
serving_name (str) – serving name that filters deployments
information about whether the serving name is available
bool
Example:
is_available=client.deployments.is_serving_name_available('test')
Returns deployments in a table format.
limit (int,optional) – limit number of fetched records
artifact_type (str,optional) – return only deployments with the specified artifact_type
pandas.DataFrame with the listed deployments
pandas.DataFrame
Example:
client.deployments.list()
Return the async deployment jobs in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed deployment jobs
pandas.DataFrame
Note
This method lists only async deployment jobs created for WML deployments.
Example:
client.deployments.list_jobs()
Execute an AI service by providing a scoring payload.
deployment_id (str) – unique ID of the deployment
ai_service_payload (dict) – AI service payload to be passed to the generate method
path_suffix (str,optional) – path suffix to be appended to the scoring url, defaults to None
response of the AI service
Any
Note
By executing this class method, a POST request is performed.
In case of a method not allowed error, try sending requests directly to your deployed AI service.
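If a direct HTTP request is needed as the note suggests, the request could be formed roughly as below. The host, deployment ID, endpoint path, and payload shape here are illustrative assumptions, not values taken from this reference:

```python
# Illustrative values only -- substitute your region host, deployment ID, and token.
host = "https://us-south.ml.cloud.ibm.com"
deployment_id = "your-deployment-id"
path_suffix = "chat"  # appended the same way run_ai_service's path_suffix would be

# Assumed endpoint shape for an AI service deployment.
url = f"{host}/ml/v4/deployments/{deployment_id}/ai_service/{path_suffix}"

# A hypothetical payload; the structure depends on your AI service source code.
payload = {"messages": [{"role": "user", "content": "Hello"}]}

# import requests
# response = requests.post(url, json=payload,
#                          headers={"Authorization": f"Bearer {token}"})
```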
Execute an AI service by providing a scoring payload.
deployment_id (str) – unique ID of the deployment
ai_service_payload (dict) – AI service payload to be passed to the generate method
stream of the response of the AI service
Generator
Make scoring requests against the deployed artifact.
deployment_id (str) – unique ID of the deployment to be scored
meta_props (dict) – meta props for scoring; use client.deployments.ScoringMetaNames.show() to view the list of ScoringMetaNames
transaction_id (str,optional) – transaction ID to be passed with the records during payload logging
scoring result that contains prediction and probability
dict
Note
client.deployments.ScoringMetaNames.INPUT_DATA is the only metaname valid for sync scoring.
The valid payloads for scoring input are lists of values, pandas DataFrames, or numpy arrays.
Example:
scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        'fields': ['GENDER', 'AGE', 'MARITAL_STATUS', 'PROFESSION'],
        'values': [['M', 23, 'Single', 'Student'], ['M', 55, 'Single', 'Executive']]
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
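Since pandas DataFrames are also valid scoring input, the fields/values form of INPUT_DATA can be derived from a DataFrame; a sketch using the same records as the example above (the conversion shown here is an illustration, not the SDK's internal behavior):

```python
import pandas as pd

# Same records as the scoring example, held in a DataFrame.
df = pd.DataFrame(
    [["M", 23, "Single", "Student"], ["M", 55, "Single", "Executive"]],
    columns=["GENDER", "AGE", "MARITAL_STATUS", "PROFESSION"],
)

# Build the fields/values structure that INPUT_DATA expects.
input_data = {"fields": df.columns.tolist(), "values": df.values.tolist()}
```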
Updates existing deployment metadata. If ASSET is patched, the 'id' field is mandatory and a deployment is started with the provided asset id/rev. The deployment ID remains the same.
deployment_id (str) – unique ID of deployment to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
background_mode (bool,optional) – indicator whether the update() method will run in the background (async) or not (sync), defaults to False
metadata of the updated deployment
dict or None
Examples
metadata = {client.deployments.ConfigurationMetaNames.NAME: "updated_Deployment"}
updated_deployment_details = client.deployments.update(deployment_id, changes=metadata)

metadata = {client.deployments.ConfigurationMetaNames.ASSET: {"id": "ca0cd864-4582-4732-b365-3165598dc945", "rev": "2"}}
deployment_details = client.deployments.update(deployment_id, changes=metadata)
Set of MetaNames for Deployments Specs.
Available MetaNames:
MetaName | Type | Required
TAGS | list | N
NAME | str | N
DESCRIPTION | str | N
CUSTOM | dict | N
ASSET | dict | N
PROMPT_TEMPLATE | dict | N
HARDWARE_SPEC | dict | N
HARDWARE_REQUEST | dict | N
HYBRID_PIPELINE_HARDWARE_SPECS | list | N
ONLINE | dict | N
BATCH | dict | N
DETACHED | dict | N
R_SHINY | dict | N
VIRTUAL | dict | N
OWNER | str | N
BASE_MODEL_ID | str | N
BASE_DEPLOYMENT_ID | str | N
PROMPT_VARIABLES | dict | N
Allowable values of R_Shiny authentication.
Set of MetaNames for Scoring.
Available MetaNames:
MetaName | Type | Required
NAME | str | N
INPUT_DATA | list | N
INPUT_DATA_REFERENCES | list | N
OUTPUT_DATA_REFERENCE | dict | N
EVALUATIONS_SPEC | list | N
ENVIRONMENT_VARIABLES | dict | N
SCORING_PARAMETERS | dict | N
Set of MetaNames for Decision Optimization.
Available MetaNames:
MetaName | Type | Required
INPUT_DATA | list | N
INPUT_DATA_REFERENCES | list | N
OUTPUT_DATA | list | N
OUTPUT_DATA_REFERENCES | list | N
SOLVE_PARAMETERS | dict | N
Class included to keep the interface compatible with the Deployment's RuntimeContext used in the AIServices implementation.
api_client (APIClient) – initialized APIClient object with a set project ID or space ID. If passed, credentials and project_id/space_id are not required.
request_payload_json (dict,optional) – Request payload for testing of generate/ generate_stream call of AI Service.
method (str,optional) – HTTP request method for testing of generate/ generate_stream call of AI Service.
path (str,optional) – Request endpoint path for testing of generate/ generate_stream call of AI Service.
RuntimeContext initialized for testing purposes before deployment:
context=RuntimeContext(api_client=client,request_payload_json={"field":"value"})
Examples of RuntimeContext usage within AI Service source code:
def deployable_ai_service(context, **custom):
    task_token = context.generate_token()

    def generate(context) -> dict:
        user_token = context.get_token()
        headers = context.get_headers()
        json_body = context.get_json()
        ...
        return {"body": json_body}

    return generate

generate = deployable_ai_service(context)
generate_output = generate(context)  # returns {"body": {"field": "value"}}
Change the JSON body in RuntimeContext:
context.request_payload_json = {"field2": "value2"}
generate = deployable_ai_service(context)
generate_output = generate(context)  # returns {"body": {"field2": "value2"}}
Cancel an export job. space_id or project_id has to be provided.
Note
To delete an export_id job, use the delete() API.
export_id (str) – export job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
status (“SUCCESS” or “FAILED”)
str
Example:
client.export_assets.cancel(export_id='6213cf1-252f-424b-b52d-5cdd9814956c',space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
Delete the given export_id job. space_id or project_id has to be provided.
export_id (str) – export job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
status (“SUCCESS” or “FAILED”)
str
Example:
client.export_assets.delete(export_id='6213cf1-252f-424b-b52d-5cdd9814956c',space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
Get metadata of a given export job. If no export_id is specified, all export metadata is returned.
export_id (str,optional) – export job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
limit (int,optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in 'limited' chunks
export metadata
dict (if export_id is not None) or {“resources”: [dict]} (if export_id is None)
Example:
details = client.export_assets.get_details(export_id, space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
details = client.export_assets.get_details()
details = client.export_assets.get_details(limit=100)
details = client.export_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.export_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
Get the exported content as a zip file.
export_id (str) – export job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
file_path (str, optional) – name of the local file to create; this should be the absolute path of the file, and the file shouldn't exist
path to the downloaded exported content
str
Example:
client.export_assets.get_exported_content(export_id,space_id='98a53931-a8c0-4c2f-8319-c793155e4598',file_path='/home/user/my_exported_content.zip')
Get the ID of the export job from export details.
export_details (dict) – metadata of the export job
ID of the export job
str
Example:
id=client.export_assets.get_id(export_details)
Return export jobs in a table format.
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed connections
pandas.DataFrame
Example:
client.export_assets.list()
Start the export. You must provide the space_id or the project_id. ALL_ASSETS is False by default; you don't need to provide it unless it is set to True. You must provide exactly one of the following in the meta_props: ALL_ASSETS, ASSET_TYPES, or ASSET_IDS.
In the meta_props:
ALL_ASSETS is a boolean. When set to True, it exports all assets in the given space.
ASSET_IDS is an array that contains the list of asset IDs to be exported.
ASSET_TYPES is used to provide the asset types to be exported. All assets of that asset type will be exported.
Eg: wml_model, wml_model_definition, wml_pipeline, wml_function, wml_experiment, software_specification, hardware_specification, package_extension, script
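The constraint that exactly one of ALL_ASSETS, ASSET_TYPES, or ASSET_IDS must be provided can be sketched as a small check. Plain string keys stand in for the ConfigurationMetaNames constants here, so this is an illustration of the rule rather than SDK code:

```python
def validate_export_selector(meta_props: dict) -> None:
    # Exactly one of these selectors must be present in meta_props.
    selectors = ("ALL_ASSETS", "ASSET_TYPES", "ASSET_IDS")
    provided = [key for key in selectors if key in meta_props]
    if len(provided) != 1:
        raise ValueError(f"Provide exactly one of {selectors}, got {provided}")

# One selector present: passes silently.
validate_export_selector({"NAME": "export_model", "ASSET_TYPES": ["wml_model"]})
```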
meta_props (dict) – metadata; to see available meta names, use client.export_assets.ConfigurationMetaNames.get()
space_id (str, optional) – space identifier
project_id (str, optional) – project identifier
Response json
dict
Example:
metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_IDS: ["13a53931-a8c0-4c2f-8319-c793155e7517", "13a53931-a8c0-4c2f-8319-c793155e7518"]
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ASSET_TYPES: ["wml_model"]
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")

metadata = {
    client.export_assets.ConfigurationMetaNames.NAME: "export_model",
    client.export_assets.ConfigurationMetaNames.ALL_ASSETS: True
}
details = client.export_assets.start(meta_props=metadata, space_id="98a53931-a8c0-4c2f-8319-c793155e4598")
Cancel an import job. You must provide the space_id or the project_id.
Note
To delete an import_id job, use the delete() API.
import_id (str) – import job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
Example:
client.import_assets.cancel(import_id='6213cf1-252f-424b-b52d-5cdd9814956c',space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
Deletes the given import_id job. You must provide the space_id or the project_id.
import_id (str) – import job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
Example:
client.import_assets.delete(import_id='6213cf1-252f-424b-b52d-5cdd9814956c',space_id='98a53931-a8c0-4c2f-8319-c793155e4598')
Get metadata of the given import job. If no import_id is specified, all import metadata is returned.
import_id (str, optional) – import job identifier
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
limit (int,optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it will work as a generator
get_all (bool, optional) – if True, it will get all entries in 'limited' chunks
import(s) metadata
dict (if import_id is not None) or {“resources”: [dict]} (if import_id is None)
Example:
details = client.import_assets.get_details(import_id)
details = client.import_assets.get_details()
details = client.import_assets.get_details(limit=100)
details = client.import_assets.get_details(limit=100, get_all=True)
details = []
for entry in client.import_assets.get_details(limit=100, asynchronous=True, get_all=True):
    details.extend(entry)
Get ID of the import job from import details.
import_details (dict) – metadata of the import job
ID of the import job
str
Example:
id=client.import_assets.get_id(import_details)
Return import jobs in a table format.
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed assets
pandas.DataFrame
Example:
client.import_assets.list()
Start the import. You must provide the space_id or the project_id.
file_path (str) – file path to the zip file with exported assets
space_id (str,optional) – space identifier
project_id (str,optional) – project identifier
response json
dict
Example:
details=client.import_assets.start(space_id="98a53931-a8c0-4c2f-8319-c793155e4598",file_path="/home/user/data_to_be_imported.zip")
Warning! Not supported for IBM Cloud Pak® for Data.
Link WML Model to Model Entry.
Return all WKC Model Entry assets for a catalog.
catalog_id (str,optional) – catalog ID where you want to register model. If no catalog_id is provided, WKC Model Entry assets from all catalogs are listed.
all WKC Model Entry assets for a catalog
dict
Example:
model_entries=client.factsheets.list_model_entries(catalog_id)
Link WML Model to Model Entry
model_id (str) – ID of the published model/asset
meta_props (dict[str,str]) –
metaprops, to see the available list of meta names use:
client.factsheets.ConfigurationMetaNames.get()
catalog_id (str,optional) – catalog ID where you want to register model
metadata of the registration
dict
Example:
meta_props = {client.factsheets.ConfigurationMetaNames.ASSET_ID: '83a53931-a8c0-4c2f-8319-c793155e7517'}
registration_details = client.factsheets.register_model_entry(model_id, catalog_id, meta_props)
or
meta_props = {
    client.factsheets.ConfigurationMetaNames.NAME: "New model entry",
    client.factsheets.ConfigurationMetaNames.DESCRIPTION: "New model entry"
}
registration_details = client.factsheets.register_model_entry(model_id, meta_props)
Unregister WKC Model Entry
asset_id (str) – ID of the WKC model entry
catalog_id (str, optional) – catalog ID where the asset is stored; when not provided, the default client space or project is used
Example:
model_entries=client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517',catalog_id='34553931-a8c0-4c2f-8319-c793155e7517')
or
client.set.default_space('98f53931-a8c0-4c2f-8319-c793155e7517')
model_entries = client.factsheets.unregister_model_entry(asset_id='83a53931-a8c0-4c2f-8319-c793155e7517')
Store and manage hardware specs.
MetaNames for Hardware Specification.
Delete a hardware specification.
hw_spec_id (str) – unique ID of the hardware specification to be deleted
status (“SUCCESS” or “FAILED”)
str
Get hardware specification details.
hw_spec_id (str) – unique ID of the hardware spec
metadata of the hardware specifications
dict
Example:
hw_spec_details = client.hardware_specifications.get_details(hw_spec_id)
Get the URL of hardware specifications.
hw_spec_details (dict) – details of the hardware specifications
href of the hardware specifications
str
Example:
hw_spec_details = client.hw_spec.get_details(hw_spec_id)
hw_spec_href = client.hw_spec.get_href(hw_spec_details)
Get the ID of a hardware specifications asset.
hw_spec_details (dict) – metadata of the hardware specifications
unique ID of the hardware specifications
str
Example:
asset_id=client.hardware_specifications.get_id(hw_spec_details)
Get the unique ID of a hardware specification for the given name.
hw_spec_name (str) – name of the hardware specification
unique ID of the hardware specification
str
Example:
asset_id=client.hardware_specifications.get_id_by_name(hw_spec_name)
Get the UID of a hardware specifications asset.
Deprecated: Use get_id(hw_spec_details) instead.
hw_spec_details (dict) – metadata of the hardware specifications
unique ID of the hardware specifications
str
Example:
asset_uid=client.hardware_specifications.get_uid(hw_spec_details)
Get the unique ID of a hardware specification for the given name.
Deprecated: Use get_id_by_name(hw_spec_name) instead.
hw_spec_name (str) – name of the hardware specification
unique ID of the hardware specification
str
Example:
asset_uid=client.hardware_specifications.get_uid_by_name(hw_spec_name)
List hardware specifications in a table format.
name (str, optional) – name of the hardware spec
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed hardware specifications
pandas.DataFrame
Example:
client.hardware_specifications.list()
Create a hardware specification.
meta_props (dict) –
metadata of the hardware specification configuration. To see available meta names, use:
client.hardware_specifications.ConfigurationMetaNames.get()
metadata of the created hardware specification
dict
Example:
meta_props = {
    client.hardware_specifications.ConfigurationMetaNames.NAME: "custom hardware specification",
    client.hardware_specifications.ConfigurationMetaNames.DESCRIPTION: "Custom hardware specification created with SDK",
    client.hardware_specifications.ConfigurationMetaNames.NODES: {"cpu": {"units": "2"}, "mem": {"size": "128Gi"}, "gpu": {"num_gpu": 1}}
}
client.hardware_specifications.store(meta_props)
Load credentials from the config file.
[DEV_LC]
credentials = {}
cos_credentials = {}
env_name (str) – name of [ENV] defined in the config file
credentials_name (str) – name of credentials
config_path (str) – path to the config file
loaded credentials
dict
Example:
get_credentials_from_config(env_name='DEV_LC',credentials_name='credentials')
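A rough sketch of what loading such a config section could involve, using configparser and ast.literal_eval. The actual helper's parsing details are not documented here, so treat this only as an illustration of the [ENV] file format shown above:

```python
import ast
import configparser
import os
import tempfile

# A config file in the [ENV] format shown above, with a dict literal per key.
config_text = '[DEV_LC]\ncredentials = {"url": "https://example.com", "apikey": "***"}\n'

path = os.path.join(tempfile.mkdtemp(), "config.ini")
with open(path, "w") as f:
    f.write(config_text)

parser = configparser.ConfigParser()
parser.read(path)
# Each credentials entry is a dict literal under its environment section.
credentials = ast.literal_eval(parser["DEV_LC"]["credentials"])
```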
Store and manage model definitions.
MetaNames for model definition creation.
Create a revision for the given model definition. Revisions are immutable once created. The metadata and attachment of the model definition is taken and a revision is created out of it.
model_definition_id (str) – ID of the model definition
revised metadata of the stored model definition
dict
Example:
model_definition_revision=client.model_definitions.create_revision(model_definition_id)
Delete a stored model definition.
model_definition_id (str) – unique ID of the stored model definition
status (“SUCCESS” or “FAILED”)
str
Example:
client.model_definitions.delete(model_definition_id)
Download the content of a model definition asset.
model_definition_id (str) – unique ID of the model definition asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str,optional) – revision ID
path to the downloaded asset content
str
Example:
client.model_definitions.download(model_definition_id,"model_definition_file")
Get metadata of a stored model definition. If no model_definition_id is passed, details for all model definitions are returned.
model_definition_id (str,optional) – unique ID of the model definition
limit (int,optional) – limit number of fetched records
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
metadata of model definition
dict (if model_definition_id is not None)
Example:
model_definition_details = client.model_definitions.get_details(model_definition_id)
Get the href of a stored model definition.
model_definition_details (dict) – details of the stored model definition
href of the stored model definition
str
Example:
model_definition_id=client.model_definitions.get_href(model_definition_details)
Get the unique ID of a stored model definition asset.
model_definition_details (dict) – metadata of the stored model definition asset
unique ID of the stored model definition asset
str
Example:
asset_id = client.model_definitions.get_id(model_definition_details)
Get metadata of a model definition.
model_definition_id (str) – ID of the model definition
rev_id (str,optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
metadata of the stored model definition
dict
Example:
script_details=client.model_definitions.get_revision_details(model_definition_id,rev_id)
Get the UID of the stored model.
Deprecated: Use get_id(model_definition_details) instead.
model_definition_details (dict) – details of the stored model definition
UID of the stored model definition
str
Example:
model_definition_uid=client.model_definitions.get_uid(model_definition_details)
Return the stored model definition assets in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed model definitions
pandas.DataFrame
Example:
client.model_definitions.list()
Return the revisions of a stored model definition asset in a table format.
model_definition_id (str) – unique ID of the model definition
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed model definitions
pandas.DataFrame
Example:
client.model_definitions.list_revisions(model_definition_id)
Create a model definition.
meta_props (dict) –
metadata of the model definition configuration. To see available meta names, use:
client.model_definitions.ConfigurationMetaNames.get()
model_definition (str) – path to the content file to be uploaded
metadata of the created model definition
dict
Example:
client.model_definitions.store(model_definition,meta_props)
Update the model definition with metadata, attachment, or both.
model_definition_id (str) – ID of the model definition
meta_props (dict) – metadata of the model definition configuration to be updated
file_path (str,optional) – path to the content file to be uploaded
updated metadata of the model definition
dict
Example:
model_definition_details=client.model_definition.update(model_definition_id,meta_props,file_path)
Set of MetaNames for Model Definition.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
PLATFORM | dict | Y
VERSION | str | Y
COMMAND | str | N
CUSTOM | dict | N
SPACE_UID | str | N
Store and manage software package extension specs.
MetaNames for Package Extensions creation.
Delete a package extension.
pkg_extn_id (str) – unique ID of the package extension
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
str or dict
Example:
client.package_extensions.delete(pkg_extn_id)
Download a package extension.
pkg_extn_id (str) – unique ID of the package extension to be downloaded
filename (str) – filename to be used for the downloaded file
path to the downloaded package extension content
str
Example:
client.package_extensions.download(pkg_extn_id,"sample_conda.yml/custom_library.zip")
Get package extensions details.
pkg_extn_id (str) – unique ID of the package extension
details of the package extension
dict
Example:
pkg_extn_details=client.pkg_extn.get_details(pkg_extn_id)
Get the URL of a stored package extension.
pkg_extn_details (dict) – details of the package extension
href of the package extension
str
Example:
pkg_extn_details = client.package_extensions.get_details(pkg_extn_id)
pkg_extn_href = client.package_extensions.get_href(pkg_extn_details)
Get the unique ID of a package extension.
pkg_extn_details (dict) – details of the package extension
unique ID of the package extension
str
Example:
asset_id=client.package_extensions.get_id(pkg_extn_details)
Get the unique ID of a package extension for the given name.
pkg_extn_name (str) – name of the package extension
unique ID of the package extension
str
Example:
asset_id=client.package_extensions.get_id_by_name(pkg_extn_name)
List the package extensions in a table format.
pandas.DataFrame with listed package extensions
pandas.DataFrame
Example:
client.package_extensions.list()
Create a package extension.
meta_props (dict) –
metadata of the package extension. To see available meta names, use:
client.package_extensions.ConfigurationMetaNames.get()
file_path (str) – path to the file to be uploaded as a package extension
metadata of the package extension
dict
Example:
meta_props = {
    client.package_extensions.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_props, file_path="/path/to/file")
Store and manage parameter sets.
MetaNames for Parameter Sets creation.
Create a parameter set.
meta_props (dict) –
metadata of the parameter set configuration. To see available meta names, use:
client.parameter_sets.ConfigurationMetaNames.get()
metadata of the stored parameter set
dict
Example:
meta_props = {
    client.parameter_sets.ConfigurationMetaNames.NAME: "Example name",
    client.parameter_sets.ConfigurationMetaNames.DESCRIPTION: "Example description",
    client.parameter_sets.ConfigurationMetaNames.PARAMETERS: [
        {
            "name": "string",
            "description": "string",
            "prompt": "string",
            "type": "string",
            "subtype": "string",
            "value": "string",
            "valid_values": ["string"]
        }
    ],
    client.parameter_sets.ConfigurationMetaNames.VALUE_SETS: [
        {
            "name": "string",
            "values": [
                {"name": "string", "value": "string"}
            ]
        }
    ]
}
parameter_sets_details = client.parameter_sets.create(meta_props)
Delete a parameter set.
parameter_set_id (str) – unique ID of the parameter set
status (“SUCCESS” or “FAILED”)
str
Example:
client.parameter_sets.delete(parameter_set_id)
Get parameter set details. If no parameter_set_id is passed, details for all parameter sets are returned.
parameter_set_id (str, optional) – ID of the parameter set
metadata of the stored parameter set(s)
dict - if parameter_set_id is not None
{"parameter_sets": [dict]} - if parameter_set_id is None
Examples
If parameter_set_id is None:
parameter_sets_details=client.parameter_sets.get_details()
If parameter_set_id is given:
parameter_sets_details=client.parameter_sets.get_details(parameter_set_id)
Get the unique ID of a parameter set.
parameter_set_name (str) – name of the parameter set
unique ID of the parameter set
str
Example:
asset_id=client.parameter_sets.get_id_by_name(parameter_set_name)
List parameter sets in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed parameter sets
pandas.DataFrame
Example:
client.parameter_sets.list()
Update parameter sets.
parameter_set_id (str) – unique ID of the parameter sets
new_data (str,list) – new data for parameters
file_path (str) – path to update
metadata of the updated parameter sets
dict
Example for description
new_description_data = "New description"
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_description_data, "description")
Example for parameters
new_parameters_data = [
    {
        "name": "string",
        "description": "new_description",
        "prompt": "new_string",
        "type": "new_string",
        "subtype": "new_string",
        "value": "new_string",
        "valid_values": ["new_string"]
    }
]
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_parameters_data, "parameters")
Example for value_sets
new_value_sets_data = [
    {
        "name": "string",
        "values": [
            {"name": "string", "value": "new_string"}
        ]
    }
]
parameter_set_details = client.parameter_sets.update(parameter_set_id, new_value_sets_data, "value_sets")
Set of MetaNames for Parameter Sets.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
PARAMETERS | list | Y
VALUE_SETS | list | N
Store and manage models, functions, spaces, pipelines, and experiments using the Watson Machine Learning Repository.
To view ModelMetaNames, use:
client.repository.ModelMetaNames.show()
To view ExperimentMetaNames, use:
client.repository.ExperimentMetaNames.show()
To view FunctionMetaNames, use:
client.repository.FunctionMetaNames.show()
To view PipelineMetaNames, use:
client.repository.PipelineMetaNames.show()
To view AIServiceMetaNames, use:
client.repository.AIServiceMetaNames.show()
Data class with supported model asset types.
Create a new AI service revision.
ai_service_id (str) – unique ID of the AI service
revised metadata of the stored AI service
dict
Example:
client.repository.create_ai_service_revision(ai_service_id)
Create a new experiment revision.
experiment_id (str) – unique ID of the stored experiment
new revision details of the stored experiment
dict
Example:
experiment_revision_artifact=client.repository.create_experiment_revision(experiment_id)
Create a new function revision.
function_id (str) – unique ID of the function
revised metadata of the stored function
dict
Example:
client.repository.create_function_revision(function_id)
Create a revision for a given model ID.
model_id (str) – ID of the stored model
revised metadata of the stored model
dict
Example:
model_details=client.repository.create_model_revision(model_id)
Create a new pipeline revision.
pipeline_id (str) – unique ID of the pipeline
details of the pipeline revision
dict
Example:
client.repository.create_pipeline_revision(pipeline_id)
Create a revision for the passed artifact_id.
artifact_id (str) – unique ID of a stored model, experiment, function, or pipeline
artifact new revision metadata
dict
Example:
details=client.repository.create_revision(artifact_id)
Delete a model, experiment, pipeline, function, or AI service from the repository.
artifact_id (str) – unique ID of the stored model, experiment, function, pipeline, or AI service
status “SUCCESS” if deletion is successful
Literal[“SUCCESS”]
Example:
client.repository.delete(artifact_id)
Download the configuration file for an artifact with the specified ID.
artifact_id (str) – unique ID of the model or function
filename (str, optional) – name of the file to which the artifact content will be downloaded
rev_id (str, optional) – revision ID
format (str, optional) – format of the content, applicable for models
path to the downloaded artifact content
str
Examples
client.repository.download(model_id, 'my_model.tar.gz')
client.repository.download(model_id, 'my_model.json')  # if the original model was saved as json; works only for xgboost 1.3
Get the metadata of AI service(s). If neither the AI service ID nor the AI service name is specified, all AI service metadata is returned. If only the AI service name is specified, metadata of AI services with that name is returned (if any).
ai_service_id (str, optional) – ID of the AI service
limit (int | None, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator; defaults to False
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks; defaults to False
spec_state (SpecStates | None, optional) – software specification state, can be used only when ai_service_id is None
ai_service_name (str, optional) – name of the AI service, can be used only when ai_service_id is None
metadata of the AI service
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state=True might break the set limit and return fewer records than stated in the set limit.
Examples:
ai_service_details = client.repository.get_ai_service_details(ai_service_id)
ai_service_details = client.repository.get_ai_service_details(ai_service_name)
ai_service_details = client.repository.get_ai_service_details()
ai_service_details = client.repository.get_ai_service_details(limit=100)
ai_service_details = client.repository.get_ai_service_details(limit=100, get_all=True)
ai_service_details = []
for entry in client.repository.get_ai_service_details(limit=100, asynchronous=True, get_all=True):
    ai_service_details.extend(entry)
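The asynchronous / get_all combination consumes records in 'limited' chunks. The pattern can be sketched with a plain Python generator; `fetch_in_chunks` below is a hypothetical stand-in for the service call, not part of the SDK:

```python
def fetch_in_chunks(records, chunk_size):
    # Yield records in 'limited' chunks, mimicking asynchronous=True with get_all=True.
    for start in range(0, len(records), chunk_size):
        yield records[start:start + chunk_size]

# Consume chunk by chunk, extending one flat list -- the same shape as the
# for-loop in the SDK example above.
all_details = []
for entry in fetch_in_chunks([{"id": i} for i in range(250)], 100):
    all_details.extend(entry)
```

With 250 records and a chunk size of 100, three chunks (100, 100, 50) are produced before the flat list is complete.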
Get the ID of a stored AI service.
ai_service_details (dict) – metadata of the stored AI service
ID of the stored AI service
str
Example:
ai_service_details = client.repository.get_ai_service_details(ai_service_id)
ai_service_id = client.repository.get_ai_service_id(ai_service_details)
Get the metadata of a specific revision of a stored AI service.
ai_service_id (str) – unique ID of the stored AI service
rev_id (str) – unique ID of the AI service revision
metadata of the stored AI service revision
dict
Example:
ai_service_revision_details = client.repository.get_ai_service_revision_details(ai_service_id, rev_id)
Get metadata of stored artifacts. If artifact_id and artifact_name are not specified, the metadata of all models, experiments, functions, pipelines, and AI services is returned. If only artifact_name is specified, metadata of all artifacts with that name is returned.
artifact_id (str,optional) – unique ID of the stored model, experiment, function, or pipeline
spec_state (SpecStates, optional) – software specification state, can be used only when artifact_id is None
artifact_name (str, optional) – name of the stored model, experiment, function, pipeline, or AI service, can be used only when artifact_id is None
metadata of the stored artifact(s)
dict (if artifact_id is not None)
{“models”: dict, “experiments”: dict, “pipeline”: dict, “functions”: dict, “ai_service”: dict} (if artifact_id is None)
Examples
details = client.repository.get_details(artifact_id)
details = client.repository.get_details(artifact_name='Sample_model')
details = client.repository.get_details()
Example of getting all repository assets with deprecated software specifications:
from ibm_watsonx_ai.lifecycle import SpecStates

details = client.repository.get_details(spec_state=SpecStates.DEPRECATED)
Get metadata of the experiment(s). If neither the experiment ID nor the experiment name is specified, all experiment metadata is returned. If only the experiment name is specified, metadata of experiments with that name is returned (if any).
experiment_id (str,optional) – ID of the experiment
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks
experiment_name (str, optional) – name of the experiment, can be used only when experiment_id is None
experiment metadata
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_details = client.repository.get_experiment_details(experiment_name='Sample_experiment')
experiment_details = client.repository.get_experiment_details()
experiment_details = client.repository.get_experiment_details(limit=100)
experiment_details = client.repository.get_experiment_details(limit=100, get_all=True)
experiment_details = []
for entry in client.repository.get_experiment_details(limit=100, asynchronous=True, get_all=True):
    experiment_details.extend(entry)
Get the href of a stored experiment.
experiment_details (dict) – metadata of the stored experiment
href of the stored experiment
str
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_href = client.repository.get_experiment_href(experiment_details)
Get the unique ID of a stored experiment.
experiment_details (dict) – metadata of the stored experiment
unique ID of the stored experiment
str
Example:
experiment_details = client.repository.get_experiment_details(experiment_id)
experiment_id = client.repository.get_experiment_id(experiment_details)
Get metadata of a stored experiment revision.
experiment_id (str) – ID of the stored experiment
rev_id (str) – revision ID of the stored experiment
revision metadata of the stored experiment
dict
Example:
experiment_details = client.repository.get_experiment_revision_details(experiment_id, rev_id)
Get metadata of function(s). If neither the function ID nor the function name is specified, the metadata of all functions is returned. If only the function name is specified, metadata of functions with that name is returned (if any).
function_id (str, optional) – ID of the function
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks
spec_state (SpecStates, optional) – software specification state, can be used only when function_id is None
function_name (str, optional) – name of the function, can be used only when function_id is None
metadata of the function
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state=True may break the set limit, returning fewer records than stated by the limit.
Examples
function_details = client.repository.get_function_details(function_id)
function_details = client.repository.get_function_details(function_name='Sample_function')
function_details = client.repository.get_function_details()
function_details = client.repository.get_function_details(limit=100)
function_details = client.repository.get_function_details(limit=100, get_all=True)
function_details = []
for entry in client.repository.get_function_details(limit=100, asynchronous=True, get_all=True):
    function_details.extend(entry)
Get the URL of a stored function.
function_details (dict) – details of the stored function
href of the stored function
str
Example:
function_details = client.repository.get_function_details(function_id)
function_url = client.repository.get_function_href(function_details)
Get ID of stored function.
function_details (dict) – metadata of the stored function
ID of stored function
str
Example:
function_details = client.repository.get_function_details(function_id)
function_id = client.repository.get_function_id(function_details)
Get metadata of a specific revision of a stored function.
function_id (str) – unique ID of the stored function
rev_id (str) – unique ID of the function revision
stored function revision metadata
dict
Example:
function_revision_details = client.repository.get_function_revision_details(function_id, rev_id)
Get the ID of a stored artifact by name.
artifact_name (str) – name of the stored artifact
ID of the stored artifact if exactly one artifact with artifact_name exists; otherwise, an error is raised
str
Example:
artifact_id = client.repository.get_id_by_name(artifact_name)
Get metadata of stored models. If neither the model ID nor the model name is specified, the metadata of all models is returned. If only the model name is specified, metadata of models with that name is returned (if any).
model_id (str,optional) – ID of the stored model, definition, or pipeline
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks
spec_state (SpecStates, optional) – software specification state, can be used only when model_id is None
model_name (str, optional) – name of the stored model, definition, or pipeline, can be used only when model_id is None
metadata of the stored model(s)
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Note
In the current implementation, setting spec_state may break the set limit, returning fewer records than stated by the limit.
Example:
model_details = client.repository.get_model_details(model_id)
models_details = client.repository.get_model_details(model_name='Sample_model')
models_details = client.repository.get_model_details()
models_details = client.repository.get_model_details(limit=100)
models_details = client.repository.get_model_details(limit=100, get_all=True)
models_details = []
for entry in client.repository.get_model_details(limit=100, asynchronous=True, get_all=True):
    models_details.extend(entry)
Get the URL of a stored model.
model_details (dict) – details of the stored model
URL of the stored model
str
Example:
model_url = client.repository.get_model_href(model_details)
Get the ID of a stored model.
model_details (dict) – details of the stored model
ID of the stored model
str
Example:
model_id = client.repository.get_model_id(model_details)
Get metadata of a stored model’s specific revision.
model_id (str) – ID of the stored model, definition, or pipeline
rev_id (str) – unique ID of the stored model revision
metadata of the stored model(s)
dict
Example:
model_details = client.repository.get_model_revision_details(model_id, rev_id)
Get metadata of stored pipeline(s). If neither the pipeline ID nor the pipeline name is specified, the metadata of all pipelines is returned. If only the pipeline name is specified, metadata of pipelines with that name is returned (if any).
pipeline_id (str,optional) – ID of the pipeline
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks
pipeline_name (str, optional) – name of the pipeline, can be used only when pipeline_id is None
metadata of pipeline(s)
dict (if ID is not None) or {“resources”: [dict]} (if ID is None)
Example:
pipeline_details = client.repository.get_pipeline_details(pipeline_id)
pipeline_details = client.repository.get_pipeline_details(pipeline_name='Sample_pipeline')
pipeline_details = client.repository.get_pipeline_details()
pipeline_details = client.repository.get_pipeline_details(limit=100)
pipeline_details = client.repository.get_pipeline_details(limit=100, get_all=True)
pipeline_details = []
for entry in client.repository.get_pipeline_details(limit=100, asynchronous=True, get_all=True):
    pipeline_details.extend(entry)
Get the href from pipeline details.
pipeline_details (dict) – metadata of the stored pipeline
href of the pipeline
str
Example:
pipeline_details = client.repository.get_pipeline_details(pipeline_id)
pipeline_href = client.repository.get_pipeline_href(pipeline_details)
Get the pipeline ID from pipeline details.
pipeline_details (dict) – metadata of the stored pipeline
unique ID of the pipeline
str
Example:
pipeline_id = client.repository.get_pipeline_id(pipeline_details)
Get metadata of a pipeline revision.
pipeline_id (str) – ID of the stored pipeline
rev_id (str) – revision ID of the stored pipeline
revised metadata of the stored pipeline
dict
Example:
pipeline_details = client.repository.get_pipeline_revision_details(pipeline_id, rev_id)
Note
The rev_id parameter is not applicable on the Cloud platform.
Get and list stored models, pipelines, functions, experiments, and AI services in a table/DataFrame format.
framework_filter (str,optional) – get only the frameworks with the desired names
DataFrame with listed names and IDs of stored models
pandas.DataFrame
Example:
client.repository.list()
client.repository.list(framework_filter='prompt_tune')
Print all revisions for a given AI service ID in a table format.
ai_service_id (str) – unique ID of the stored AI service
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.repository.list_ai_service_revisions(ai_service_id)
Return stored AI services in a table format.
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed AI services
pandas.DataFrame
Example:
client.repository.list_ai_services()
List stored experiments in a table format.
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed experiments
pandas.DataFrame
Example:
client.repository.list_experiments()
Print all revisions for a given experiment ID in a table format.
experiment_id (str) – unique ID of the stored experiment
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.repository.list_experiments_revisions(experiment_id)
Return stored functions in a table format.
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed functions
pandas.DataFrame
Example:
client.repository.list_functions()
Print all revisions for a given function ID in a table format.
function_id (str) – unique ID of the stored function
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.repository.list_functions_revisions(function_id)
List stored models in a table format.
limit (int, optional) – limit number of fetched records
asynchronous (bool, optional) – if True, it works as a generator
get_all (bool, optional) – if True, it gets all entries in 'limited' chunks
pandas.DataFrame with listed models, or a generator if asynchronous is set to True
pandas.DataFrame | Generator
Example:
client.repository.list_models()
client.repository.list_models(limit=100)
client.repository.list_models(limit=100, get_all=True)
[entry for entry in client.repository.list_models(limit=100, asynchronous=True, get_all=True)]
Print all revisions for the given model ID in a table format.
model_id (str) – unique ID of the stored model
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.repository.list_models_revisions(model_id)
List stored pipelines in a table format.
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed pipelines
pandas.DataFrame
Example:
client.repository.list_pipelines()
List all revisions for a given pipeline ID in a table format.
pipeline_id (str) – unique ID of the stored pipeline
limit (int, optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.repository.list_pipelines_revisions(pipeline_id)
Load a model from the repository into an object in a local environment.
Note
The use of the load() method is restricted and not permitted for AutoAI models.
artifact_id (str) – ID of the stored model
trained model
object
Example
model = client.repository.load(model_id)
Promote a model from a project to a space. Supported only for IBM Cloud Pak® for Data.
Deprecated: Use client.spaces.promote(asset_id, source_project_id, target_space_id) instead.
Create an AI service asset.
Note
Supported for IBM watsonx.ai for IBM Cloud and IBM watsonx.ai software with IBM Cloud Pak® for Data (version 5.1.1 and later).
filepath to gz file
generator function that takes no arguments, or arguments that all have primitive Python default values, and returns a generate function.
ai_service (str | Callable) – path to a file with an archived AI service function's content, or a generator function (as described above)
meta_props (dict) – metadata for storing an AI service asset. To see available meta names, use client.repository.AIServiceMetaNames.show() or refer to the AIServiceMetaNames class.
metadata of the stored AI service
dict
Examples:
The simplest use of an AI service is:
documentation_request = {
    "application/json": {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "parameters": {
                "properties": {
                    "max_new_tokens": {"type": "integer"},
                    "top_p": {"type": "number"},
                },
                "required": ["max_new_tokens", "top_p"],
            },
        },
        "required": ["query"],
    }
}

documentation_response = {
    "application/json": {
        "$schema": "http://json-schema.org/draft-07/schema#",
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "result": {"type": "string"},
        },
        "required": ["query", "result"],
    }
}

meta_props = {
    client.repository.AIServiceMetaNames.NAME: "AI service example",
    client.repository.AIServiceMetaNames.DESCRIPTION: "This is AI service function",
    client.repository.AIServiceMetaNames.SOFTWARE_SPEC_ID: "53dc4cf1-252f-424b-b52d-5cdd9814987f",
    client.repository.AIServiceMetaNames.DOCUMENTATION_REQUEST: documentation_request,
    client.repository.AIServiceMetaNames.DOCUMENTATION_RESPONSE: documentation_response,
}

def deployable_ai_service(context, params={"k1": "v1"}, **kwargs):
    # imports
    from ibm_watsonx_ai import Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference

    task_token = context.generate_token()
    outer_context = context
    url = "https://us-south.ml.cloud.ibm.com"
    project_id = "53dc4cf1-252f-424b-b52d-5cdd9814987f"

    def generate(context):
        task_token = outer_context.generate_token()
        payload = context.get_json()
        model = ModelInference(
            model_id="google/flan-t5-xl",
            credentials=Credentials(url=url, token=task_token),
            project_id=project_id,
        )
        response = model.generate_text(payload['query'])
        response_body = {'query': payload['query'], 'result': response}
        return {'body': response_body}

    return generate

stored_ai_service_details = client.repository.store_ai_service(deployable_ai_service, meta_props)
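The key shape in this example is that the outer function is called once at deployment time and returns a generate function that is called per request. A minimal pure-Python sketch of that closure pattern, with a hypothetical FakeContext standing in for the real deployment context and string uppercasing standing in for the model call (no watsonx.ai calls):

```python
class FakeContext:
    # Hypothetical stand-in for the deployment context, for illustration only.
    def __init__(self, payload):
        self._payload = payload

    def generate_token(self):
        return "fake-token"

    def get_json(self):
        return self._payload

def deployable_ai_service(context, params={"k1": "v1"}, **kwargs):
    outer_context = context  # captured by the inner function

    def generate(context):
        token = outer_context.generate_token()  # token refreshed per request
        payload = context.get_json()
        result = payload["query"].upper()       # stands in for model.generate_text(...)
        return {"body": {"query": payload["query"], "result": result}}

    return generate

# Deployment time: build the request handler once.
generate = deployable_ai_service(FakeContext({}))
# Request time: the handler is invoked with a per-request context.
response = generate(FakeContext({"query": "hello"}))
```

Here `response["body"]` carries both the echoed query and the computed result, mirroring the response_body shape in the SDK example.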
Create an experiment.
meta_props (dict) –
metadata of the experiment configuration. To see available meta names, use:
client.repository.ExperimentMetaNames.get()
metadata of the stored experiment
dict
Example:
metadata = {
    client.repository.ExperimentMetaNames.NAME: 'my_experiment',
    client.repository.ExperimentMetaNames.EVALUATION_METRICS: ['accuracy'],
    client.repository.ExperimentMetaNames.TRAINING_REFERENCES: [
        {'pipeline': {'href': pipeline_href_1}},
        {'pipeline': {'href': pipeline_href_2}}
    ]
}

experiment_details = client.repository.store_experiment(meta_props=metadata)
experiment_href = client.repository.get_experiment_href(experiment_details)
Create a function.
filepath to gz file
‘score’ function reference, where the function is the one that will be deployed
generator function that takes no arguments, or arguments that all have primitive Python default values, and returns a ‘score’ function
function (str or function) – path to a file with archived function content, or a function (as described above)
meta_props (str or dict) – metadata or name of the function. To see available meta names, use client.repository.FunctionMetaNames.show()
stored function metadata
dict
Examples
The simplest use is (with a score function):
meta_props = {
    client.repository.FunctionMetaNames.NAME: "function",
    client.repository.FunctionMetaNames.DESCRIPTION: "This is ai function",
    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: "53dc4cf1-252f-424b-b52d-5cdd9814987f"
}

def score(payload):
    values = [[row[0] * row[1]] for row in payload['values']]
    return {'fields': ['multiplication'], 'values': values}

stored_function_details = client.repository.store_function(score, meta_props)
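The score function is plain Python, so it can be exercised locally before storing it; a quick sketch:

```python
def score(payload):
    # Multiply each row's first two values, as in the example above.
    values = [[row[0] * row[1]] for row in payload['values']]
    return {'fields': ['multiplication'], 'values': values}

result = score({'values': [[2, 3], [4, 5]]})
# result == {'fields': ['multiplication'], 'values': [[6], [20]]}
```

Checking the payload/response shape locally like this catches schema mistakes before the function is deployed.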
Another, more interesting example uses a generator function. In this case, it is possible to pass some variables:
creds = {...}

def gen_function(credentials=creds, x=2):
    def f(payload):
        values = [[row[0] * row[1] * x] for row in payload['values']]
        return {'fields': ['multiplication'], 'values': values}
    return f

stored_function_details = client.repository.store_function(gen_function, meta_props)
Create a model.
Here you can explore how to save external models in the correct format.
model (str (for filename, path, or LLM name) or object (corresponding to the model type)) –
Can be one of following:
The trained model object:
scikit-learn
xgboost
spark (PipelineModel)
path to a saved model in one of these formats:
tensorflow / keras (.tar.gz)
pmml (.xml)
scikit-learn (.tar.gz)
spss (.str)
spark (.tar.gz)
xgboost (.tar.gz)
directory containing model file(s):
scikit-learn
xgboost
tensorflow
unique ID of the trained model
LLM name
meta_props (dict, optional) –
metadata of the models configuration. To see available meta names, use:
client.repository.ModelMetaNames.get()
training_data (spark dataframe, pandas dataframe, numpy.ndarray, or array, optional) – Spark DataFrame supported for Spark models; pandas DataFrame, numpy.ndarray, or array supported for scikit-learn models
training_target (array, optional) – array with labels, required for scikit-learn models
pipeline (object, optional) – pipeline, required for Spark MLlib models
feature_names (numpy.ndarray or list, optional) – feature names for the training data in the case of Scikit-Learn/XGBoost models; applicable only when the training data is not of type pandas.DataFrame
label_column_names (numpy.ndarray or list, optional) – label column names of the trained Scikit-Learn/XGBoost models
round_number (int, optional) – round number of a Federated Learning experiment that has been configured to save intermediate models; this applies when the model is a training ID
experiment_metadata (dict, optional) – metadata retrieved from the experiment that created the model
training_id (str, optional) – run ID of an AutoAI or TuneExperiment experiment
metadata of the created model
dict
Note
For a keras model, model content is expected to contain a .h5 file and an archived version of it.
feature_names is an optional argument containing the feature names for the training data in the case of Scikit-Learn/XGBoost models. Valid types are numpy.ndarray and list. This is applicable only when the training data is not of type pandas.DataFrame.
If training_data is of type pandas.DataFrame and feature_names are provided, feature_names are ignored.
For the INPUT_DATA_SCHEMA meta prop, use a list even when passing a single input data schema. You can provide multiple schemas as dictionaries inside a list.
More details about Foundation Models can be found here.
Examples
stored_model_details = client.repository.store_model(model, name)
In more complicated cases, you should create proper metadata, similar to this:
sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.23-py3.7')

metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23'
}
If you want to provide the input data schema of the model, you can provide it as part of the meta props:
sw_spec_id = client.software_specifications.get_id_by_name('spss-modeler_18.1')

metadata = {
    client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: 'spss-modeler_18.1',
    client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [
        {
            'id': 'test',
            'type': 'list',
            'fields': [
                {'name': 'age', 'type': 'float'},
                {'name': 'sex', 'type': 'float'},
                {'name': 'fbs', 'type': 'float'},
                {'name': 'restbp', 'type': 'float'}
            ]
        },
        {
            'id': 'test2',
            'type': 'list',
            'fields': [
                {'name': 'age', 'type': 'float'},
                {'name': 'sex', 'type': 'float'},
                {'name': 'fbs', 'type': 'float'},
                {'name': 'restbp', 'type': 'float'}
            ]
        }
    ]
}
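Because INPUT_DATA_SCHEMA expects a list even for a single schema, a small guard can normalize the value before it goes into meta_props. normalize_input_schema is a hypothetical helper for illustration, not part of the SDK:

```python
def normalize_input_schema(schema):
    # Wrap a single schema dict in a list, since INPUT_DATA_SCHEMA expects a list
    # even for one schema; pass lists (or other iterables of schemas) through.
    if isinstance(schema, dict):
        return [schema]
    return list(schema)

single = {'id': 'test', 'type': 'list',
          'fields': [{'name': 'age', 'type': 'float'}]}

as_list = normalize_input_schema(single)
# as_list == [single], ready for the INPUT_DATA_SCHEMA meta prop
```

A helper like this keeps calling code from silently passing a bare dict, which the note above warns against.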
store_model()
method used with a local tar.gz file that contains a model:
stored_model_details = client.repository.store_model(path_to_tar_gz, meta_props=metadata, training_data=None)
store_model()
method used with a local directory that contains model files:
stored_model_details = client.repository.store_model(path_to_model_directory, meta_props=metadata, training_data=None)
store_model()
method used with the ID of a trained model:
stored_model_details = client.repository.store_model(trained_model_id, meta_props=metadata, training_data=None)
store_model()
method used with a pipeline that was generated by an AutoAI experiment:
metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction model stored from object'
}

stored_model_details = client.repository.store_model(pipeline_model, meta_props=metadata, experiment_metadata=experiment_metadata)
metadata = {
    client.repository.ModelMetaNames.NAME: 'AutoAI prediction Pipeline_1 model'
}

stored_model_details = client.repository.store_model(model="Pipeline_1", meta_props=metadata, training_id=training_id)
Example of storing a prompt tuned model:
stored_model_details = client.repository.store_model(training_id=prompt_tuning_run_id)
Example of storing a custom foundation model:
sw_spec_id = client.software_specifications.get_id_by_name('watsonx-cfm-caikit-1.0')

metadata = {
    client.repository.ModelMetaNames.NAME: 'custom FM asset',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_ID: sw_spec_id,
    client.repository.ModelMetaNames.TYPE: client.repository.ModelAssetTypes.CUSTOM_FOUNDATION_MODEL_1_0
}

stored_model_details = client.repository.store_model(model='mistralai/Mistral-7B-Instruct-v0.2', meta_props=metadata)
Create a pipeline.
meta_props (dict) –
metadata of the pipeline configuration. To see available meta names, use:
client.repository.PipelineMetaNames.get()
stored pipeline metadata
dict
Example:
metadata = {
    client.repository.PipelineMetaNames.NAME: 'my_training_definition',
    client.repository.PipelineMetaNames.DOCUMENT: {
        "doc_type": "pipeline",
        "version": "2.0",
        "primary_pipeline": "dlaas_only",
        "pipelines": [
            {
                "id": "dlaas_only",
                "runtime_ref": "hybrid",
                "nodes": [
                    {
                        "id": "training",
                        "type": "model_node",
                        "op": "dl_train",
                        "runtime_ref": "DL",
                        "inputs": [],
                        "outputs": [],
                        "parameters": {
                            "name": "tf-mnist",
                            "description": "Simple MNIST model implemented in TF",
                            "command": "python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000",
                            "compute": {"name": "k80", "nodes": 1},
                            "training_lib_href": "/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"
                        },
                        "target_bucket": "wml-dev-results"
                    }
                ]
            }
        ]
    }
}

pipeline_details = client.repository.store_pipeline(training_definition_filepath, meta_props=metadata)
Updates existing AI service asset metadata.
ai_service_id (str) – ID of AI service to be updated
changes (dict) – elements that will be changed, where keys are AIServiceMetaNames
update_ai_service – path to the file with an archived AI service function's content, or a function, that will be changed for the specific ai_service_id
Example:
metadata = {
    client.repository.AIServiceMetaNames.NAME: "updated_ai_service"
}

ai_service_details = client.repository.update_ai_service(ai_service_id, changes=metadata)
Updates existing experiment metadata.
experiment_id (str) – ID of the experiment with the definition to be updated
changes (dict) – elements to be changed, where keys are ExperimentMetaNames
metadata of the updated experiment
dict
Example:
metadata={client.repository.ExperimentMetaNames.NAME:"updated_exp"}exp_details=client.repository.update_experiment(experiment_id,changes=metadata)
Updates existing function metadata.
function_id (str) – ID of the function that defines what should be updated
changes (dict) – elements to be changed, where keys are FunctionMetaNames
update_function (str or function, optional) – path to a file with archived function content, or a function, that should be changed for the specific function_id; this parameter is valid only for CP4D 3.0.0
Example:
metadata = {
    client.repository.FunctionMetaNames.NAME: "updated_function"
}

function_details = client.repository.update_function(function_id, changes=metadata)
Update an existing model.
model_id (str) – ID of model to be updated
updated_meta_props (dict, optional) – new set of meta props to be updated
update_model (object or model, optional) – archived model content file, or path to the directory that contains the archived model file, that needs to be changed for the specific model_id
updated metadata of the model
dict
Example:
model_details = client.repository.update_model(model_id, update_model=updated_content)
Update metadata of an existing pipeline.
pipeline_id (str) – unique ID of the pipeline to be updated
changes (dict) – elements to be changed, where keys are PipelineMetaNames
rev_id (str) – revision ID of the pipeline
metadata of the updated pipeline
dict
Example:
metadata={client.repository.PipelineMetaNames.NAME:"updated_pipeline"}pipeline_details=client.repository.update_pipeline(pipeline_id,changes=metadata)
Set of MetaNames for models.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
INPUT_DATA_SCHEMA | list | N
TRAINING_DATA_REFERENCES | list | N
TEST_DATA_REFERENCES | list | N
OUTPUT_DATA_SCHEMA | dict | N
LABEL_FIELD | str | N
TRANSFORMED_LABEL_FIELD | str | N
TAGS | list | N
SIZE | dict | N
PIPELINE_ID | str | N
RUNTIME_ID | str | N
TYPE | str | Y
CUSTOM | dict | N
DOMAIN | str | N
HYPER_PARAMETERS | dict | N
METRICS | list | N
IMPORT | dict | N
TRAINING_LIB_ID | str | N
MODEL_DEFINITION_ID | str | N
SOFTWARE_SPEC_ID | str | N
TF_MODEL_PARAMS | dict | N
FAIRNESS_INFO | dict | N
MODEL_LOCATION | dict | N
FRAMEWORK | str | N
VERSION | str | N
Note: project (MetaNames.PROJECT_ID) and space (MetaNames.SPACE_ID) meta names are not supported and are considered invalid. Instead, use client.set.default_space(<SPACE_ID>) to set the space, or client.set.default_project(<PROJECT_ID>) to set the project.
Set of MetaNames for experiments.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
TAGS | list | N
EVALUATION_METHOD | str | N
EVALUATION_METRICS | list | N
TRAINING_REFERENCES | list | Y
SPACE_UID | str | N
LABEL_COLUMN | str | N
CUSTOM | dict | N
Set of MetaNames for AI functions.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
SOFTWARE_SPEC_ID | str | N
SOFTWARE_SPEC_UID | str | N
INPUT_DATA_SCHEMAS | list | N
OUTPUT_DATA_SCHEMAS | list | N
TAGS | list | N
TYPE | str | N
CUSTOM | dict | N
SAMPLE_SCORING_INPUT | dict | N
Set of MetaNames for pipelines.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
SPACE_ID | str | N
SPACE_UID | str | N
TAGS | list | N
DOCUMENT | dict | N
CUSTOM | dict | N
IMPORT | dict | N
RUNTIMES | list | N
COMMAND | str | N
COMPUTE | dict | N
Set of MetaNames for AI services.
Available MetaNames:
MetaName | Type | Required
NAME | str | Y
DESCRIPTION | str | N
SOFTWARE_SPEC_ID | str | N
DOCUMENTATION_REQUEST | dict | N
DOCUMENTATION_RESPONSE | dict | N
DOCUMENTATION_INIT | dict | N
DOCUMENTATION_FUNCTIONS | dict | N
TAGS | list | N
CODE_TYPE | str | N
CUSTOM | dict | N
TOOLING | dict | N
Store and manage script assets.
MetaNames for script assets creation.
Create a revision for the given script. Revisions are immutable once created. The metadata and attachment at script_id are taken and a revision is created from them.
script_id (str) – ID of the script
revised metadata of the stored script
dict
Example:
script_revision=client.script.create_revision(script_id)
Delete a stored script asset.
asset_id (str) – ID of the script asset
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
str | dict
Example:
client.script.delete(asset_id)
Download the content of a script asset.
asset_id (str) – unique ID of the script asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str,optional) – revision ID
path to the downloaded asset content
str
Example:
client.script.download(asset_id,"script_file")
Get script asset details. If no script_id is passed, details for all script assets are returned.
script_id (str,optional) – unique ID of the script
limit (int,optional) – limit number of fetched records
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
metadata of the stored script asset
dict - if script_id is not None
{“resources”: [dict]} - if script_id is None
Example:
script_details=client.script.get_details(script_id)
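Because get_details returns either a single asset dict or a {"resources": [...]} wrapper depending on whether script_id is given, a small normalizing helper (hypothetical, not part of the SDK) keeps downstream code uniform:

```python
def as_resource_list(details):
    """Normalize get_details output to a list of asset dicts.

    get_details(script_id) returns a single dict, while get_details()
    returns {"resources": [dict, ...]}. This helper accepts either shape.
    """
    if isinstance(details, dict) and "resources" in details:
        return details["resources"]
    return [details]
```

The same return-shape convention applies to get_details in the shiny, software_specifications, spaces, projects, and training modules, so one helper covers them all.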
Get the URL of a stored script asset.
asset_details (dict) – details of the stored script asset
href of the stored script asset
str
Example:
asset_details = client.script.get_details(asset_id)
asset_href = client.script.get_href(asset_details)
Get the unique ID of a stored script asset.
asset_details (dict) – metadata of the stored script asset
unique ID of the stored script asset
str
Example:
asset_id=client.script.get_id(asset_details)
Get metadata of the script revision.
script_id (str) – ID of the script
rev_id (str,optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
metadata of the stored script(s)
list
Example:
script_details=client.script.get_revision_details(script_id,rev_id)
List stored scripts in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed scripts
pandas.DataFrame
Example:
client.script.list()
Print all revisions for the given script ID in a table format.
script_id (str) – ID of the stored script
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed revisions
pandas.DataFrame
Example:
client.script.list_revisions(script_id)
Create a script asset and upload content to it.
meta_props (dict) – metadata of the script asset
file_path (str) – path to the content file to be uploaded
metadata of the stored script asset
dict
Example:
metadata = {
    client.script.ConfigurationMetaNames.NAME: 'my first script',
    client.script.ConfigurationMetaNames.DESCRIPTION: 'description of the script',
    client.script.ConfigurationMetaNames.SOFTWARE_SPEC_ID: '0cdb0f1e-5376-4f4d-92dd-da3b69aa9bda'
}
asset_details = client.script.store(meta_props=metadata, file_path="/path/to/file")
Update a script with metadata, attachment, or both.
script_id (str) – ID of the script
meta_props (dict,optional) – changes for the script metadata
file_path (str,optional) – file path to the new attachment
updated metadata of the script
dict
Example:
script_details=client.script.update(script_id,meta,content_path)
Connect, get details, and check usage of a Watson Machine Learning service instance.
Get the API key of a Watson Machine Learning service.
API key
str
Example:
api_key = client.service_instance.get_api_key()
Get information about the Watson Machine Learning instance.
metadata of the service instance
dict
Example:
instance_details=client.service_instance.get_details()
Get the instance ID of a Watson Machine Learning service.
ID of the instance
str
Example:
instance_id = client.service_instance.get_instance_id()
Get the password for the Watson Machine Learning service. Applicable only for IBM Cloud Pak® for Data.
password
str
Example:
password = client.service_instance.get_password()
Set a space_id or a project_id to be used in the subsequent actions.
Warning! Not supported for IBM Cloud.
Store and manage shiny assets.
MetaNames for Shiny Assets creation.
Create a revision for the given shiny asset. Revisions are immutable once created. The metadata and attachment of the asset at shiny_id are taken and a revision is created from them.
shiny_id (str) – ID of the shiny asset
revised metadata of the stored shiny asset
dict
Example:
shiny_revision=client.shiny.create_revision(shiny_id)
Delete a stored shiny asset.
shiny_id (str) – unique ID of the shiny asset
status (“SUCCESS” or “FAILED”) if deleted synchronously or dictionary with response
str | dict
Example:
client.shiny.delete(shiny_id)
Download the content of a shiny asset.
shiny_id (str) – unique ID of the shiny asset to be downloaded
filename (str) – filename to be used for the downloaded file
rev_id (str,optional) – ID of the revision
path to the downloaded shiny asset content
str
Example:
client.shiny.download(shiny_id,"shiny_asset.zip")
Get shiny asset details. If no shiny_id is passed, details for all shiny assets are returned.
shiny_id (str,optional) – unique ID of the shiny asset
limit (int,optional) – limit number of fetched records
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
metadata of the stored shiny asset
dict - if shiny_id is not None
{“resources”: [dict]} - if shiny_id is None
Example:
shiny_details=client.shiny.get_details(shiny_id)
Get the URL of a stored shiny asset.
shiny_details (dict) – details of the stored shiny asset
href of the stored shiny asset
str
Example:
shiny_details = client.shiny.get_details(shiny_id)
shiny_href = client.shiny.get_href(shiny_details)
Get the unique ID of a stored shiny asset.
shiny_details (dict) – metadata of the stored shiny asset
unique ID of the stored shiny asset
str
Example:
shiny_id=client.shiny.get_id(shiny_details)
Get metadata of the shiny_id revision.
shiny_id (str) – ID of the shiny asset
rev_id (str,optional) – ID of the revision. If this parameter is not provided, it returns the latest revision. If there is no latest revision, it returns an error.
metadata of the stored shiny asset(s)
list
Example:
shiny_details=client.shiny.get_revision_details(shiny_id,rev_id)
Get the Unique ID of a stored shiny asset.
Deprecated: Use get_id(shiny_details) instead.
shiny_details (dict) – metadata of the stored shiny asset
unique ID of the stored shiny asset
str
Example:
shiny_id=client.shiny.get_uid(shiny_details)
List stored shiny assets in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed shiny assets
pandas.DataFrame
Example:
client.shiny.list()
List all revisions for the given shiny asset ID in a table format.
shiny_id (str) – ID of the stored shiny asset
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed shiny revisions
pandas.DataFrame
Example:
client.shiny.list_revisions(shiny_id)
Create a shiny asset and upload content to it.
meta_props (dict) – metadata of the shiny asset
file_path (str) – path to the content file to be uploaded
metadata of the stored shiny asset
dict
Example:
meta_props = {client.shiny.ConfigurationMetaNames.NAME: "shiny app name"}
shiny_details = client.shiny.store(meta_props, file_path="/path/to/file")
Update a shiny asset with metadata, attachment, or both.
shiny_id (str) – ID of the shiny asset
meta_props (dict,optional) – changes to the metadata of the shiny asset
file_path (str,optional) – file path to the new attachment
updated metadata of the shiny asset
dict
Example:
shiny_details=client.shiny.update(shiny_id,meta,content_path)
Store and manage software specs.
MetaNames for Software Specification creation.
Add a package extension to a software specification’s existing metadata.
sw_spec_id (str) – unique ID of the software specification to be updated
pkg_extn_id (str) – unique ID of the package extension to be added to the software specification
status
str
Example:
client.software_specifications.add_package_extension(sw_spec_id,pkg_extn_id)
Delete a software specification.
sw_spec_id (str) – unique ID of the software specification
status (“SUCCESS” or “FAILED”)
str
Example:
client.software_specifications.delete(sw_spec_id)
Delete a package extension from a software specification’s existing metadata.
sw_spec_id (str) – unique ID of the software specification to be updated
pkg_extn_id (str) – unique ID of the package extension to be deleted from the software specification
status
str
Example:
client.software_specifications.delete_package_extension(sw_spec_id, pkg_extn_id)
Get software specification details. If no sw_spec_id is passed, details for all software specifications are returned.
sw_spec_id (str,optional) – ID of the software specification
state_info (bool,optional) – works only when sw_spec_id is None; instead of returning details of software specs, it returns the state of the software specs (supported, unsupported, deprecated), containing a suggested replacement in case of unsupported or deprecated software specs
metadata of the stored software specification(s)
dict - if sw_spec_id is not None
{“resources”: [dict]} - if sw_spec_id is None
Examples
sw_spec_details = client.software_specifications.get_details(sw_spec_id)
sw_spec_details = client.software_specifications.get_details()
sw_spec_state_details = client.software_specifications.get_details(state_info=True)
Get the URL of a software specification.
sw_spec_details (dict) – details of the software specification
href of the software specification
str
Example:
sw_spec_details = client.software_specifications.get_details(sw_spec_id)
sw_spec_href = client.software_specifications.get_href(sw_spec_details)
Get the unique ID of a software specification.
sw_spec_details (dict) – metadata of the software specification
unique ID of the software specification
str
Example:
asset_id=client.software_specifications.get_id(sw_spec_details)
Get the unique ID of a software specification.
sw_spec_name (str) – name of the software specification
unique ID of the software specification
str
Example:
asset_uid=client.software_specifications.get_id_by_name(sw_spec_name)
Get the unique ID of a software specification.
Deprecated: Use get_id(sw_spec_details) instead.
sw_spec_details (dict) – metadata of the software specification
unique ID of the software specification
str
Example:
asset_uid=client.software_specifications.get_uid(sw_spec_details)
Get the unique ID of a software specification.
Deprecated: Use get_id_by_name(sw_spec_name) instead.
sw_spec_name (str) – name of the software specification
unique ID of the software specification
str
Example:
asset_uid=client.software_specifications.get_uid_by_name(sw_spec_name)
List software specifications in a table format.
limit (int,optional) – limit number of fetched records
spec_states (list[SpecStates],optional) – specification state filter, by default shows available, supported and custom software specifications
pandas.DataFrame with listed software specifications
pandas.DataFrame
Example:
client.software_specifications.list()
Create a software specification.
meta_props (dict) –
metadata of the software specification configuration. To see available meta names, use:
client.software_specifications.ConfigurationMetaNames.get()
metadata of the stored software specification
dict
Example:
meta_props = {
    client.software_specifications.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
    client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: [],
    client.software_specifications.ConfigurationMetaNames.SOFTWARE_CONFIGURATION: {},
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": "<guid>"}
}
sw_spec_details = client.software_specifications.store(meta_props)
Set of MetaNames for Software Specifications Specs.
Available MetaNames:
MetaName | Type | Required | Schema | Example value |
NAME | str | Y | | |
DESCRIPTION | str | N | | |
PACKAGE_EXTENSIONS | list | N | | |
SOFTWARE_CONFIGURATION | dict | N | | |
BASE_SOFTWARE_SPECIFICATION | dict | Y | | |
Store and manage spaces.
MetaNames for spaces creation.
MetaNames for space members creation.
Create a member within a space.
space_id (str) – ID of the space with the definition to be updated
meta_props (dict) –
metadata of the member configuration. To see available meta names, use:
client.spaces.MemberMetaNames.get()
metadata of the stored member
dict
Note
role can be any one of the following: “viewer”, “editor”, “admin”
type can be any one of the following: “user”, “service”
id can be one of the following: service-ID or IAM-userID
Examples
metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id": "IBMid-100000DK0B", "type": "user", "role": "admin"}]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)

metadata = {
    client.spaces.MemberMetaNames.MEMBERS: [{"id": "iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71", "type": "service", "role": "admin"}]
}
members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
Delete a stored space.
space_id (str) – ID of the space
status “SUCCESS” if deletion is successful
Literal[“SUCCESS”]
Example:
client.spaces.delete(space_id)
Delete a member associated with a space.
space_id (str) – ID of the space
member_id (str) – ID of the member
status (“SUCCESS” or “FAILED”)
str
Example:
client.spaces.delete_member(space_id,member_id)
Get metadata of stored space(s). This method uses a TTL cache.
space_id (str,optional) – ID of the space
limit (int,optional) – applicable when space_id is not provided, otherwise limit will be ignored
asynchronous (bool,optional) – if True, it will work as a generator
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
space_name (str,optional) – name of the stored space, can be used only when space_id is None
metadata of stored space(s)
dict - if space_id is not None
{“resources”: [dict]} - if space_id is None
Example:
space_details = client.spaces.get_details(space_id)
space_details = client.spaces.get_details(space_name)
space_details = client.spaces.get_details(limit=100)
space_details = client.spaces.get_details(limit=100, get_all=True)
space_details = []
for entry in client.spaces.get_details(limit=100, asynchronous=True, get_all=True):
    space_details.extend(entry)
Get the space_id from the space details.
space_details (dict) – metadata of the stored space
ID of the stored space
str
Example:
space_details = client.spaces.store(meta_props)
space_id = client.spaces.get_id(space_details)
Get the ID of a stored space by name.
space_name (str) – name of the stored space
ID of the stored space
str
Example:
space_id=client.spaces.get_id_by_name(space_name)
Get metadata of a member associated with a space.
space_id (str) – ID of the space
member_id (str) – ID of the member
metadata of the space member
dict
Example:
member_details=client.spaces.get_member_details(space_id,member_id)
Get the unique ID of the space.
Deprecated: Use get_id(space_details) instead.
space_details (dict) – metadata of the space
unique ID of the space
str
Example:
space_details = client.spaces.store(meta_props)
space_uid = client.spaces.get_uid(space_details)
List stored spaces in a table format.
limit (int,optional) – limit number of fetched records
member (str,optional) – filters the result list to only include spaces where the user with a matching user ID is a member
roles (str,optional) – a list of comma-separated space roles to filter the query results; must be used in conjunction with the “member” query parameter; available values: admin, editor, viewer
space_type (str,optional) – filter spaces by their type; available types are ‘wx’, ‘cpd’, and ‘wca’
pandas.DataFrame with listed spaces
pandas.DataFrame
Example:
client.spaces.list()
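Because list returns a pandas.DataFrame, further filtering can be done with ordinary pandas operations. The column names used below ('NAME', 'ID') are assumptions about the listing layout, not guaranteed by the SDK:

```python
import pandas as pd

def ids_by_name_prefix(spaces_df, prefix):
    """Return the IDs of listed spaces whose name starts with the prefix."""
    mask = spaces_df["NAME"].str.startswith(prefix)
    return spaces_df.loc[mask, "ID"].tolist()
```

In real use, pass the frame returned by client.spaces.list() (adjusting the column names to the actual listing).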
Print the stored members of a space in a table format.
space_id (str) – ID of the space
limit (int,optional) – limit number of fetched records
identity_type (str,optional) – filter the members by type
role (str,optional) – filter the members by role
state (str,optional) – filter the members by state
pandas.DataFrame with listed members
pandas.DataFrame
Example:
client.spaces.list_members(space_id)
Promote an asset from a project to a space.
asset_id (str) – ID of the stored asset
source_project_id (str) – source project, from which the asset is promoted
target_space_id (str) – target space, where the asset is promoted
rev_id (str,optional) – revision ID of the promoted asset
ID of the promoted asset
str
Examples
promoted_asset_id = client.spaces.promote(asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_model_id = client.spaces.promote(model_id, source_project_id=project_id, target_space_id=space_id)
promoted_function_id = client.spaces.promote(function_id, source_project_id=project_id, target_space_id=space_id)
promoted_data_asset_id = client.spaces.promote(data_asset_id, source_project_id=project_id, target_space_id=space_id)
promoted_connection_asset_id = client.spaces.promote(connection_id, source_project_id=project_id, target_space_id=space_id)
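When several assets need to move to the same space, the call can be wrapped in a loop. The promote callable is injected here so the sketch stays self-contained; in real use, pass client.spaces.promote:

```python
def promote_all(promote, asset_ids, project_id, space_id):
    """Promote each asset and return a mapping of source ID -> promoted ID."""
    return {
        asset_id: promote(asset_id,
                          source_project_id=project_id,
                          target_space_id=space_id)
        for asset_id in asset_ids
    }
```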
Create a space. The instance associated with the space via COMPUTE will be used for billing purposes on the cloud. Note that STORAGE and COMPUTE are applicable only for cloud.
meta_props (dict) –
metadata of the space configuration. To see available meta names, use:
client.spaces.ConfigurationMetaNames.get()
background_mode (bool,optional) – indicator if store() method will run in background (async) or (sync)
metadata of the stored space
dict
Example:
metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "my_space",
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "spaces",
    client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "provide crn of the COS storage"},
    client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance", "crn": "provide crn of the instance"},
    client.spaces.ConfigurationMetaNames.STAGE: {"production": True, "name": "stage_name"},
    client.spaces.ConfigurationMetaNames.TAGS: ["sample_tag_1", "sample_tag_2"],
    client.spaces.ConfigurationMetaNames.TYPE: "cpd",
}
spaces_details = client.spaces.store(meta_props=metadata)
Update existing space metadata. STORAGE cannot be updated. STORAGE and COMPUTE are applicable only for cloud.
space_id (str) – ID of the space with the definition to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
metadata of the updated space
dict
Example:
metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "updated_space",
    client.spaces.ConfigurationMetaNames.COMPUTE: {
        "name": "test_instance",
        "crn": "v1:staging:public:pm-20-dev:us-south:a/09796a1b4cddfcc9f7fe17824a68a0f8:f1026e4b-77cf-4703-843d-c9984eac7272::"
    }
}
space_details = client.spaces.update(space_id, changes=metadata)
Update the metadata of an existing member.
space_id (str) – ID of the space
member_id (str) – ID of the member to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
metadata of the updated member
dict
Example:
metadata = {
    client.spaces.MemberMetaNames.MEMBER: {"role": "editor"}
}
member_details = client.spaces.update_member(space_id, member_id, changes=metadata)
Set of MetaNames for Platform Spaces Specs.
Available MetaNames:
MetaName | Type | Required | Example value |
NAME | str | Y | |
DESCRIPTION | str | N | |
STORAGE | dict | N | |
COMPUTE | dict | N | |
STAGE | dict | N | |
TAGS | list | N | |
TYPE | str | N | |
Store and manage projects.
Note
The Projects module is available in Python SDK version 1.3.5 and later.
MetaNames for projects creation.
MetaNames for project members creation.
Create a member within a project.
project_id (str) – ID of the project with the definition to be updated
meta_props (dict) –
metadata of the member configuration. To see available meta names, use:
client.projects.MemberMetaNames.get()
metadata of the stored member
dict
Note
role can be any one of the following: “viewer”, “editor”, “admin”
type can be any one of the following: “user”, “service”
id can be one of the following: service-ID or IAM-userID
Examples
metadata = {
    client.projects.MemberMetaNames.MEMBERS: [{"id": "IBMid-100000DK0B", "type": "user", "role": "admin"}]
}
members_details = client.projects.create_member(project_id=project_id, meta_props=metadata)

metadata = {
    client.projects.MemberMetaNames.MEMBERS: [{"id": "iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71", "type": "service", "role": "admin"}]
}
members_details = client.projects.create_member(project_id=project_id, meta_props=metadata)
Delete a stored project.
project_id (str) – ID of the project
status “SUCCESS” if deletion is successful
Literal[“SUCCESS”]
Example:
client.projects.delete(project_id)
Delete a member associated with a project.
project_id (str) – ID of the project
user_name (str,optional) – name of the member
status (“SUCCESS” if succeeded)
str
Example:
client.projects.delete_member(project_id,user_name)
Get metadata of stored project(s).
project_id (str,optional) – ID of the project
limit (int,optional) – applicable when project_id is not provided, otherwise limit will be ignored
asynchronous (bool,optional) – if True, it will work as a generator
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
project_name (str,optional) – name of the stored project, can be used only when project_id is None
metadata of stored project(s)
dict - if project_id is not None
{“resources”: [dict]} - if project_id is None
Example:
project_details = client.projects.get_details(project_id)
project_details = client.projects.get_details(project_name)
project_details = client.projects.get_details(limit=100)
project_details = client.projects.get_details(limit=100, get_all=True)
project_details = []
for entry in client.projects.get_details(limit=100, asynchronous=True, get_all=True):
    project_details.extend(entry)
Get the project_id from the project details.
project_details (dict) – metadata of the stored project
ID of the stored project
str
Example:
project_details = client.projects.store(meta_props)
project_id = client.projects.get_id(project_details)
Get the ID of a stored project by name.
project_name (str) – name of the stored project
ID of the stored project
str
Example:
project_id=client.projects.get_id_by_name(project_name)
Get metadata of a member associated with a project. If no user_name is passed, details of all members are returned.
project_id (str) – ID of the project
user_name (str,optional) – name of the member
metadata of the project member
dict
Example:
member_details = client.projects.get_member_details(project_id, "test@ibm.com")
members_details = client.projects.get_member_details(project_id)
List stored projects in a table format.
limit (int,optional) – limit number of fetched records
member (str,optional) – filters the result list to only include projects where the user with a matching user ID is a member
roles (str,optional) – a list of comma-separated project roles to filter the query results; must be used in conjunction with the “member” query parameter; available values: admin, editor, viewer
project_type (str,optional) – filter projects by their type; available types are ‘cpd’, ‘wx’, ‘wca’, ‘dpx’ and ‘wxbi’
pandas.DataFrame with listed projects
pandas.DataFrame
Example:
client.projects.list()
Print the stored members of a project in a table format.
project_id (str) – ID of the project
limit (int,optional) – limit number of fetched records
identity_type (str,optional) – filter the members by type
role (str,optional) – filter the members by role
state (str,optional) – filter the members by state
pandas.DataFrame with listed members
pandas.DataFrame
Example:
client.projects.list_members(project_id)
Create a project.
meta_props (dict) –
metadata of the project configuration. To see available meta names, use:
client.projects.ConfigurationMetaNames.get()
metadata of the stored project
dict
Example:
meta_props = {
    client.projects.ConfigurationMetaNames.NAME: "my project",
    client.projects.ConfigurationMetaNames.DESCRIPTION: "test project",
    client.projects.ConfigurationMetaNames.STORAGE: {"type": "assetfiles"}
}
projects_details = client.projects.store(meta_props)
Update existing project metadata. ‘STORAGE’ cannot be updated.
project_id (str) – ID of the project with the definition to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
metadata of the updated project
dict
Example:
metadata = {
    client.projects.ConfigurationMetaNames.NAME: "updated_project",
    client.projects.ConfigurationMetaNames.COMPUTE: {
        "name": "test_instance",
        "crn": "v1:staging:public:pm-20-dev:us-south:a/09796a1b4cddfcc9f7fe17824a68a0f8:f1026e4b-77cf-4703-843d-c9984eac7272::"
    }
}
project_details = client.projects.update(project_id, changes=metadata)
Update the metadata of an existing member.
project_id (str) – ID of the project
user_name (str) – name of the member to be updated
changes (dict) – elements to be changed, where keys are ConfigurationMetaNames
metadata of the updated member
dict
Example:
metadata = {
    client.projects.MemberMetaNames.MEMBER: {"role": "editor"}
}
member_details = client.projects.update_member(project_id, user_name, changes=metadata)
Set of MetaNames for Projects Specs.
Available MetaNames:
MetaName | Type | Required | Default value | Example value |
NAME | str | Y | | |
DESCRIPTION | str | N | | |
STORAGE | dict | Y | | |
COMPUTE | dict | N | | |
TAGS | list | N | | |
TYPE | str | N | | |
GENERATOR | str | Y | | |
PUBLIC | bool | N | | |
TOOLS | list | N | | |
ENFORCE_MEMBERS | bool | N | | |
Set of MetaNames for Platform Spaces / Projects Member Specs.
Available MetaNames:
MetaName | Type | Required | Schema | Example value |
MEMBERS | list | N | | |
MEMBER | dict | N | | |
Store and manage your task credentials.
Delete task credentials.
task_credentials_id (str) – unique ID of the task credentials
status “SUCCESS” if deletion is successful
Literal[“SUCCESS”]
Example:
client.task_credentials.delete(task_credentials_id)
Get task credentials details. If no task_credentials_id is passed, details for all task credentials are returned.
task_credentials_id (str,optional) – ID of task credentials to be fetched
project_id (str,optional) – ID of project to be used for filtering
space_id (str,optional) – ID of space to be used for filtering
created task credentials details
dict (if task_credentials_id is not None) or {“resources”: [dict]} (if task_credentials_id is None)
Example:
task_credentials_details=client.task_credentials.get_details(task_credentials_id)
Get the unique ID of task credentials.
task_credentials_details (dict) – metadata of the task credentials
unique ID of the task credentials
str
Example:
task_credentials_id=client.task_credentials.get_id(task_credentials_details)
List task credentials in a table format.
limit (int,optional) – limit number of fetched records
pandas.DataFrame with listed assets
pandas.DataFrame
Example:
client.task_credentials.list()
Store current credentials using the Task Credentials API, for use with long-running tasks. Supported only on IBM Cloud.
name (str,optional) – name of the task credentials. Defaults to Python API generated task credentials
description (str,optional) – description of the task credentials. Defaults to Python API generated task credentials
metadata of the stored task credentials
dict
Example:
task_credentials_details=client.task_credentials.store()
Train new models.
Cancel a training that is currently running. This method can delete metadata details of a completed or canceled training run when the hard_delete parameter is set to True.
training_id (str) – ID of the training
hard_delete (bool,optional) –
specify True or False:
True - to delete the completed or canceled training run
False - to cancel the currently running training run
status “SUCCESS” if cancellation is successful
Literal[“SUCCESS”]
Example:
client.training.cancel(training_id)
Get metadata of training(s). If training_id is not specified, metadata for all training runs is returned.
training_id (str,optional) – unique ID of the training
limit (int,optional) – limit number of fetched records
asynchronous (bool,optional) – if True, it will work as a generator
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
training_type (str,optional) – filter the fetched list of trainings based on the training type [“pipeline” or “experiment”]
state (str,optional) – filter the fetched list of trainings based on their state: [queued, running, completed, failed]
tag_value (str,optional) – filter the fetched list of trainings based on their tag value
training_definition_id (str,optional) – filter the fetched trainings that use the given training definition
metadata of training(s)
dict - if training_id is not None
{“resources”: [dict]} - if training_id is None
Examples
training_run_details = client.training.get_details(training_id)
training_runs_details = client.training.get_details()
training_runs_details = client.training.get_details(limit=100)
training_runs_details = client.training.get_details(limit=100, get_all=True)
training_runs_details = []
for entry in client.training.get_details(limit=100, asynchronous=True, get_all=True):
    training_runs_details.extend(entry)
Get the training href from the training details.
training_details (dict) – metadata of the created training
training href
str
Example:
training_details = client.training.get_details(training_id)
run_url = client.training.get_href(training_details)
Get the training ID from the training details.
training_details (dict) – metadata of the created training
unique ID of the training
str
Example:
training_details = client.training.get_details(training_id)
training_id = client.training.get_id(training_details)
Get metrics of a training run.
training_id (str) – ID of the training
metrics of the training run
list of dict
Example:
training_status=client.training.get_metrics(training_id)
Get the status of a created training.
training_id (str) – ID of the training
training_status
dict
Example:
training_status=client.training.get_status(training_id)
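For a run submitted asynchronously, get_status can be polled until the run finishes. The status callable is injected below so the sketch stays self-contained; in real use, pass client.training.get_status. The terminal state names are an assumption based on the state filter values listed for get_details, with "canceled" added:

```python
import time

# Assumed terminal states: "completed" and "failed" appear in the
# get_details state filter; "canceled" is added as an assumption.
TERMINAL_STATES = {"completed", "failed", "canceled"}

def wait_for_training(get_status, training_id, poll_seconds=30):
    """Poll until the training reaches a terminal state; return that state."""
    while True:
        state = get_status(training_id).get("state")
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
```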
List stored trainings in a table format.
limit (int,optional) – limit number of fetched records
asynchronous (bool,optional) – if True, it will work as a generator
get_all (bool,optional) – if True, it will get all entries in ‘limited’ chunks
pandas.DataFrame with listed trainings
pandas.DataFrame
Examples
client.training.list()
training_runs_df = client.training.list(limit=100)
training_runs_df = client.training.list(limit=100, get_all=True)
training_runs_df = []
for entry in client.training.list(limit=100, asynchronous=True, get_all=True):
    training_runs_df.extend(entry)
Print the intermediate_models in a table format.
training_id (str) – ID of the training
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.list_intermediate_models(training_id)
Print the logs of a created training.
training_id (str) – training ID
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.monitor_logs(training_id)
Print the metrics of a created training.
training_id (str) – ID of the training
Note
This method is not supported for IBM Cloud Pak® for Data.
Example:
client.training.monitor_metrics(training_id)
Create a new Machine Learning training.
meta_props (dict) –
metadata of the training configuration. To see available meta names, use:
client.training.ConfigurationMetaNames.show()
asynchronous (bool,optional) –
True - the training job is submitted and progress can be checked later
False - the method will wait until the job completes and print training stats
metadata of the training created
dict
Note
You can provide one of the following values for the training:
client.training.ConfigurationMetaNames.EXPERIMENT
client.training.ConfigurationMetaNames.PIPELINE
client.training.ConfigurationMetaNames.MODEL_DEFINITION
Examples
Example of meta_props for creating a training run in IBM Cloud Pak® for Data version 3.0.1 or above:
metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
    client.training.ConfigurationMetaNames.DESCRIPTION: 'Hand-written Digit Recognition Training',
    client.training.ConfigurationMetaNames.PIPELINE: {
        "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
        "rev": "12",
        "model_type": "string",
        "data_bindings": [{"data_reference_name": "string", "node_id": "string"}],
        "nodes_parameters": [{"node_id": "string", "parameters": {}}],
        "hardware_spec": {"id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab", "rev": "12", "name": "string", "num_nodes": "2"}
    },
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
        'type': 's3',
        'connection': {},
        'location': {'href': 'v2/assets/asset1233456'},
        'schema': {'id': 't1', 'name': 'Tasks', 'fields': [{'name': 'duration', 'type': 'number'}]}
    }],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'id': 'string',
        'connection': {
            'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
            'access_key_id': '***',
            'secret_access_key': '***'
        },
        'location': {'bucket': 'wml-dev-results', 'path': "path"},
        'type': 's3'
    }
}
Example of a Federated Learning training job:
aggregator_metadata = {
    client.training.ConfigurationMetaNames.NAME: 'Federated_Learning_Tensorflow_MNIST',
    client.training.ConfigurationMetaNames.DESCRIPTION: 'MNIST digit recognition with Federated Learning using Tensorflow',
    client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [],
    client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
        'type': results_type,
        'name': 'outputData',
        'connection': {},
        'location': {'path': '/projects/' + PROJECT_ID + '/assets/trainings/'}
    },
    client.training.ConfigurationMetaNames.FEDERATED_LEARNING: {
        'model': {
            'type': 'tensorflow',
            'spec': {'id': untrained_model_id},
            'model_file': untrained_model_name
        },
        'fusion_type': 'iter_avg',
        'metrics': 'accuracy',
        'epochs': 3,
        'rounds': 10,
        'remote_training': {
            'quorum': 1.0,
            'max_timeout': 3600,
            'remote_training_systems': [{'id': prime_rts_id}, {'id': nonprime_rts_id}]
        },
        'hardware_spec': {'name': 'S'},
        'software_spec': {'name': 'runtime-22.1-py3.9'}
    }
}

aggregator = client.training.run(aggregator_metadata, asynchronous=True)
aggregator_id = client.training.get_id(aggregator)
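The ConfigurationMetaNames attributes used above resolve to plain dictionary keys in the request payload. As a library-free sketch (the string keys and the required-key set below are assumptions for illustration, not confirmed values), such a metadata dict can be assembled and sanity-checked before submission:

```python
# Sketch: assemble training metadata as a plain dict and validate required keys.
# Key names ("name", "results_reference", ...) are assumed, not confirmed.
REQUIRED = {"name", "description", "training_data_references", "results_reference"}

def validate_training_meta(meta: dict) -> list:
    """Return a sorted list of missing required keys (empty means OK)."""
    return sorted(REQUIRED - meta.keys())

metadata = {
    "name": "Hand-written Digit Recognition",
    "description": "Hand-written Digit Recognition Training",
    "training_data_references": [
        {"type": "s3", "connection": {}, "location": {"href": "v2/assets/asset1233456"}}
    ],
    "results_reference": {
        "type": "s3",
        "location": {"bucket": "wml-dev-results", "path": "path"},
    },
}

print(validate_training_meta(metadata))  # [] when all required keys are present
```

A pre-flight check like this fails fast locally instead of waiting for the service to reject a malformed payload.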
Set of MetaNames for trainings.
Available MetaNames:
MetaName | Type | Required | Schema | Example value |
TRAINING_DATA_REFERENCES | list | Y | | |
TRAINING_RESULTS_REFERENCE | dict | Y | | |
TEST_DATA_REFERENCES | list | N | | |
TEST_OUTPUT_DATA | dict | N | | |
TAGS | list | N | | |
PIPELINE | dict | N | | |
EXPERIMENT | dict | N | | |
PROMPT_TUNING | dict | N | | |
FINE_TUNING | dict | N | | |
AUTO_UPDATE_MODEL | bool | N | | |
FEDERATED_LEARNING | dict | N | | |
SPACE_UID | str | N | | |
MODEL_DEFINITION | dict | N | | |
DESCRIPTION | str | Y | | |
NAME | str | Y | | |
Bases:Enum
Classification algorithms that AutoAI can use for IBM Cloud.
Bases:Enum
Classification algorithms that AutoAI can use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapSVM, SnapLR) are supported on IBM Cloud Pak® for Data version 4.0.2 and later.
Bases:object
Supported types of DataConnection.
Bases:object
Possible metric directions.
Bases:object
Types of training data sampling.
Bases:Enum
Forecasting algorithms that AutoAI can use for IBM watsonx.ai software with IBM Cloud Pak® for Data.
Bases:Enum
Forecasting algorithms that AutoAI can use for IBM Cloud.
Bases:Enum
Forecasting pipeline types that AutoAI can use for IBM Cloud Pak® for Data (CP4D).
Get a list of pipelines that use supporting features (exogenous pipelines).
list of pipelines using supporting features
list[ForecastingPipelineTypes]
Get a list of pipelines that are not using supporting features (non-exogenous pipelines).
list of pipelines that do not use supporting features
list[ForecastingPipelineTypes]
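The exogenous/non-exogenous split can be expressed as a plain-Python sketch. The member names below are taken from the pipeline list in this reference (a subset, for brevity); the name-prefix rule used to classify them is an assumption for illustration, not the library's actual implementation:

```python
from enum import Enum

class ForecastingPipelineTypes(Enum):
    """Subset of the forecasting pipelines listed in this reference."""
    ARIMA = "ARIMA"
    ARIMAX = "ARIMAX"
    Bats = "Bats"
    MT2RForecaster = "MT2RForecaster"
    ExogenousMT2RForecaster = "ExogenousMT2RForecaster"
    RandomForestRegressor = "RandomForestRegressor"
    ExogenousRandomForestRegressor = "ExogenousRandomForestRegressor"

    @classmethod
    def get_exogenous(cls):
        # Exogenous pipelines use supporting features; here identified by name prefix.
        return [p for p in cls if p.name.startswith(("ARIMAX", "Exogenous"))]

    @classmethod
    def get_non_exogenous(cls):
        # Everything not classified as exogenous.
        return [p for p in cls if p not in cls.get_exogenous()]

print([p.name for p in ForecastingPipelineTypes.get_exogenous()])
```

The two classmethods partition the enum, which is the contract the real get_exogenous()/get_non_exogenous() pair documents.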
Bases:Enum
Missing values imputation strategies.
Bases:object
Supported types of classification and regression metrics in AutoAI.
Bases:Enum
Map of metrics directions.
Bases:object
Supported types of Pipelines.
Bases:object
Metrics that need positive label definition for binary classification.
Bases:object
Supported types of learning.
Bases:object
Supported types of AutoAI RAG metrics
Bases:Enum
Regression algorithms that AutoAI can use for IBM Cloud.
Bases:Enum
Regression algorithms that AutoAI can use for IBM Cloud Pak® for Data (CP4D). The SnapML estimators (SnapDT, SnapRF, SnapBM) are supported on IBM Cloud Pak® for Data version 4.0.2 and later.
Bases:object
Supported types of AutoAI fit/run.
Bases:object
Types of training data sampling.
Bases:object
Possible sizes of the AutoAI POD. Depending on the POD size, AutoAI can support different data set sizes.
S - small (2 vCPUs and 8 GB of RAM)
M - medium (4 vCPUs and 16 GB of RAM)
L - large (8 vCPUs and 32 GB of RAM)
XL - extra large (16 vCPUs and 64 GB of RAM)
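The sizes above translate directly into a small lookup table, e.g. for a local pre-flight check. A plain-Python sketch (the resource numbers come from the list above; the helper function is hypothetical, not part of the library):

```python
# Sketch: map AutoAI POD t-shirt sizes to their resources (from the list above).
POD_SIZES = {
    "S": {"vcpus": 2, "ram_gb": 8},
    "M": {"vcpus": 4, "ram_gb": 16},
    "L": {"vcpus": 8, "ram_gb": 32},
    "XL": {"vcpus": 16, "ram_gb": 64},
}

def smallest_pod_with(min_ram_gb: int) -> str:
    """Return the smallest POD size offering at least min_ram_gb of RAM (hypothetical helper)."""
    for size, spec in POD_SIZES.items():  # dicts preserve insertion order, smallest first
        if spec["ram_gb"] >= min_ram_gb:
            return size
    raise ValueError("no POD size is large enough")

print(smallest_pod_with(20))  # L
```

Picking the smallest size that fits the data set keeps AutoAI runs from over-provisioning.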
Bases:Enum
Timeseries Anomaly Prediction algorithms that AutoAI can use for IBM Cloud.
Bases:Enum
Timeseries Anomaly Prediction pipeline types that AutoAI can use for IBM Cloud.
Bases:object
Supported types of cognito transformer names in AutoAI.
Connections
Connections.ConfigurationMetaNames
Connections.create()
Connections.delete()
Connections.get_datasource_type_details_by_id()
Connections.get_datasource_type_details_by_name()
Connections.get_datasource_type_id_by_name()
Connections.get_datasource_type_uid_by_name()
Connections.get_details()
Connections.get_id()
Connections.get_uid()
Connections.get_uploaded_db_drivers()
Connections.list()
Connections.list_datasource_types()
Connections.list_uploaded_db_drivers()
Connections.sign_db_driver_url()
Connections.upload_db_driver()
ConnectionMetaNames
Deployments
Deployments.HardwareRequestSizes
Deployments.create()
Deployments.create_job()
Deployments.delete()
Deployments.delete_job()
Deployments.generate()
Deployments.generate_text()
Deployments.generate_text_stream()
Deployments.get_details()
Deployments.get_download_url()
Deployments.get_href()
Deployments.get_id()
Deployments.get_job_details()
Deployments.get_job_href()
Deployments.get_job_id()
Deployments.get_job_status()
Deployments.get_job_uid()
Deployments.get_scoring_href()
Deployments.get_serving_href()
Deployments.get_uid()
Deployments.is_serving_name_available()
Deployments.list()
Deployments.list_jobs()
Deployments.run_ai_service()
Deployments.run_ai_service_stream()
Deployments.score()
Deployments.update()
DeploymentMetaNames
RShinyAuthenticationValues
ScoringMetaNames
DecisionOptimizationMetaNames
RuntimeContext
ModelDefinition
ModelDefinition.ConfigurationMetaNames
ModelDefinition.create_revision()
ModelDefinition.delete()
ModelDefinition.download()
ModelDefinition.get_details()
ModelDefinition.get_href()
ModelDefinition.get_id()
ModelDefinition.get_revision_details()
ModelDefinition.get_uid()
ModelDefinition.list()
ModelDefinition.list_revisions()
ModelDefinition.store()
ModelDefinition.update()
ModelDefinitionMetaNames
Repository
Repository.ModelAssetTypes
Repository.create_ai_service_revision()
Repository.create_experiment_revision()
Repository.create_function_revision()
Repository.create_model_revision()
Repository.create_pipeline_revision()
Repository.create_revision()
Repository.delete()
Repository.download()
Repository.get_ai_service_details()
Repository.get_ai_service_id()
Repository.get_ai_service_revision_details()
Repository.get_details()
Repository.get_experiment_details()
Repository.get_experiment_href()
Repository.get_experiment_id()
Repository.get_experiment_revision_details()
Repository.get_function_details()
Repository.get_function_href()
Repository.get_function_id()
Repository.get_function_revision_details()
Repository.get_id_by_name()
Repository.get_model_details()
Repository.get_model_href()
Repository.get_model_id()
Repository.get_model_revision_details()
Repository.get_pipeline_details()
Repository.get_pipeline_href()
Repository.get_pipeline_id()
Repository.get_pipeline_revision_details()
Repository.list()
Repository.list_ai_service_revisions()
Repository.list_ai_services()
Repository.list_experiments()
Repository.list_experiments_revisions()
Repository.list_functions()
Repository.list_functions_revisions()
Repository.list_models()
Repository.list_models_revisions()
Repository.list_pipelines()
Repository.list_pipelines_revisions()
Repository.load()
Repository.promote_model()
Repository.store_ai_service()
Repository.store_experiment()
Repository.store_function()
Repository.store_model()
Repository.store_pipeline()
Repository.update_ai_service()
Repository.update_experiment()
Repository.update_function()
Repository.update_model()
Repository.update_pipeline()
ModelMetaNames
ExperimentMetaNames
FunctionMetaNames
PipelineMetanames
AIServiceMetaNames
Spaces
Spaces.ConfigurationMetaNames
Spaces.MemberMetaNames
Spaces.create_member()
Spaces.delete()
Spaces.delete_member()
Spaces.get_details()
Spaces.get_id()
Spaces.get_id_by_name()
Spaces.get_member_details()
Spaces.get_uid()
Spaces.list()
Spaces.list_members()
Spaces.promote()
Spaces.store()
Spaces.update()
Spaces.update_member()
SpacesMetaNames
Projects
Projects.ConfigurationMetaNames
Projects.MemberMetaNames
Projects.create_member()
Projects.delete()
Projects.delete_member()
Projects.get_details()
Projects.get_id()
Projects.get_id_by_name()
Projects.get_member_details()
Projects.list()
Projects.list_members()
Projects.store()
Projects.update()
Projects.update_member()
ProjectsMetaNames
ClassificationAlgorithms
ClassificationAlgorithms.DT
ClassificationAlgorithms.EX_TREES
ClassificationAlgorithms.GB
ClassificationAlgorithms.LGBM
ClassificationAlgorithms.LR
ClassificationAlgorithms.RF
ClassificationAlgorithms.SnapBM
ClassificationAlgorithms.SnapDT
ClassificationAlgorithms.SnapLR
ClassificationAlgorithms.SnapRF
ClassificationAlgorithms.SnapSVM
ClassificationAlgorithms.XGB
ClassificationAlgorithmsCP4D
ClassificationAlgorithmsCP4D.DT
ClassificationAlgorithmsCP4D.EX_TREES
ClassificationAlgorithmsCP4D.GB
ClassificationAlgorithmsCP4D.LGBM
ClassificationAlgorithmsCP4D.LR
ClassificationAlgorithmsCP4D.RF
ClassificationAlgorithmsCP4D.SnapBM
ClassificationAlgorithmsCP4D.SnapDT
ClassificationAlgorithmsCP4D.SnapLR
ClassificationAlgorithmsCP4D.SnapRF
ClassificationAlgorithmsCP4D.SnapSVM
ClassificationAlgorithmsCP4D.XGB
DataConnectionTypes
Directions
DocumentsSamplingTypes
ForecastingAlgorithms
ForecastingAlgorithmsCP4D
ForecastingPipelineTypes
ForecastingPipelineTypes.ARIMA
ForecastingPipelineTypes.ARIMAX
ForecastingPipelineTypes.ARIMAX_DMLR
ForecastingPipelineTypes.ARIMAX_PALR
ForecastingPipelineTypes.ARIMAX_RAR
ForecastingPipelineTypes.ARIMAX_RSAR
ForecastingPipelineTypes.Bats
ForecastingPipelineTypes.DifferenceFlattenEnsembler
ForecastingPipelineTypes.ExogenousDifferenceFlattenEnsembler
ForecastingPipelineTypes.ExogenousFlattenEnsembler
ForecastingPipelineTypes.ExogenousLocalizedFlattenEnsembler
ForecastingPipelineTypes.ExogenousMT2RForecaster
ForecastingPipelineTypes.ExogenousRandomForestRegressor
ForecastingPipelineTypes.ExogenousSVM
ForecastingPipelineTypes.FlattenEnsembler
ForecastingPipelineTypes.HoltWinterAdditive
ForecastingPipelineTypes.HoltWinterMultiplicative
ForecastingPipelineTypes.LocalizedFlattenEnsembler
ForecastingPipelineTypes.MT2RForecaster
ForecastingPipelineTypes.RandomForestRegressor
ForecastingPipelineTypes.SVM
ForecastingPipelineTypes.get_exogenous()
ForecastingPipelineTypes.get_non_exogenous()
ImputationStrategy
ImputationStrategy.BEST_OF_DEFAULT_IMPUTERS
ImputationStrategy.CUBIC
ImputationStrategy.FLATTEN_ITERATIVE
ImputationStrategy.LINEAR
ImputationStrategy.MEAN
ImputationStrategy.MEDIAN
ImputationStrategy.MOST_FREQUENT
ImputationStrategy.NEXT
ImputationStrategy.NO_IMPUTATION
ImputationStrategy.PREVIOUS
ImputationStrategy.VALUE
Metrics
Metrics.ACCURACY_AND_DISPARATE_IMPACT_SCORE
Metrics.ACCURACY_SCORE
Metrics.AVERAGE_PRECISION_SCORE
Metrics.EXPLAINED_VARIANCE_SCORE
Metrics.F1_SCORE
Metrics.F1_SCORE_MACRO
Metrics.F1_SCORE_MICRO
Metrics.F1_SCORE_WEIGHTED
Metrics.LOG_LOSS
Metrics.MEAN_ABSOLUTE_ERROR
Metrics.MEAN_SQUARED_ERROR
Metrics.MEAN_SQUARED_LOG_ERROR
Metrics.MEDIAN_ABSOLUTE_ERROR
Metrics.PRECISION_SCORE
Metrics.PRECISION_SCORE_MACRO
Metrics.PRECISION_SCORE_MICRO
Metrics.PRECISION_SCORE_WEIGHTED
Metrics.R2_AND_DISPARATE_IMPACT_SCORE
Metrics.R2_SCORE
Metrics.RECALL_SCORE
Metrics.RECALL_SCORE_MACRO
Metrics.RECALL_SCORE_MICRO
Metrics.RECALL_SCORE_WEIGHTED
Metrics.ROC_AUC_SCORE
Metrics.ROOT_MEAN_SQUARED_ERROR
Metrics.ROOT_MEAN_SQUARED_LOG_ERROR
MetricsToDirections
MetricsToDirections.ACCURACY
MetricsToDirections.AVERAGE_PRECISION
MetricsToDirections.EXPLAINED_VARIANCE
MetricsToDirections.F1
MetricsToDirections.F1_MACRO
MetricsToDirections.F1_MICRO
MetricsToDirections.F1_WEIGHTED
MetricsToDirections.NEG_LOG_LOSS
MetricsToDirections.NEG_MEAN_ABSOLUTE_ERROR
MetricsToDirections.NEG_MEAN_SQUARED_ERROR
MetricsToDirections.NEG_MEAN_SQUARED_LOG_ERROR
MetricsToDirections.NEG_MEDIAN_ABSOLUTE_ERROR
MetricsToDirections.NEG_ROOT_MEAN_SQUARED_ERROR
MetricsToDirections.NEG_ROOT_MEAN_SQUARED_LOG_ERROR
MetricsToDirections.NORMALIZED_GINI_COEFFICIENT
MetricsToDirections.PRECISION
MetricsToDirections.PRECISION_MACRO
MetricsToDirections.PRECISION_MICRO
MetricsToDirections.PRECISION_WEIGHTED
MetricsToDirections.R2
MetricsToDirections.RECALL
MetricsToDirections.RECALL_MACRO
MetricsToDirections.RECALL_MICRO
MetricsToDirections.RECALL_WEIGHTED
MetricsToDirections.ROC_AUC
PipelineTypes
PositiveLabelClass
PositiveLabelClass.AVERAGE_PRECISION_SCORE
PositiveLabelClass.F1_SCORE
PositiveLabelClass.F1_SCORE_MACRO
PositiveLabelClass.F1_SCORE_MICRO
PositiveLabelClass.F1_SCORE_WEIGHTED
PositiveLabelClass.PRECISION_SCORE
PositiveLabelClass.PRECISION_SCORE_MACRO
PositiveLabelClass.PRECISION_SCORE_MICRO
PositiveLabelClass.PRECISION_SCORE_WEIGHTED
PositiveLabelClass.RECALL_SCORE
PositiveLabelClass.RECALL_SCORE_MACRO
PositiveLabelClass.RECALL_SCORE_MICRO
PositiveLabelClass.RECALL_SCORE_WEIGHTED
PredictionType
RAGMetrics
RegressionAlgorithms
RegressionAlgorithmsCP4D
RegressionAlgorithmsCP4D.DT
RegressionAlgorithmsCP4D.EX_TREES
RegressionAlgorithmsCP4D.GB
RegressionAlgorithmsCP4D.LGBM
RegressionAlgorithmsCP4D.LR
RegressionAlgorithmsCP4D.RF
RegressionAlgorithmsCP4D.RIDGE
RegressionAlgorithmsCP4D.SnapBM
RegressionAlgorithmsCP4D.SnapDT
RegressionAlgorithmsCP4D.SnapRF
RegressionAlgorithmsCP4D.XGB
RunStateTypes
SamplingTypes
TShirtSize
TimeseriesAnomalyPredictionAlgorithms
TimeseriesAnomalyPredictionPipelineTypes
TimeseriesAnomalyPredictionPipelineTypes.PointwiseBoundedBATS
TimeseriesAnomalyPredictionPipelineTypes.PointwiseBoundedBATSForceUpdate
TimeseriesAnomalyPredictionPipelineTypes.PointwiseBoundedHoltWintersAdditive
TimeseriesAnomalyPredictionPipelineTypes.WindowLOF
TimeseriesAnomalyPredictionPipelineTypes.WindowNN
TimeseriesAnomalyPredictionPipelineTypes.WindowPCA
Transformers
Transformers.ABS
Transformers.CBRT
Transformers.COS
Transformers.CUBE
Transformers.DIFF
Transformers.DIVIDE
Transformers.FEATUREAGGLOMERATION
Transformers.ISOFORESTANOMALY
Transformers.LOG
Transformers.MAX
Transformers.MINMAXSCALER
Transformers.NXOR
Transformers.PCA
Transformers.PRODUCT
Transformers.ROUND
Transformers.SIGMOID
Transformers.SIN
Transformers.SQRT
Transformers.SQUARE
Transformers.STDSCALER
Transformers.SUM
Transformers.TAN
VisualizationTypes