modzy/sdk-python

Python library for Modzy Machine Learning Operations (MLOps) Platform

Installation

Install Modzy's Python SDK with pip:

```bash
pip install modzy-sdk
```

Usage/Examples

Initializing the SDK

Initialize your client by authenticating with an API key. You can download an API key from your instance of Modzy.

```python
from modzy import ApiClient

# Set BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL"     # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key"  # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)
```
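Since hard-coding credentials is discouraged above, here is a minimal sketch of reading them from environment variables instead. The variable names `MODZY_BASE_URL` and `MODZY_API_KEY` are illustrative assumptions, not SDK conventions:

```python
import os

# Keep credentials in environment variables instead of source code.
# The names MODZY_BASE_URL and MODZY_API_KEY are assumptions for this sketch.
os.environ.setdefault("MODZY_BASE_URL", "https://trial.modzy.com")
os.environ.setdefault("MODZY_API_KEY", "replace-with-a-real-key")

BASE_URL = os.environ["MODZY_BASE_URL"]
API_KEY = os.environ["MODZY_API_KEY"]

# client = ApiClient(base_url=BASE_URL, api_key=API_KEY)
```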

Running Inferences

Raw Text Inputs

Submit an inference job to a text-based model by providing the model ID, version, and raw input text:

```python
# Create a dictionary for text input(s)
sources = {}

# Add any number of inputs
sources["first-phone-call"] = {
    "input.txt": "Mr Watson, come here. I want to see you.",
}

# Submit the text to v1.0.1 of a Sentiment Analysis model
# To make the job explainable, change explain=True
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)
```
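Because each top-level key in `sources` names one input item and each nested key is the filename the model expects, any number of items can be batched into a single job. A small sketch (the item keys `phrase-0`, `phrase-1` are arbitrary labels):

```python
# Batch several text inputs into one sources dictionary
sources = {}
for i, phrase in enumerate(["Mr Watson, come here.", "I want to see you."]):
    sources[f"phrase-{i}"] = {"input.txt": phrase}

# job = client.jobs.submit_text("ed542963de", "1.0.1", sources)
```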

File Inputs

Pass a file from your local directory to a model by providing the model ID, version, and the filepath of your sample data:

```python
# Generate a mapping of your local file (nyc-skyline.jpg) to the input filename the model expects
sources = {"nyc-skyline": {"image": "./images/nyc-skyline.jpg"}}

# Submit the image to v1.0.1 of an image-based geolocation model
job = client.jobs.submit_file("aevbu1h3yw", "1.0.1", sources)
```

Embedded Inputs

Convert images and other large inputs to base64 embedded data and submit to a model by providing a model ID, version number, and dictionary with one or more base64 encoded inputs:

```python
from modzy._util import file_to_bytes

# Embed input as a string in base64
image_bytes = file_to_bytes('./images/tower-bridge.jpg')

# Prepare the source dictionary
sources = {"tower-bridge": {"image": image_bytes}}

# Submit the image to v1.0.1 of an image-based geolocation model
job = client.jobs.submit_embedded("aevbu1h3yw", "1.0.1", sources)
```
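If you prefer not to depend on the private `modzy._util` helper, the embedding step can be sketched with the standard library. This assumes `file_to_bytes` produces plain base64, which is an assumption rather than documented behavior:

```python
import base64

def encode_file(path):
    """Read a file and return its contents as base64-encoded bytes.

    Sketch of a base64 embedding helper; the SDK's file_to_bytes may differ.
    """
    with open(path, "rb") as f:
        return base64.b64encode(f.read())

# Round-trip sanity check on in-memory data
raw = b"example image bytes"
assert base64.b64decode(base64.b64encode(raw)) == raw
```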

Inputs from Databases

Submit data from a SQL database to a model by providing a model ID, version, a SQL query, and database connection credentials:

```python
# Add database connection and query information
db_url = "jdbc:postgresql://db.bit.io:5432/bitdotio"
db_username = DB_USER_NAME
db_password = DB_PASSWORD
db_driver = "org.postgresql.Driver"

# Select as "input.txt" because that is the required input name for this model
db_query = "SELECT \"mailaddr\" as \"input.txt\" FROM \"user/demo_repo\".\"atl_parcel_attr\" LIMIT 10;"

# Submit the database query to v0.0.12 of a Named Entity Recognition model
job = client.jobs.submit_jdbc("a92fc413b5", "0.0.12", db_url, db_username, db_password, db_driver, db_query)
```
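The escaped quotes in `db_query` can be avoided with a Python triple-quoted string; this sketch builds the identical query text:

```python
# Same query as above, written without backslash escapes
db_query = """SELECT "mailaddr" as "input.txt" FROM "user/demo_repo"."atl_parcel_attr" LIMIT 10;"""

# job = client.jobs.submit_jdbc("a92fc413b5", "0.0.12", db_url, db_username, db_password, db_driver, db_query)
```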

Inputs from Cloud Storage

Submit data directly from your cloud storage bucket (Amazon S3, Azure Blob Storage, and NetApp StorageGRID are supported) by providing a model ID, version, and storage-blob-specific parameters.

AWS S3

```python
# Define a sources dictionary with the bucket and key that point to the correct file in your S3 bucket
sources = {
    "first-amazon-review": {
        "input.txt": {
            "bucket": "s3-bucket-name",
            "key": "key-to-file.txt"
        }
    }
}

AWS_ACCESS_KEY = "aws-access-key"
AWS_SECRET_ACCESS_KEY = "aws-secret-access-key"
AWS_REGION = "us-east-1"

# Submit S3 input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_aws_s3("ed542963de", "1.0.1", sources, AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, AWS_REGION)
```

Azure Blob Storage

```python
# Define a sources dictionary with the container name and file path that point to the correct file in your Azure Blob container
sources = {
    "first-amazon-review": {
        "input.txt": {
            "container": "azure-blob-container-name",
            "filePath": "key-to-file.txt"
        }
    }
}

AZURE_STORAGE_ACCOUNT = "Azure-Storage-Account"
AZURE_STORAGE_ACCOUNT_KEY = "cvx....ytw=="

# Submit Azure Blob input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_azureblob("ed542963de", "1.0.1", sources, AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCOUNT_KEY)
```

NetApp StorageGRID

```python
# Define a sources dictionary with the bucket name and key that point to the correct file in your NetApp StorageGRID bucket
sources = {
    "first-amazon-review": {
        "input.txt": {
            "bucket": "bucket-name",
            "key": "key-to-file.txt"
        }
    }
}

ACCESS_KEY = "access-key"
SECRET_ACCESS_KEY = "secret-access-key"
STORAGE_GRID_ENDPOINT = "https://endpoint.storage-grid.example"

# Submit StorageGRID input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_storagegrid("ed542963de", "1.0.1", sources, ACCESS_KEY, SECRET_ACCESS_KEY, STORAGE_GRID_ENDPOINT)
```

Getting Results

Modzy's APIs are asynchronous by nature, which means you can use the results API to query available results for all completed inference jobs at any point in time. There are two ways you might leverage this Python SDK to query results:

Block Job until it completes

This method mimics a synchronous API by calling two different APIs in sequence through a utility function.

```python
# Define a sources dictionary with input data
sources = {"my-input": {"input.txt": "Today is a beautiful day!"}}

# Submit the text to v1.0.1 of a Sentiment Analysis model
# To make the job explainable, change explain=True
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)

# Use the block-until-complete method to periodically poll the results API until the job completes
results = client.results.block_until_complete(job, timeout=None, poll_interval=5)
```
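Conceptually, `block_until_complete` is a polling loop over the results API. The following is an illustrative sketch of that pattern with a fake status source, not the SDK's actual implementation (the terminal state names are assumptions):

```python
import time

def poll_until_complete(get_status, timeout=None, poll_interval=5):
    """Poll get_status() until it returns a terminal state.

    Sketch of the polling pattern behind block_until_complete; not SDK code.
    """
    start = time.monotonic()
    while True:
        status = get_status()
        if status in ("COMPLETED", "CANCELED", "TIMEDOUT"):  # assumed terminal states
            return status
        if timeout is not None and time.monotonic() - start > timeout:
            raise TimeoutError("job did not complete in time")
        time.sleep(poll_interval)

# Fake status source that completes on the third poll
states = iter(["SUBMITTED", "IN_PROGRESS", "COMPLETED"])
final = poll_until_complete(lambda: next(states), poll_interval=0)
```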

Query a Job's Result

This method simply queries the results for a job at any point in time and returns the status of the job, which includes the results if the job has completed.

```python
# Query results for a job at any point in time
results = client.results.get(job)

# Print the inference results
results_json = results.get_first_outputs()['results.json']
print(results_json)
```

Deploying Models

Deploy a model to your private model library in Modzy:

```python
from modzy import ApiClient

# Set BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL"     # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key"  # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)

model_data = client.models.deploy(
    container_image="modzy/grpc-echo-model:1.0.0",
    model_name="Echo Model",
    model_version="0.0.1",
    sample_input_file="./test.txt",
    run_timeout="60",
    status_timeout="60",
    short_description="This model returns the same text passed through as input, similar to an 'echo.'",
    long_description="This model returns the same text passed through as input, similar to an 'echo.'",
    technical_details="This section can include any technical information about your model. Include information about how your model was trained, any underlying architecture details, or other pertinent information an end-user would benefit from learning.",
    performance_summary="This is the performance summary."
)
print(model_data)
```

To use client.models.deploy(), four fields are required:

  • container_image (str): The container image repository and tag, i.e., the string you would pass to a docker pull command. For example, if you would download the image with docker pull modzy/grpc-echo-model:1.0.0, pass just modzy/grpc-echo-model:1.0.0 for this parameter
  • model_name: The name of the model you would like to deploy
  • model_version: The version of the model you would like to deploy
  • sample_input_file: Filepath to a sample piece of data that your model is expected to process and perform inference against

Running Inferences at the Edge

The SDK supports running inferences on edge devices through Modzy's Edge Client. The inference workflow is almost identical to the one outlined above and provides functionality for interacting with both the Job and Inference APIs:

Initialize Edge Client

```python
from modzy import EdgeClient

# Initialize the edge client
# Use 'localhost' for local inferences; otherwise use the device's full IP address
client = EdgeClient('localhost', 55000)
```

Submit Inference with Job API

Modzy Edge supports text, embedded, and aws-s3 input types.

```python
# Submit a text job to a Sentiment Analysis model deployed on an edge device
# by providing a model ID, version, and raw text data
job = client.jobs.submit_text("ed542963de", "1.0.27", {"input.txt": "this is awesome"})

# Block until results are ready
final_job_details = client.jobs.block_until_complete(job)
results = client.jobs.get_results(job)
```

Query Details about an Inference with Job API

```python
# Get job details for a particular job
job_details = client.jobs.get_job_details(job)

# Get job details for all jobs run on your Modzy Edge instance
all_job_details = client.jobs.get_all_job_details()
```

Submit Inference with Inference API

The SDK provides several methods for interacting with Modzy's Inference API:

  • Synchronous: This convenience method wraps two SDK methods and is optimal for use cases that require real-time or sequential results (i.e., prediction results are needed to inform an action before submitting a new inference)
  • Asynchronous: This method combines two SDK methods and is optimal for submitting large batches of data and querying results at a later time (i.e., real-time inference is not required)
  • Streaming: This convenience method runs multiple synchronous inferences consecutively, allowing users to submit iterable objects to be processed sequentially in real time

Synchronous (image-based model example)

```python
from modzy import EdgeClient
from modzy.edge import InputSource

image_bytes = open("image_path.jpg", "rb").read()
input_object = InputSource(
    key="image",  # input filename defined by model author
    data=image_bytes,
)

with EdgeClient('localhost', 55000) as client:
    inference = client.inferences.run("<model-id>", "<model-version>", input_object, explain=False, tags=None)
    results = inference.result.outputs
```

Asynchronous (image-based model example - submit batch of images in folder)

```python
import os

from modzy import EdgeClient
from modzy.edge import InputSource

# Submit inferences
img_folder = "./images"
inferences = []
for img in os.listdir(img_folder):
    input_object = InputSource(
        key="image",  # input filename defined by model author
        data=open(os.path.join(img_folder, img), 'rb').read()
    )
    with EdgeClient('localhost', 55000) as client:
        inference = client.inferences.perform_inference("<model-id>", "<model-version>", input_object, explain=False, tags=None)
    inferences.append(inference)

# Query results
with EdgeClient('localhost', 55000) as client:
    results = [client.inferences.block_until_complete(inference.identifier) for inference in inferences]
```

Stream

```python
import os

from modzy import EdgeClient
from modzy.edge import InputSource

# Generate a list of requests to pass to the stream method
img_folder = "./images"
requests = []
for img in os.listdir(img_folder):
    input_object = InputSource(
        key="image",  # input filename defined by model author
        data=open(os.path.join(img_folder, img), 'rb').read()
    )
    with EdgeClient('localhost', 55000) as client:
        requests.append(client.inferences.build_inference_request("<model-id>", "<model-version>", input_object, explain=False, tags=None))

# Submit the list of inference requests to the streaming API
with EdgeClient('localhost', 55000) as client:
    streaming_results = client.inferences.stream(requests)
```

SDK Code Examples

View examples of practical workflows:

Documentation

Modzy's SDK is built on top of the Modzy HTTP/REST API. For a full list of features and supported routes, visit the Python SDK pages on docs.modzy.com.

API Reference

| Feature | Code | API route |
|---------|------|-----------|
| Deploy new model | `client.models.deploy()` | api/models |
| Get all models | `client.models.get_all()` | api/models |
| List models | `client.models.get_models()` | api/models |
| Get model details | `client.models.get()` | api/models/:model-id |
| List models by name | `client.models.get_by_name()` | api/models |
| List models by tag | `client.tags.get_tags_and_models()` | api/models/tags/:tag-id |
| Get related models | `client.models.get_related()` | api/models/:model-id/related-models |
| List a model's versions | `client.models.get_versions()` | api/models/:model-id/versions |
| Get a version's details | `client.models.get_version()` | api/models/:model-id/versions/:version-id |
| Update processing engines | `client.models.update_processing_engines()` | api/resource/models |
| Get minimum engines | `client.models.get_minimum_engines()` | api/models/processing-engines |
| List tags | `client.tags.get_all()` | api/models/tags |
| Submit a Job (Text) | `client.jobs.submit_text()` | api/jobs |
| Submit a Job (Embedded) | `client.jobs.submit_embedded()` | api/jobs |
| Submit a Job (File) | `client.jobs.submit_file()` | api/jobs |
| Submit a Job (AWS S3) | `client.jobs.submit_aws_s3()` | api/jobs |
| Submit a Job (Azure Blob Storage) | `client.jobs.submit_azureblob()` | api/jobs |
| Submit a Job (NetApp StorageGRID) | `client.jobs.submit_storagegrid()` | api/jobs |
| Submit a Job (JDBC) | `client.jobs.submit_jdbc()` | api/jobs |
| Cancel job | `job.cancel()` | api/jobs/:job-id |
| Hold until inference is complete | `job.block_until_complete()` | api/jobs/:job-id |
| Get job details | `client.jobs.get()` | api/jobs/:job-id |
| Get results | `job.get_result()` | api/results/:job-id |
| Get the job history | `client.jobs.get_history()` | api/jobs/history |
| Submit a Job with Edge Client (Embedded) | `EdgeClient.jobs.submit_embedded()` | Python/edge/jobs |
| Submit a Job with Edge Client (Text) | `EdgeClient.jobs.submit_text()` | Python/edge/jobs |
| Submit a Job with Edge Client (AWS S3) | `EdgeClient.jobs.submit_aws_s3()` | Python/edge/jobs |
| Get job details with Edge Client | `EdgeClient.jobs.get_job_details()` | Python/edge/jobs |
| Get all job details with Edge Client | `EdgeClient.jobs.get_all_job_details()` | Python/edge/jobs |
| Hold until job is complete with Edge Client | `EdgeClient.jobs.block_until_complete()` | Python/edge/jobs |
| Get results with Edge Client | `EdgeClient.jobs.get_results()` | Python/edge/jobs |
| Build inference request with Edge Client | `EdgeClient.inferences.build_inference_request()` | Python/edge/inferences |
| Perform inference with Edge Client | `EdgeClient.inferences.perform_inference()` | Python/edge/inferences |
| Get inference details with Edge Client | `EdgeClient.inferences.get_inference_details()` | Python/edge/inferences |
| Run synchronous inferences with Edge Client | `EdgeClient.inferences.run()` | Python/edge/inferences |
| Hold until inference completes with Edge Client | `EdgeClient.inferences.block_until_complete()` | Python/edge/inferences |
| Stream inferences with Edge Client | `EdgeClient.inferences.stream()` | Python/edge/inferences |

Support

For support, email opensource@modzy.com or join our Slack.

Contributing

Contributions are always welcome!

See contributing.md for ways to get started.

Please adhere to this project's code of conduct.

We are happy to receive contributions from all of our users. Check out our contributing file to learn more.

Contributor Covenant

