Python library for Modzy Machine Learning Operations (MLOps) Platform
Install Modzy's Python SDK with pip:

```bash
pip install modzy-sdk
```
Initialize your client by authenticating with an API key. You can download an API key from your instance of Modzy.
```python
from modzy import ApiClient

# Sets BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL"  # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key"  # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)
```
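Since the comments above recommend environment variables, here is a minimal sketch of reading both values from the environment. The variable names `MODZY_BASE_URL` and `MODZY_API_KEY` are illustrative, not an SDK convention:

```python
import os

from modzy import ApiClient

# Illustrative environment variable names; set them however your deployment manages secrets
client = ApiClient(
    base_url=os.environ["MODZY_BASE_URL"],
    api_key=os.environ["MODZY_API_KEY"],
)
```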
Submit an inference job to a text-based model by providing the model ID, version, and raw input text:
```python
# Creates a dictionary for text input(s)
sources = {}

# Adds any number of inputs
sources["first-phone-call"] = {
    "input.txt": "Mr Watson, come here. I want to see you.",
}

# Submit the text to v1.0.1 of a Sentiment Analysis model;
# to make the job explainable, change explain=True
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)
```
Pass a file from your local directory to a model by providing the model ID, version, and the filepath of your sample data:
```python
# Generate a mapping of your local file (nyc-skyline.jpg) to the input filename the model expects
sources = {"nyc-skyline": {"image": "./images/nyc-skyline.jpg"}}

# Submit the image to v1.0.1 of an Image-Based Geolocation model
job = client.jobs.submit_file("aevbu1h3yw", "1.0.1", sources)
```
Convert images and other large inputs to base64 embedded data and submit to a model by providing a model ID, version number, and dictionary with one or more base64 encoded inputs:
```python
from modzy._util import file_to_bytes

# Embed input as a string in base64
image_bytes = file_to_bytes('./images/tower-bridge.jpg')

# Prepare the source dictionary
sources = {"tower-bridge": {"image": image_bytes}}

# Submit the image to v1.0.1 of an Image-Based Geolocation model
job = client.jobs.submit_embedded("aevbu1h3yw", "1.0.1", sources)
```
Submit data from a SQL database to a model by providing a model ID, version, a SQL query, and database connection credentials:
```python
# Add database connection and query information
db_url = "jdbc:postgresql://db.bit.io:5432/bitdotio"
db_username = DB_USER_NAME
db_password = DB_PASSWORD
db_driver = "org.postgresql.Driver"

# Select as "input.txt" because that is the required input name for this model
db_query = "SELECT \"mailaddr\" as \"input.txt\" FROM \"user/demo_repo\".\"atl_parcel_attr\" LIMIT 10;"

# Submit the database query to v0.0.12 of a Named Entity Recognition model
job = client.jobs.submit_jdbc("a92fc413b5", "0.0.12", db_url, db_username, db_password, db_driver, db_query)
```
Submit data directly from your cloud storage bucket (Amazon S3, Azure Blob, and NetApp StorageGRID are supported) by providing a model ID, version, and the storage-provider-specific parameters shown below.
```python
# Define sources dictionary with bucket and key that point to the correct file in your S3 bucket
sources = {
    "first-amazon-review": {
        "input.txt": {
            "bucket": "s3-bucket-name",
            "key": "key-to-file.txt"
        }
    }
}

AWS_ACCESS_KEY = "aws-access-key"
AWS_SECRET_ACCESS_KEY = "aws-secret-access-key"
AWS_REGION = "us-east-1"

# Submit S3 input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_aws_s3("ed542963de", "1.0.1", sources, AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY, AWS_REGION)
```
```python
# Define sources dictionary with container name and filePath that point to the correct file in your Azure Blob container
sources = {
    "first-amazon-review": {
        "input.txt": {
            "container": "azure-blob-container-name",
            "filePath": "key-to-file.txt"
        }
    }
}

AZURE_STORAGE_ACCOUNT = "Azure-Storage-Account"
AZURE_STORAGE_ACCOUNT_KEY = "cvx....ytw=="

# Submit Azure Blob input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_azureblob("ed542963de", "1.0.1", sources, AZURE_STORAGE_ACCOUNT, AZURE_STORAGE_ACCOUNT_KEY)
```
```python
# Define sources dictionary with bucket name and key that point to the correct file in your NetApp StorageGRID bucket
sources = {
    "first-amazon-review": {
        "input.txt": {
            "bucket": "bucket-name",
            "key": "key-to-file.txt"
        }
    }
}

ACCESS_KEY = "access-key"
SECRET_ACCESS_KEY = "secret-access-key"
STORAGE_GRID_ENDPOINT = "https://endpoint.storage-grid.example"

# Submit StorageGRID input to v1.0.1 of a Sentiment Analysis model
job = client.jobs.submit_storagegrid("ed542963de", "1.0.1", sources, ACCESS_KEY, SECRET_ACCESS_KEY, STORAGE_GRID_ENDPOINT)
```
Modzy's APIs are asynchronous by nature, which means you can use the results API to query available results for all completed inference jobs at any point in time. There are two ways you might leverage this Python SDK to query results:
This method mimics a synchronous API by calling two APIs back to back with a utility function that blocks until the job completes.
```python
# Define sources dictionary with input data
sources = {"my-input": {"input.txt": "Today is a beautiful day!"}}

# Submit the text to v1.0.1 of a Sentiment Analysis model;
# to make the job explainable, change explain=True
job = client.jobs.submit_text("ed542963de", "1.0.1", sources, explain=False)

# Use the block_until_complete method to periodically ping the results API until the job completes
results = client.results.block_until_complete(job, timeout=None, poll_interval=5)
```
This method simply queries the results for a job at any point in time and returns the status of the job, which includes the results if the job has completed.
```python
# Query results for a job at any point in time
results = client.results.get(job)

# Print the inference results
results_json = results.get_first_outputs()['results.json']
print(results_json)
```
Deploy a model to your private model library in Modzy:
```python
from modzy import ApiClient

# Sets BASE_URL and API_KEY values
# Best to set these as environment variables
BASE_URL = "Valid Modzy URL"  # e.g., "https://trial.modzy.com"
API_KEY = "Valid Modzy API Key"  # e.g., "JbFkWZMx4Ea3epIrxSgA.a2fR36fZi3sdFPoztAXT"

client = ApiClient(base_url=BASE_URL, api_key=API_KEY)

model_data = client.models.deploy(
    container_image="modzy/grpc-echo-model:1.0.0",
    model_name="Echo Model",
    model_version="0.0.1",
    sample_input_file="./test.txt",
    run_timeout="60",
    status_timeout="60",
    short_description="This model returns the same text passed through as input, similar to an 'echo.'",
    long_description="This model returns the same text passed through as input, similar to an 'echo.'",
    technical_details="This section can include any technical information about your model. Include information about how your model was trained, any underlying architecture details, or other pertinent information an end-user would benefit from learning.",
    performance_summary="This is the performance summary."
)

print(model_data)
```
To use `client.models.deploy()`, four fields are required:
- `container_image` (str): This parameter must represent a container image repository and tag name, i.e., the string you would include after a `docker pull` command. For example, if you were to download this container image using `docker pull modzy/grpc-echo-model:1.0.0`, include just `modzy/grpc-echo-model:1.0.0` for this parameter.
- `model_name`: The name of the model you would like to deploy.
- `model_version`: The version of the model you would like to deploy.
- `sample_input_file`: Filepath to a sample piece of data that your model is expected to process and perform inference against.
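Putting those four together, a minimal deployment call can omit every optional field. A sketch that reuses the illustrative values from the example above:

```python
# Minimal deployment: only the four required fields
# (container image, name, version, and sample file are the illustrative values from the example above)
model_data = client.models.deploy(
    container_image="modzy/grpc-echo-model:1.0.0",
    model_name="Echo Model",
    model_version="0.0.1",
    sample_input_file="./test.txt",
)
print(model_data)
```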
The SDK provides support for running inferences on edge devices through Modzy's Edge Client. The inference workflow is almost identical to the previously outlined workflow, and the Edge Client provides functionality for interacting with both the Job and Inferences APIs:
```python
from modzy import EdgeClient

# Initialize edge client
# Use 'localhost' for local inferences; otherwise use the device's full IP address
client = EdgeClient('localhost', 55000)
```
Modzy Edge supports `text`, `embedded`, and `aws-s3` input types.
```python
# Submit a text job to a Sentiment Analysis model deployed on an edge device
# by providing a model ID, version, and raw text data, then wait for completion
job = client.jobs.submit_text("ed542963de", "1.0.27", {"input.txt": "this is awesome"})

# Block until results are ready
final_job_details = client.jobs.block_until_complete(job)
results = client.jobs.get_results(job)
```
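For the `embedded` input type, here is a sketch that assumes `EdgeClient.jobs.submit_embedded()` takes the same `(model_id, version, sources)` shape as `submit_text()` above, with the model ID and version reused from the earlier geolocation example; check the SDK reference for the exact signature:

```python
from modzy._util import file_to_bytes

# Assumes the sources dictionary is keyed by the model's input filename,
# mirroring the submit_text() call above
image_bytes = file_to_bytes('./images/nyc-skyline.jpg')
job = client.jobs.submit_embedded("aevbu1h3yw", "1.0.1", {"image": image_bytes})
final_job_details = client.jobs.block_until_complete(job)
results = client.jobs.get_results(job)
```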
```python
# Get job details for a particular job
job_details = client.jobs.get_job_details(job)

# Get job details for all jobs run on your Modzy Edge instance
all_job_details = client.jobs.get_all_job_details()
```
The SDK provides several methods for interacting with Modzy's Inference API:
- Synchronous: This convenience method wraps two SDK methods and is optimal for use cases that require real-time or sequential results (i.e., prediction results are needed to inform an action before submitting a new inference)
- Asynchronous: This method combines two SDK methods and is optimal for submitting large batches of data and querying results at a later time (i.e., real-time inference is not required)
- Streaming: This convenience method runs multiple synchronous inferences consecutively, allowing users to submit iterable objects that are processed sequentially in real time
Synchronous (image-based model example)
```python
from modzy import EdgeClient
from modzy.edge import InputSource

image_bytes = open("image_path.jpg", "rb").read()
input_object = InputSource(
    key="image",  # input filename defined by model author
    data=image_bytes,
)

with EdgeClient('localhost', 55000) as client:
    inference = client.inferences.run("<model-id>", "<model-version>", input_object, explain=False, tags=None)
    results = inference.result.outputs
```
Asynchronous (image-based model example - submit batch of images in folder)
```python
import os

from modzy import EdgeClient
from modzy.edge import InputSource

# Submit inferences
img_folder = "./images"
inferences = []
for img in os.listdir(img_folder):
    input_object = InputSource(
        key="image",  # input filename defined by model author
        data=open(os.path.join(img_folder, img), 'rb').read()
    )
    with EdgeClient('localhost', 55000) as client:
        inference = client.inferences.perform_inference("<model-id>", "<model-version>", input_object, explain=False, tags=None)
    inferences.append(inference)

# Query results
with EdgeClient('localhost', 55000) as client:
    results = [client.inferences.block_until_complete(inference.identifier) for inference in inferences]
```
Streaming (image-based model example)
```python
import os

from modzy import EdgeClient
from modzy.edge import InputSource

# Generate requests iterator to pass to stream method
img_folder = "./images"
requests = []
for img in os.listdir(img_folder):
    input_object = InputSource(
        key="image",  # input filename defined by model author
        data=open(os.path.join(img_folder, img), 'rb').read()
    )
    with EdgeClient('localhost', 55000) as client:
        requests.append(client.inferences.build_inference_request("<model-id>", "<model-version>", input_object, explain=False, tags=None))

# Submit list of inference requests to the streaming API
with EdgeClient('localhost', 55000) as client:
    streaming_results = client.inferences.stream(requests)
```
View examples of practical workflows:
- Image-Based Geolocation Inference Notebook
- Automobile Classification Inference Notebook with Explainability
- Batch Inference with Sentiment Analysis
Modzy's SDK is built on top of the Modzy HTTP/REST API. For a full list of features and supported routes, visit the Python SDK page on docs.modzy.com.
| Feature | Code | API route |
|---|---|---|
| Deploy new model | client.models.deploy() | api/models |
| Get all models | client.models.get_all() | api/models |
| List models | client.models.get_models() | api/models |
| Get model details | client.models.get() | api/models/:model-id |
| List models by name | client.models.get_by_name() | api/models |
| List models by tag | client.tags.get_tags_and_models() | api/models/tags/:tag-id |
| Get related models | client.models.get_related() | api/models/:model-id/related-models |
| List a model's versions | client.models.get_versions() | api/models/:model-id/versions |
| Get a version's details | client.models.get_version() | api/models/:model-id/versions/:version-id |
| Update processing engines | client.models.update_processing_engines() | api/resource/models |
| Get minimum engines | client.models.get_minimum_engines() | api/models/processing-engines |
| List tags | client.tags.get_all() | api/models/tags |
| Submit a Job (Text) | client.jobs.submit_text() | api/jobs |
| Submit a Job (Embedded) | client.jobs.submit_embedded() | api/jobs |
| Submit a Job (File) | client.jobs.submit_file() | api/jobs |
| Submit a Job (AWS S3) | client.jobs.submit_aws_s3() | api/jobs |
| Submit a Job (Azure Blob Storage) | client.jobs.submit_azureblob() | api/jobs |
| Submit a Job (NetApp StorageGRID) | client.jobs.submit_storagegrid() | api/jobs |
| Submit a Job (JDBC) | client.jobs.submit_jdbc() | api/jobs |
| Cancel job | job.cancel() | api/jobs/:job-id |
| Hold until inference is complete | job.block_until_complete() | api/jobs/:job-id |
| Get job details | client.jobs.get() | api/jobs/:job-id |
| Get results | job.get_result() | api/results/:job-id |
| Get the job history | client.jobs.get_history() | api/jobs/history |
| Submit a Job with Edge Client (Embedded) | EdgeClient.jobs.submit_embedded() | Python/edge/jobs |
| Submit a Job with Edge Client (Text) | EdgeClient.jobs.submit_text() | Python/edge/jobs |
| Submit a Job with Edge Client (AWS S3) | EdgeClient.jobs.submit_aws_s3() | Python/edge/jobs |
| Get job details with Edge Client | EdgeClient.jobs.get_job_details() | Python/edge/jobs |
| Get all job details with Edge Client | EdgeClient.jobs.get_all_job_details() | Python/edge/jobs |
| Hold until job is complete with Edge Client | EdgeClient.jobs.block_until_complete() | Python/edge/jobs |
| Get results with Edge Client | EdgeClient.jobs.get_results() | Python/edge/jobs |
| Build inference request with Edge Client | EdgeClient.inferences.build_inference_request() | Python/edge/inferences |
| Perform inference with Edge Client | EdgeClient.inferences.perform_inference() | Python/edge/inferences |
| Get inference details with Edge Client | EdgeClient.inferences.get_inference_details() | Python/edge/inferences |
| Run synchronous inferences with Edge Client | EdgeClient.inferences.run() | Python/edge/inferences |
| Hold until inference completes with Edge Client | EdgeClient.inferences.block_until_complete() | Python/edge/inferences |
| Stream inferences with Edge Client | EdgeClient.inferences.stream() | Python/edge/inferences |
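Note that a few of these routes are also exposed directly on the job object returned by the submit methods. A short sketch based on the table above; exact keyword arguments may vary:

```python
# Submit a job, then drive it through the job object's own methods
sources = {"my-input": {"input.txt": "Great product!"}}
job = client.jobs.submit_text("ed542963de", "1.0.1", sources)

# Hold until the inference is complete (api/jobs/:job-id)
job.block_until_complete()

# Retrieve the results (api/results/:job-id)
result = job.get_result()

# A still-running job could instead be cancelled (api/jobs/:job-id)
# job.cancel()
```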
For support, email opensource@modzy.com or join our Slack.
Contributions are always welcome!
See contributing.md for ways to get started.
Please adhere to this project's code of conduct.
We are happy to receive contributions from all of our users. Check out our contributing file to learn more.