microsoft/AIReferenceArchitectures

Official Azure Reference Architectures for AI workloads

Note: This repository was archived by the owner on Jul 22, 2024. It is now read-only.

This repository contains the recommended ways to train and deploy machine learning models on Azure. It ranges from running massively parallel hyperparameter tuning using Hyperdrive to deploying deep learning models on Kubernetes. Each tutorial takes you step by step through the process to train or deploy your model. If you are unsure which service to use and when, look at the FAQ below.

For further documentation on the reference architectures, please look here.

Getting Started

This repository is arranged as submodules, so you can either pull all the tutorials or only the ones you want. To pull all the tutorials, simply run:

git clone --recurse-submodules https://github.com/Microsoft/AIReferenceArchitectures.git

If you have a Git version older than 2.13, run:

git clone --recursive https://github.com/Microsoft/AIReferenceArchitectures.git

Tutorials

| Tutorial | Environment | Description | Status |
| --- | --- | --- | --- |
| Deploy Deep Learning Model on Kubernetes | Python GPU | Deploy image classification model on Kubernetes or IoT Edge for real-time scoring using Azure ML | Build Status |
| Deploy Classic ML Model on Kubernetes | Python CPU | Train LightGBM model locally using Azure ML, deploy on Kubernetes or IoT Edge for real-time scoring | |
| Hyperparameter Tuning of Classical ML Models | Python CPU | Train LightGBM model locally and run hyperparameter tuning using Hyperdrive in Azure ML | |
| Deploy Deep Learning Model on Pipelines | Python GPU | Deploy PyTorch style transfer model for batch scoring using Azure ML Pipelines | Build Status |
| Deploy Classic ML Model on Pipelines | Python CPU | Deploy one-class SVM for batch scoring anomaly detection using Azure ML Pipelines | |
| Deploy R ML Model on Kubernetes | R CPU | Deploy ML model for real-time scoring on Kubernetes | |
| Deploy R ML Model on Batch | R CPU | Deploy forecasting model for batch scoring using Azure Batch and doAzureParallel | |
| Deploy Spark ML Model on Databricks | Spark CPU | Deploy a classification model for batch scoring using Databricks | |
| Train Distributed Deep Learning Model | Python GPU | Distributed training of ResNet50 model using Batch AI | |

Requirements

The tutorials have mainly been tested on Linux VMs in Azure. Each tutorial may have slightly different requirements, such as a GPU for some of the deep learning ones. For more details, please consult the README in each tutorial.

Reporting Issues

Please report issues with each tutorial on the tutorial's own GitHub page.

FAQ

What service should I use for deploying models in Python?

When deploying ML models in Python there are two core questions: whether scoring needs to happen in real time, and whether the model is a deep learning model. For deploying deep learning models that require real-time scoring, we recommend Azure Kubernetes Service (AKS) with GPUs; for a tutorial on how to do that, look at AKS w/GPU. For deploying deep learning models for batch scoring, we recommend using AzureML Pipelines with GPUs; for a tutorial on how to do that, look at AzureML Pipelines w/GPU. For non-deep-learning models we recommend the same services but without GPUs. For a tutorial on deploying classical ML models for real-time scoring look at AKS, and for batch scoring look at AzureML Pipelines.
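To make the real-time path concrete, here is a minimal sketch (not taken from the tutorials) of deploying an already registered model to an attached AKS cluster with the v1 azureml-sdk. The workspace config.json, the model name "my-model", the cluster name "my-aks", and the score.py/myenv.yml files are all illustrative assumptions:

```python
# Minimal sketch: deploy a registered model to AKS for real-time scoring.
# Assumes the v1 azureml-sdk, an existing workspace, a registered model,
# and an attached AKS cluster; all names below are illustrative.
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AksWebservice
from azureml.core.compute import AksCompute

ws = Workspace.from_config()                        # reads config.json
model = Model(ws, name="my-model")                  # hypothetical registered model
env = Environment.from_conda_specification(
    name="scoring-env", file_path="myenv.yml")      # hypothetical conda spec

inference_config = InferenceConfig(entry_script="score.py", environment=env)
aks_target = AksCompute(ws, "my-aks")               # hypothetical attached cluster
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(ws, "image-classifier", [model],
                       inference_config, deployment_config,
                       deployment_target=aks_target)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

The batch-scoring tutorials follow a different pattern: instead of creating a web service, they build and submit an Azure ML Pipeline that scores data on a schedule or on demand.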

What service should I use to train a model in Python?

There are many options for training ML models in Python on Azure. The most straightforward way is to train your model on a DSVM. You can either do this in local mode straight on the VM or by attaching it in AzureML as a compute target. If you want AzureML to manage the compute for you and scale it up and down based on whether jobs are waiting in the queue, then you should use AzureML Compute, as sketched below.
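As a minimal sketch of that last option (assuming the v1 azureml-sdk and an existing workspace; the cluster name and VM size are illustrative), an AzureML Compute cluster that scales down when the queue is empty can be provisioned like this:

```python
# Minimal sketch: provision an autoscaling AzureML Compute cluster.
# Assumes the v1 azureml-sdk and an existing workspace; names and sizes
# are illustrative only.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_D3_V2",   # CPU size; pick a GPU size for deep learning
    min_nodes=0,                # release all VMs when no jobs are queued
    max_nodes=4,
)
cluster = ComputeTarget.create(ws, name="train-cluster",
                               provisioning_configuration=config)
cluster.wait_for_completion(show_output=True)
```

With min_nodes=0 the cluster releases its VMs when nothing is queued, which is the main reason to prefer AzureML Compute over a long-running DSVM for bursty workloads.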

If you are going to run multiple jobs for hyperparameter tuning or other purposes, then we recommend using Hyperdrive, Azure automated ML, or AzureML Compute, depending on your requirements. For a tutorial on how to use Hyperdrive, go here.
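For a flavour of what a Hyperdrive sweep looks like, here is a sketch (not the tutorial's code; it assumes the v1 azureml-sdk, a workspace, a compute cluster named "train-cluster", and a train.py that logs a metric named "auc") of a random-sampling run over two LightGBM-style parameters:

```python
# Minimal sketch: a Hyperdrive hyperparameter sweep over a training script.
# Assumes the v1 azureml-sdk; the script, metric name, cluster name, and
# parameter ranges are illustrative only.
from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.train.hyperdrive import (
    HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal,
    choice, uniform)

ws = Workspace.from_config()
src = ScriptRunConfig(source_directory=".", script="train.py",
                      compute_target="train-cluster")  # hypothetical cluster

param_sampling = RandomParameterSampling({
    "--num_leaves": choice(31, 63, 127),
    "--learning_rate": uniform(0.01, 0.3),
})

hd_config = HyperDriveConfig(
    run_config=src,
    hyperparameter_sampling=param_sampling,
    primary_metric_name="auc",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4,
)

run = Experiment(ws, "lightgbm-tuning").submit(hd_config)
run.wait_for_completion(show_output=True)
```

Each sampled configuration is submitted as a child run, and the parent run tracks the best configuration according to the primary metric.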

Recommend a Scenario

If there is a particular scenario you are interested in seeing a tutorial for, please fill in a scenario suggestion.

Ongoing Work

We are constantly developing interesting AI reference architectures using the Microsoft AI Platform. Some of the ongoing projects include IoT Edge scenarios, model scoring on mobile devices, add more... To follow the progress and any new reference architectures, please go to the AI section of this link.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
