Choose a Vertex AI serverless training method
If you're writing your own training code instead of using AutoML, there are several ways of running Vertex AI serverless training to consider. This document provides a brief overview and comparison of the different ways you can run serverless training.
Serverless training resources on Vertex AI
There are three types of Vertex AI resources you can create to train custom models on Vertex AI: custom jobs, hyperparameter tuning jobs, and training pipelines.
When you create a custom job, you specify settings that Vertex AI needs to run your training code, including:
- One worker pool for single-node training (WorkerPoolSpec), or multiple worker pools for distributed training
- Optional settings for configuring job scheduling (Scheduling), setting certain environment variables for your training code, using a custom service account, and using VPC Network Peering
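As a rough illustration, these settings mirror the fields of the CustomJobSpec REST resource and can be assembled as plain dictionaries before submission. This is a minimal sketch; the project, image URI, service account, and network names below are placeholders, not real resources:

```python
# Sketch of a custom job spec as plain dictionaries, mirroring the REST
# CustomJobSpec resource. All names and URIs below are placeholders.
worker_pool_spec = {
    "machine_spec": {"machine_type": "n1-standard-4"},
    "replica_count": 1,
    "container_spec": {
        "image_uri": "us-docker.pkg.dev/example-project/repo/trainer:latest",
    },
}

custom_job_spec = {
    "worker_pool_specs": [worker_pool_spec],  # one pool = single-node training
    "scheduling": {"timeout": "3600s"},       # optional job scheduling settings
    "service_account": "trainer@example-project.iam.gserviceaccount.com",
    "network": "projects/12345/global/networks/my-vpc",  # VPC Network Peering
}
```

With the Vertex AI Python SDK, a spec like this would typically be expressed through `aiplatform.CustomJob(display_name=..., worker_pool_specs=...)` rather than raw dictionaries.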
Within the worker pool(s), you can specify the following settings:
- Machine types and accelerators
- Configuration of what type of training code the worker pool runs: either a Python training application (PythonPackageSpec) or a custom container (ContainerSpec)
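The two training-code options can be contrasted as dictionary sketches of the corresponding REST fields. The bucket path, module name, and image URIs below are illustrative assumptions, not real artifacts:

```python
# A worker pool runs exactly one of: a Python training application
# (python_package_spec, executed on a prebuilt executor image) or a custom
# container (container_spec). URIs and names below are placeholders.
python_package_pool = {
    "machine_spec": {
        "machine_type": "n1-standard-8",
        "accelerator_type": "NVIDIA_TESLA_T4",
        "accelerator_count": 1,
    },
    "replica_count": 1,
    "python_package_spec": {
        "executor_image_uri": "us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12:latest",
        "package_uris": ["gs://example-bucket/trainer-0.1.tar.gz"],
        "python_module": "trainer.task",
        "args": ["--epochs=10"],
    },
}

custom_container_pool = {
    "machine_spec": {"machine_type": "n1-standard-8"},
    "replica_count": 1,
    "container_spec": {
        "image_uri": "us-docker.pkg.dev/example-project/repo/trainer:latest",
        "args": ["--epochs=10"],
    },
}
```

Each worker pool carries one spec or the other, never both.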
Hyperparameter tuning jobs have additional settings to configure, such as the metric to optimize. Learn more about hyperparameter tuning.
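A sketch of those additional settings, modeled on the fields of the HyperparameterTuningJob REST resource (the metric name, parameter ranges, and trial counts are illustrative values, not recommendations):

```python
# Extra settings a hyperparameter tuning job layers on top of a custom job:
# the metric to optimize and the parameters to search over. Values below
# are illustrative only.
study_spec = {
    "metrics": [{"metric_id": "accuracy", "goal": "MAXIMIZE"}],
    "parameters": [
        {
            "parameter_id": "learning_rate",
            "double_value_spec": {"min_value": 1e-4, "max_value": 1e-1},
            "scale_type": "UNIT_LOG_SCALE",
        },
        {
            "parameter_id": "batch_size",
            "discrete_value_spec": {"values": [16, 32, 64]},
        },
    ],
}

hyperparameter_tuning_job = {
    "study_spec": study_spec,
    "max_trial_count": 20,       # total trials to run
    "parallel_trial_count": 4,   # trials running at the same time
    # "trial_job_spec": <the same job spec a plain custom job would use>
}
```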
A training pipeline orchestrates serverless training jobs or hyperparameter tuning jobs with additional steps, such as loading a dataset or uploading the model to Vertex AI after the training job successfully completes.
Serverless training resources
To view existing training pipelines in your project, go to the Training pipelines page in the Vertex AI section of the Google Cloud console.
Note: The Training pipelines page shows AutoML training pipelines in addition to serverless training pipelines. You can use the Model type column to distinguish between the two.
To view existing custom jobs in your project, go to the Custom jobs page.
To view existing hyperparameter tuning jobs in your project, go to the Hyperparameter tuning page.
Prebuilt and custom containers
Before you submit a serverless training job, hyperparameter tuning job, or training pipeline to Vertex AI, you need to create a Python training application or a custom container to define the training code and dependencies you want to run on Vertex AI. If you create a Python training application using TensorFlow, PyTorch, scikit-learn, or XGBoost, you can use our prebuilt containers to run your code. If you're not sure which of these options to choose, refer to the training code requirements to learn more.
Distributed training
You can configure a serverless training job, hyperparameter tuning job, or training pipeline for distributed training by specifying multiple worker pools:
- Use your first worker pool to configure your primary replica, and set the replica count to 1.
- Add more worker pools to configure worker replicas, parameter server replicas, or evaluator replicas, if your machine learning framework supports these additional cluster tasks for distributed training.
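The two steps above can be sketched as a worker-pool list, where position determines the role: the first pool is the primary replica, and later pools hold workers, parameter servers, and evaluators. Machine types and the image URI below are placeholders:

```python
# Sketch of worker pools for distributed training. Pool order matters:
# pool 0 is the primary replica (replica count 1), pool 1 holds workers,
# pool 2 parameter servers, pool 3 evaluators. URIs are placeholders.
TRAINER_IMAGE = "us-docker.pkg.dev/example-project/repo/trainer:latest"

def pool(machine_type: str, replica_count: int) -> dict:
    """Build one worker pool entry running the shared training container."""
    return {
        "machine_spec": {"machine_type": machine_type},
        "replica_count": replica_count,
        "container_spec": {"image_uri": TRAINER_IMAGE},
    }

worker_pool_specs = [
    pool("n1-standard-8", 1),  # primary replica: always a replica count of 1
    pool("n1-standard-8", 4),  # worker replicas
    pool("n1-standard-4", 2),  # parameter server replicas (if supported)
]
```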
Learn more about using distributed training.
What's next
- Learn how to create a persistent resource to run serverless training jobs.
- See Create serverless training jobs to learn how to run your serverless training applications on Vertex AI.
- See Create training pipelines to learn how to create training pipelines that run serverless training applications on Vertex AI.
- See Use hyperparameter tuning to learn about hyperparameter tuning searches.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-19 UTC.