Schedule a pipeline run with scheduler API
You can schedule one-time or recurring pipeline runs in Vertex AI using the scheduler API. This lets you implement continuous training in your project.
After you create a schedule, it can have one of the following states:
- ACTIVE: An active schedule continuously creates pipeline runs according to the frequency configured using the cron schedule expression. A schedule becomes active on its start time and remains in that state until the specified end time, or until you pause it.
- PAUSED: A paused schedule doesn't create pipeline runs. You can resume a paused schedule to make it active again. When you resume a paused schedule, you can use the catch_up parameter to specify whether skipped runs (runs that would have been scheduled if the schedule had been active) need to be rescheduled and submitted at the earliest possible schedule.
- COMPLETED: A completed schedule no longer creates new pipeline runs. A schedule is completed according to its specified end time.
You can use the scheduler API to create, list, retrieve, pause, resume, update, and delete pipeline run schedules, and to list all pipeline jobs created by a schedule.
Before you begin
Before you schedule a pipeline run using the scheduler API, use the following instructions to set up your Google Cloud project and development environment in the Google Cloud console.
Grant at least one of the following IAM roles to the user or service account that uses the scheduler API:

- roles/aiplatform.admin
- roles/aiplatform.user

Build and compile a pipeline. For more information, see Build a Pipeline.
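As a sketch of the role grant, you can use the gcloud CLI; PROJECT_ID and SA_EMAIL are placeholders for your project ID and service-account email:

```shell
# Grant the Vertex AI User role to the service account that calls the
# scheduler API. PROJECT_ID and SA_EMAIL are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:SA_EMAIL" \
  --role="roles/aiplatform.user"
```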
Create a schedule
You can create a one-time or recurring schedule.
Console
Use the following instructions to create a schedule using the Google Cloud console. If a schedule already exists for the project and region, use the instructions in Create a pipeline run.
Use the following instructions to create a pipeline schedule:
In the Google Cloud console, in the Vertex AI section, go to the Schedules tab on the Pipelines page.

Click Create scheduled run to open the Create pipeline run pane.

Specify the following Run details by selecting one of the following options:
To create a pipeline run based on an existing pipeline template, click Select from existing pipelines and enter the following details:

Select the Repository containing the pipeline or component definition file.

Select the Pipeline or component and Version.
To upload a compiled pipeline definition, click Upload file and enter the following details:

Click Browse to open the file selector. Navigate to the compiled pipeline YAML file that you want to run, select the pipeline, and click Open.

The Pipeline or component name shows the name specified in the pipeline definition, by default. Optionally, specify a different pipeline name.
To import a pipeline definition file from Cloud Storage, click Import from Cloud Storage and enter the following details:

Click Browse to navigate to the Cloud Storage bucket containing the pipeline definition object, select the file, and then click Select.

Specify the Pipeline or component name.
Specify a Run name to uniquely identify the pipeline run.

Specify the Run schedule, as follows:

Select Recurring.

Under Start time, specify when the schedule becomes active.

To schedule the first run to occur immediately upon schedule creation, select Immediately.

To schedule the first run to occur at a specific time and date, select On.

In the Frequency field, specify the frequency to schedule and execute the pipeline runs, using a cron schedule expression based on unix-cron.

Under Ends, specify when the schedule ends.

To indicate that the schedule creates pipeline runs indefinitely, select Never.

To indicate that the schedule ends on a specific date and time, select On, and specify the end date and time for the schedule.
Optional: To specify a custom service account, a customer-managed encryption key (CMEK), or a peered VPC network, clickAdvanced options and specify a service account, CMEK, or peered VPC network name.
Click Continue and specify the Runtime configuration for the pipeline.

Click Submit to create your pipeline run schedule.
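The Frequency field, like the cron and CRON_EXPRESSION parameters used later on this page, accepts a unix-cron expression with an optional time-zone prefix. A few illustrative examples (the time zones here are arbitrary):

```
0 * * * *                        # every hour, on the hour (UTC by default)
30 2 * * 1                       # every Monday at 02:30
TZ=America/New_York 0 0 1 * *    # first day of each month at midnight, New York time
```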
REST
To create a pipeline run schedule, send a POST request by using the projects.locations.schedules.create method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to run the pipeline. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to run the pipeline.
- DISPLAY_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
- START_TIME: Timestamp after which the first run can be scheduled, for example, 2045-07-26T00:00:00Z. If you don't specify this parameter, the timestamp corresponding to the date and time when you create the schedule is used as the default value.
- END_TIME: Timestamp after which pipeline runs are no longer scheduled. After the END_TIME is reached, the state of the schedule changes to COMPLETED. If you don't specify this parameter, then the schedule continues to run new pipeline jobs indefinitely until you pause or delete the schedule.
- CRON_EXPRESSION: Cron schedule expression representing the frequency to schedule and execute pipeline runs. For more information, see cron.
- MAX_CONCURRENT_RUN_COUNT: The maximum number of concurrent runs for the schedule.
- API_REQUEST_TEMPLATE: PipelineService.CreatePipelineJob API request template used to execute the scheduled pipeline runs. For more information about the parameters in the API request template, see the documentation for pipelineJobs.create. Note that you can't specify the pipelineJobId parameter in this template, as the scheduler API doesn't support this parameter. The following is a sample API request template, where gs://gcs_directory is the Cloud Storage bucket for pipeline output artifacts:

{
  "parent": "projects//locations/us-central1",
  "pipelineJob": {
    "displayName": "hello-world",
    "pipelineSpec": {
      "deploymentConfig": {
        "@type": "type.googleapis.com/ml_pipelines.PipelineDeploymentConfig",
        "executors": {
          "HelloWorld_executor": {
            "container": {
              "command": ["sleep", "1"],
              "image": "google/cloud-sdk:latest"
            }
          }
        }
      },
      "pipelineInfo": { "name": "hello-world" },
      "root": {
        "dag": {
          "tasks": {
            "task-test-hello-world": {
              "taskInfo": { "name": "HelloWorld" },
              "componentRef": { "name": "HelloWorld" },
              "cachingOptions": {}
            }
          }
        }
      },
      "components": {
        "HelloWorld": { "executorLabel": "HelloWorld_executor" }
      },
      "schemaVersion": "2.0.0"
    },
    "runtimeConfig": {
      "gcsOutputDirectory": "gs://gcs_directory"
    }
  }
}
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules
Request JSON body:
{
  "display_name": "DISPLAY_NAME",
  "start_time": "START_TIME",
  "end_time": "END_TIME",
  "cron": "CRON_EXPRESSION",
  "max_concurrent_run_count": "MAX_CONCURRENT_RUN_COUNT",
  "create_pipeline_job_request": API_REQUEST_TEMPLATE
}

To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Save the request body in a file named request.json, and execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Save the request body in a file named request.json, and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules" | Select-Object -Expand Content
You should see output similar to the following. You can use the SCHEDULE_ID from the response to retrieve, pause, resume, or delete the schedule. PIPELINE_JOB_CREATION_REQUEST represents the API request to create the pipeline job.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID",
  "displayName": "DISPLAY_NAME",
  "startTime": "START_TIME",
  "state": "ACTIVE",
  "createTime": "2025-01-01T00:00:00.000000Z",
  "nextRunTime": "2045-08-01T00:00:00Z",
  "cron": "CRON_EXPRESSION",
  "maxConcurrentRunCount": "MAX_CONCURRENT_RUN_COUNT",
  "createPipelineJobRequest": PIPELINE_JOB_CREATION_REQUEST
}

Python
You can create a pipeline run schedule in the following ways:
- Create a schedule based on a PipelineJob using the PipelineJob.create_schedule method.
- Create a schedule using the PipelineJobSchedule.create method.
While creating a pipeline run schedule, you can also pass the following placeholders supported by the KFP SDK as inputs:
- {{$.pipeline_job_name_placeholder}}
- {{$.pipeline_job_resource_name_placeholder}}
- {{$.pipeline_job_id_placeholder}}
- {{$.pipeline_task_name_placeholder}}
- {{$.pipeline_task_id_placeholder}}
- {{$.pipeline_job_create_time_utc_placeholder}}
- {{$.pipeline_job_schedule_time_utc_placeholder}}
- {{$.pipeline_root_placeholder}}
For more information, see Special input types in the Kubeflow Pipelines v2 documentation.
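For example, the schedule-time placeholder can be passed as an ordinary pipeline parameter so that each scheduled run can label its outputs. This is a minimal sketch; the parameter name run_label is hypothetical, and the commented-out call assumes a PipelineJob built as shown later on this page:

```python
# The KFP placeholder is passed as a plain string; Vertex AI Pipelines
# resolves it at runtime for each scheduled run.
parameter_values = {
    "run_label": "{{$.pipeline_job_schedule_time_utc_placeholder}}",
}

# pipeline_job = aiplatform.PipelineJob(
#     template_path="COMPILED_PIPELINE_PATH",
#     parameter_values=parameter_values,
# )
```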
Create a schedule from a PipelineJob

Use the following sample to schedule pipeline runs using the PipelineJob.create_schedule method:
from google.cloud import aiplatform

pipeline_job = aiplatform.PipelineJob(
    template_path="COMPILED_PIPELINE_PATH",
    pipeline_root="PIPELINE_ROOT_PATH",
    display_name="DISPLAY_NAME",
)

pipeline_job_schedule = pipeline_job.create_schedule(
    display_name="SCHEDULE_NAME",
    cron="TZ=CRON",
    max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT,
    max_run_count=MAX_RUN_COUNT,
)

COMPILED_PIPELINE_PATH: The path to your compiled pipeline YAML file. It can be a local path or a Cloud Storage URI.
Optional: To specify a particular version of a template, include the version tag alongwith the path in any one of the following formats:
COMPILED_PIPELINE_PATH:TAG, where TAG is the version tag.
COMPILED_PIPELINE_PATH@SHA256_TAG, where SHA256_TAG is the sha256 hash value of the pipeline version.
PIPELINE_ROOT_PATH: (optional) To override the pipeline root path specified in the pipeline definition, specify a path that your pipeline job can access, such as a Cloud Storage bucket URI.
DISPLAY_NAME: The name of the pipeline. This will show up in the Google Cloud console.
SCHEDULE_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
CRON: Cron schedule expression representing the frequency to schedule and execute pipeline runs. For more information, see Cron.
MAX_CONCURRENT_RUN_COUNT: The maximum number of concurrent runs for the schedule.
MAX_RUN_COUNT: The maximum number of pipeline runs that the schedule creates after which it's completed.
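The cron strings used by the SDK follow the same format as elsewhere on this page: five unix-cron fields, optionally preceded by a TZ=<zone> prefix (as in the cron="TZ=CRON" argument above). The helper below is not part of the SDK; it's a hypothetical sketch that only illustrates the expected shape:

```python
def split_cron(expr: str):
    """Split an optional 'TZ=<zone>' prefix from the five unix-cron fields."""
    tz = None
    if expr.startswith("TZ="):
        tz_part, _, expr = expr.partition(" ")
        tz = tz_part[3:]
    fields = expr.split()
    if len(fields) != 5:
        raise ValueError("unix-cron uses 5 fields: minute hour day month weekday")
    return tz, fields
```

For example, split_cron("TZ=America/New_York 0 0 1 * *") returns ("America/New_York", ["0", "0", "1", "*", "*"]).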
Create a schedule using PipelineJobSchedule.create

Use the following sample to schedule pipeline runs using the PipelineJobSchedule.create method:
from google.cloud import aiplatform

pipeline_job = aiplatform.PipelineJob(
    template_path="COMPILED_PIPELINE_PATH",
    pipeline_root="PIPELINE_ROOT_PATH",
    display_name="DISPLAY_NAME",
)

pipeline_job_schedule = aiplatform.PipelineJobSchedule(
    pipeline_job=pipeline_job,
    display_name="SCHEDULE_NAME",
)

pipeline_job_schedule.create(
    cron="TZ=CRON",
    max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT,
    max_run_count=MAX_RUN_COUNT,
)

COMPILED_PIPELINE_PATH: The path to your compiled pipeline YAML file. It can be a local path or a Cloud Storage URI.
Optional: To specify a particular version of a template, include the version tag alongwith the path in any one of the following formats:
COMPILED_PIPELINE_PATH:TAG, where TAG is the version tag.
COMPILED_PIPELINE_PATH@SHA256_TAG, where SHA256_TAG is the sha256 hash value of the pipeline version.
PIPELINE_ROOT_PATH: (optional) To override the pipeline root path specified in the pipeline definition, specify a path that your pipeline job can access, such as a Cloud Storage bucket URI.
DISPLAY_NAME: The name of the pipeline. This will show up in the Google Cloud console.
SCHEDULE_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
CRON: Cron schedule expression representing the frequency to schedule and execute pipeline runs. For more information, see Cron.
MAX_CONCURRENT_RUN_COUNT: The maximum number of concurrent runs for the schedule.
MAX_RUN_COUNT: The maximum number of pipeline runs that the schedule creates after which it's completed.
List schedules
You can view the list of pipeline schedules created for your Google Cloud project.
Console
You can view the list of pipeline schedules on the Schedules tab of the Google Cloud console for the selected region.

To view the list of pipeline schedules, in the Google Cloud console, in the Vertex AI section, go to the Schedules tab on the Pipelines page.
REST
To list pipeline run schedules in your project, send a GET request by using the projects.locations.schedules.list method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to run the pipeline. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to run the pipeline.
- FILTER: (optional) Expression to filter the list of schedules. For more information, see ...
- PAGE_SIZE: (optional) The number of schedules to be listed per page.
- PAGE_TOKEN: (optional) The standard list page token, typically obtained via ListSchedulesResponse.next_page_token from a previous ScheduleService.ListSchedules call.
- ORDER_BY: (optional) Comma-separated list of fields, indicating the sort order of the schedules in the response.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules?FILTER&PAGE_SIZE&PAGE_TOKEN&ORDER_BY
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules?FILTER&PAGE_SIZE&PAGE_TOKEN&ORDER_BY"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules?FILTER&PAGE_SIZE&PAGE_TOKEN&ORDER_BY" | Select-Object -Expand Content
You should see output similar to the following:
{
  "schedules": [
    SCHEDULE_ENTITY_OBJECT_1,
    SCHEDULE_ENTITY_OBJECT_2,
    ...
  ]
}

Python
Use the following sample to list all the schedules in your project in the descending order of their creation times:
from google.cloud import aiplatform

aiplatform.PipelineJobSchedule.list(
    filter='display_name="DISPLAY_NAME"',
    order_by='create_time desc',
)

DISPLAY_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
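The list call returns PipelineJobSchedule objects; a common next step is to print a one-line summary per schedule. The formatting helper below is hypothetical (not part of the SDK), and the commented-out loop assumes an initialized aiplatform client:

```python
def describe_schedule(display_name: str, state: str) -> str:
    # Hypothetical helper: one line per schedule for quick scanning.
    return f"{display_name} [{state}]"

# for schedule in aiplatform.PipelineJobSchedule.list(order_by="create_time desc"):
#     print(describe_schedule(schedule.display_name, str(schedule.state)))
```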
Retrieve a schedule
You can retrieve a pipeline run schedule using the schedule ID.
REST
To retrieve a pipeline run schedule, send a GET request by using the projects.locations.schedules.get method and the schedule ID.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to run the pipeline. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to run the pipeline.
- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID" | Select-Object -Expand Content
You should see output similar to the following. PIPELINE_JOB_CREATION_REQUEST represents the API request to create the pipeline job.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID",
  "displayName": "schedule_display_name",
  "startTime": "2045-07-26T06:59:59Z",
  "state": "ACTIVE",
  "createTime": "20xx-01-01T00:00:00.000000Z",
  "nextRunTime": "2045-08-01T00:00:00Z",
  "cron": "TZ=America/New_York 0 0 1 * *",
  "maxConcurrentRunCount": "10",
  "createPipelineJobRequest": PIPELINE_JOB_CREATION_REQUEST
}

Python
Use the following sample to retrieve a pipeline run schedule using the schedule ID:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
Pause a schedule
You can pause an active pipeline schedule by specifying the schedule ID. When you pause a schedule, its state changes from ACTIVE to PAUSED.
Console
You can pause a pipeline run schedule that's currently active.
Use the following instructions to pause a schedule:
In the Google Cloud console, in the Vertex AI section, go to the Schedules tab on the Pipelines page.

Go to the options menu that's in the same row as the schedule you want to pause, and then click Pause. You can pause any schedule where the Status column shows Active.
REST
To pause a pipeline run schedule in your project, send a POST request by using the projects.locations.schedules.pause method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where the pipeline run schedule is currently active. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where the pipeline run schedule is currently active.
- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:pause
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d "" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:pause"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:pause" | Select-Object -Expand Content
You should receive a successful status code (2xx) and an empty response.
Python

Use the following sample to pause a pipeline run schedule:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

pipeline_job_schedule.pause()

SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
Update a schedule
You can update an existing pipeline schedule that was created for yourGoogle Cloud project.
Updating a schedule is similar to deleting and recreating a schedule. When you update a schedule, new runs are scheduled based on the frequency of the updated schedule. New runs are no longer created based on the old schedule, and any queued runs are dropped. Pipeline runs that were already created by the old schedule aren't paused or canceled.
REST
To update a pipeline run schedule in your project, send a PATCH request by using the projects.locations.schedules.patch method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to run the pipeline. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to run the pipeline.
- DISPLAY_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
- MAX_CONCURRENT_RUN_COUNT: The maximum number of concurrent runs for the schedule.
- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
HTTP method and URL:
PATCH https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID?updateMask=display_name,max_concurrent_run_count

Request JSON body:

{ "display_name": "DISPLAY_NAME", "max_concurrent_run_count": MAX_CONCURRENT_RUN_COUNT }

To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X PATCH \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json; charset=utf-8" \
    -d '{"display_name": "DISPLAY_NAME", "max_concurrent_run_count": MAX_CONCURRENT_RUN_COUNT}' \
    "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID?updateMask=display_name,max_concurrent_run_count"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method PATCH `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-Body '{"display_name": "DISPLAY_NAME", "max_concurrent_run_count": MAX_CONCURRENT_RUN_COUNT}' `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID?updateMask=display_name,max_concurrent_run_count" | Select-Object -Expand Content
You should see output similar to the following. Based on the update, the NEXT_RUN_TIME is recalculated. When you update the schedule, the START_TIME remains unchanged.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID",
  "displayName": "DISPLAY_NAME",
  "startTime": "START_TIME",
  "state": "ACTIVE",
  "createTime": "2025-01-01T00:00:00.000000Z",
  "nextRunTime": NEXT_RUN_TIME,
  "maxConcurrentRunCount": "MAX_CONCURRENT_RUN_COUNT"
}

Python
Use the following sample to update a pipeline run schedule using the PipelineJobSchedule.update method:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

pipeline_job_schedule.update(
    display_name='DISPLAY_NAME',
    max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT,
)

- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
- DISPLAY_NAME: The name of the pipeline schedule. You can specify a name having a maximum length of 128 UTF-8 characters.
- MAX_CONCURRENT_RUN_COUNT: The maximum number of concurrent runs for the schedule.
Resume a schedule
You can resume a paused pipeline schedule by specifying the schedule ID. When you resume a schedule, its state changes from PAUSED to ACTIVE.
Console
You can resume a pipeline run schedule that's currently paused.
Use the following instructions to resume a schedule:
In the Google Cloud console, in the Vertex AI section, go to the Schedules tab on the Pipelines page.

Go to the options menu that's in the same row as the schedule you want to resume, and then click Resume. You can resume any schedule where the Status column shows Paused.
REST
To resume a pipeline run schedule in your project, send a POST request by using the projects.locations.schedules.resume method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where the pipeline run schedule is currently paused. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where the pipeline run schedule is currently paused.
- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
- CATCH_UP: (Optional) Indicate whether the paused schedule should backfill the skipped pipeline runs. To backfill and reschedule the skipped pipeline runs, enter the following:
{ "catch_up": true }

This parameter is set to `false` by default.
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:resume

Request JSON body (optional):

CATCH_UP
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d 'CATCH_UP' \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:resume"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-Body 'CATCH_UP' `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID:resume" | Select-Object -Expand Content
You should receive a successful status code (2xx) and an empty response.
Python
Use the following sample to resume a paused pipeline run schedule:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

pipeline_job_schedule.resume(catch_up=CATCH_UP)

- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
- CATCH_UP: (Optional) Boolean indicating whether the paused schedule should backfill the skipped pipeline runs. To backfill and reschedule the skipped pipeline runs, set catch_up=True. This parameter is set to False by default.
Delete a schedule
You can delete a pipeline schedule by specifying the schedule ID.
Console
You can delete a pipeline run schedule regardless of its status.
Use the following instructions to delete a schedule:
In the Google Cloud console, in the Vertex AI section, go to the Schedules tab on the Pipelines page.

Go to the options menu that's in the same row as the schedule you want to delete, and then click Delete.

To confirm deletion, click Delete.
REST
To delete a pipeline run schedule in your project, send a DELETE request by using the projects.locations.schedules.delete method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to delete the pipeline schedule. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to delete the schedule.
- SCHEDULE_ID: The unique schedule ID generated while creating the schedule.
HTTP method and URL:
DELETE https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X DELETE \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method DELETE `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/schedules/SCHEDULE_ID" | Select-Object -Expand Content
You should see output similar to the following. OPERATION_ID represents the delete operation.
{
  "name": "projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.aiplatform.v1.DeleteOperationMetadata",
    "genericMetadata": {
      "createTime": "20xx-01-01T00:00:00.000000Z",
      "updateTime": "20xx-01-01T00:00:00.000000Z"
    }
  },
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.protobuf.Empty"
  }
}

Python
Use the following sample to delete a pipeline run schedule:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

pipeline_job_schedule.delete()

SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
List all pipeline jobs created by a schedule
You can view a list of all the pipeline jobs created by a schedule by specifying the schedule ID.
REST
To list all the pipeline runs that have been created by a pipeline schedule, send a GET request by using the projects.locations.pipelineJobs.list method.
Before using any of the request data, make the following replacements:
- LOCATION: The region where you want to run the pipeline. For more information about the regions where Vertex AI Pipelines is available, see the Vertex AI locations guide.
- PROJECT_ID: The Google Cloud project where you want to run the pipeline.
- SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
HTTP method and URL:
GET https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/pipelineJobs?filter=schedule_name=projects/PROJECT/locations/LOCATION/schedules/SCHEDULE_ID
To send your request, choose one of these options:
curl
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list. Execute the following command:
curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/pipelineJobs?filter=schedule_name=projects/PROJECT/locations/LOCATION/schedules/SCHEDULE_ID"
PowerShell
Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list. Execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/pipelineJobs?filter=schedule_name=projects/PROJECT/locations/LOCATION/schedules/SCHEDULE_ID" | Select-Object -Expand Content
You should see output similar to the following.
{
  "pipelineJobs": [
    PIPELINE_JOB_ENTITY_1,
    PIPELINE_JOB_ENTITY_2,
    ...
  ]
}

Python
Use the following sample to list all the pipeline jobs created by a schedule in the descending order of their creation times:
from google.cloud import aiplatform

pipeline_job_schedule = aiplatform.PipelineJobSchedule.get(schedule_id=SCHEDULE_ID)

pipeline_job_schedule.list_jobs(order_by='create_time_desc')

SCHEDULE_ID: Unique schedule ID generated while creating the schedule.
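Because a long-lived schedule can accumulate many runs, it can help to summarize the returned jobs by state. The helper below is a hypothetical sketch, not part of the SDK; the commented-out lines assume the pipeline_job_schedule object from the sample above:

```python
from collections import Counter

def summarize_states(states):
    """Count how many pipeline jobs are in each state."""
    return dict(Counter(states))

# jobs = pipeline_job_schedule.list_jobs()
# print(summarize_states([job.state.name for job in jobs]))
```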
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2026-02-18 UTC.