Interfaces for Vertex AI Pipelines
To learn more, run the "Learn how to build Python function-based Kubeflow pipeline components" notebook in one of the following environments:
Open in Colab | Open in Colab Enterprise | Open in Vertex AI Workbench | View on GitHub
This page lists the interfaces that you can use to define and run ML pipelines on Vertex AI Pipelines.
Interfaces to define a pipeline
Vertex AI Pipelines supports ML pipelines defined using the Kubeflow Pipelines (KFP) SDK or the TensorFlow Extended (TFX) SDK.
Kubeflow Pipelines (KFP) SDK
Use KFP for all use cases where you don't need to use TensorFlow Extended to process huge amounts of structured or text data. Vertex AI Pipelines supports KFP SDK v2.0 or later.
When you use the KFP SDK, you can define your ML workflow by building custom components and also by reusing prebuilt components, such as the Google Cloud Pipeline Components. Google Cloud Pipeline Components let you easily use Vertex AI services like AutoML in your ML pipeline. Vertex AI Pipelines supports Google Cloud Pipeline Components SDK v2 or later. For more information about Google Cloud Pipeline Components, see Introduction to Google Cloud Pipeline Components.
To learn how to build a pipeline using the Kubeflow Pipelines SDK, see Build a pipeline. To learn more about Kubeflow Pipelines, see the Kubeflow Pipelines documentation.
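As a minimal sketch of what a KFP SDK v2 definition looks like (the component, pipeline, and file names here are illustrative placeholders, not from this page), a Python function-based component can be wired into a pipeline and compiled into a spec that Vertex AI Pipelines can run:

```python
from kfp import compiler, dsl


# A lightweight Python function-based component (KFP SDK v2).
@dsl.component(base_image="python:3.10")
def add(a: float, b: float) -> float:
    return a + b


# A pipeline that wires the component into a workflow.
@dsl.pipeline(name="example-addition-pipeline")
def addition_pipeline(x: float = 1.0, y: float = 2.0) -> float:
    task = add(a=x, b=y)
    return task.output


# Compile to a pipeline spec file that Vertex AI Pipelines can execute.
compiler.Compiler().compile(addition_pipeline, package_path="addition_pipeline.yaml")
```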
TensorFlow Extended (TFX) SDK
Use TFX if you use TensorFlow Extended in your ML workflow to process terabytes of structured or text data. Vertex AI Pipelines supports TFX SDK v0.30.0 or later.
To learn how to build ML pipelines using TFX, see the Getting started tutorials section in the TensorFlow Extended in Production tutorials.
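As a rough sketch under stated assumptions (the Cloud Storage paths, pipeline name, and single-component pipeline are placeholders), a TFX pipeline is typically compiled for Vertex AI Pipelines with the KubeflowV2DagRunner:

```python
from tfx import v1 as tfx

# Placeholder locations; replace with your own Cloud Storage paths.
DATA_ROOT = "gs://my-bucket/data"
PIPELINE_ROOT = "gs://my-bucket/pipeline-root"

# A minimal pipeline with a single ExampleGen component.
example_gen = tfx.components.CsvExampleGen(input_base=DATA_ROOT)
pipeline = tfx.dsl.Pipeline(
    pipeline_name="example-tfx-pipeline",
    pipeline_root=PIPELINE_ROOT,
    components=[example_gen],
)

# Compile the pipeline into a spec that Vertex AI Pipelines can execute.
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
    config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
    output_filename="tfx_pipeline.json",
)
runner.run(pipeline)
```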
Interfaces to run a pipeline
After you define your ML pipeline, you can create an ML pipeline run using any of the following interfaces:
REST API
SDK clients
Google Cloud console
For more information about the interfaces you can use to interact with Vertex AI, see Interfaces for Vertex AI.
REST API
To create a pipeline run using REST, use the Pipelines service API. This API uses the projects.locations.pipelineJobs REST resource.
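For illustration only, the following sketch sends an authenticated POST request to the v1 pipelineJobs endpoint; the project, region, bucket, and template values are placeholders:

```python
import google.auth
import google.auth.transport.requests
import requests

PROJECT = "my-project"    # placeholder
LOCATION = "us-central1"  # placeholder

# Obtain an access token from Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT}/locations/{LOCATION}/pipelineJobs"
)
body = {
    "displayName": "example-run",
    "templateUri": "gs://my-bucket/pipeline.yaml",  # compiled pipeline spec
    "runtimeConfig": {"gcsOutputDirectory": "gs://my-bucket/pipeline-root"},
}

# Create the pipeline run and print the returned PipelineJob resource.
response = requests.post(
    url,
    headers={"Authorization": f"Bearer {credentials.token}"},
    json=body,
)
print(response.json())
```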
Some changes to the Pipelines service API are communicated as preview launches. To test the changes announced in preview, see the API documentation for projects.locations.pipelineJobs (v1beta1). For more information about the preview launch stage, see the launch stage descriptions.
SDK clients
Vertex AI Pipelines lets you create pipeline runs using the Vertex AI SDK for Python or client libraries.
Vertex AI SDK for Python
The Vertex AI SDK for Python (aiplatform) is the recommended SDK for programmatically working with the Pipelines service API. For more information about this SDK, see the API documentation for google.cloud.aiplatform.PipelineJob.
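As a brief sketch (the project, bucket, file names, and parameters are placeholders), creating a run with the Vertex AI SDK for Python looks like this:

```python
from google.cloud import aiplatform

# Initialize the SDK with your project and region.
aiplatform.init(project="my-project", location="us-central1")

# Create a pipeline run from a compiled pipeline spec.
job = aiplatform.PipelineJob(
    display_name="example-run",
    template_path="gs://my-bucket/pipeline.yaml",  # or a local path
    pipeline_root="gs://my-bucket/pipeline-root",
    parameter_values={"x": 1.0, "y": 2.0},
)

# run() blocks until the pipeline completes; submit() returns immediately.
job.run()
```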
Client libraries
Client libraries are Generated API Client (GAPIC) SDKs that are produced automatically from the API definition. Vertex AI Pipelines supports the following client libraries:
Python (aiplatform v1 and v1beta1)
Java
Node.js
For more information, see Install the Vertex AI client libraries.
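For example, a hedged sketch of creating a pipeline run with the Python GAPIC client (aiplatform_v1); the resource names and template URI are placeholders:

```python
from google.cloud import aiplatform_v1

# GAPIC clients are regional; point the client at the region's endpoint.
client = aiplatform_v1.PipelineServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

# Describe the run to create from a compiled pipeline spec in Cloud Storage.
pipeline_job = aiplatform_v1.PipelineJob(
    display_name="example-run",
    template_uri="gs://my-bucket/pipeline.yaml",
)

# Create the pipeline run and print its resource name.
response = client.create_pipeline_job(
    parent="projects/my-project/locations/us-central1",
    pipeline_job=pipeline_job,
)
print(response.name)
```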
Google Cloud console (GUI)
The Google Cloud console is the recommended interface for reviewing and monitoring your pipeline runs. You can also use the Google Cloud console to perform other tasks, such as creating, deleting, and cloning pipeline runs, accessing the Template Gallery, and retrieving the billing label for a pipeline run.
Go to Pipelines in Google Cloud console
What's next
Get started by learning how to define a pipeline using the Kubeflow Pipelines SDK.
Learn about best practices for implementing custom-trained ML models on Vertex AI.