TensorFlow integration
This page explains Vertex AI's TensorFlow integration and provides resources that show you how to use TensorFlow on Vertex AI. Vertex AI's TensorFlow integration makes it easier for you to train, deploy, and orchestrate TensorFlow models in production.
Run code in notebooks
Vertex AI provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench. To learn more about these options, see Choose a notebook solution.
Prebuilt containers for training
Vertex AI provides prebuilt Docker container images for model training. These containers are organized by machine learning frameworks and framework versions and include common dependencies that you might want to use in your training code.
To learn about which TensorFlow versions have prebuilt training containers and how to train models with a prebuilt training container, see Prebuilt containers for custom training.
Distributed training
You can run distributed training of TensorFlow models on Vertex AI. For multi-worker training, you can use Reduction Server to optimize performance even further for all-reduce collective operations. To learn more about distributed training on Vertex AI, see Distributed training.
Prebuilt containers for inference
Similar to prebuilt containers for training, Vertex AI provides prebuilt container images for serving inferences and explanations from TensorFlow models that you either created within or outside of Vertex AI. These images provide HTTP inference servers that you can use to serve inferences with minimal configuration.
To learn about which TensorFlow versions have prebuilt inference containers and how to serve inferences with a prebuilt inference container, see Prebuilt containers for inference.
Optimized TensorFlow runtime
Preview
This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
The optimized TensorFlow runtime uses model optimizations and new proprietary Google technologies to improve the speed and lower the cost of inferences compared to Vertex AI's standard prebuilt inference containers for TensorFlow.
TensorFlow Cloud Profiler integration
Train models faster and at lower cost by monitoring and optimizing the performance of your training job using Vertex AI's TensorFlow Cloud Profiler integration. TensorFlow Cloud Profiler helps you understand the resource consumption of training operations so that you can identify and eliminate performance bottlenecks.
To learn more about the Vertex AI TensorFlow Cloud Profiler, see Profile model training performance using Profiler.
Resources for using TensorFlow on Vertex AI
To learn more and start using TensorFlow in Vertex AI, see the following resources.
Prototype to Production: A video series that provides an end-to-end example of developing and deploying a custom TensorFlow model on Vertex AI.
Optimize training performance with Reduction Server on Vertex AI: A blog post on optimizing distributed training on Vertex AI by using Reduction Server.
How to optimize training performance with the TensorFlow Cloud Profiler on Vertex AI: A blog post that shows you how to identify performance bottlenecks in your training job by using the Vertex AI TensorFlow Cloud Profiler.
Custom model batch prediction with feature filtering: A notebook tutorial that shows you how to use the Vertex AI SDK for Python to train a custom tabular classification model and perform batch inference with feature filtering.
Vertex AI Pipelines: Custom training with prebuilt Google Cloud Pipeline Components: A notebook tutorial that shows you how to use Vertex AI Pipelines with prebuilt Google Cloud Pipeline Components for custom training.
Co-host TensorFlow models on the same VM for predictions: A codelab that shows you how to use the co-hosting model feature in Vertex AI to host multiple models on the same VM for online inferences.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-12-15 UTC.