TensorFlow integration

This page explains Vertex AI's TensorFlow integration and provides resources that show you how to use TensorFlow on Vertex AI. Vertex AI's TensorFlow integration makes it easier for you to train, deploy, and orchestrate TensorFlow models in production.

Run code in notebooks

Vertex AI provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench. To learn more about these options, see Choose a notebook solution.

Prebuilt containers for training

Vertex AI provides prebuilt Docker container images for model training.These containers are organized by machine learning frameworks and frameworkversions and include common dependencies that you might want to use in yourtraining code.

To learn about which TensorFlow versions have prebuilt training containers and how to train models with a prebuilt training container, see Prebuilt containers for custom training.
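
For example, with the Vertex AI SDK for Python you can point a custom training job at a prebuilt TensorFlow training container. The following is a minimal sketch, assuming placeholder values for the project ID, staging bucket, and training script, and an example container image URI; check the prebuilt containers page for the image that matches your TensorFlow version.

```python
# Sketch: run a training script in a prebuilt TensorFlow training container
# using the Vertex AI SDK for Python. Project, bucket, script, and image URI
# below are placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="your-project-id",                   # placeholder
    location="us-central1",
    staging_bucket="gs://your-staging-bucket",   # placeholder
)

job = aiplatform.CustomTrainingJob(
    display_name="tf-training-job",
    script_path="task.py",                       # your local training script
    # Example prebuilt TensorFlow training image; pick the one matching your version.
    container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12.py310:latest",
    requirements=["tensorflow-datasets"],        # extra pip dependencies, if any
)

# Runs the script inside the prebuilt container on managed training infrastructure.
job.run(
    replica_count=1,
    machine_type="n1-standard-4",
)
```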

Distributed training

You can run distributed training of TensorFlow models on Vertex AI. For multi-worker training, you can use Reduction Server to optimize performance even further for all-reduce collective operations. To learn more about distributed training on Vertex AI, see Distributed training.
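
Inside your training code, multi-worker training typically uses tf.distribute.MultiWorkerMirroredStrategy; Vertex AI populates the TF_CONFIG environment variable for each replica so the strategy can discover the cluster. The sketch below uses a toy model and synthetic data as placeholders.

```python
# Sketch: a training script written for multi-worker training on Vertex AI.
# TF_CONFIG is set by the training service, so the strategy can discover
# the cluster automatically. Model and data below are illustrative.
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    # Variables created here are mirrored across workers; gradients are
    # combined with all-reduce, which is where Reduction Server can help.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Replace with your real input pipeline.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((1024, 10)), tf.random.normal((1024, 1)))
).batch(64)

model.fit(dataset, epochs=5)
```

Reduction Server itself is configured when you submit the training job, not in the training script; see Distributed training for how to add Reduction Server replicas to the job's worker pools.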

Prebuilt containers for inference

Similar to prebuilt containers for training, Vertex AI provides prebuilt container images for serving inferences and explanations from TensorFlow models that you created either within or outside of Vertex AI. These images provide HTTP inference servers that you can use to serve inferences with minimal configuration.

To learn about which TensorFlow versions have prebuilt inference containers and how to serve inferences with a prebuilt inference container, see Prebuilt containers for inference.
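
As a sketch, the Vertex AI SDK for Python can import a TensorFlow SavedModel with a prebuilt inference container and deploy it to an endpoint. The project ID, Cloud Storage paths, and container image URI below are placeholders; pick the image that matches your TensorFlow version.

```python
# Sketch: import a TensorFlow SavedModel into Vertex AI with a prebuilt
# inference container and deploy it to an endpoint. Paths and image URI
# are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")  # placeholders

model = aiplatform.Model.upload(
    display_name="tf-model",
    artifact_uri="gs://your-bucket/saved_model_dir",  # directory containing the SavedModel
    # Example prebuilt TensorFlow inference image; pick the one matching your version.
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest",
)

endpoint = model.deploy(machine_type="n1-standard-2")

# The HTTP inference server inside the container handles the request format.
prediction = endpoint.predict(instances=[[0.1, 0.2, 0.3]])
print(prediction.predictions)
```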

Optimized TensorFlow runtime

Preview

This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA products and features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

The optimized TensorFlow runtime uses model optimizations and new proprietary Google technologies to improve the speed and lower the cost of inferences compared to Vertex AI's standard prebuilt inference containers for TensorFlow.
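
Switching from a standard prebuilt TensorFlow inference container to the optimized TensorFlow runtime is largely a matter of choosing a different serving container image when you import the model. The sketch below uses an illustrative image path; look up the current optimized TensorFlow runtime images, supported options, and regions in the documentation before relying on it.

```python
# Sketch: importing a model with an optimized TensorFlow runtime image
# instead of a standard prebuilt TensorFlow inference image. The image
# path and Cloud Storage URI are illustrative placeholders.
from google.cloud import aiplatform

aiplatform.init(project="your-project-id", location="us-central1")  # placeholders

optimized_model = aiplatform.Model.upload(
    display_name="tf-model-optimized-runtime",
    artifact_uri="gs://your-bucket/saved_model_dir",  # placeholder SavedModel location
    # Illustrative optimized TensorFlow runtime image; confirm the actual
    # image name and TensorFlow version in the optimized runtime docs.
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai-restricted/prediction/tf_opt-cpu.2-12:latest"
    ),
)
```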

TensorFlow Cloud Profiler integration

Train models faster and at lower cost by monitoring and optimizing the performance of your training jobs with Vertex AI's TensorFlow Cloud Profiler integration. TensorFlow Cloud Profiler helps you understand the resource consumption of training operations so that you can identify and eliminate performance bottlenecks.

To learn more about the Vertex AI TensorFlow Cloud Profiler integration, see Profile model training performance using Profiler.
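
In a custom training job, the integration is typically enabled from inside the training script by initializing the profiler from the Vertex AI SDK's training utilities and writing TensorBoard logs. The sketch below uses a toy model, synthetic data, and a placeholder log directory; the training job itself must also be set up for Profiler (for example, associated with a Vertex AI TensorBoard instance) as described on the Profiler page.

```python
# Sketch: enabling the Vertex AI TensorFlow Profiler integration from inside
# a training script that runs as a Vertex AI custom training job. Model,
# data, and the log directory are placeholders.
import tensorflow as tf
from google.cloud.aiplatform.training_utils import cloud_profiler

cloud_profiler.init()  # start the profiler plugin inside the training container

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Replace with your real input pipeline.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((256, 4)), tf.random.normal((256, 1)))
).batch(32)

# Profiling data is captured through the TensorBoard callback's log directory.
tensorboard_callback = tf.keras.callbacks.TensorBoard(
    log_dir="gs://your-bucket/tensorboard-logs",  # placeholder
    profile_batch=(2, 10),
)

model.fit(dataset, epochs=2, callbacks=[tensorboard_callback])
```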

Resources for using TensorFlow on Vertex AI

To learn more and start using TensorFlow in Vertex AI, see the following resources.
