Create an instance using a custom container

This page describes how to create a Vertex AI Workbench instance based on a custom container.

Overview

Vertex AI Workbench instances support using a custom container derived from one of the Google-provided base containers. You can modify these base containers to make a custom container image and use these custom containers to create a Vertex AI Workbench instance.

The base containers are configured with a Container-Optimized OS in the host virtual machine (VM). The host image is built from the cos-stable image family.

Limitations

Consider the following limitations when planning your project:

  • The custom container must be derived from a Google-provided base container. Using a container that isn't derived from a base container increases the risk of compatibility issues and limits our ability to support your usage of Vertex AI Workbench instances.

  • Use of more than one container with a Vertex AI Workbench instance isn't supported.

  • Supported metadata for custom containers from user-managed notebooks and managed notebooks can have different behavior when used with Vertex AI Workbench instances.

  • The VM hosting the custom container runs Container-Optimized OS, which restricts how you can interact with the host machine. For example, Container-Optimized OS doesn't include a package manager, so actions that install or modify packages on the host must be performed in a container with mounts. This affects the post-startup scripts that are migrated from managed notebooks instances and user-managed notebooks instances, where the host machine contains significantly more tooling than Container-Optimized OS.

  • Vertex AI Workbench instances use nerdctl (a containerd CLI) to run the custom container. This is required for compatibility with the Image streaming service. Any container parameters that are added using a metadata value need to adhere to what is supported by nerdctl.

  • Vertex AI Workbench instances are configured to pull either from Artifact Registry or a public container repository. To configure an instance to pull from a private repository, you must manually configure the credentials used by containerd.

Base containers

Standard base container

The standard base container supports all Vertex AI Workbench features.

Specifications

The standard base container has the following specifications:

  • Base image: nvidia/cuda:12.6.1-cudnn-devel-ubuntu24.04
  • Image size: Approximately 22 GB
  • URI: us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest

Slim base container

The slim base container provides a minimal set of configurations that permit a proxy connection to the instance. Standard Vertex AI Workbench features and packages aren't included, except for the following:

  • JupyterLab
  • Metadata-based JupyterLab configuration
  • Micromamba-based kernel management

Additional packages or JupyterLab extensions must be installed and managed independently.

Specifications

The slim base container has the following specifications:

  • Base image: marketplace.gcr.io/google/ubuntu24.04
  • Image size: Approximately 2 GB
  • URI: us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container-slim:latest

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Notebooks API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API


Required roles

To get the permissions that you need to create a Vertex AI Workbench instance with a custom container, ask your administrator to grant you the following IAM roles:

For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.

Create a custom container

To create a custom container for use with Vertex AI Workbench instances:

  1. Create a derivative container from a Google-provided base container image.

  2. Build and push the container to Artifact Registry. You'll use the container's URI when you create your Vertex AI Workbench instance. For example, the URI might look like this: gcr.io/PROJECT_ID/IMAGE_NAME.

Create the instance

You can create a Vertex AI Workbench instance based on a custom container by using the Google Cloud console or the Google Cloud CLI.

Console

To create a Vertex AI Workbench instance based on a custom container,do the following:

  1. In the Google Cloud console, go to the Instances page.

    Go to Instances

  2. Click Create new.

  3. In the New instance dialog, click Advanced options.

  4. In the Create instance dialog, in the Environment section, select Use custom container.

  5. For Docker container image, click Select.

  6. In the Select container image dialog, navigate to the container image that you want to use, and then click Select.

  7. Optional. For Post-startup script, enter a path to a post-startup script that you want to use.

  8. Optional. Add metadata for your instance. To learn more, see Custom container metadata.

  9. Optional. In the Networking section, customize your network settings. To learn more, see Network configuration options.

  10. Complete the rest of the instance creation dialog, and then click Create.

    Vertex AI Workbench creates an instance and automatically starts it. When the instance is ready to use, Vertex AI Workbench activates an Open JupyterLab link.

gcloud

Before using any of the command data below, make the following replacements:

  • INSTANCE_NAME: the name of your Vertex AI Workbench instance; must start with a letter followed by up to 62 lowercase letters, numbers, or hyphens (-), and cannot end with a hyphen
  • PROJECT_ID: your project ID
  • LOCATION: the zone where you want your instance to be located
  • CUSTOM_CONTAINER_URL: the path to the container image repository, for example: gcr.io/PROJECT_ID/IMAGE_NAME
  • METADATA: custom metadata to apply to this instance; for example, to specify a post-startup script, you can use the post-startup-script metadata tag, in the format: "--metadata=post-startup-script=gs://BUCKET_NAME/hello.sh"
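
The naming rule for INSTANCE_NAME above can be expressed as a regular expression. The following sketch is our own construction, not an official validator, and checks a candidate name before you run the create command:

```shell
# Check a candidate instance name against the documented rule:
# a letter, then up to 62 lowercase letters, numbers, or hyphens (-),
# and the name cannot end with a hyphen.
valid_instance_name() {
  echo "$1" | grep -Eq '^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$'
}

valid_instance_name "my-workbench-1" && echo "ok"        # accepted
valid_instance_name "My-Workbench" || echo "rejected"    # uppercase not allowed
valid_instance_name "ends-with-" || echo "rejected"      # trailing hyphen not allowed
```

Names that fail this check are rejected by the API at creation time, so validating locally can save a round trip.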

Execute the following command:

Linux, macOS, or Cloud Shell

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances create INSTANCE_NAME \
    --project=PROJECT_ID \
    --location=LOCATION \
    --container-repository=CUSTOM_CONTAINER_URL \
    --container-tag=latest \
    --metadata=METADATA

Windows (PowerShell)

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances create INSTANCE_NAME `
    --project=PROJECT_ID `
    --location=LOCATION `
    --container-repository=CUSTOM_CONTAINER_URL `
    --container-tag=latest `
    --metadata=METADATA

Windows (cmd.exe)

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances create INSTANCE_NAME ^
    --project=PROJECT_ID ^
    --location=LOCATION ^
    --container-repository=CUSTOM_CONTAINER_URL ^
    --container-tag=latest ^
    --metadata=METADATA

For more information about the command for creating an instance from the command line, see the gcloud CLI documentation.

Vertex AI Workbench creates an instance and automatically starts it. When the instance is ready to use, Vertex AI Workbench activates an Open JupyterLab link in the Google Cloud console.

Network configuration options

In addition to the general network options, a Vertex AI Workbench instance with a custom container must have access to the Artifact Registry service.

If you have turned off public IP access for your VPC, ensure that you have enabled Private Google Access.

Enable Image streaming

The custom container host is provisioned to interact with Image streaming in Google Kubernetes Engine (GKE), which pulls containers faster and reduces initialization time for large containers once they are cached in the GKE remote file system.

To view the requirements for enabling Image streaming, see Requirements. Often, Image streaming can be used with Vertex AI Workbench instances by enabling the Container File System API.

Enable Container File System API

How the host VM runs the custom container

Instead of using Docker to run the custom container, the host VM uses nerdctl under the Kubernetes namespace to load and run the container. This lets Vertex AI Workbench use Image streaming for custom containers.

# Runs the custom container.
sudo /var/lib/google/nerdctl/nerdctl --snapshotter=gcfs -n k8s.io run --name payload-container

Example installation: custom container with a custom default kernel

The following example shows how to create a new kernel with a pip package pre-installed.

  1. Create a new custom container:

    FROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest

    ENV MAMBA_ROOT_PREFIX=/opt/micromamba

    RUN micromamba create -n ENVIRONMENT_NAME -c conda-forge python=PYTHON_VERSION -y

    SHELL ["micromamba", "run", "-n", "ENVIRONMENT_NAME", "/bin/bash", "-c"]

    RUN micromamba install -c conda-forge pip -y
    RUN pip install PACKAGE
    RUN pip install ipykernel
    RUN python -m ipykernel install --prefix /opt/micromamba/envs/ENVIRONMENT_NAME --name ENVIRONMENT_NAME --display-name KERNEL_NAME

    # Creation of a micromamba kernel automatically creates a python3 kernel
    # that must be removed if it's in conflict with the new kernel.
    RUN rm -rf "/opt/micromamba/envs/ENVIRONMENT_NAME/share/jupyter/kernels/python3"
  2. Add the new container to Artifact Registry:

    gcloud auth configure-docker REGION-docker.pkg.dev
    docker build -t REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_NAME/IMAGE_NAME .
    docker push REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_NAME/IMAGE_NAME:latest
  3. Create an instance:

    gcloud workbench instances create INSTANCE_NAME \
        --project=PROJECT_ID \
        --location=ZONE \
        --container-repository=REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_NAME/IMAGE_NAME \
        --container-tag=latest

Persistent kernels for custom containers

Vertex AI Workbench custom containers only mount a data disk to the /home/USER directory within each container, where jupyter is the default user. This means that any change outside of /home/USER is ephemeral and won't persist after a restart. If you need installed packages to persist for a specific kernel, you can create a kernel in the /home/USER directory.

Note: Base containers are configured with a mount point on /home/USER. To create a persistent kernel when building a custom image, you must enable Docker BuildKit. See Notes about specifying volumes. To learn more about BuildKit, see the BuildKit documentation.

To create a kernel in the /home/USER directory:

  1. Create a micromamba environment:

    micromamba create -p /home/USER/ENVIRONMENT_NAME -c conda-forge python=PYTHON_VERSION -y
    micromamba activate /home/USER/ENVIRONMENT_NAME
    pip install ipykernel
    pip install -r ~/requirement.txt
    python -m ipykernel install --prefix "/home/USER/ENVIRONMENT_NAME" --display-name "Example Kernel"

    Replace the following:

    • USER: the user directory name, which is jupyter by default
    • ENVIRONMENT_NAME: the name of the environment
    • PYTHON_VERSION: the Python version, for example 3.11
  2. Wait 30 seconds to 1 minute for the kernels to refresh.

Updating the startup of the base container

The base container for a Vertex AI Workbench instance (us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest) starts JupyterLab by running /run_jupyter.sh.

If you modify the container's startup in a derivative container, you must append /run_jupyter.sh to run the default configuration of JupyterLab.

The following is an example of how the Dockerfile might be modified:

# Dockerfile
FROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest

COPY startup_file.sh /
# Ensure that you have the correct permissions and startup is executable.
RUN chmod 755 /startup_file.sh && \
    chown jupyter:jupyter /startup_file.sh

# Override the existing CMD directive from the base container.
CMD ["/startup_file.sh"]
# /startup_file.sh
echo "Running startup scripts"
...
/run_jupyter.sh

Updating the JupyterLab configuration within the base container

If you need to modify the JupyterLab configuration on the base container, you must do the following:

  • Ensure that JupyterLab is configured to use port 8080. Our proxy agent is configured to forward any request to port 8080; if the Jupyter server isn't listening on the correct port, the instance encounters provisioning issues.

  • Modify JupyterLab packages under the jupyterlab micromamba environment. We provide a separate package environment to run JupyterLab and its plugins to ensure that there aren't any dependency conflicts with the kernel environment. If you want to install an additional JupyterLab extension, you must install it within the jupyterlab environment. For example:

    # Dockerfile
    FROM us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container:latest

    RUN micromamba activate jupyterlab && \
        jupyter nbextension install nbdime

Custom container metadata

In addition to the standard list of metadata that can be applied to a Vertex AI Workbench instance, instances with custom containers include the following metadata for managing the instantiation of the payload container:

  • Enables Cloud Storage FUSE on a container image

    Description: Mounts /dev/fuse onto the container and enables gcsfuse for use on the container.

    Metadata key: container-allow-fuse

    Accepted values and defaults:
      • true: Enables Cloud Storage FUSE.
      • false (default): Doesn't enable Cloud Storage FUSE.

  • Additional container run parameters

    Description: Appends additional container parameters to nerdctl run, where nerdctl is the containerd CLI.

    Metadata key: container-custom-params

    Accepted values and defaults: A string of container run parameters. Example: --v /mnt/disk1:/mnt/disk1.

  • Additional container environment flags

    Description: Stores environment variables into a flag under /mnt/stateful_partition/workbench/container_env and appends it to nerdctl run.

    Metadata key: container-env-file

    Accepted values and defaults: A string of container environment variables. Example: CONTAINER_NAME=derivative-container.
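
These keys travel in the same --metadata flag as any other instance metadata, as comma-separated key=value pairs. The following sketch assembles such a flag; the specific values are illustrative only, and BUCKET_NAME is a placeholder as elsewhere on this page:

```shell
# Assemble a --metadata value that combines the custom-container keys
# above with an ordinary post-startup-script entry.
METADATA="container-allow-fuse=true"
METADATA="$METADATA,container-env-file=CONTAINER_NAME=derivative-container"
METADATA="$METADATA,post-startup-script=gs://BUCKET_NAME/hello.sh"

# The assembled string is passed to gcloud as: --metadata=$METADATA
echo "--metadata=$METADATA"
```

Note that values which themselves contain commas need gcloud's alternative delimiter syntax, which isn't shown here.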

Upgrade a custom container

When your instance starts for the first time, it pulls the container image from a URI stored in the custom-container-payload metadata. If you use the :latest tag, the container is updated at every restart. The custom-container-payload metadata value can't be modified directly because it's a protected metadata key.

To update your instance's custom container image, you can use the following methods, which are supported by the Google Cloud CLI, Terraform, and the Notebooks API.

gcloud

You can update the custom container image metadata ona Vertex AI Workbench instance by using the following command:

gcloud workbench instances update INSTANCE_NAME \
    --container-repository=CONTAINER_URI \
    --container-tag=CONTAINER_TAG

Terraform

You can change the container_image field in the Terraform configuration to update the container payload.

To learn how to apply or remove a Terraform configuration, see Basic Terraform commands.

resource "google_workbench_instance" "default" {
  name     = "workbench-instance-example"
  location = "us-central1-a"

  gce_setup {
    machine_type = "n1-standard-1"
    container_image {
      repository = "us-docker.pkg.dev/deeplearning-platform-release/gcr.io/workbench-container"
      tag        = "latest"
    }
  }
}

Notebooks API

Use the instances.patch method with changes to gce_setup.container_image.repository and gce_setup.container_image.tag in the updateMask.

Run the diagnostic tool

The diagnostic tool checks and verifies the status of various Vertex AI Workbench services. To learn more, see Tasks performed by the diagnostic tool.

When you create a Vertex AI Workbench instance using a custom container, the diagnostic tool isn't available as a script in the host environment that users can run. Instead, it is compiled into a binary and loaded onto a Google runtime container that is built to run diagnostic services in a Container-Optimized OS environment. See the Container-Optimized OS overview.

To run the diagnostic tool, complete the following steps:

  1. Use SSH to connect to your Vertex AI Workbench instance.

  2. In the SSH terminal, run the following command:

    sudo docker exec diagnostic-service ./diagnostic_tool
  3. To view additional command options, run the following command:

    sudo docker exec diagnostic-service ./diagnostic_tool --help

For more information about the diagnostic tool's options, see the monitoring health status documentation.

To run the diagnostic tool by using the REST API, see the REST API documentation.

Access your instance

You can access your instance through a proxy URL.

After your instance has been created and is active, you can get the proxy URL by using the gcloud CLI.

Before using any of the command data below, make the following replacements:

  • INSTANCE_NAME: the name of your Vertex AI Workbench instance
  • PROJECT_ID: your project ID
  • LOCATION: the zone where your instance is located

Execute the following command:

Linux, macOS, or Cloud Shell

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances describe INSTANCE_NAME \
    --project=PROJECT_ID \
    --location=LOCATION | grep proxy-url

Windows (PowerShell)

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances describe INSTANCE_NAME `
    --project=PROJECT_ID `
    --location=LOCATION | grep proxy-url

Windows (cmd.exe)

Note: Ensure you have initialized the Google Cloud CLI with authentication and a project by running either gcloud init, or gcloud auth login and gcloud config set project.

gcloud workbench instances describe INSTANCE_NAME ^
    --project=PROJECT_ID ^
    --location=LOCATION | grep proxy-url

proxy-url: 7109d1b0d5f850f-dot-datalab-vm-staging.googleusercontent.com

The describe command returns your proxy URL. To access your instance, open the proxy URL in a web browser.
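
The grep in the pipeline above simply picks the proxy-url field out of the YAML that describe prints. The following sketch extracts the URL value itself with standard tools; the heredoc stands in for a live gcloud call, and its name and state lines are illustrative only:

```shell
# Extract the proxy URL value from sample `describe` output.
# The heredoc stands in for `gcloud workbench instances describe ...`.
DESCRIBE_OUTPUT=$(cat <<'EOF'
name: projects/my-project/locations/us-central1-a/instances/my-instance
proxy-url: 7109d1b0d5f850f-dot-datalab-vm-staging.googleusercontent.com
state: ACTIVE
EOF
)

PROXY_URL=$(printf '%s\n' "$DESCRIBE_OUTPUT" | grep proxy-url | awk '{print $2}')
echo "https://$PROXY_URL"
```

You can then open the printed URL in a web browser.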

For more information about the command for describing an instance from the command line, see the gcloud CLI documentation.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-11-24 UTC.