Migrate x86 application on GKE to multi-arch with Arm
This tutorial describes how to migrate an application built for nodes using an x86 (Intel or AMD) processor in a Google Kubernetes Engine (GKE) cluster to a multi-architecture (multi-arch) application that runs on either x86 or Arm nodes. The intended audience for this tutorial is Platform Admins, App Operators, and App Developers who want to run their existing x86-compatible workloads on Arm.
With GKE clusters, you can run workloads on Arm nodes using the C4A, N4A, or Tau T2A machine series. This tutorial uses C4A nodes, which, like N4A and T2A nodes, can run in your GKE cluster just like any other node using x86 (Intel or AMD) processors. C4A is an Arm-based machine series that provides consistently high performance for your workloads.
For more information, see Arm workloads on GKE.
This tutorial assumes that you are familiar with Kubernetes and Docker. The tutorial uses Google Kubernetes Engine and Artifact Registry.
Objectives
In this tutorial, you will complete the following tasks:
- Store container images with Docker in Artifact Registry.
- Deploy an x86-compatible workload to a GKE cluster.
- Rebuild an x86-compatible workload to run on Arm.
- Add an Arm node pool to an existing cluster.
- Deploy an Arm-compatible workload to run on an Arm node.
- Build a multi-arch image to run a workload across multiple architectures.
- Run workloads across multiple architectures in one GKE cluster.
Costs
In this document, you use the following billable components of Google Cloud:

- Google Kubernetes Engine
- Artifact Registry
- Compute Engine

To generate a cost estimate based on your projected usage, use the pricing calculator.
When you finish the tasks that are described in this document, you can avoid continued billing by deleting the resources that you created. For more information, see Clean up.
Before you begin
Take the following steps to enable the Kubernetes Engine API:

- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
- In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

  Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

  Roles required to select or create a project:
  - Select a project: Selecting a project doesn't require a specific IAM role; you can select any project that you've been granted a role on.
  - Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
- Verify that billing is enabled for your Google Cloud project.
- Enable the Artifact Registry and Google Kubernetes Engine APIs.

  Roles required to enable APIs: To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
When you finish this tutorial, you can avoid continued billing by deleting the resources you created. See Clean up for more details.
Launch Cloud Shell
In this tutorial you will use Cloud Shell, which is a shell environment for managing resources hosted on Google Cloud.

Cloud Shell comes preinstalled with the Google Cloud CLI and the kubectl command-line tool. The gcloud CLI provides the primary command-line interface for Google Cloud, and kubectl provides the primary command-line interface for running commands against Kubernetes clusters.
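Optionally, you can confirm that both tools are available in your Cloud Shell session. These are standard version checks, not steps from this tutorial:

```
# Confirm the gcloud CLI is installed
gcloud version

# Confirm kubectl is installed (client version only; no cluster is needed yet)
kubectl version --client
```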
Launch Cloud Shell:
Go to the Google Cloud console.
From the upper-right corner of the console, click the Activate Cloud Shell button:

A Cloud Shell session appears inside the console. You use this shell to run gcloud and kubectl commands.
Prepare your environment
In this section, you prepare your environment to follow the tutorial.
Set the default settings for the gcloud CLI
Set environment variables for your project ID, the Compute Engine location for your cluster, and the name of your new cluster.
Caution: The following environment variables are used throughout the commands of this tutorial. You might need to set the environment variables again if you close the Cloud Shell.

```
export PROJECT_ID=PROJECT_ID
export CONTROL_PLANE_LOCATION=us-central1-a
export CLUSTER_NAME=my-cluster
```

Replace PROJECT_ID with the project ID you chose for this tutorial in the Before you begin section.
In this tutorial, you create resources in us-central1-a. To see a complete list of where the C4A machine series is available, see Available regions and zones.
Clone the git repository
Clone the repository:

```
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
```

Change your current working directory to the repository cloned in the previous step:

```
cd kubernetes-engine-samples/workloads/migrate-x86-app-to-multi-arch/
```

This repository contains the files you need to complete this tutorial. This tutorial uses Kubernetes Deployments. A Deployment is a Kubernetes API object that lets you run multiple replicas of Pods that are distributed among the nodes in a cluster.
Create a GKE cluster and deploy the x86 application
In the first part of this tutorial, you create a cluster with x86 nodes and deploy an x86 application. The example application is a service that responds to HTTP requests. It is built with the Golang programming language.
This setup represents what a typical cluster environment might look like, using x86-compatible applications and x86 nodes.
Create a GKE cluster
First, create a GKE cluster using nodes with x86 processors. With this configuration, you create a typical cluster environment to run x86 applications.
Create the cluster:
```
gcloud container clusters create $CLUSTER_NAME \
    --release-channel=rapid \
    --location=$CONTROL_PLANE_LOCATION \
    --machine-type=e2-standard-2 \
    --num-nodes=1 \
    --async
```

This cluster has autoscaling disabled in order to demonstrate specific functionality in later steps.

It might take several minutes to finish creating the cluster. The --async flag lets this operation run in the background while you complete the next steps.
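If you want to watch the cluster creation that is running in the background, one option is the standard gcloud container operations list command; the filter shown is an assumption about how you might narrow the output, not a step from this tutorial:

```
# List cluster operations; an in-progress create shows STATUS: RUNNING
gcloud container operations list \
    --location=$CONTROL_PLANE_LOCATION \
    --filter="operationType=CREATE_CLUSTER"
```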
You can create clusters with only Arm nodes; however, for this tutorial you will create a cluster with only x86 nodes first to learn about the process of making x86-only applications compatible with Arm.
Create the Artifact Registry Docker repository
Create a repository in Artifact Registry to store Docker images:
```
gcloud artifacts repositories create docker-repo \
    --repository-format=docker \
    --location=us-central1 \
    --description="Docker repository"
```

Configure the Docker command-line tool to authenticate to this repository in Artifact Registry:

```
gcloud auth configure-docker us-central1-docker.pkg.dev
```
Build the x86 image and push it to Artifact Registry
Build the x86-compatible version of the application:
```
docker build -t us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/x86-hello:v0.0.1 .
```

Push the image to Artifact Registry:

```
docker push us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/x86-hello:v0.0.1
```
Deploy the x86 application
Check that the cluster is ready by running the following script:
echoecho-ne"Waiting for GKE cluster to finish provisioning"gke_status=""while[-z$gke_status];dosleep2echo-ne'.'gke_status=$(gcloudcontainerclusterslist--format="value(STATUS)"--filter="NAME=$CLUSTER_NAME AND STATUS=RUNNING")doneechoecho"GKE Cluster '$CLUSTER_NAME' is$gke_status"echoWhen the cluster is ready, the output should be similar to the following:
```
GKE Cluster 'my-cluster' is RUNNING
```

Retrieve the cluster credentials so that kubectl can connect to the Kubernetes API for the cluster:

```
gcloud container clusters get-credentials $CLUSTER_NAME --location $CONTROL_PLANE_LOCATION --project $PROJECT_ID
```

Update the image using kustomize and deploy the x86 application:
$(cdk8s/overlays/x86 &&kustomizeeditsetimagehello=us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/x86-hello:v0.0.1)kubectlapply-kk8s/overlays/x86Deploy a Service to expose the application to the Internet:
kubectlapply-fk8s/hello-service.yamlCheck that the external IP address for the Service,
hello-service, is finishedprovisioning:echoecho-ne"Waiting for External IP to be provisioned"external_ip=""while[-z$external_ip];dosleep2echo-ne'.'external_ip=$(kubectlgetsvchello-service--template="{{range .status.loadBalancer.ingress}}{{.ip}}{{end}}")doneechoecho"External IP:$external_ip"echoAfter the external IP address is provisioned, the output should be similar to the following:
```
External IP: 203.0.113.0
```

Make an HTTP request to test that the deployment works as expected:

```
curl -w '\n' http://$external_ip
```

The output is similar to the following:
```
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-mwfkd, CPU PLATFORM:linux/amd64
```

The output shows that this x86-compatible deployment is running on a node in the default node pool on the amd64 architecture. The nodes in the default node pool of your cluster have x86 (either Intel or AMD) processors.
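You can also check the architecture of each node directly. The kubernetes.io/arch label is set on every node, and the -L flag prints it as a column; at this point in the tutorial, every node should report amd64:

```
# Show each node with its CPU architecture label
kubectl get nodes -L kubernetes.io/arch
```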
Add Arm nodes to the cluster
In the next part of this tutorial, you add Arm nodes to your existing cluster. These nodes are where the Arm-compatible version of your application is deployed when it's rebuilt to run on Arm.
Checkpoint
So far you've accomplished the following objectives:
- Create a GKE cluster using x86 nodes.
- Store an x86-compatible container image with Docker in Artifact Registry.
- Deploy an x86-compatible workload to a GKE cluster.
You've configured a cluster environment with x86 nodes and an x86-compatible workload. This configuration is similar to your existing cluster environments if you don't currently use Arm nodes and Arm-compatible workloads.
Add an Arm node pool to your cluster
Add an Arm node pool to your existing cluster:
```
gcloud container node-pools create arm-pool \
    --cluster $CLUSTER_NAME \
    --location $CONTROL_PLANE_LOCATION \
    --machine-type=c4a-standard-2 \
    --num-nodes=1
```

The c4a-standard-2 machine type is an Arm VM from the C4A machine series.

You create a node pool with Arm nodes in the same way as creating a node pool with x86 nodes. After this node pool is created, you will have both x86 nodes and Arm nodes running in this cluster.
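Optionally, you can confirm that the cluster now has both node pools by listing them with the standard gcloud command:

```
# List the node pools in the cluster and their machine types
gcloud container node-pools list \
    --cluster $CLUSTER_NAME \
    --location $CONTROL_PLANE_LOCATION
```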
For more information about adding Arm node pools to existing clusters, see Add an Arm node pool to a GKE cluster.
Scale up the existing application running on x86-based nodes
Nodes of multiple architecture types can work seamlessly together in one cluster. GKE doesn't schedule existing workloads running on x86 nodes to Arm nodes in the cluster because a taint is automatically placed on Arm nodes. You can see this by scaling up your existing application.
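If you want to see the taint itself, you can describe the Arm nodes; the grep here is just one way to trim the output:

```
# Show the taints on the Arm nodes
# Expect: kubernetes.io/arch=arm64:NoSchedule
kubectl describe nodes -l kubernetes.io/arch=arm64 | grep Taints
```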
Update the workload, scaling it up to 6 replicas:
```
$(cd k8s/overlays/x86_increase_replicas && kustomize edit set image hello=us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/x86-hello:v0.0.1)
kubectl apply -k k8s/overlays/x86_increase_replicas/
```

Wait 30 seconds, then run the following command to check the status of the deployment:
kubectlgetpods-l="app=hello"--field-selector="status.phase=Pending"The output should look similar to the following:
```
NAME                                    READY   STATUS    RESTARTS   AGE
x86-hello-deployment-6b7b456dd5-6tkxd   0/1     Pending   0          40s
x86-hello-deployment-6b7b456dd5-k95b7   0/1     Pending   0          40s
x86-hello-deployment-6b7b456dd5-kc876   0/1     Pending   0          40s
```

This output shows Pods with a Pending status as there is no room left on the x86-based nodes. Since Cluster Autoscaler is disabled and the Arm nodes are tainted, the workloads will not be deployed on any of the available Arm nodes. This taint prevents GKE from scheduling x86 workloads on Arm nodes. To deploy to Arm nodes, you must indicate that the deployment is compatible with Arm nodes.
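To see why a given Pod is stuck, you can describe one of the Pending Pods. The Pod name below comes from the sample output above, so substitute one of your own Pod names:

```
# The Events section should show a FailedScheduling event that mentions
# the untolerated kubernetes.io/arch=arm64 taint on the Arm nodes
kubectl describe pod x86-hello-deployment-6b7b456dd5-6tkxd
```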
Check the Pods that are in the Running state:
kubectlgetpods-l="app=hello"--field-selector="status.phase=Running"-owideThe output should look similar to the following:
```
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
x86-hello-deployment-6b7b456dd5-cjclz   1/1     Running   0          62s   10.100.0.17   gke-my-cluster-default-pool-32019863-b41t   <none>           <none>
x86-hello-deployment-6b7b456dd5-mwfkd   1/1     Running   0          34m   10.100.0.11   gke-my-cluster-default-pool-32019863-b41t   <none>           <none>
x86-hello-deployment-6b7b456dd5-n56rg   1/1     Running   0          62s   10.100.0.16   gke-my-cluster-default-pool-32019863-b41t   <none>           <none>
```

In this output, the NODE column indicates that all Pods from the deployment are running only in the default-pool, meaning that the x86-compatible Pods are only scheduled to the x86 nodes. The original Pod that was already scheduled before the creation of the Arm node pool is still running on the same node.

Run the following command to access the service and see the output:
```
for i in $(seq 1 6); do curl -w '\n' http://$external_ip; done
```

The output is similar to the following:

```
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-cjclz, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-cjclz, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-n56rg, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-n56rg, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-cjclz, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:x86-hello-deployment-6b7b456dd5-cjclz, CPU PLATFORM:linux/amd64
```

This output shows that all Pods serving requests are running on x86 nodes. Some Pods cannot respond because they are still in the Pending state as there is no space on the existing x86 nodes and they will not be scheduled to Arm nodes.
Rebuild your application to run on Arm
In the previous section, you added an Arm node pool to your existing cluster. However, when you scaled up the existing x86 application, it did not schedule any of the workloads to the Arm nodes. In this section, you rebuild your application to be Arm-compatible, so that this application can run on the Arm nodes in the cluster.
For this example, you accomplish these steps by using docker build. This two-stage approach includes:
- First stage: Build the code for Arm.
- Second stage: Copy the executable to a lean container.
After following these steps, you will have an Arm-compatible image in addition to the x86-compatible image.

The second step of copying the executable to another container follows one of the best practices for building a container, which is to build the smallest image possible.
This tutorial uses an example application built with the Golang programming language. With Golang, you can cross-compile an application to different operating systems and CPU platforms by providing the GOOS and GOARCH environment variables, respectively.
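As a standalone illustration outside of Docker, cross-compiling the sample for Arm only requires setting those two variables. The output file name here is arbitrary:

```
# Cross-compile hello.go for 64-bit Arm Linux from any host platform
GOOS=linux GOARCH=arm64 go build -o hello-arm64 hello.go

# 'file' reports the target architecture of the resulting binary
file hello-arm64
```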
Run cat Dockerfile_arm to see the Dockerfile written for Arm:

```
#
# Build: 1st stage
#
FROM golang:1.18-alpine as builder
WORKDIR /app
COPY go.mod .
COPY hello.go .
RUN GOARCH=arm64 go build -o /hello && \
    apk add --update --no-cache file && \
    file /hello
```

The snippet shown here shows just the first stage. In the file, both stages are included.
In this file, setting GOARCH=arm64 instructs the Go compiler to build the application for the Arm instruction set. You do not need to set GOOS because the base image in the first stage is a Linux Alpine image.

Build the code for Arm, and push it to Artifact Registry:

```
docker build -t us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/arm-hello:v0.0.1 -f Dockerfile_arm .
docker push us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/arm-hello:v0.0.1
```
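Optionally, you can double-check what you built; docker image inspect reports the architecture recorded in the local image metadata:

```
# Expect linux/arm64 for the image built from Dockerfile_arm
docker image inspect \
    --format '{{.Os}}/{{.Architecture}}' \
    us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/arm-hello:v0.0.1
```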
Deploy the Arm version of your application
Now that the application is built to run on Arm nodes, you can deploy it to the Arm nodes in your cluster.
Inspect the add_arm_support.yaml by running cat k8s/overlays/arm/add_arm_support.yaml:

The output is similar to the following:

```
nodeSelector:
   kubernetes.io/arch: arm64
```

This nodeSelector specifies that the workload should run only on the Arm nodes. When you use the nodeSelector, GKE adds a toleration that matches the taint on Arm nodes, letting GKE schedule the workload on those nodes. For more information about setting this field, see Prepare an Arm workload for deployment.

Deploy one replica of the Arm-compatible version of the application:
$(cdk8s/overlays/arm &&kustomizeeditsetimagehello=us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/arm-hello:v0.0.1)kubectlapply-kk8s/overlays/armWait 5 seconds, then check that the Arm deployment is answering
curlrequests:foriin$(seq16);docurl-w'\n'http://$external_ip;doneThe output is similar to the following:
HellofromNODE:gke-my-cluster-default-pool-32019863-b41t,POD:x86-hello-deployment-6b7b456dd5-n56rg,CPUPLATFORM:linux/amd64HellofromNODE:gke-my-cluster-default-pool-32019863-b41t,POD:x86-hello-deployment-6b7b456dd5-n56rg,CPUPLATFORM:linux/amd64HellofromNODE:gke-my-cluster-default-pool-32019863-b41t,POD:x86-hello-deployment-6b7b456dd5-mwfkd,CPUPLATFORM:linux/amd64HellofromNODE:gke-my-cluster-default-pool-32019863-b41t,POD:x86-hello-deployment-6b7b456dd5-mwfkd,CPUPLATFORM:linux/amd64HellofromNODE:gke-my-cluster-arm-pool-e172cff7-shwc,POD:arm-hello-deployment-69b4b6bdcc-n5l28,CPUPLATFORM:linux/arm64HellofromNODE:gke-my-cluster-default-pool-32019863-b41t,POD:x86-hello-deployment-6b7b456dd5-n56rg,CPUPLATFORM:linux/amd64This output should include responses from both the x86-compatibleand Arm-compatible applications responding to the
curlrequest.
Build a multi-architecture image to run a workload across architectures
While you can use the strategy described in the previous section and deploy separate workloads for x86 and Arm, doing so would require you to maintain and keep organized two build processes and two container images.
Ideally, you want to build and run your application seamlessly across both x86 and Arm platforms. We recommend this approach. To run your application with one manifest across multiple architecture platforms, you need to use multi-architecture (multi-arch) images. For more information about multi-architecture images, see Build multi-arch images for Arm workloads.
To use multi-architecture images, you must ensure that your application meets the following prerequisites:
- Your application does not have any architecture platform-specific dependencies.
- All dependencies must be built for multi-architecture or, at minimum, the targeted platforms.
The example application used in this tutorial meets both of these prerequisites. However, we recommend testing your own applications when building their multi-arch images before deploying them to production.
Build and push multi-architecture images
You can build multi-arch images with Docker Buildx if your workload fulfills the following prerequisites:

- The base image supports multiple architectures. Check this by running docker manifest inspect on the base image and checking the list of architecture platforms. See an example of how to inspect an image at the end of this section.
- The application does not require special build steps for each architecture platform. If special steps were required, Buildx might not be sufficient. You would need to have a separate Dockerfile for each platform and create the manifest manually with docker manifest create (see the sketch after this list).
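For that fallback case, a minimal sketch of the manual flow might look like the following. The repository name REPO, the tags, and the per-platform Dockerfile names are hypothetical, not files from this tutorial's repository:

```
# Build and push one image per platform (hypothetical Dockerfiles and tags)
docker build -t REPO/hello:v1-amd64 -f Dockerfile.amd64 .
docker build -t REPO/hello:v1-arm64 -f Dockerfile.arm64 .
docker push REPO/hello:v1-amd64
docker push REPO/hello:v1-arm64

# Stitch the per-platform images into one manifest list and push it
docker manifest create REPO/hello:v1 REPO/hello:v1-amd64 REPO/hello:v1-arm64
docker manifest push REPO/hello:v1
```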
The example application's base image is Alpine, which supports multiple architectures. There are also no architecture platform-specific steps, so you can build the multi-arch image with Buildx.
Inspect the Dockerfile by running cat Dockerfile:

```
# This is a multi-stage Dockerfile.
# 1st stage builds the app in the target platform
# 2nd stage creates a lean image copying the binary from the 1st stage
#
# Build: 1st stage
#
FROM golang:1.18-alpine as builder
ARG BUILDPLATFORM
ARG TARGETPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM"
WORKDIR /app
COPY go.mod .
COPY hello.go .
RUN go build -o /hello && \
    apk add --update --no-cache file && \
    file /hello

#
# Release: 2nd stage
#
FROM alpine
WORKDIR /
COPY --from=builder /hello /hello
CMD [ "/hello" ]
```

This Dockerfile defines two stages: the build stage and the release stage. You use the same Dockerfile used for building the x86 application.
Run the following command to create and use a new docker buildx builder:

```
docker buildx create --name multiarch --use --bootstrap
```

Now that you have created this new builder, you can build and push an image that is compatible with both linux/amd64 and linux/arm64 by using the --platform flag. For each platform provided with the flag, Buildx builds an image in the target platform. When Buildx builds the linux/arm64 image, it downloads arm64 base images. In the first stage, it builds the binary on the arm64 golang:1.18-alpine image for arm64. In the second stage, the arm64 Alpine Linux image is downloaded and the binary is copied to a layer of that image.

Build and push the image:

```
docker buildx build -t us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/multiarch-hello:v0.0.1 -f Dockerfile --platform linux/amd64,linux/arm64 --push .
```

The output is similar to the following:
=> [linux/arm64 builder x/x] ..=> [linux/amd64 builder x/x] ..This output shows that two images are generated, one for
linux/arm64andone forlinux/amd64.Inspect the manifest of your new multi-arch image:
```
docker manifest inspect us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/multiarch-hello:v0.0.1
```

The output is similar to the following:

```
{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:dfcf8febd94d61809bca8313850a5af9113ad7d4741edec1362099c9b7d423fc",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 739,
         "digest": "sha256:90b637d85a93c3dc03fc7a97d1fd640013c3f98c7c362d1156560bbd01f6a419",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      }
   ]
}
```

In this output, the manifests section includes two manifests, one with the amd64 platform architecture, and the other with the arm64 platform architecture. When you deploy this container image to your cluster, GKE automatically downloads only the image that matches the node's architecture.
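As an alternative check, docker buildx imagetools inspect reads the same manifest list directly from the registry and prints a per-platform summary:

```
# Summarize the platforms included in the pushed manifest list
docker buildx imagetools inspect \
    us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/multiarch-hello:v0.0.1
```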
Deploy the multi-arch version of your application
Before you deploy the multi-arch image, delete the original workloads:

```
kubectl delete deploy x86-hello-deployment arm-hello-deployment
```

Inspect the add_multiarch_support.yaml kustomize overlay by running cat k8s/overlays/multiarch/add_multiarch_support.yaml:

The output includes the following toleration set:

```
tolerations:
   - key: kubernetes.io/arch
     operator: Equal
     value: arm64
     effect: NoSchedule
```

This toleration allows the workload to run on the Arm nodes in your cluster, since the toleration matches the taint set on all Arm nodes. As this workload can now run on any node in the cluster, only the toleration is needed. With just the toleration, GKE can schedule the workload to both x86 and Arm nodes. If you want to specify where GKE can schedule workloads, use node selectors and node affinity rules. For more information about setting these fields, see Prepare an Arm workload for deployment.
Deploy the multi-arch container image with 6 replicas:
```
$(cd k8s/overlays/multiarch && kustomize edit set image hello=us-central1-docker.pkg.dev/$PROJECT_ID/docker-repo/multiarch-hello:v0.0.1)
kubectl apply -k k8s/overlays/multiarch
```

Wait 10 seconds, then confirm that all of the replicas of the application are running:

```
kubectl get pods -l="app=hello" -o wide
```

The output is similar to the following:
```
NAME                                         READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
multiarch-hello-deployment-65bfd784d-5xrrr   1/1     Running   0          95s   10.100.1.5    gke-my-cluster-arm-pool-e172cff7-shwc       <none>           <none>
multiarch-hello-deployment-65bfd784d-7h94b   1/1     Running   0          95s   10.100.1.4    gke-my-cluster-arm-pool-e172cff7-shwc       <none>           <none>
multiarch-hello-deployment-65bfd784d-7qbkz   1/1     Running   0          95s   10.100.1.7    gke-my-cluster-arm-pool-e172cff7-shwc       <none>           <none>
multiarch-hello-deployment-65bfd784d-7wqb6   1/1     Running   0          95s   10.100.1.6    gke-my-cluster-arm-pool-e172cff7-shwc       <none>           <none>
multiarch-hello-deployment-65bfd784d-h2g2k   1/1     Running   0          95s   10.100.0.19   gke-my-cluster-default-pool-32019863-b41t   <none>           <none>
multiarch-hello-deployment-65bfd784d-lc9dc   1/1     Running   0          95s   10.100.0.18   gke-my-cluster-default-pool-32019863-b41t   <none>           <none>
```

The NODE column in this output indicates that some Pods are running on nodes in the Arm node pool and others on nodes in the default (x86) node pool.

Run the following command to access the service and see the output:
```
for i in $(seq 1 6); do curl -w '\n' http://$external_ip; done
```

The output is similar to the following:

```
Hello from NODE:gke-my-cluster-arm-pool-e172cff7-shwc, POD:multiarch-hello-deployment-65bfd784d-7qbkz, CPU PLATFORM:linux/arm64
Hello from NODE:gke-my-cluster-default-pool-32019863-b41t, POD:multiarch-hello-deployment-65bfd784d-lc9dc, CPU PLATFORM:linux/amd64
Hello from NODE:gke-my-cluster-arm-pool-e172cff7-shwc, POD:multiarch-hello-deployment-65bfd784d-5xrrr, CPU PLATFORM:linux/arm64
Hello from NODE:gke-my-cluster-arm-pool-e172cff7-shwc, POD:multiarch-hello-deployment-65bfd784d-7wqb6, CPU PLATFORM:linux/arm64
Hello from NODE:gke-my-cluster-arm-pool-e172cff7-shwc, POD:multiarch-hello-deployment-65bfd784d-7h94b, CPU PLATFORM:linux/arm64
Hello from NODE:gke-my-cluster-arm-pool-e172cff7-shwc, POD:multiarch-hello-deployment-65bfd784d-7wqb6, CPU PLATFORM:linux/arm64
```

You should see that Pods running across architecture platforms are answering the requests.

Note: It is possible that you run this command and receive only responses from Pods running on one architecture platform. If this occurs, run the command once or twice more and you should receive responses from both architecture platforms.
You built and deployed a multi-arch image to seamlessly run a workload across multiple architectures.
Clean up
To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.
After you finish the tutorial, you can clean up the resources that you created to reduce quota usage and stop billing charges. The following sections describe how to delete or turn off these resources.
Delete the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
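The console steps are not reproduced here. As one option, you can delete the project from Cloud Shell with the standard gcloud command, which schedules the project and all of its resources for deletion:

```
# Permanently delete the tutorial project and everything in it
gcloud projects delete $PROJECT_ID
```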
Delete the service, cluster, and repository
If you don't want to delete the entire project, delete the cluster and repository that you created for the tutorial:
Delete the application's Service by running kubectl delete:

```
kubectl delete service hello-service
```

This command deletes the Compute Engine load balancer that you created when you exposed the Deployment.
Delete your cluster by running gcloud container clusters delete:

```
gcloud container clusters delete $CLUSTER_NAME --location $CONTROL_PLANE_LOCATION
```

Delete the repository:

```
gcloud artifacts repositories delete docker-repo --location=us-central1 --async
```
What's next
- Arm workloads on GKE
- Create clusters and node pools with Arm nodes
- Build multi-architecture images for Arm workloads
- Prepare an Arm workload for deployment
- Prepare Autopilot workloads on Arm architecture
- Best practices for running cost-optimized Kubernetes applications on GKE
- Explore reference architectures, diagrams, and best practices about Google Cloud. Take a look at our Cloud Architecture Center.