Exposing applications using services

This page shows how to make your applications accessible from your internal network or the internet by creating Kubernetes Services in Google Kubernetes Engine (GKE) to expose those applications. It covers five Service types: ClusterIP, NodePort, LoadBalancer, ExternalName, and Headless.

The tutorial includes examples for each Service type, showing how to create Deployments, expose them using Services, and access them.

This page is for Operators and Developers who provision and configure cloud resources and deploy apps and services. To learn more about common roles and example tasks referenced in Google Cloud content, see Common GKE user roles and tasks.

Before reading this page, ensure that you're familiar with using kubectl.

Introduction

The idea of a Service is to group a set of Pod endpoints into a single resource. You can configure various ways to access the grouping. By default, you get a stable cluster IP address that clients inside the cluster can use to contact Pods in the Service. A client sends a request to the stable IP address, and the request is routed to one of the Pods in the Service.

There are five types of Services:

  • ClusterIP (default)
  • NodePort
  • LoadBalancer
  • ExternalName
  • Headless

Autopilot clusters are public by default. If you opt for a private Autopilot cluster, you must configure Cloud NAT to make outbound internet connections, for example, pulling images from Docker Hub.

This topic has several exercises. In each exercise, you create a Deployment and expose its Pods by creating a Service. Then you send an HTTP request to the Service.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document. Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set. See the example that follows this list.
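
For example, a minimal sketch of setting a default location with the gcloud CLI. The region and zone values shown here are placeholders; substitute your own:

gcloud config set compute/region us-central1

Or, if you primarily use zonal clusters:

gcloud config set compute/zone us-central1-a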

Creating a Service of type ClusterIP

In this section, you create a Service of type ClusterIP.

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  selector:
    matchLabels:
      app: metrics
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: sales
    spec:
      containers:
      - name: hello
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"

Copy the manifest to a file named my-deployment.yaml, and create the Deployment:

kubectl apply -f my-deployment.yaml

Verify that three Pods are running:

kubectl get pods

The output shows three running Pods:

NAME                            READY   STATUS    RESTARTS   AGE
my-deployment-dbd86c8c4-h5wsf   1/1     Running   0          7s
my-deployment-dbd86c8c4-qfw22   1/1     Running   0          7s
my-deployment-dbd86c8c4-wt4s6   1/1     Running   0          7s

Here is a manifest for a Service of type ClusterIP:

apiVersion: v1
kind: Service
metadata:
  name: my-cip-service
spec:
  type: ClusterIP
  # Uncomment the below line to create a Headless Service
  # clusterIP: None
  selector:
    app: metrics
    department: sales
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

The Service has a selector that specifies two labels:

  • app: metrics
  • department: sales

Each Pod in the Deployment that you created previously has those two labels. So the Pods in the Deployment will become members of this Service.

Copy the manifest to a file named my-cip-service.yaml, and create the Service:

kubectl apply -f my-cip-service.yaml

Wait a moment for Kubernetes to assign a stable internal address to the Service, and then view the Service:

kubectl get service my-cip-service --output yaml

The output shows a value for clusterIP:

spec:
  clusterIP: 10.59.241.241

Make a note of your clusterIP value for later.
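
If you want to retrieve the address directly, for example in a script, a jsonpath query like the following should work (a minimal sketch that assumes the Service name my-cip-service used in this exercise):

kubectl get service my-cip-service --output jsonpath='{.spec.clusterIP}'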

Console

Create a Deployment

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Click Deploy.

  3. Under Specify container, select Existing container image.

  4. For Image path, enter us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0.

  5. Click Done, then click Continue.

  6. Under Configuration, for Application name, enter my-deployment.

  7. Under Labels, create the following labels:

    • Key: app and Value: metrics
    • Key: department and Value: sales
  8. Under Cluster, choose the cluster in which you want to create the Deployment.

  9. Click Deploy.

  10. When your Deployment is ready, the Deployment details page opens. Under Managed pods, you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. On the Deployment details page, click Actions > Expose.
  2. In the Expose dialog, under Port mapping, set the following values:

    • Port: 80
    • Target port: 8080
    • Protocol: TCP
  3. From the Service type drop-down list, select Cluster IP.

  4. Click Expose.

  5. When your Service is ready, the Service details page opens, and you can see details about your Service. Under Cluster IP, make a note of the IP address that Kubernetes assigned to your Service. This is the IP address that internal clients can use to call the Service.

Note: Creating a Headless Service is not currently available through the Google Cloud console.

Accessing your Service

List your running Pods:

kubectl get pods

In the output, copy one of the Pod names that begins with my-deployment.

NAME                            READY   STATUS    RESTARTS   AGE
my-deployment-dbd86c8c4-h5wsf   1/1     Running   0          2m51s

Get a shell into one of your running containers:

kubectl exec -it POD_NAME -- sh

Replace POD_NAME with the name of one of the Pods in my-deployment.

In your shell, install curl:

apk add --no-cache curl

In the container, make a request to your Service by using your cluster IP address and port 80. Notice that 80 is the value of the port field of your Service. This is the port that you use as a client of the Service.

curl CLUSTER_IP:80

Replace CLUSTER_IP with the value of clusterIP in your Service.

Your request is forwarded to one of the member Pods on TCP port 8080, which is the value of the targetPort field. Note that each of the Service's member Pods must have a container listening on port 8080.

The response shows the output of hello-app:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-dbd86c8c4-h5wsf

To exit the shell to your container, enter exit.

Note: You need to know ahead of time that each of your member Pods has a container listening on TCP port 8080. In this exercise, you did not do anything to make the containers listen on port 8080. You can see that hello-app listens on port 8080 by looking at the Dockerfile and the source code for the app.
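
As an optional check, you can also confirm which Pod IP addresses and target ports back the Service by listing its endpoints; the output should show your three Pod IPs on port 8080:

kubectl get endpoints my-cip-service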

Creating a Service of type NodePort

In this section, you create a Service of type NodePort.

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50000
spec:
  selector:
    matchLabels:
      app: metrics
      department: engineering
  replicas: 3
  template:
    metadata:
      labels:
        app: metrics
        department: engineering
    spec:
      containers:
      - name: hello
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50000"

Notice the env object in the manifest. The env object specifies that the PORT environment variable for the running container will have a value of 50000. The hello-app application listens on the port specified by the PORT environment variable. So in this exercise, you are telling the container to listen on port 50000.

Copy the manifest to a file named my-deployment-50000.yaml, and create the Deployment:

kubectl apply -f my-deployment-50000.yaml

Verify that three Pods are running:

kubectl get pods
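
Optionally, you can confirm that the PORT environment variable reached a container (a minimal check; replace POD_NAME with the name of one of the my-deployment-50000 Pods). The command should print 50000:

kubectl exec POD_NAME -- sh -c 'echo $PORT'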

Here is a manifest for a Service of type NodePort:

apiVersion: v1
kind: Service
metadata:
  name: my-np-service
spec:
  type: NodePort
  selector:
    app: metrics
    department: engineering
  ports:
  - protocol: TCP
    port: 80
    targetPort: 50000

Copy the manifest to a file named my-np-service.yaml, and create the Service:

kubectl apply -f my-np-service.yaml

View the Service:

kubectl get service my-np-service --output yaml

The output shows a nodePort value:

...
spec:
  ...
  ports:
  - nodePort: 30876
    port: 80
    protocol: TCP
    targetPort: 50000
  selector:
    app: metrics
    department: engineering
  sessionAffinity: None
  type: NodePort
...
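
If you only need the assigned node port value, for example to plug into the firewall rule in the next step, a jsonpath query like the following should return it (a minimal sketch using the Service name from this exercise):

kubectl get service my-np-service --output jsonpath='{.spec.ports[0].nodePort}'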

Create a firewall rule to allow TCP traffic on your node port:

gcloud compute firewall-rules create test-node-port \
    --allow tcp:NODE_PORT

Replace NODE_PORT with the value of the nodePort field of your Service.

Console

Create a Deployment

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Click Deploy.

  3. Under Specify container, select Existing container image.

  4. For Image path, enter us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0.

  5. Click Add Environment Variable.

  6. For Key, enter PORT, and for Value, enter 50000.

  7. Click Done, then click Continue.

  8. Under Configuration, for Application name, enter my-deployment-50000.

  9. Under Labels, create the following labels:

    • Key: app and Value: metrics
    • Key: department and Value: engineering
  10. Under Cluster, choose the cluster in which you want to create the Deployment.

  11. Click Deploy.

  12. When your Deployment is ready, the Deployment details page opens. Under Managed pods, you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. On the Deployment details page, click Actions > Expose.
  2. In the Expose dialog, under Port mapping, set the following values:

    • Port: 80
    • Target port: 50000
    • Protocol: TCP
  3. From the Service type drop-down list, select Node port.

  4. Click Expose.

  5. When your Service is ready, the Service details page opens, and you can see details about your Service. Under Ports, make a note of the Node Port that Kubernetes assigned to your Service.

Create a firewall rule for your node port

  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. Click Create firewall rule.

  3. For Name, enter test-node-port.

  4. From the Targets drop-down list, select All instances in the network.

  5. For Source IPv4 ranges, enter 0.0.0.0/0.

  6. Under Protocols and ports, select Specified protocols and ports.

  7. Select the tcp checkbox, and enter the node port value you noted.

  8. Click Create.

Get a node IP address

Find the external IP address of one of your nodes:

kubectl get nodes --output wide

The output is similar to the following:

NAME          STATUS    ROLES     AGE    VERSION        EXTERNAL-IP
gke-svc-...   Ready     none      1h     v1.9.7-gke.6   203.0.113.1

Not all clusters have external IP addresses for nodes. For example, if you have enabled private nodes, the nodes won't have external IP addresses.

Access your Service

In your browser's address bar, enter the following:

NODE_IP_ADDRESS:NODE_PORT

Replace the following:

  • NODE_IP_ADDRESS: the external IP address of one of your nodes, found in the previous task.
  • NODE_PORT: your node port value.

The output is similar to the following:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-50000-6fb75d85c9-g8c4f
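
If you prefer the command line to a browser, the same request can be made with curl, using the same placeholder values as above:

curl NODE_IP_ADDRESS:NODE_PORT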

Creating a Service of type LoadBalancer

In this section, you create a Service of type LoadBalancer.

kubectl apply

Here is a manifest for a Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment-50001
spec:
  selector:
    matchLabels:
      app: products
      department: sales
  replicas: 3
  template:
    metadata:
      labels:
        app: products
        department: sales
    spec:
      containers:
      - name: hello
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"
        env:
        - name: "PORT"
          value: "50001"

Notice that the containers in this Deployment will listen on port 50001.

Copy the manifest to a file named my-deployment-50001.yaml, and create the Deployment:

kubectl apply -f my-deployment-50001.yaml

Verify that three Pods are running:

kubectl get pods

Here is a manifest for a Service of type LoadBalancer:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: products
    department: sales
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50001

Copy the manifest to a file named my-lb-service.yaml, and create the Service:

kubectl apply -f my-lb-service.yaml

When you create a Service of type LoadBalancer, a Google Cloud controller wakes up and configures an external passthrough Network Load Balancer. Wait a minute for the controller to configure the external passthrough Network Load Balancer and generate a stable IP address.
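
One way to watch for the address to appear is to poll the Service until the EXTERNAL-IP column changes from <pending> to an IP address (an optional convenience; press Ctrl+C to stop watching):

kubectl get service my-lb-service --watch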

View the Service:

kubectl get service my-lb-service --output yaml

The output shows a stable external IP address under loadBalancer:ingress:

...
spec:
  ...
  ports:
  - ...
    port: 60000
    protocol: TCP
    targetPort: 50001
  selector:
    app: products
    department: sales
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10
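
To extract just the external IP address, for example for use when you access the Service later, a jsonpath query like the following should work (a minimal sketch based on the status shown above):

kubectl get service my-lb-service --output jsonpath='{.status.loadBalancer.ingress[0].ip}'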

Console

Create a Deployment

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Click Deploy.

  3. Under Specify container, select Existing container image.

  4. For Image path, enter us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0.

  5. Click Add Environment Variable.

  6. For Key, enter PORT, and for Value, enter 50001.

  7. Click Done, then click Continue.

  8. Under Configuration, for Application name, enter my-deployment-50001.

  9. Under Labels, create the following labels:

    • Key: app and Value: products
    • Key: department and Value: sales
  10. Under Cluster, choose the cluster in which you want to create the Deployment.

  11. Click Deploy.

  12. When your Deployment is ready, the Deployment details page opens. Under Managed pods, you can see that your Deployment has one or more running Pods.

Create a Service to expose your Deployment

  1. On the Deployment details page, click Actions > Expose.
  2. In the Expose dialog, under Port mapping, set the following values:

    • Port: 60000
    • Target port: 50001
    • Protocol: TCP
  3. From the Service type drop-down list, select Load balancer.

  4. Click Expose.

  5. When your Service is ready, the Service details page opens, and you can see details about your Service. Under Load Balancer, make a note of the load balancer's external IP address.

Access your Service

Wait a few minutes for GKE to configure the load balancer.

In your browser's address bar, enter the following:

LOAD_BALANCER_ADDRESS:60000

Replace LOAD_BALANCER_ADDRESS with the external IP address of your load balancer.

The response shows the output of hello-app:

Hello, world!
Version: 2.0.0
Hostname: my-deployment-50001-68bb7dfb4b-prvct
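
You can also make the same request from the command line, using the same load balancer address as above:

curl LOAD_BALANCER_ADDRESS:60000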

Notice that the value of port in a Service is arbitrary. The preceding example demonstrates this by using a port value of 60000.

Creating a Service of type ExternalName

In this section, you create a Service of type ExternalName.

A Service of type ExternalName provides an internal alias for an external DNS name. Internal clients make requests using the internal DNS name, and the requests are redirected to the external name.

Here is a manifest for a Service of type ExternalName:

apiVersion: v1
kind: Service
metadata:
  name: my-xn-service
spec:
  type: ExternalName
  externalName: example.com

In the preceding example, the DNS name is my-xn-service.default.svc.cluster.local. When an internal client makes a request to my-xn-service.default.svc.cluster.local, the request gets redirected to example.com.
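
One way to see the redirection is to run a DNS lookup from inside the cluster (an optional check; this starts a temporary busybox Pod, and the lookup should return a CNAME-style answer pointing at example.com):

kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- nslookup my-xn-service.default.svc.cluster.local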

Using kubectl expose to create a Service

As an alternative to writing a Service manifest, you can create a Service by using kubectl expose to expose a Deployment.

To expose my-deployment, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment --name my-cip-service \
    --type ClusterIP --protocol TCP --port 80 --target-port 8080

To expose my-deployment-50000, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment-50000 --name my-np-service \
    --type NodePort --protocol TCP --port 80 --target-port 50000

To expose my-deployment-50001, shown earlier in this topic, you could enter this command:

kubectl expose deployment my-deployment-50001 --name my-lb-service \
    --type LoadBalancer --port 60000 --target-port 50001
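
After running any of these commands, you can confirm the resulting Services, their types, and their ports with:

kubectl get services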

View your Services

You can view the Services you created on the Services page in the Google Cloud console.

Go to Services

Alternatively, you can view your Services in App Hub within the context of the business functions they support. App Hub provides a centralized overview of all your applications and their associated services.

To view your Services in App Hub, go to the App Hub page in the Google Cloud console.

Go to App Hub

As a managed Kubernetes service, GKE automatically sends Service metadata, specifically resource URIs, to App Hub whenever resources are created or destroyed. This always-on metadata ingestion enhances the application building and management experience in App Hub.

For more information on resources that App Hub supports, see supported resources.

To learn how to set up App Hub on your project, see Set up App Hub.

Cleaning up

After completing the exercises on this page, follow these steps to remove resources and avoid incurring unwanted charges on your account:

kubectl apply

Deleting your Services

kubectl delete services my-cip-service my-np-service my-lb-service

Deleting your Deployments

kubectl delete deployments my-deployment my-deployment-50000 my-deployment-50001

Deleting your firewall rule

gcloud compute firewall-rules delete test-node-port

Console

Deleting your Services

  1. Go to the Services page in the Google Cloud console.

    Go to Services

  2. Select the Services you created in this exercise, then click Delete.

  3. When prompted to confirm, click Delete.

Deleting your Deployments

  1. Go to the Workloads page in the Google Cloud console.

    Go to Workloads

  2. Select the Deployments you created in this exercise, then click Delete.

  3. When prompted to confirm, select the Delete Horizontal Pod Autoscalers associated with selected Deployments checkbox, then click Delete.

Deleting your firewall rule

  1. Go to the Firewall policies page in the Google Cloud console.

    Go to Firewall policies

  2. Select the test-node-port checkbox, then click Delete.

  3. When prompted to confirm, click Delete.

