Setting up intranode visibility

This guide shows you how to set up intranode visibility on a Google Kubernetes Engine (GKE) cluster.

Intranode visibility configures networking on each node in the cluster so that traffic sent from one Pod to another Pod is processed by the cluster's Virtual Private Cloud (VPC) network, even if the Pods are on the same node.

Intranode visibility is disabled by default on Standard clusters and enabled by default in Autopilot clusters.

Architecture

Intranode visibility ensures that packets sent between Pods are always processed by the VPC network, so that firewall rules, routes, flow logs, and Packet Mirroring configurations apply to the packets.

When a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. The packet is then immediately sent back to the same node and forwarded to the destination Pod.

Intranode visibility deploys the netd DaemonSet.

Benefits

Intranode visibility provides the following benefits:

  • See flow logs for all traffic between Pods, including traffic between Pods on the same node.
  • Create firewall rules that apply to all traffic among Pods, including traffic between Pods on the same node.
  • Use Packet Mirroring to clone traffic, including traffic between Pods on the same node, and forward it for examination.

Requirements and limitations

Intranode visibility has the following requirements and limitations:

  • Your cluster must be on GKE version 1.15 or later.
  • Intranode visibility is not supported with Windows Server node pools.
  • To prevent connectivity issues when using the ip-masq-agent with intranode visibility, your custom nonMasqueradeCIDRs list must include the cluster's node and Pod IP address ranges.
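As an illustration of that last requirement, an ip-masq-agent configuration that preserves intranode visibility might look like the following sketch. The CIDR values are placeholders, not values from this guide; replace them with your cluster's actual node subnet and Pod address ranges:

```yaml
# Hypothetical example: ConfigMap consumed by the ip-masq-agent.
# The CIDRs below are placeholders; substitute your own ranges.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
    - 10.128.0.0/20   # example node subnet range
    - 10.52.0.0/14    # example Pod address range
    resyncInterval: 60s
```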

Firewall rules

When you enable intranode visibility, the VPC network processes all packets sent between Pods, including packets sent between Pods on the same node. This means VPC firewall rules and hierarchical firewall policies consistently apply to Pod-to-Pod communication, regardless of Pod location.

If you configure custom firewall rules for communication within the cluster, carefully evaluate your cluster's networking needs to determine the set of egress and ingress allow rules. You can use connectivity tests to ensure that legitimate traffic is not obstructed. For example, Pod-to-Pod communication is required for network policy to function.
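For instance, a broad ingress allow rule covering traffic that originates from the Pod range could be sketched as follows. The network name and CIDR are placeholders, not values from this guide; narrow the protocols and ranges to what your workloads actually need:

```shell
# Hypothetical example: allow TCP, UDP, and ICMP traffic originating
# from the cluster's Pod address range. Replace NETWORK_NAME and
# POD_CIDR_RANGE with your own values.
gcloud compute firewall-rules create allow-intra-cluster \
    --network=NETWORK_NAME \
    --allow=tcp,udp,icmp \
    --source-ranges=POD_CIDR_RANGE
```

After creating a rule like this, you can run a connectivity test between two Pod IP addresses to confirm that it behaves as intended.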

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running the gcloud components update command. Earlier gcloud CLI versions might not support running the commands in this document.

    Note: For existing gcloud CLI installations, make sure to set the compute/region property. If you primarily use zonal clusters, set the compute/zone property instead. By setting a default location, you can avoid errors in the gcloud CLI like the following: One of [--zone, --region] must be supplied: Please specify location. You might need to specify the location in certain commands if the location of your cluster differs from the default that you set.
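For example, the default-location commands look like this (us-central1 and us-central1-a are example values; use the locations where you run your clusters):

```shell
# Set a default region for regional clusters:
gcloud config set compute/region us-central1

# Or, if you primarily use zonal clusters, set a default zone instead:
gcloud config set compute/zone us-central1-a
```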

Enable intranode visibility on a new cluster

You can create a cluster with intranode visibility enabled using the gcloud CLI or the Google Cloud console.

gcloud

To create a cluster that has intranode visibility enabled, use the --enable-intra-node-visibility flag:

gcloud container clusters create CLUSTER_NAME \
    --location=CONTROL_PLANE_LOCATION \
    --enable-intra-node-visibility

Replace the following:

  • CLUSTER_NAME: the name of your new cluster.
  • CONTROL_PLANE_LOCATION: the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.

Console

To create a cluster that has intranode visibility enabled, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Enter the Name for your cluster.

  4. In the Configure cluster dialog, next to GKE Standard, click Configure.

  5. Configure your cluster as needed.

  6. From the navigation pane, under Cluster, click Networking.

  7. Select the Enable intranode visibility checkbox.

  8. Click Create.

Enable intranode visibility on an existing cluster

You can enable intranode visibility on an existing cluster using the gcloud CLI or the Google Cloud console.

When you enable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.

gcloud

To enable intranode visibility on an existing cluster, use the --enable-intra-node-visibility flag:

gcloud container clusters update CLUSTER_NAME \
    --enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your cluster.

Console

To enable intranode visibility on an existing cluster, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, click Edit intranode visibility.

  4. Select the Enable intranode visibility checkbox.

  5. Click Save Changes.

This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.

Important: GKE respects maintenance policies when recreating the nodes for this change using the node upgrade strategy, and depends on resource availability. Disabling node auto-upgrades doesn't prevent this change. To manually apply the changes to the nodes, use the gcloud CLI to call the gcloud container clusters upgrade command, passing the --cluster-version flag with the same GKE version that the node pool is already running.

Disable intranode visibility

You can disable intranode visibility on a cluster using the gcloud CLI or the Google Cloud console.

When you disable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.

gcloud

To disable intranode visibility, use the --no-enable-intra-node-visibility flag:

gcloud container clusters update CLUSTER_NAME \
    --no-enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your cluster.

Console

To disable intranode visibility, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, click Edit intranode visibility.

  4. Clear the Enable intranode visibility checkbox.

  5. Click Save Changes.

This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.

Important: GKE respects maintenance policies when recreating the nodes for this change using the node upgrade strategy, and depends on resource availability. Disabling node auto-upgrades doesn't prevent this change. To manually apply the changes to the nodes, use the gcloud CLI to call the gcloud container clusters upgrade command, passing the --cluster-version flag with the same GKE version that the node pool is already running.

Exercise: Verify intranode visibility

This exercise shows you the steps required to enable intranode visibility andconfirm that it is working for your cluster.

In this exercise, you perform the following steps:

  1. Enable flow logs for the default subnet in the us-central1 region.
  2. Create a single-node cluster with intranode visibility enabled in the us-central1-a zone.
  3. Create two Pods in your cluster.
  4. Send an HTTP request from one Pod to another Pod.
  5. View the flow log entry for the Pod-to-Pod request.

Enable flow logs

  1. Enable flow logs for the default subnet:

    gcloud compute networks subnets update default \
        --region=us-central1 \
        --enable-flow-logs

  2. Verify that the default subnet has flow logs enabled:

    gcloud compute networks subnets describe default \
        --region=us-central1

    The output shows that flow logs are enabled, similar to the following:

    ...
    enableFlowLogs: true
    ...

Create a cluster

  1. Create a single-node cluster with intranode visibility enabled:

    gcloud container clusters create flow-log-test \
        --location=us-central1-a \
        --num-nodes=1 \
        --enable-intra-node-visibility

  2. Get the credentials for your cluster:

    gcloud container clusters get-credentials flow-log-test \
        --location=us-central1-a

Create two Pods

  1. Create a Pod.

    Save the following manifest to a file named pod-1.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-1
    spec:
      containers:
      - name: container-1
        image: google/cloud-sdk:slim
        command:
        - sh
        - -c
        - while true; do sleep 30; done

  2. Apply the manifest to your cluster:

    kubectl apply -f pod-1.yaml

  3. Create a second Pod.

    Save the following manifest to a file named pod-2.yaml:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-2
    spec:
      containers:
      - name: container-2
        image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0

  4. Apply the manifest to your cluster:

    kubectl apply -f pod-2.yaml

  5. View the Pods:

    kubectl get pod pod-1 pod-2 --output wide

    The output shows the IP addresses of your Pods, similar to the following:

    NAME      READY     STATUS    RESTARTS   AGE       IP           ...
    pod-1     1/1       Running   0          1d        10.52.0.13   ...
    pod-2     1/1       Running   0          1d        10.52.0.14   ...

    Note the IP addresses of pod-1 and pod-2.
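As a convenience (not part of the original steps), you can capture each Pod's IP address in a shell variable instead of copying it by hand, using kubectl's JSONPath output:

```shell
# Store each Pod's IP address for use in later steps.
POD_1_IP=$(kubectl get pod pod-1 --output jsonpath='{.status.podIP}')
POD_2_IP=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
echo "pod-1: ${POD_1_IP}  pod-2: ${POD_2_IP}"
```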

Send a request

  1. Get a shell to the container in pod-1:

    kubectl exec -it pod-1 -- sh

  2. In your shell, send a request to pod-2:

    curl -s POD_2_IP_ADDRESS:8080

    Replace POD_2_IP_ADDRESS with the IP address of pod-2.

    The output shows the response from the container running in pod-2:

    Hello, world!
    Version: 2.0.0
    Hostname: pod-2

  3. Type exit to leave the shell and return to your main command-line environment.

View flow log entries

To view a flow log entry, use the following command:

gcloud logging read \
    'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS" AND jsonPayload.connection.dest_ip="POD_2_IP_ADDRESS"'

Replace the following:

  • PROJECT_ID: your project ID.
  • POD_1_IP_ADDRESS: the IP address of pod-1.
  • POD_2_IP_ADDRESS: the IP address of pod-2.

The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.

...
jsonPayload:
  bytes_sent: '0'
  connection:
    dest_ip: 10.56.0.14
    dest_port: 8080
    protocol: 6
    src_ip: 10.56.0.13
    src_port: 35414
...

Clean up

To avoid incurring unwanted charges on your account, perform the following steps to remove the resources you created:

  1. Delete the cluster:

    gcloud container clusters delete -q flow-log-test \
        --location=us-central1-a

  2. Disable flow logs for the default subnet:

    gcloud compute networks subnets update default \
        --region=us-central1 \
        --no-enable-flow-logs


Last updated 2026-02-19 UTC.