PCI DSS compliance on GKE

Last reviewed 2025-03-12 UTC

This guide is intended to help you address concerns unique to Google Kubernetes Engine (GKE) applications when you are implementing customer responsibilities for Payment Card Industry Data Security Standard (PCI DSS) requirements.

Disclaimer: This guide is for informational purposes only. Google does not intend the information or recommendations in this guide to constitute legal or audit advice. Each customer is responsible for independently evaluating their own particular use of the services as appropriate to support its legal and compliance obligations.

Introduction to PCI DSS compliance and GKE

If you handle payment card data, you must secure it—whether it resides in an on-premises database or in the cloud. PCI DSS was developed to encourage and enhance cardholder data security and facilitate the broad adoption of consistent data security measures globally. PCI DSS provides a baseline of technical and operational requirements designed to protect credit card data. PCI DSS applies to all entities involved in payment card processing—including merchants, processors, acquirers, issuers, and service providers. PCI DSS also applies to all other entities that store, process, or transmit cardholder data (CHD) or sensitive authentication data (SAD), or both.

Containerized applications have become popular recently, with many legacy workloads migrating from a virtual machine (VM)–based architecture to a containerized one. Google Kubernetes Engine is a managed, production-ready environment for deploying containerized applications. It brings Google's latest innovations in developer productivity, resource efficiency, automated operations, and open source flexibility to accelerate your time to market.

Compliance is a shared responsibility in the cloud. Google Cloud, including GKE (both Autopilot and Standard modes of operation), adheres to PCI DSS requirements. We outline our responsibilities in our Shared responsibility matrix.

Intended audience

  • Customers who want to bring PCI-compliant workloads to Google Cloud that involve GKE.
  • Developers, security officers, compliance officers, IT administrators, and other employees who are responsible for implementing controls and ensuring compliance with PCI DSS requirements.

Before you begin

For the recommendations that follow, you potentially have to use the following:

  • Google Cloud Organization, Folder, and Project resources
  • Identity and Access Management (IAM)
  • Google Kubernetes Engine
  • Google Cloud VPCs
  • Google Cloud Armor
  • The Cloud Data Loss Prevention API (part of Sensitive Data Protection)
  • Identity-Aware Proxy (IAP)
  • Security Command Center

This guide is intended for those who are familiar with containers and GKE.

Scope

This guide identifies the following requirements from PCI DSS that are unique concerns for GKE and supplies guidance for meeting them. It is written against version 4.0 of the standard. This guide doesn't cover all the requirements in PCI DSS. The information provided in this guide might assist organizations in their pursuit of PCI DSS compliance, but it's not comprehensive advice. Organizations can engage a PCI Qualified Security Assessor (QSA) for formal validation.

PCI DSS goals and requirements

  • Segment your cardholder data environment: Sometimes referred to as requirement 0. Although it's not required for PCI compliance, we recommend this step to keep the PCI scope limited.
  • Build and maintain a secure network and systems:
    1. Install and maintain network security controls.
    2. Apply secure configurations to all system components.
  • Protect account data:
    3. Protect stored account data.
    4. Protect cardholder data with strong cryptography during transmission over open, public networks.
  • Maintain a vulnerability management program:
    5. Protect all systems and networks from malicious software.
    6. Develop and maintain secure systems and software.
  • Implement strong access control measures:
    7. Restrict access to system components and cardholder data by business need to know.
    8. Identify and authenticate access to system components.
    9. Restrict physical access to cardholder data.
  • Regularly monitor and test networks:
    10. Log and monitor all access to system components and cardholder data.
    11. Test security of systems and networks regularly.
  • Maintain an information security policy:
    12. Support information security with organizational policies and programs.

Terminology

This section defines terms used in this guide. For more details, see the PCI DSS glossary.

CHD

cardholder data. At a minimum, consists of the full primary account number (PAN). Cardholder data might also appear in the form of the full PAN plus any of the following:

  • Cardholder name
  • Expiration date or service code
  • Sensitive authentication data (SAD)
CDE

cardholder data environment. The people, processes, and technology that store, process, or transmit cardholder data or sensitive authentication data.

PAN

primary account number. A key piece of cardholder data that you are obligated to protect under PCI DSS. The PAN is generally a 16-digit number that is unique to a payment card (credit and debit) and that identifies the issuer and the cardholder account.

PIN

personal identification number. A numeric password known only to the user and a system; used to authenticate the user to the system.

QSA

qualified security assessor. A person who is certified by the PCI Security Standards Council to perform audits and compliance analysis.

SAD

sensitive authentication data. In PCI compliance, data used by the issuers of cards to authorize transactions. Similar to cardholder data, PCI DSS requires protection of SAD. Additionally, SAD can't be retained by merchants and their payment processors. SAD includes the following:

  • "Track" data from magnetic stripes
  • "Track equivalent data" generated by chip and contactless cards
  • Security validation codes (for example, the 3-4 digit number printed on cards) used for online and card-not-present transactions.
  • PINs and PIN blocks
segmentation

In the context of PCI DSS, the practice of isolating the CDE from the remainder of the entity's network. Segmentation is not a PCI DSS requirement. However, it is strongly recommended as a method that can help to reduce the following:

  • The scope and cost of the PCI DSS assessment
  • The cost and difficulty of implementing and maintaining PCI DSS controls
  • The risk to an organization (reduced by consolidating cardholder data into fewer, more controlled locations)

Segment your cardholder data environment

The cardholder data environment (CDE) comprises people, processes, and technologies that store, process, or transmit cardholder data or sensitive authentication data. In the context of GKE, the CDE also comprises the following:

  • Systems that provide security services (for example, IAM).
  • Systems that facilitate segmentation (for example, projects, folders, firewalls, virtual private clouds (VPCs), and subnets).
  • Application pods and clusters that store, process, or transmit cardholder data. Without adequate segmentation, your entire cloud footprint can come into scope for PCI DSS.

To be considered out of scope for PCI DSS, a system component must be properly isolated from the CDE such that even if the out-of-scope system component were compromised, it wouldn't impact the security of the CDE.

An important prerequisite to reduce the scope of the CDE is a clear understanding of business needs and processes related to the storage, processing, and transmission of cardholder data. Restricting cardholder data to as few locations as possible by eliminating unnecessary data and consolidating necessary data might require you to reengineer long-standing business practices.

You can properly segment your CDE through a number of means on Google Cloud. This section discusses the following means:

  • Logical segmentation by using the resource hierarchy
  • Network segmentation by using VPCs and subnets
  • Service-level segmentation by using VPC Service Controls and Google Cloud Armor
  • Other considerations for any in-scope cluster

Logical segmentation using the resource hierarchy

There are several ways to isolate your CDE within your organizational structure using Google Cloud's resource hierarchy. Google Cloud resources are organized hierarchically. The Organization resource is the root node in the Google Cloud resource hierarchy. Folders and projects fall under the Organization resource. Folders can contain projects and folders. Folders are used to control access to resources in the folder hierarchy through folder-level IAM permissions. They're also used to group similar projects. A project is a trust boundary for all your resources and an IAM enforcement point.

You might group all projects that are in PCI scope within a folder to isolate at the folder level. You might also use one project for all in-scope PCI clusters and applications, or you might create a project and cluster for each in-scope PCI application and use them to organize your Google Cloud resources. In any case, we recommend that you keep your in-scope and out-of-scope workloads in different projects.

Network segmentation using VPC networks and subnets

You can use Virtual Private Cloud (VPC) networks and subnets to provision your network and to group and isolate CDE-related resources. VPC is a logical isolation of a section of a public cloud. VPC networks provide scalable and flexible networking for your Compute Engine virtual machine (VM) instances and for the services that use VM instances, including GKE. For more details, see the VPC overview and refer to the best practices and reference architectures.

Service-level segmentation using VPC Service Controls and Google Cloud Armor

While VPC and subnets provide segmentation and create a perimeter to isolate your CDE, VPC Service Controls augments the security perimeter at layer 7. You can use VPC Service Controls to create a perimeter around your in-scope CDE projects. VPC Service Controls gives you the following controls:

  • Ingress control. Only authorized identities and clients are allowed into your security perimeter.
  • Egress control. Only authorized destinations are allowed for identities and clients within your security perimeter.

You can use Google Cloud Armor to create lists of IP addresses to allow or deny access to your HTTP(S) load balancer at the edge of the Google Cloud network. By examining IP addresses as close as possible to the user and to malicious traffic, you help prevent malicious traffic from consuming resources or entering your VPC networks.

Use VPC Service Controls to define a service perimeter around your in-scope projects. This perimeter governs VM-to-service and service-to-service paths, as well as VPC ingress and egress traffic.

Figure 1. Achieving segmentation using VPC Service Controls

Build and maintain a secure network and systems

Building and maintaining a secure network encompasses requirements 1 and 2 of PCI DSS.

Requirement 1

Install and maintain network security controls to protect cardholder data and traffic into and out of the CDE.

Networking concepts for containers and GKE differ from those for traditional VMs. Pods can reach each other directly, without NAT, even across nodes. This creates a simple network topology that might be surprising if you're used to managing more complex systems. The first step in network security for GKE is to educate yourself on these networking concepts.

Figure 2. Logical layout of a secure Kubernetes cluster

Before diving into individual requirements under Requirement 1, you might want to review the following networking concepts in relation to GKE:

  • Firewall rules. Firewall rules are used to restrict traffic to your nodes. GKE nodes are provisioned as Compute Engine instances and use the same firewall mechanisms as other instances. Within your network, you can use tags to apply these firewall rules to each instance. Each node pool receives its own set of tags that you can use in rules. By default, each instance belonging to a node pool receives a tag that identifies a specific GKE cluster that this node pool is a part of. This tag is used in firewall rules that GKE creates automatically for you. You can add custom tags at either cluster or node pool creation time by using the --tags flag in the Google Cloud CLI (see the example sketch after this list).

  • Network policies. Network policies let you limit network connections between pods, which can help restrict network pivoting and lateral movement inside the cluster in the event of a security issue with a pod. To use network policies, you must enable the feature explicitly when creating the GKE cluster. You can enable it on an existing cluster, but it will cause your cluster nodes to restart. The default behavior is that all pod-to-pod communication is always open. Therefore, if you want to segment your network, you need to enforce pod-level networking policies. In GKE, you can define a network policy by using the Kubernetes Network Policy API or by using the kubectl tool. These pod-level traffic policy rules determine which pods and services can access one another inside your cluster.

  • Namespaces. Namespaces allow for resource segmentation inside your Kubernetes cluster. Kubernetes comes with a default namespace out of the box, but you can create multiple namespaces within your cluster. Namespaces are logically isolated from each other. They provide scope for pods, services, and deployments in the cluster, so that users interacting with one namespace won't see content in another namespace. However, namespaces within the same cluster don't restrict communication between namespaces; this is where network policies come in. For more information on configuring namespaces, see the Namespaces Best Practices blog post.
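
The following is a minimal sketch of the firewall and tagging mechanisms from the gcloud side, assuming a hypothetical cluster named in-scope-cluster, the default network, and a custom node tag pci-cde-nodes; adjust names, zones, and ranges to your environment. It creates the cluster with a custom tag and then allows only Google Cloud load balancer health-check ranges to reach the tagged nodes on port 443.

    # Create a cluster whose nodes carry a custom network tag (hypothetical names).
    gcloud container clusters create in-scope-cluster \
        --zone=us-central1-a \
        --tags=pci-cde-nodes

    # Allow only load balancer and health-check ranges to reach the tagged nodes.
    gcloud compute firewall-rules create allow-lb-to-cde-nodes \
        --network=default \
        --direction=INGRESS \
        --action=ALLOW \
        --rules=tcp:443 \
        --source-ranges=130.211.0.0/22,35.191.0.0/16 \
        --target-tags=pci-cde-nodes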

The following diagram illustrates the preceding concepts in relation to each other and other GKE components such as cluster, node, and pod.

Figure 3. A Kubernetes network policy controlling traffic within a cluster

Requirement 1.1

Processes and mechanisms for installing and maintaining network security controls are defined and understood.

Requirement 1.1.2

Describe groups, roles, and responsibilities for managing network components.

First, as you would with most services on Google Cloud, you need to configure IAM roles in order to set up authorization on GKE. When you've set up your IAM roles, you need to add Kubernetes role-based access control (RBAC) configuration as part of a Kubernetes authorization strategy.

Essentially, all IAM configuration applies to any Google Cloud resources and all clusters within a project. Kubernetes RBAC configuration applies to the resources in each Kubernetes cluster, and enables fine-grained authorization at the namespace level. With GKE, these approaches to authorization work in parallel, with a user's capabilities effectively representing a union of IAM and RBAC roles assigned to them:

  • Use IAM to control groups, roles, and responsibilities for logical management of network components in GKE.
  • Use Kubernetes RBAC to grant granular permissions to network policies within Kubernetes clusters, to control pod-to-pod traffic, and to prevent unauthorized or accidental changes from non-CDE users (see the example sketch after this list).
  • Be able to justify all IAM and RBAC users and permissions. Typically, when QSAs test for controls, they look for a business justification for a sample of IAM and RBAC.
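
As a sketch of the RBAC piece, the following manifest restricts who can change NetworkPolicy objects in a hypothetical payment namespace to a hypothetical Google group; the namespace, group, and verbs are assumptions to adapt to your environment.

    kubectl apply -f - <<'EOF'
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: networkpolicy-editor
      namespace: payment
    rules:
    - apiGroups: ["networking.k8s.io"]
      resources: ["networkpolicies"]
      verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: networkpolicy-editor-binding
      namespace: payment
    subjects:
    - kind: Group
      name: cde-netadmins@example.com
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: networkpolicy-editor
      apiGroup: rbac.authorization.k8s.io
    EOF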

Requirement 1.2

Network security controls (NSCs) are configured and maintained.

First, you configure firewall rules on Compute Engine instances that run your GKE nodes. Firewall rules protect these cluster nodes.

Next, you configure network policies to restrict flows and protect pods in a cluster. A network policy is a specification of how groups of pods are allowed to communicate with each other and with other network endpoints. You can use GKE's network policy enforcement to control the communication between your cluster's pods and services. To further segment your cluster, create multiple namespaces within it. As described earlier, namespaces are logically isolated from each other and provide scope for pods, services, and deployments, but they don't restrict communication between namespaces on their own; that is where network policies come in. For more information on configuring namespaces, see the Namespaces Best Practices blog post.

By default, if no policies exist in a namespace, then all ingress and egress traffic is allowed to and from pods in that namespace. For example, you can create a default isolation policy for a namespace by creating a network policy that selects all pods but doesn't allow any ingress traffic to those pods.
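
The following is a minimal sketch of such a default-deny policy, assuming a hypothetical namespace named payment; every pod in the namespace is selected and no ingress is allowed, so traffic must be explicitly opened by additional policies.

    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: default-deny-ingress
      namespace: payment
    spec:
      podSelector: {}
      policyTypes:
      - Ingress
    EOF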

Requirement 1.2.2

All changes to network connections and to configurations of NSCs are approved and managed in accordance with the change control process defined at Requirement 6.5.1.

To treat your networking configurations and infrastructure as code, you need to establish a continuous integration and continuous delivery (CI/CD) pipeline as part of your change-management and change-control processes.

You can use Cloud Deployment Manager or Terraform templates as part of the CI/CD pipeline to create network policies on your clusters. With Deployment Manager or Terraform, you can treat configuration and infrastructure as code that can reproduce consistent copies of the current production or other environments. Then you are able to write unit tests and other tests to ensure your network changes work as expected. A change control process that includes an approval can be managed through configuration files stored in a version repository.

With Terraform Config Validator, you can define constraints to enforce security and governance policies. By adding Config Validator to your CI/CD pipeline, you can add a step to any workflow. This step validates a Terraform plan and rejects it if violations are found.
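
A sketch of such a pipeline step follows; it assumes a policy library checked out at ./policy-library and uses the gcloud-based Terraform validation tooling, whose exact command and flags might differ in your gcloud release.

    # Plan the change and export the plan as JSON.
    terraform init
    terraform plan -out=tfplan
    terraform show -json tfplan > tfplan.json

    # Reject the change if the planned resources violate any constraint.
    gcloud beta terraform vet tfplan.json \
        --policy-library=./policy-library || exit 1

    # Apply only if validation passed.
    terraform apply tfplan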

Requirement 1.2.5

All services, protocols, and ports allowed are identified, approved, and have a defined business need.

For strong ingress controls for your GKE clusters, you can use authorized networks to restrict the IP ranges that can reach your cluster's control plane. GKE uses both Transport Layer Security (TLS) and authentication to provide secure access to your cluster control plane endpoint from the public internet. This access gives you the flexibility to administer your cluster from anywhere. By using authorized networks, you can further restrict access to specified sets of IP addresses.
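
As a sketch, the following command restricts control plane access on a hypothetical cluster to a single corporate CIDR range; the cluster name, zone, and range are assumptions.

    gcloud container clusters update in-scope-cluster \
        --zone=us-central1-a \
        --enable-master-authorized-networks \
        --master-authorized-networks=203.0.113.0/24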

You can use Google Cloud Armor to create IP deny lists and allow lists and security policies for GKE-hosted applications. In a GKE cluster, incoming traffic is handled by HTTP(S) Load Balancing, which is a component of Cloud Load Balancing. Typically, the HTTP(S) load balancer is configured by the GKE ingress controller, which gets configuration information from a Kubernetes Ingress object. For more information, see how to configure Google Cloud Armor policies with GKE.
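
The following sketch creates a Google Cloud Armor policy, adds a deny rule for a suspect range, and attaches the policy to a GKE Service through a BackendConfig; all names, namespaces, and ranges are hypothetical.

    # Create an edge security policy and deny a suspect source range.
    gcloud compute security-policies create cde-edge-policy \
        --description="Edge policy for PCI-facing services"
    gcloud compute security-policies rules create 1000 \
        --security-policy=cde-edge-policy \
        --src-ip-ranges=198.51.100.0/24 \
        --action=deny-403

    # Attach the policy to the load balancer backend for a Service.
    kubectl apply -f - <<'EOF'
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: cde-backendconfig
      namespace: payment
    spec:
      securityPolicy:
        name: cde-edge-policy
    EOF
    kubectl annotate service payment-frontend -n payment \
        cloud.google.com/backend-config='{"default": "cde-backendconfig"}'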

Requirement 1.3

Network access to and from the cardholder data environment is restricted.

To keep sensitive data private, you can configure private communications between GKE clusters inside your VPC networks and on-premises hybrid deployments by using VPC Service Controls and Private Google Access.

Requirement 1.3.1

Inbound traffic to the CDE is restricted as follows:

  • To only traffic that is necessary.
  • All other traffic is specifically denied.

Consider implementing a Cloud NAT setup with GKE to limit inbound internet traffic to only that cluster. You can set up a private cluster for the non-public-facing clusters in your CDE. In a private cluster, the nodes have internal RFC 1918 IP addresses only, which ensures that their workloads are isolated from the public internet.
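
The following sketch creates a private, VPC-native cluster and a Cloud NAT gateway for controlled outbound-only internet access; the names, zone, and CIDR ranges are hypothetical.

    # Nodes get internal IP addresses only; the control plane uses a private range.
    gcloud container clusters create private-cde-cluster \
        --zone=us-central1-a \
        --enable-ip-alias \
        --enable-private-nodes \
        --master-ipv4-cidr=172.16.0.32/28 \
        --enable-master-authorized-networks \
        --master-authorized-networks=203.0.113.0/24

    # Cloud NAT gives the private nodes outbound access without external IPs.
    gcloud compute routers create cde-router \
        --network=default --region=us-central1
    gcloud compute routers nats create cde-nat \
        --router=cde-router --region=us-central1 \
        --auto-allocate-nat-external-ips \
        --nat-all-subnet-ip-ranges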

Requirement 1.4

Network connections between trusted and untrusted networks are controlled.

You can address this requirement using the same methods listed for Requirement 1.3.

Requirement 1.4.3

Anti-spoofing measures are implemented to detect and block forged source IP addresses from entering the trusted network.

You implement anti-spoofing measures by using alias IP addresses on GKE pods and clusters to detect and block forged source IP addresses from entering the network. A cluster that uses alias IP ranges is called a VPC-native cluster.

Requirement 1.4.5

The disclosure of internal IP addresses and routing information is limited to only authorized parties.

You can use a GKE IP masquerade agent to do network address translation (NAT) for many-to-one IP address translations on a cluster. Masquerading masks multiple source IP addresses behind a single address.
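
A sketch of the agent's configuration follows; the ConfigMap name and namespace are what the GKE ip-masq-agent expects, while the CIDR ranges are placeholder examples you should replace with the ranges that apply to your VPC.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ip-masq-agent
      namespace: kube-system
    data:
      config: |
        nonMasqueradeCIDRs:
        - 10.0.0.0/8
        - 172.16.0.0/12
        resyncInterval: 60s
    EOF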

Requirement 2

Apply secure configurations to all system components.

Requirement 2 specifies how to harden security parameters by removing defaults and vendor-supplied credentials. Hardening your cluster is a customer responsibility.

Requirement 2.2

System components are configured and managed securely.

Ensure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards. Sources of industry-accepted system hardening standards include, but are not limited to, the Center for Internet Security (CIS), the International Organization for Standardization (ISO), SANS, and the National Institute of Standards and Technology (NIST).

Requirement 2.2.4

Only necessary services, protocols, daemons, and functions are enabled, and all unnecessary functionality is removed or disabled.

Requirement 2.2.5

If any insecure services, protocols, or daemons are present:
  • Business justification is documented.
  • Additional security features are documented and implemented that reduce the risk of using insecure services, protocols, or daemons.

Requirement 2.2.6

System security parameters are configured to prevent misuse.

Pre-deployment

Before you move containers onto GKE, we recommend the following:

  • Start with a managed container base image that is built, maintained, and vulnerability-checked by a trusted source. Consider creating a set of "known good" or "golden" base images that your developers can use. A more restrictive option is to use a distroless image or a scratch base image.
  • Use Artifact Analysis to scan your container images for vulnerabilities.
  • Establish an internal DevOps/SecOps policy to include only approved, trusted libraries and binaries in containers. For a sketch of scanning images as part of a build pipeline, see the example after this list.
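
The following is a sketch of such a pipeline gate using on-demand scanning; the image path is hypothetical, and it assumes the On-Demand Scanning API is enabled in the project.

    # Scan the image that the build just produced.
    SCAN=$(gcloud artifacts docker images scan \
        us-docker.pkg.dev/my-project/my-repo/payment-api:1.2.3 \
        --format='value(response.scan)')

    # Fail the build if any critical vulnerability is reported.
    gcloud artifacts docker images list-vulnerabilities "$SCAN" \
        --format='value(vulnerability.effectiveSeverity)' | grep -q CRITICAL && exit 1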

At setup

During setup, we recommend the following (a combined example sketch follows this list):

  • Use the default Container-Optimized OS as the node image for GKE. Container-Optimized OS is based on Chromium OS and is optimized for node security.
  • Enable auto-upgrading nodes for the clusters that run your applications. This feature automatically upgrades the node to the Kubernetes version that's running in the managed control plane, providing better stability and security.
  • Enable auto-repairing nodes. When this feature is enabled, GKE periodically checks and uses the node's health status to determine if a node needs to be repaired. If a node requires repair, that node is drained and a new node is created and added to the cluster.
  • Turn on Cloud Monitoring and Cloud Logging for visibility of all events, including security events and node health status. Create Cloud Monitoring alert policies to get notified if a security incident occurs.
  • Apply least-privilege service accounts for GKE nodes.
  • Review and apply (where applicable) the GKE section in the Google Cloud CIS Benchmark guide. Kubernetes audit logging is already enabled by default, and logs for both requests to kubectl and the GKE API are written to Cloud Audit Logs.
  • Configure audit logging.
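
The following is a combined sketch of several of these recommendations applied to a node pool; the cluster, pool, zone, and service account names are hypothetical.

    gcloud container node-pools create cde-pool \
        --cluster=in-scope-cluster \
        --zone=us-central1-a \
        --image-type=COS_CONTAINERD \
        --enable-autoupgrade \
        --enable-autorepair \
        --service-account=gke-cde-nodes@my-project.iam.gserviceaccount.com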

Protect account data

Protecting cardholder data encompasses requirements 3 and 4 of PCI DSS.

Requirement 3

Protect stored account data.

Requirement 3 of PCI DSS stipulates that protection techniques such as encryption, truncation, masking, and hashing are critical components of cardholder data protection. If an intruder circumvents other security controls and gains access to encrypted data, without the proper cryptographic keys, the data is unreadable and unusable to that person.

You might also consider other methods of protecting stored data as potential risk-mitigation opportunities. For example, methods for minimizing risk include not storing cardholder data unless absolutely necessary, truncating cardholder data if the full PAN is not needed, and not sending unprotected PANs using end-user messaging technologies, such as email and instant messaging.

Examples of systems where CHD might persist as part of your payment processing flows when running on Google Cloud are:

  • Cloud Storage buckets
  • BigQuery instances
  • Datastore
  • Cloud SQL

Be aware that CHD might be inadvertently stored in email or customer service communication logs. It's prudent to use Sensitive Data Protection to filter these data streams so that you limit your in-scope environment to the payment processing systems.

Note that on Google Cloud, data is encrypted at rest by default, and encrypted in transit by default when it traverses physical boundaries. No additional configuration is necessary to enable these protections.

Requirement 3.5

Primary account number (PAN) is secured wherever it is stored.

One mechanism to render PAN data unreadable is tokenization. For more information, see the solution guide on tokenizing sensitive cardholder data for PCI DSS.

You can use the DLP API to scan for, discover, and report cardholder data. Sensitive Data Protection has built-in support for scanning and classifying 12–19-digit PAN data in Cloud Storage, BigQuery, and Datastore. It also has a streaming content API to enable support for additional data sources, custom workloads, and applications. You can also use the DLP API to truncate (redact) or hash the data.
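
As a sketch, the following request asks the DLP API to replace any detected PAN in a text snippet with the infoType name; the project ID and input are placeholders, and a pipeline would typically call this from a service rather than curl.

    curl -s -X POST \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      -H "Content-Type: application/json" \
      "https://dlp.googleapis.com/v2/projects/my-project/content:deidentify" \
      -d '{
        "item": {"value": "Customer card: 4111 1111 1111 1111"},
        "inspectConfig": {"infoTypes": [{"name": "CREDIT_CARD_NUMBER"}]},
        "deidentifyConfig": {
          "infoTypeTransformations": {
            "transformations": [
              {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
            ]
          }
        }
      }'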

Requirement 3.6

Cryptographic keys used to protect stored account data are secured.

Cloud Key Management Service (KMS) is a managed storage system for cryptographic keys. It can generate, use, rotate, and destroy cryptographic keys. Although Cloud KMS does not directly store secrets like cardholder data, it can be used to encrypt such data.

Secrets in the context of Kubernetes are Kubernetes secret objects that let you store and manage sensitive information, such as passwords, tokens, and keys.

By default, Google Cloud encrypts customer content stored at rest. GKE handles and manages this default encryption for you without any additional action on your part. Application-layer secrets encryption provides an additional layer of security for sensitive data such as secrets. Using this functionality, you can provide a key that you manage in Cloud KMS to encrypt data at the application layer. This protects against attackers who gain access to a copy of the Kubernetes configuration storage instance of your cluster.
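
The following sketch creates a Cloud KMS key and points the cluster at it for application-layer secrets encryption; the key ring, key, cluster, and project names are hypothetical.

    gcloud kms keyrings create gke-secrets --location=us-central1
    gcloud kms keys create k8s-secrets-key \
        --location=us-central1 \
        --keyring=gke-secrets \
        --purpose=encryption

    gcloud container clusters update in-scope-cluster \
        --zone=us-central1-a \
        --database-encryption-key=projects/my-project/locations/us-central1/keyRings/gke-secrets/cryptoKeys/k8s-secrets-key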

Figure 4. Application-layer secrets with GKE

Requirement 4

Protect cardholder data with strong cryptography during transmission over open, public networks.

The in-scope data must be encrypted during transmission over networks that are easily accessed by malicious individuals, for example, public networks.

Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio scalably manages authentication, authorization, and encryption of traffic between microservices. It's a platform that includes APIs that let you integrate into any logging platform, telemetry, or policy system. Istio's feature set lets you efficiently run a distributed microservice architecture and provides a uniform way to secure, connect, and monitor microservices.

Requirement 4.1

Processes and mechanisms for protecting cardholder data with strong cryptography during transmission over open, public networks are defined and documented.

You can use Istio to create a network of deployed services—with load balancing, service-to-service authentication, and monitoring. You can also use it to deliver secure service-to-service communication in a cluster—with strong identity-based authentication and authorization based on mutual TLS. Mutual TLS (mTLS) is a TLS handshake performed twice, establishing the same level of trust in both directions (as opposed to one-directional client-server trust).

Figure 5. Secure service-to-service communication using Istio and mTLS

Istio lets you deploy TLS certificates to each of the GKE pods within an application. Services running on the pod can use mTLS to strongly identify their peer identities. Service-to-service communication is tunneled through client-side and server-side Envoy proxies. Envoy uses SPIFFE IDs to establish mTLS connections between services. For information on how to deploy Istio on GKE, see the GKE documentation. And for information on supported TLS versions, see the Istio Traffic Management reference. Use TLS version 1.2 and later.
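
A minimal sketch of enforcing this with Istio follows, assuming Istio is installed and the in-scope workloads run in a hypothetical payment namespace; the policy requires mTLS for all service-to-service traffic in that namespace.

    kubectl apply -f - <<'EOF'
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: require-mtls
      namespace: payment
    spec:
      mtls:
        mode: STRICT
    EOF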

If your application is exposed to the internet, use GKE HTTP(S) Load Balancing with ingress routing that is set to use HTTP(S). HTTP(S) Load Balancing, configured by an Ingress object, includes the following features:

  • Flexible configuration for services. An Ingress object defines how traffic reaches your services and how the traffic is routed to your application. In addition, an Ingress can provide a single IP address for multiple services in your cluster.
  • Integration with Google Cloud network services. An Ingress object can configure Google Cloud features such as Google-managed SSL certificates (beta), Google Cloud Armor, Cloud CDN, and Identity-Aware Proxy.
  • Support for multiple TLS certificates. An Ingress object can specify the use of multiple TLS certificates for request termination.

When you create an Ingress object, the GKE ingress controller creates a Cloud HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
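
The following is a sketch of an Ingress that terminates HTTPS with a Google-managed certificate in front of a hypothetical payment-frontend Service; the namespace, domain, and Service details are assumptions.

    kubectl apply -f - <<'EOF'
    apiVersion: networking.gke.io/v1
    kind: ManagedCertificate
    metadata:
      name: payment-cert
      namespace: payment
    spec:
      domains:
      - payments.example.com
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: payment-ingress
      namespace: payment
      annotations:
        networking.gke.io/managed-certificates: payment-cert
        kubernetes.io/ingress.class: gce
    spec:
      defaultBackend:
        service:
          name: payment-frontend
          port:
            number: 443
    EOF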

Maintain a vulnerability management program

Maintaining a vulnerability management program encompasses requirements 5 and 6 of PCI DSS.

Requirement 5

Protect all systems and networks from malicious software.

Requirement 5 of PCI DSS stipulates that antivirus software must be used on all systems commonly affected by malware to protect systems from current and evolving malicious software threats—and containers are no exception.

Requirement 5.2

Malicious software (malware) is prevented, or detected and addressed.

You must implement vulnerability management programs for your container images.

We recommend the following actions:

  • Regularly check and apply up-to-date security patches on the containers.
  • Perform regular vulnerability scanning against containerized applications and binaries/libraries.
  • Scan images as part of the build pipeline.
  • Subscribe to a vulnerability intelligence service to receive up-to-date vulnerability information relevant to the environment and libraries used in the containers.

Google Cloud works with various container security solutions providers to improve security posture within customers' Google Cloud deployments. We recommend leveraging validated security solutions and technologies to increase depth of defense in your GKE environment. For the latest Google Cloud-validated security partners list, see Security Partners.

Requirement 5.2.2

The deployed anti-malware solution(s):

  • Detects all known types of malware.
  • Removes, blocks, or contains all known types of malware.

Requirement 5.2.3

Any system components that are not at risk for malware are evaluated periodically to include the following:

  • A documented list of all system components not at risk for malware.
  • Identification and evaluation of evolving malware threats for those system components.
  • Confirmation whether such system components continue to not require anti-malware protection.

There are many solutions available to perform malware scans, but PCI DSS recognizes that not all systems are equally likely to be vulnerable. It's common for merchants to declare their Linux servers, mainframes, and similar machines as not "commonly affected by malicious software" and therefore exempt from 5.2.2. In that case, 5.2.3 applies, and you must implement a system for periodic threat evaluations.

Keep in mind that these rules apply to both nodes and pods within a GKE cluster.

Requirement 5.3

Anti-malware mechanisms and processes are active, maintained, and monitored.

Requirements 5.2, 5.3, and 11.5 call for antivirus scans and file integrity monitoring (FIM) on any in-scope host. We recommend implementing a solution where all nodes can be scanned by a trusted agent within the cluster or where each node has a scanner that reports up to a single management endpoint.

For more information, see the security overview for GKE and the security overview for Container-Optimized OS.

A common solution to both the antivirus and FIM requirements is to lock down your container so that only specific allowed folders have write access. To do this, you run your containers as a non-root user and use file system permissions to prevent write access to all but the working directories within the container file system. Disallow privilege escalation to avoid circumvention of the file system rules.
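
A sketch of such a locked-down pod spec follows; the namespace, image, user ID, and mount path are hypothetical, and the writable emptyDir volume stands in for whatever working directory your application actually needs.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: payment-worker
      namespace: payment
    spec:
      containers:
      - name: worker
        image: us-docker.pkg.dev/my-project/my-repo/payment-worker:1.0.0
        securityContext:
          runAsNonRoot: true
          runAsUser: 10001
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: workdir
          mountPath: /var/run/app
      volumes:
      - name: workdir
        emptyDir: {}
    EOF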

Requirement 6

Develop and maintain secure systems and software.

Requirement 6 of PCI DSS stipulates that you establish a strong software development lifecycle where security is built in at every step of software development.

Requirement 6.2

Bespoke and custom software are developed securely.

Requirement 6.2.1

Bespoke and custom software are developed securely, as follows:

  • Based on industry standards and/or best practices for secure development.
  • In accordance with PCI DSS (for example, secure authentication and logging).
  • Incorporating consideration of information security issues during each stage of the software development lifecycle.

You can use Binary Authorization to help ensure that only trusted containers are deployed to GKE. If you want to enable only images authorized by one or more specific attestors, you can configure Binary Authorization to enforce a policy with rules that require attestations based on vulnerability scan results. You can also write policies that require one or more trusted parties (called "attestors") to approve of an image before it can be deployed. For a multi-stage deployment pipeline where images progress from development to testing to production clusters, you can use attestors to ensure that all required processes have completed before software moves to the next stage.

At deployment time, Binary Authorization enforces your policy by checking that the container image has passed all required constraints—including that all required attestors have verified that the image is ready for deployment. If the image passes, the service allows it to be deployed. Otherwise, deployment is blocked and the image can't be deployed until it's compliant.
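
The following sketch imports a policy that requires an attestation from a hypothetical attestor and enforces Binary Authorization on a hypothetical cluster; the project, attestor, and cluster names are assumptions, and the exact enforcement flag can vary between gcloud releases.

    cat > /tmp/binauthz-policy.yaml <<'EOF'
    defaultAdmissionRule:
      evaluationMode: REQUIRE_ATTESTATION
      enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
      requireAttestationsBy:
      - projects/my-project/attestors/qa-attestor
    globalPolicyEvaluationMode: ENABLE
    EOF

    gcloud container binauthz policy import /tmp/binauthz-policy.yaml

    gcloud container clusters update in-scope-cluster \
        --zone=us-central1-a \
        --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE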

Figure 6. Using Binary Authorization to enforce a policy that requires only trusted images to be applied to a GKE cluster

For more information on Binary Authorization, see Set up for GKE.

In an emergency, you can bypass a Binary Authorization policy by using the breakglass workflow. All breakglass incidents are recorded in Cloud Audit Logs.

GKE Sandbox reduces the need for the container to interact directly with the host, shrinking the attack surface for host compromise, and restricting the movement of malicious actors.

Requirement 6.3

Security vulnerabilities are identified and addressed.

Requirement 6.3.1

Security vulnerabilities are identified andmanaged as follows:

  • New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national computer emergency response teams (CERTs).
  • Vulnerabilities are assigned a risk ranking based on industry best practices and consideration of potential impact.
  • Risk rankings identify, at a minimum, all vulnerabilities considered to be high-risk or critical to the environment.
  • Vulnerabilities for bespoke and custom software, and third-party software (for example, operating systems and databases), are covered.

Security in the cloud is a shared responsibility between the cloud provider and the customer.

In GKE, Google manages the control plane, which includes the primary VMs, the API server, and other components running on those VMs, as well as the etcd database. This includes upgrades and patching, scaling, and repairs, all backed by a service-level objective (SLO). For the nodes' operating system, such as Container-Optimized OS or Ubuntu, GKE promptly makes any patches to these images available. If you have auto-upgrade enabled, these patches are automatically deployed. (This is the base layer of your container—it's not the same as the operating system running in your containers.)

For more information on the GKE shared responsibility model, see Exploring container security: the shared responsibility model in GKE.

Google provides several security services to help build security into your CI/CD pipeline. To identify vulnerabilities in your container images, you can use Google Artifact Analysis Vulnerability Scanning. When a container image is pushed to Google Container Registry (GCR), vulnerability scanning automatically scans images for known vulnerabilities and exposures from known CVE sources. Vulnerabilities are assigned severity levels (critical, high, medium, low, and minimal) based on CVSS scores.

Requirement 6.4

Public-facing web applications are protected against attacks.

Web Security Scanner lets you scan publicly facing App Engine, Compute Engine, and GKE web applications for common vulnerabilities ranging from cross-site scripting and misconfigurations to vulnerable resources. Scans can be performed on demand and scheduled from the Google Cloud console. Using the Security Scanner APIs, you can automate the scan as part of your security test suite in your application build pipeline.

Implement strong access control measures

Implementing strong access control measures encompasses requirements 7, 8, and 9 of PCI DSS.

Requirement 7

Restrict access to system components and cardholder data by business need to know.

Requirement 7 focuses on least privilege or need to know. PCI DSS defines these as granting access to the least amount of data and providing the fewest privileges that are required in order to perform a job.

Requirement 7.2

Access to system components and data is appropriately defined and assigned.

Figure 7. Employing IAM and RBAC to provide layers of security

IAM and Kubernetes role-based access control (RBAC) work together to provide fine-grained access control to your GKE environment. IAM is used to manage user access and permissions of Google Cloud resources in your CDE project. In GKE, you can also use IAM to manage the access and actions that users and service accounts can perform in your clusters, such as creating and deleting clusters.

Kubernetes RBAC lets you configure fine-grained sets of permissions that define how a given Google Cloud user, Google Cloud service account, or group of users (Google Groups) can interact with any Kubernetes object in your cluster, or in a specific namespace of your cluster. Examples of RBAC permissions include editing deployments or configmaps, deleting pods, or viewing logs from a pod. You grant users or services limited IAM permissions, such as Google Kubernetes Engine Cluster Viewer or custom roles, and then apply Kubernetes RBAC RoleBindings as appropriate.
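
As a sketch of this layering, the following grants a hypothetical developer group read-only cluster access through IAM and then scopes their Kubernetes permissions to a single namespace with a RoleBinding; the project, group, and namespace are assumptions.

    # IAM: let the group see clusters and fetch credentials.
    gcloud projects add-iam-policy-binding my-project \
        --member="group:cde-developers@example.com" \
        --role="roles/container.clusterViewer"

    # RBAC: give the same group read access to objects in one namespace only.
    kubectl create rolebinding cde-developers-view \
        --clusterrole=view \
        --group=cde-developers@example.com \
        --namespace=payment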

Identity-Aware Proxy (IAP) can be integrated through ingress for GKE to control application-level access for employees or people who require access to your PCI applications.

Additionally, you can use Organization policies to restrict the APIs and services that are available within a project.

Requirement 7.2.2

Access is assigned to users, including privileged users, based on:

  • Job classification and function.
  • Least privileges necessary to perform job responsibilities.

Along with making sure users and service accounts adhere to the principle of least privilege, containers should too. A best practice when running a container is to run the process with a non-root user. You can accomplish and enforce this practice by using the PodSecurity admission controller.

PodSecurity is a Kubernetes admission controller that lets you apply Pod Security Standards to Pods running on your GKE clusters. Pod Security Standards are predefined security policies that cover the high-level needs of Pod security in Kubernetes. These policies range from being highly permissive to highly restrictive. PodSecurity replaces the former PodSecurityPolicy admission controller that was removed in Kubernetes v1.25. Instructions are available for migrating from PodSecurityPolicy to the PodSecurity admission controller.
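
A minimal sketch of applying the restricted Pod Security Standard to a hypothetical CDE namespace follows; among other controls, this rejects pods that run as root or allow privilege escalation.

    kubectl label namespace payment \
        pod-security.kubernetes.io/enforce=restricted \
        pod-security.kubernetes.io/enforce-version=latest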

Requirement 8

Identify users and authenticate access to system components

Requirement 8 specifies that a unique ID must be assigned to each person who has access to in-scope PCI systems to ensure that each individual is uniquely accountable for their actions.

Requirement 8.2

User identification and related accounts for users and administrators are strictly managed throughout an account's lifecycle.

Requirement 8.2.1

All users are assigned a unique ID before access to system components or cardholder data is allowed.

Requirement 8.2.5

Access for terminated users is immediately revoked.

Both IAM and Kubernetes RBAC can be used to control access to your GKE cluster, and in both cases you can grant permissions to a user. We recommend that the users tie back to your existing identity system, so that you can manage user accounts and policies in one location.

Requirement 8.3

Strong authentication for users and administrators is established and managed.

Requirement 8.3.1

All user access to system components for users and administrators is authenticated by using at least one of the following authentication factors:
  • Something you know, such as a password or passphrase.
  • Something you have, such as a token device or smart card.
  • Something you are, such as a biometric element.

Certificates are bound to a user's identity when they authenticate to kubectl. All GKE clusters are configured to accept Google Cloud user and service account identities, by validating the credentials and retrieving the email address associated with the user or service account identity. As a result, the credentials for those accounts must include the userinfo.email OAuth scope in order to successfully authenticate.

Requirement 9

Restrict physical access to cardholder data.

Google is responsible for physical security controls on all Google data centers underlying Google Cloud.

Regularly monitor and test networks

Regularly monitoring and testing networks encompasses requirements 10 and 11 of PCI DSS.

Requirement 10

Log and monitor all access to system components and cardholder data.

Requirement 10.2

Audit logs are implemented to support the detection of anomalies and suspicious activity, and the forensic analysis of events.

Kubernetes clusters have Kubernetes audit logging enabled by default, which keeps a chronological record of calls that have been made to the Kubernetes API server. Kubernetes audit log entries are useful for investigating suspicious API requests, for collecting statistics, or for creating monitoring alerts for unwanted API calls.

GKE clusters integrate a default configuration for GKE audit logging with Cloud Audit Logs and Logging. You can see Kubernetes audit log entries in your Google Cloud project.

In addition to entries written by Kubernetes, your project's audit logs have entries written by GKE.

To differentiate your CDE and non-CDE workloads, we recommend that you add labels to your GKE pods that will percolate into metrics and logs emitted from those workloads.
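
The following sketch labels the workloads and then pulls recent audit log entries for the cluster from Cloud Logging; the label, cluster, and project names are hypothetical, and you would normally refine the filter further.

    # Label the in-scope workloads so that logs and metrics carry the label.
    kubectl label pods -n payment --all environment=cde

    # Read recent audit log entries for the cluster.
    gcloud logging read \
        'logName:"cloudaudit.googleapis.com" AND resource.type="k8s_cluster" AND resource.labels.cluster_name="in-scope-cluster"' \
        --project=my-project --limit=20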

Requirement 10.2.2

Audit logs record the following details for each auditable event:
  • User identification
  • Type of event
  • Date and time
  • Success or failure indication
  • Origination of event
  • Identity or name of affected data, system component, resource, or service (for example, name and protocol)

Every audit log entry in Logging is an object of type LogEntry that contains the following fields:

  • A payload, which is of the protoPayload type. The payload of each audit log entry is an object of type AuditLog. You can find the user identity in the AuthenticationInfo field of AuditLog objects.
  • The specific event, which you can find in the methodName field of AuditLog.
  • A timestamp.
  • The event status, which you can find in the response objects in the AuditLog object.
  • The operation request, which you can find in the request and requestMetadata objects in the AuditLog object.
  • The service to be performed, which you can find in the AuditData object in serviceData.

Requirement 11

Test security of systems and networks regularly.

Requirement 11.3

External and internal vulnerabilities are regularly identified, prioritized, and addressed.

Requirement 11.3.1

Internal vulnerability scans are performed as follows:
  • At least once every three months.
  • High-risk and critical vulnerabilities (per the entity's vulnerability risk rankings defined at Requirement 6.3.1) are resolved.
  • Rescans are performed that confirm all high-risk and critical vulnerabilities (as noted above) have been resolved.
  • The scan tool is kept up to date with the latest vulnerability information.
  • Scans are performed by qualified personnel and organizational independence of the tester exists.

Artifact Analysis vulnerability scanning performs the following types of vulnerability scanning for the images in Container Registry:

  • Initial scanning. When you first activate the Artifact Analysis API, it scans your images in Container Registry and extracts package manager, image basis, and vulnerability occurrences for the images.

  • Incremental scanning. Artifact Analysis scans new images when they're uploaded to Container Registry.

  • Continuous analysis. As Artifact Analysis receives new and updated vulnerability information from vulnerability sources, it reruns analysis of containers to keep the list of vulnerability occurrences for already scanned images up to date.

Requirement 11.5

Network intrusions and unexpected file changes are detected and responded to.

Requirement 11.5.1

Intrusion-detection and/or intrusion-prevention techniques are used to detect and/or prevent intrusions into the network as follows:
  • All traffic is monitored at the perimeter of the CDE.
  • All traffic is monitored at critical points in the CDE.
  • Personnel are alerted to suspected compromises.
  • All intrusion-detection and prevention engines, baselines, and signatures are kept up to date.

Google Cloud Packet Mirroring can be used with Cloud IDS to detect network intrusions. Google Cloud packet mirroring forwards all network traffic from your Compute Engine VMs or Google Cloud clusters to a designated address. Cloud IDS can consume this mirrored traffic to detect a wide range of threats, including exploit attempts, port scans, buffer overflows, protocol fragmentation, command and control (C2) traffic, and malware.
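
The following is a sketch of wiring these together: create a Cloud IDS endpoint, then mirror subnet traffic to the forwarding rule that the endpoint exposes. The names are hypothetical, and you would obtain the collector forwarding rule from the endpoint's description before creating the mirroring policy.

    gcloud ids endpoints create cde-ids-endpoint \
        --network=default \
        --zone=us-central1-a \
        --severity=INFORMATIONAL

    # Replace CLOUD_IDS_FORWARDING_RULE with the endpoint's forwarding rule.
    gcloud compute packet-mirrorings create cde-mirroring \
        --region=us-central1 \
        --network=default \
        --mirrored-subnets=cde-subnet \
        --collector-ilb=CLOUD_IDS_FORWARDING_RULE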

Security Command Center gives you centralized visibility into the security state of Google Cloud services (including GKE) and assets across your whole organization, which makes it easier to prevent, detect, and respond to threats. By using Security Command Center, you can see when high-risk threats such as malware, cryptomining, unauthorized access to Google Cloud resources, outgoing DDoS attacks, port scanning, and brute-force SSH have been detected based on your Cloud Logging logs.

Maintain an information security policy

A strong security policy sets the security tone and informs people what is expected of them. In this case, "people" refers to full-time and part-time employees, temporary employees, contractors, and consultants who have access to your CDE.

Requirement 12

Support information security with organizational policies and programs.

For information about requirement 12, see the Google Cloud PCI Shared Responsibility Matrix.

Cleaning up

If you used any resources while following this article—for example, if you started new VMs or used the Terraform scripts—you can avoid incurring charges to your Google Cloud account by deleting the project where you used those resources.

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.
  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

What's next

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-03-12 UTC.