Designing networks for migrating enterprise workloads: Architectural approaches

This document introduces a series that describes networking and security architectures for enterprises that are migrating data center workloads to Google Cloud. These architectures emphasize advanced connectivity, zero-trust security principles, and manageability across a hybrid environment.

As described in an accompanying document, Architectures for Protecting Cloud Data Planes, enterprises deploy a spectrum of architectures that factor in connectivity and security needs in the cloud. We classify these architectures into three distinct architectural patterns: lift-and-shift, hybrid services, and zero-trust distributed. The current document considers different security approaches, depending on which architecture an enterprise has chosen. It also describes how to realize those approaches using the building blocks provided by Google Cloud. You should use this security guidance in conjunction with other architectural guidance covering reliability, availability, scale, performance, and governance.

This document is designed to help systems architects, network administrators, and security administrators who are planning to migrate on-premises workloads to the cloud. It assumes the following:

  • You are familiar with data center networking and security concepts.
  • You have existing workloads in your on-premises data center and are familiar with what they do and who their users are.
  • You have at least some workloads that you plan to migrate.
  • You are generally familiar with the concepts described in Architectures for Protecting Cloud Data Planes.

The series consists of the following documents:

This document summarizes the three primary architectural patterns and introduces the resource building blocks that you can use to create your infrastructure. Finally, it describes how to assemble the building blocks into a series of reference architectures that match the patterns. You can use these reference architectures to guide your own architecture.

This document mentions virtual machines (VMs) as examples of workload resources. The information applies to other resources that use VPC networks, like Cloud SQL instances and Google Kubernetes Engine nodes.

Overview of architectural patterns

Typically, network engineers have focused on building the physical networking infrastructure and security infrastructure in on-premises data centers.

The journey to the cloud has changed this approach because cloud networking constructs are software-defined. In the cloud, application owners have limited control of the underlying infrastructure stack. They need a model that has a secure perimeter and provides isolation for their workloads.

In this series, we consider three common architectural patterns. These patterns build on one another, and they can be seen as a spectrum rather than a strict choice.

Lift-and-shift pattern

In the lift-and-shift architectural pattern, enterprise application owners migrate their workloads to the cloud without refactoring those workloads. Network and security engineers use Layer 3 and Layer 4 controls to provide protection using a combination of network virtual appliances that mimic on-premises physical devices and cloud firewall rules in the VPC network. Workload owners deploy their services in VPC networks.

Hybrid services pattern

Workloads that are built using lift-and-shift might need access to cloud services such as BigQuery or Cloud SQL. Typically, access to such cloud services is at Layer 4 and Layer 7. In this context, isolation and security cannot be done strictly at Layer 3. Therefore, service networking and VPC Service Controls are used to provide connectivity and security, based on the identities of the service that's being accessed and the service that's requesting access. In this model, it's possible to express rich access-control policies.

Zero-trust distributed pattern

In a zero-trust architecture, enterprise applications extend security enforcement beyond perimeter controls. Inside the perimeter, workloads can communicate with other workloads only if their IAM identity has specific permission, which is denied by default. In a zero-trust distributed architecture, trust is identity-based and enforced for each application. Workloads are built as microservices that have centrally issued identities. That way, services can validate their callers and make policy-based decisions for each request about whether that access is acceptable. This architecture is often implemented using distributed proxies (a service mesh) instead of centralized gateways.

Enterprises can enforce zero-trust access from users and devices to enterprise applications by configuring Identity-Aware Proxy (IAP). IAP provides identity- and context-based controls for user traffic from the internet or intranet.

Combining patterns

Enterprises that are building or migrating their business applications to the cloud usually use a combination of all three architectural patterns.

Google Cloud offers a portfolio of products and services that serve as building blocks to implement the cloud data plane that powers the architectural patterns. These building blocks are discussed later in this document. The combination of controls that are provided in the cloud data plane, together with administrative controls to manage cloud resources, forms the foundation of an end-to-end security perimeter. The perimeter that's created by this combination lets you govern, deploy, and operate your workloads in the cloud.

Resource hierarchy and administrative controls

This section presents a summary of the administrative controls that Google Cloud provides as resource containers. The controls include Google Cloud organization resources, folders, and projects that let you group and hierarchically organize cloud resources. This hierarchical organization provides you with an ownership structure and with anchor points for applying policy and controls.

A Google Cloud organization resource is the root node in the hierarchy and is the foundation for creating deployments in the cloud. An organization resource can have folders and projects as children. A folder has projects or other folders as children. All other cloud resources are the children of projects.

You use folders as a method of grouping projects. Projects form the basis for creating, enabling, and using all Google Cloud services. Projects let you manage APIs, enable billing, add and remove collaborators, and manage permissions.

Using Google Identity and Access Management (IAM), you can assign roles and define access policies and permissions at all resource hierarchy levels. IAM policies are inherited by resources lower in the hierarchy. These policies can't be altered by resource owners who are lower in the hierarchy. In some cases, identity and access management is provided at a more granular level, for example at the scope of objects in a namespace or cluster, as in Google Kubernetes Engine.
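
For example, the following sketch uses the Resource Manager Python client (google-cloud-resource-manager) to grant a role on a folder so that every project under that folder inherits it. The folder ID, role, and group are placeholders, and a production workflow would typically manage such bindings declaratively rather than with ad hoc scripts.

```python
from google.cloud import resourcemanager_v3
from google.iam.v1 import policy_pb2


def grant_role_on_folder(folder_id: str, role: str, member: str) -> None:
    """Add an IAM binding at the folder level; child projects inherit it."""
    client = resourcemanager_v3.FoldersClient()
    resource = f"folders/{folder_id}"

    # Read-modify-write: fetch the current policy, append a binding, write it back.
    policy = client.get_iam_policy(resource=resource)
    policy.bindings.append(policy_pb2.Binding(role=role, members=[member]))
    client.set_iam_policy(request={"resource": resource, "policy": policy})


# Placeholder values for illustration only.
grant_role_on_folder("123456789012", "roles/compute.networkViewer",
                     "group:network-admins@example.com")
```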

Design considerations for Google Virtual Private Cloud networks

When you're designing a migration strategy to the cloud, it's important to develop a strategy for how your enterprise will use VPC networks. You can think of a VPC network as a virtual version of your traditional physical network. It is a completely isolated, private network partition. By default, workloads or services that are deployed in one VPC network cannot communicate with workloads in another VPC network. VPC networks therefore enable workload isolation by forming a security boundary.

Because each VPC network in the cloud is a fully virtual network, each has its own private IP address space. You can therefore use the same IP address in multiple VPC networks without conflict. A typical on-premises deployment might consume a large portion of the RFC 1918 private IP address space. On the other hand, if you have workloads both on-premises and in VPC networks, you can reuse the same address ranges in different VPC networks, as long as those networks aren't connected or peered, which lets you use up IP address space less quickly.

VPC networks are global

VPC networks in Google Cloud are global, which means that resources deployed in a project that has a VPC network can communicate with each other directly using Google's private backbone.

As figure 1 shows, you can have a VPC network in your project that contains subnetworks in different regions that span multiple zones. The VMs in any region can communicate privately with each other using the local VPC routes.


Figure 1. Google Cloud global VPC network implementation with subnetworks configured in different regions.
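
The following sketch shows how you might create a custom-mode global VPC network with subnets in two regions by using the google-cloud-compute Python client. The project ID, network name, regions, and CIDR ranges are placeholders.

```python
from google.cloud import compute_v1


def create_custom_vpc(project_id: str) -> None:
    """Create a custom-mode VPC network and one subnet in each of two regions."""
    network_client = compute_v1.NetworksClient()
    subnet_client = compute_v1.SubnetworksClient()

    # Custom mode: subnets are created explicitly instead of automatically.
    network = compute_v1.Network(
        name="migration-vpc",
        auto_create_subnetworks=False,
        routing_config=compute_v1.NetworkRoutingConfig(routing_mode="GLOBAL"),
    )
    network_client.insert(project=project_id, network_resource=network).result()

    network_url = f"projects/{project_id}/global/networks/migration-vpc"
    for region, cidr in [("us-central1", "10.10.0.0/20"),
                         ("europe-west1", "10.20.0.0/20")]:
        subnet = compute_v1.Subnetwork(
            name=f"workloads-{region}",
            network=network_url,
            ip_cidr_range=cidr,
        )
        subnet_client.insert(
            project=project_id, region=region, subnetwork_resource=subnet
        ).result()
```

Because the network is global, VMs placed in either subnet can reach each other over internal IP addresses without any additional peering or VPN configuration.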

Sharing a network using Shared VPC

Shared VPC lets an organization resource connect multiple projects to a common VPC network so that they can communicate with each other securely using internal IP addresses from the shared network. Network administrators for that shared network apply and enforce centralized control over network resources.

When you use Shared VPC, you designate a project as a host project and attach one or more service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network.

Enterprises typically use Shared VPC networks when they need network and security administrators to centralize management of network resources such as subnets and routes. At the same time, Shared VPC networks let application and development teams create and delete VM instances and deploy workloads in designated subnets using the service projects.
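
As a rough sketch, the Compute Engine Python client can designate a host project and attach a service project to it (the Shared VPC API calls are named XPN in this client). The project IDs are placeholders, and the caller needs the Shared VPC Admin role on the organization or folder.

```python
from google.cloud import compute_v1


def configure_shared_vpc(host_project: str, service_project: str) -> None:
    """Enable a Shared VPC host project and attach one service project to it."""
    projects_client = compute_v1.ProjectsClient()

    # Mark the host project as a Shared VPC (XPN) host.
    projects_client.enable_xpn_host(project=host_project).result()

    # Attach the service project so that it can use the host project's subnets.
    request = compute_v1.ProjectsEnableXpnResourceRequest(
        xpn_resource=compute_v1.XpnResourceId(id=service_project, type_="PROJECT")
    )
    projects_client.enable_xpn_resource(
        project=host_project,
        projects_enable_xpn_resource_request_resource=request,
    ).result()
```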

Isolating environments by using VPC networks

Using VPC networks to isolate environments has a number of advantages, but you need to consider a few disadvantages as well. This section addresses these tradeoffs and describes common patterns for implementing isolation.

Reasons to isolate environments

Because VPC networks represent an isolation domain, many enterprises use them to keep environments or business units in separate domains. Common reasons to create VPC-level isolation are the following:

  • An enterprise wants to establish default-deny communications between one VPC network and another, because these networks represent an organizationally meaningful distinction. For more information, see Common VPC network isolation patterns later in this document.
  • An enterprise needs to have overlapping IP address ranges because of pre-existing on-premises environments, because of acquisitions, or because of deployments to other cloud environments.
  • An enterprise wants to delegate full administrative control of a network to a portion of the enterprise.

Disadvantages of isolating environments

Creating isolated environments with VPC networks can have some disadvantages. Having multiple VPC networks can increase the administrative overhead of managing the services that span multiple networks. This document discusses techniques that you can use to manage this complexity.

Common VPC network isolation patterns

There are some common patterns for isolating VPC networks:

  • Isolate development, staging, and production environments. This pattern lets enterprises fully segregate their development, staging, and production environments from each other. In effect, this structure maintains multiple complete copies of applications, with progressive rollout between each environment. In this pattern, VPC networks are used as security boundaries. Developers have a high degree of access to development VPC networks to do their day-to-day work. When development is finished, an engineering production team or a QA team can migrate the changes to a staging environment, where the changes can be tested in an integrated fashion. When the changes are ready to be deployed, they are sent to a production environment.
  • Isolate business units. Some enterprises want to impose a high degree of isolation between business units, especially in the case of units that were acquired or ones that demand a high degree of autonomy and isolation. In this pattern, enterprises often create a VPC network for each business unit and delegate control of that VPC network to the business unit's administrators. The enterprise uses techniques that are described later in this document to expose services that span the enterprise or to host user-facing applications that span multiple business units.

Recommendation for creating isolated environments

We recommend that you design your VPC networks to have the broadest domain that aligns with the administrative and security boundaries of your enterprise. You can achieve additional isolation between workloads that run in the same VPC network by using security controls such as firewalls.

For more information about designing and building an isolation strategy for your organization, see Best practices and reference architectures for VPC design and Networking in the Google Cloud enterprise foundations blueprint.

Building blocks for cloud networking

This section discusses the important building blocks for network connectivity, network security, service networking, and service security. Figure 2 shows how these building blocks relate to one another. You can use one or more of the products that are listed in a given row.


Figure 2. Building blocks in the realm of cloud network connectivity and security.

The following sections discuss each of the building blocks and which Google Cloud services you can use for each of the blocks.

Network connectivity

The network connectivity block is at the base of the hierarchy. It's responsible for connecting Google Cloud resources to on-premises data centers or other clouds. Depending on your needs, you might need only one of these products, or you might use all of them to handle different use cases.

Cloud VPN

Cloud VPN lets you connect your remote branch offices or other cloud providers to Google VPC networks through IPsec VPN connections. Traffic traveling between the two networks is encrypted by one VPN gateway and then decrypted by the other VPN gateway, thereby helping to protect data as it traverses the internet.

Cloud VPN lets you connect your on-premises environment and Google Cloud at lower cost, but also lower bandwidth, than Cloud Interconnect (described in the next section). You can provision an HA VPN to meet an SLA requirement of up to 99.99% availability if you deploy a conforming architecture. For example, Cloud VPN is a good choice for non-mission-critical use cases or for extending connectivity to other cloud providers.
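
As a minimal sketch, the following code creates an HA VPN gateway with the google-cloud-compute Python client. A complete setup also needs a Cloud Router, a peer VPN gateway resource, and two tunnels with BGP sessions, which are omitted here; the project, region, and network names are placeholders.

```python
from google.cloud import compute_v1


def create_ha_vpn_gateway(project_id: str, region: str, network: str) -> None:
    """Create an HA VPN gateway; tunnels, Cloud Router, and BGP are configured separately."""
    gateway = compute_v1.VpnGateway(
        name="on-prem-ha-vpn",
        network=f"projects/{project_id}/global/networks/{network}",
    )
    compute_v1.VpnGatewaysClient().insert(
        project=project_id, region=region, vpn_gateway_resource=gateway
    ).result()
```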

Cloud Interconnect

Cloud Interconnect provides enterprise-grade dedicated connectivity to Google Cloud that has higher throughput and more reliable network performance compared to using VPN or internet ingress. Dedicated Interconnect provides direct physical connectivity to Google's network from your routers. Partner Interconnect provides dedicated connectivity through an extensive network of partners, who might offer broader reach or other bandwidth options than Dedicated Interconnect does. Cross-Cloud Interconnect provides dedicated direct connectivity from your VPC networks to other cloud providers. Dedicated Interconnect requires that you connect at a colocation facility where Google has a presence, but Partner Interconnect does not. Cloud Interconnect ensures that the traffic between your on-premises network or other cloud network and your VPC network doesn't traverse the public internet.

You can provision these Cloud Interconnect connections to meet an SLA requirement of up to 99.99% availability if you provision the appropriate architecture. You can consider using Cloud Interconnect to support workloads that require low latency, high bandwidth, and predictable performance while ensuring that all of your traffic stays private.

Network Connectivity Center for hybrid

Network Connectivity Center provides site-to-site connectivity among your on-premises and other cloud networks. It does this using Google's backbone network to deliver reliable connectivity among your sites.

Additionally, you can extend your existing SD-WAN overlay network to Google Cloud by configuring a VM or a third-party vendor router appliance as a logical spoke attachment.

You can access resources inside the VPC networks by using router appliance, VPN, or Cloud Interconnect spoke attachments. You can use Network Connectivity Center to consolidate connectivity between your on-premises sites, your presences in other clouds, and Google Cloud, and to manage it all using a single view.

Network Connectivity Center for VPC networks

Network Connectivity Center also lets you create a mesh or star topology among many VPC networks by using VPC spokes. You can connect the hub to on-premises environments or other clouds by using Network Connectivity Center hybrid spokes.

VPC Network Peering

VPC Network Peering lets you connect Google VPC networks so that workloads in different VPC networks can communicate internally, regardless of whether they belong to the same project or to the same organization resource. Traffic stays within Google's network and doesn't traverse the public internet.

VPC Network Peering requires that the networks to be peered don't have overlapping IP addresses.
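
For illustration, the following sketch creates one side of a peering with the google-cloud-compute Python client; you run the equivalent call from the other network (with the arguments swapped) to activate the peering. Project and network names are placeholders.

```python
from google.cloud import compute_v1


def peer_networks(project_a: str, network_a: str,
                  project_b: str, network_b: str) -> None:
    """Create one side of a VPC Network Peering from network_a to network_b."""
    peering_request = compute_v1.NetworksAddPeeringRequest(
        network_peering=compute_v1.NetworkPeering(
            name=f"peer-to-{network_b}",
            network=f"projects/{project_b}/global/networks/{network_b}",
            exchange_subnet_routes=True,
        )
    )
    compute_v1.NetworksClient().add_peering(
        project=project_a,
        network=network_a,
        networks_add_peering_request_resource=peering_request,
    ).result()
```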

Network security

The network security block sits on top of the network connectivity block. It's responsible for allowing or denying access to resources based on the characteristics of IP packets.

Cloud NGFW

Cloud Next Generation Firewall (Cloud NGFW) is a distributed firewall service that lets you apply firewall policies at the organization, folder, and network level. Enabled firewall rules are always enforced, protecting your instances regardless of their configuration or operating system, or even whether the VMs have fully booted. The rules are applied on a per-instance basis, meaning that the rules protect connections between VMs within a given network as well as connections to outside the network. Rule application can be governed by using IAM-governed Tags, which let you control which VMs are covered by particular rules. Cloud NGFW also offers the option to perform Layer 7 inspection of packets.
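
As a simple illustration of per-instance rule targeting, the following sketch uses the google-cloud-compute Python client to create a VPC firewall rule that allows HTTPS only to VMs tagged web. (Organization- and folder-level Cloud NGFW policies and IAM-governed Tags use a different, policy-based API that isn't shown here.) The project, network, and tag names are placeholders.

```python
from google.cloud import compute_v1


def allow_https_to_web_tier(project_id: str, network: str) -> None:
    """Create an ingress rule that allows TCP 443 to instances tagged 'web'."""
    firewall = compute_v1.Firewall(
        name="allow-https-web",
        network=f"projects/{project_id}/global/networks/{network}",
        direction="INGRESS",
        priority=1000,
        allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["443"])],
        source_ranges=["0.0.0.0/0"],
        target_tags=["web"],  # Only VMs with this network tag are affected.
    )
    compute_v1.FirewallsClient().insert(
        project=project_id, firewall_resource=firewall
    ).result()
```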

Packet mirroring

Packet mirroring clones the traffic of specific instances in your VPC network and forwards it to collectors for examination. Packet mirroring captures all traffic and packet data, including payloads and headers. You can configure mirroring for both ingress and egress traffic, for only ingress traffic, or for only egress traffic. The mirroring happens on the VM instances, not on the network.

Network virtual appliance

Network virtual appliances let you apply security and compliance controls to the virtual network that are consistent with controls in the on-premises environment. You can do this by deploying VM images that are available in the Google Cloud Marketplace to VMs that have multiple network interfaces, each attached to a different VPC network, to perform a variety of network virtual functions.

Typical use cases for virtual appliances are as follows:

  • Next-generation firewall (NGFW). NGFW NVAs deliver protection in situations not covered by Cloud NGFW, or they provide management consistency with on-premises NGFW installations.
  • Intrusion detection system/intrusion prevention system (IDS/IPS). A network-based IDS provides visibility into potentially malicious traffic. To prevent intrusions, IPS devices can block malicious traffic from reaching its destination. Google Cloud offers Cloud Intrusion Detection System (Cloud IDS) as a managed service.
  • Secure web gateway (SWG). A SWG blocks threats from the internet by letting enterprises apply corporate policies on traffic that's traveling to and from the internet. This is done by using URL filtering, malicious code detection, and access control. Google Cloud offers Secure Web Proxy as a managed service.
  • Network address translation (NAT) gateway. A NAT gateway translates IP addresses and ports. For example, this translation helps avoid overlapping IP addresses. Google Cloud offers Cloud NAT as a managed service.
  • Web application firewall (WAF). A WAF is designed to block malicious HTTP(S) traffic that's going to a web application. Google Cloud offers WAF functionality through Google Cloud Armor security policies. The exact functionality differs between WAF vendors, so it's important to determine what you need.

Cloud IDS

Cloud IDS is an intrusion detection service that provides threat detection for intrusions, malware, spyware, and command-and-control attacks on your network. Cloud IDS works by creating a Google-managed peered network containing VMs that will receive mirrored traffic. The mirrored traffic is then inspected by Palo Alto Networks threat protection technologies to provide advanced threat detection.

Cloud IDS provides full visibility into intra-subnet traffic, letting you monitor VM-to-VM communication and detect lateral movement.

Cloud NAT

Cloud NAT provides fully managed, software-defined network address translation support for applications. It enables source network address translation (source NAT or SNAT) for internet-facing traffic from VMs that don't have external IP addresses.
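
The following sketch configures Cloud NAT for all subnets in one region by creating a Cloud Router that carries a NAT configuration, using the google-cloud-compute Python client. The project, region, and network names are placeholders.

```python
from google.cloud import compute_v1


def create_cloud_nat(project_id: str, region: str, network: str) -> None:
    """Create a Cloud Router with a NAT config that covers all subnets in the region."""
    router = compute_v1.Router(
        name="nat-router",
        network=f"projects/{project_id}/global/networks/{network}",
        nats=[
            compute_v1.RouterNat(
                name="regional-nat",
                # Let Google allocate the external NAT IP addresses.
                nat_ip_allocate_option="AUTO_ONLY",
                source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES",
            )
        ],
    )
    compute_v1.RoutersClient().insert(
        project=project_id, region=region, router_resource=router
    ).result()
```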

Firewall Insights

Firewall Insights helps you understand and optimize your firewall rules. It provides data about how your firewall rules are being used, exposes misconfigurations, and identifies rules that could be made more strict. It also uses machine learning to predict future usage of your firewall rules so that you can make informed decisions about whether to remove or tighten rules that seem overly permissive.

Network logging

You can use multiple Google Cloud products to log and analyze network traffic.

Firewall Rules Logging lets you audit, verify, and analyze the effects of your firewall rules. For example, you can determine if a firewall rule that's designed to deny traffic is functioning as intended. Firewall Rules Logging is also useful if you need to determine how many connections are affected by a given firewall rule.

You enable Firewall Rules Logging individually for each firewall rule whose connections you need to log. Firewall Rules Logging is an option for any firewall rule, regardless of the action (allow or deny) or direction (ingress or egress) of the rule.
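
For example, the following sketch turns on logging for an existing rule by patching its log configuration with the google-cloud-compute Python client; the project and rule names are placeholders.

```python
from google.cloud import compute_v1


def enable_firewall_rule_logging(project_id: str, firewall_name: str) -> None:
    """Enable Firewall Rules Logging on an existing firewall rule."""
    patch = compute_v1.Firewall(
        log_config=compute_v1.FirewallLogConfig(
            enable=True,
            metadata="INCLUDE_ALL_METADATA",  # Or EXCLUDE_ALL_METADATA to reduce log volume.
        )
    )
    compute_v1.FirewallsClient().patch(
        project=project_id, firewall=firewall_name, firewall_resource=patch
    ).result()
```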

VPC Flow Logs records a sample of network flows that are sent from and received by VM instances, including instances used as Google Kubernetes Engine (GKE) nodes. These logs can be used for network monitoring, forensics, real-time security analysis, and expense optimization.
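
VPC Flow Logs are configured per subnet. The following sketch creates a subnet with flow logs enabled at a 50% sampling rate, using the google-cloud-compute Python client; the project, region, network, and CIDR values are placeholders.

```python
from google.cloud import compute_v1


def create_subnet_with_flow_logs(project_id: str, region: str, network: str) -> None:
    """Create a subnet that samples half of its flows into VPC Flow Logs."""
    subnet = compute_v1.Subnetwork(
        name="logged-subnet",
        network=f"projects/{project_id}/global/networks/{network}",
        ip_cidr_range="10.30.0.0/24",
        log_config=compute_v1.SubnetworkLogConfig(
            enable=True,
            flow_sampling=0.5,
            aggregation_interval="INTERVAL_10_MIN",
        ),
    )
    compute_v1.SubnetworksClient().insert(
        project=project_id, region=region, subnetwork_resource=subnet
    ).result()
```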

Service networking

Service networking blocks are responsible for providing lookup services that tell services where a request should go (DNS, Service Directory) and for getting requests to the correct place (Private Service Connect, Cloud Load Balancing).

Cloud DNS

Workloads are accessed using domain names. Cloud DNS offers reliable, low-latency translation of domain names to IP addresses that are located anywhere in the world. Cloud DNS offers both public zones and private managed DNS zones. A public zone is visible to the public internet, while a private zone is visible only from one or more VPC networks that you specify.
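
As a minimal sketch using the legacy google-cloud-dns Python client, the following code creates a managed zone and adds an A record. The zone and record names are placeholders; this example shows a public zone, and private zones additionally need a visibility configuration that ties them to specific VPC networks.

```python
from google.cloud import dns


def create_zone_with_record(project_id: str) -> None:
    """Create a managed DNS zone and publish one A record."""
    client = dns.Client(project=project_id)

    zone = client.zone("example-zone", "example.com.")
    zone.create()

    record_set = zone.resource_record_set("app.example.com.", "A", 300, ["203.0.113.10"])
    changes = zone.changes()
    changes.add_record_set(record_set)
    changes.create()  # Changes are applied asynchronously; poll changes.status if needed.
```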

Cloud Load Balancing

Within Google Cloud, load balancers are a crucial component: they route traffic to various services to provide speed and efficiency, and they help improve security globally for both internal and external traffic.

Our load balancers also let traffic be routed and scaled across multiple clouds or hybrid environments. This makes Cloud Load Balancing the "front door" through which any application can be scaled, no matter where it is or in how many places it's hosted. Google offers various types of load balancing: global and regional, external and internal, and Layer 4 and Layer 7.

Service Directory

Service Directory lets you manage your service inventory, providing a single secure place to publish, discover, and connect services, with all operations underpinned by identity-based access control. It lets you register named services and their endpoints. Registration can be manual or automated through integrations with Private Service Connect, GKE, and Cloud Load Balancing. Service discovery is possible by using explicit HTTP and gRPC APIs, as well as by using Cloud DNS.

Cloud Service Mesh

Cloud Service Mesh is designed to run complex, distributed applications by enabling a rich set of traffic management and security policies in service mesh architectures.

Cloud Service Mesh supports Kubernetes-based regional and global deployments, both on Google Cloud and on-premises, that benefit from a managed Istio product. On Google Cloud, it also supports workloads that use proxies on VMs or proxyless gRPC.

Private Service Connect

Private Service Connect creates service abstractions by making workloads accessible across VPC networks through a single endpoint. This allows two networks to communicate in a client-server model that exposes just the service to the consumer instead of the entire network or the workload itself. A service-oriented network model lets network administrators reason about the services that they expose between networks rather than about subnets or VPC networks, enabling consumption of services in a producer-consumer model, whether for first-party or third-party (SaaS) services.

With Private Service Connect, a consumer VPC network can use a private IP address to connect to a Google API or to a service in another VPC network.

You can extend Private Service Connect to your on-premises network to access endpoints that connect to Google APIs or to managed services in another VPC network. Private Service Connect allows consumption of services at Layer 4 or Layer 7.

At Layer 4, Private Service Connect requires the producer to create one or more subnets specific to Private Service Connect. These subnets are also referred to as NAT subnets. Private Service Connect performs source NAT using an IP address that's selected from one of the Private Service Connect subnets to route the requests to a service producer. This approach lets you use overlapping IP addresses between consumers and producers.

At Layer 7, you can create a Private Service Connect backend using an internal Application Load Balancer. The internal Application Load Balancer lets you choose which services are available by using a URL map. For more information, see About Private Service Connect backends.
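
As an illustrative sketch, the following code creates a Private Service Connect endpoint for Google APIs by inserting a global forwarding rule whose target is the all-apis bundle, using the google-cloud-compute Python client. It assumes you have already reserved a global internal address for the endpoint; the project, network, and address names are placeholders.

```python
from google.cloud import compute_v1


def create_psc_endpoint_for_google_apis(project_id: str, network: str,
                                        address_name: str) -> None:
    """Create a PSC endpoint that lets this VPC network reach Google APIs privately."""
    rule = compute_v1.ForwardingRule(
        name="pscgoogleapis",  # PSC endpoint names for Google APIs must be short and lowercase.
        network=f"projects/{project_id}/global/networks/{network}",
        I_p_address=f"projects/{project_id}/global/addresses/{address_name}",
        target="all-apis",  # Bundle of supported Google APIs.
        # The load balancing scheme is left unset for this type of endpoint.
    )
    compute_v1.GlobalForwardingRulesClient().insert(
        project=project_id, forwarding_rule_resource=rule
    ).result()
```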

Private services access

Private services access is a private connection between your VPC network and a network that's owned by Google or by a third party. Google or the third parties who offer services are known as service producers. Private services access uses VPC Network Peering to establish the connectivity, and it requires the producer and consumer VPC networks to be peered with each other. This is different from Private Service Connect, which lets you project a single private IP address into your subnet.

The private connection lets VM instances in your VPC network and the services that you access communicate exclusively by using internal IP addresses. VM instances don't need internet access or external IP addresses to reach services that are available through private services access. Private services access can also be extended to the on-premises network by using Cloud VPN or Cloud Interconnect to provide a way for the on-premises hosts to reach the service producer's network. For a list of Google-managed services that are supported using private services access, see Supported services in the Virtual Private Cloud documentation.

Serverless VPC Access

Serverless VPC Access makes it possible for you to connect directly to your VPC network from services hosted in serverless environments such as Cloud Run, App Engine, or Cloud Run functions. Configuring Serverless VPC Access lets your serverless environment send requests to your VPC network using internal DNS and internal IP addresses. The responses to these requests also use your virtual network.

Serverless VPC Access sends internal traffic from your VPC network to your serverless environment only when that traffic is a response to a request that was sent from your serverless environment through the Serverless VPC Access connector.

Serverless VPC Access has the following benefits:

  • Requests sent to your VPC network are never exposed to the internet.
  • Communication through Serverless VPC Access can have lower latency than communication over the internet.

Direct VPC egress

Direct VPC egress lets your Cloud Run service send traffic to a VPC network without setting up a Serverless VPC Access connector.

Service security

The service security blocks control access to resources based on the identity of the requestor or based on higher-level understanding of packet patterns instead of just the characteristics of an individual packet.

Cloud Armor for DDoS/WAF

Cloud Armor is a web-application firewall (WAF) and distributed denial-of-service (DDoS) mitigation service that helps you defend your web applications and services from multiple types of threats. These threats include DDoS attacks, web-based attacks such as cross-site scripting (XSS) and SQL injection (SQLi), and fraud and automation-based attacks.

Cloud Armor inspects incoming requests on Google's global edge. It has a built-in set of web application firewall rules to scan for common web attacks and an advanced ML-based attack detection system that builds a model of good traffic and then detects bad traffic. Finally, Cloud Armor integrates with Google reCAPTCHA to help detect and stop sophisticated fraud and automation-based attacks by using both endpoint telemetry and cloud telemetry.
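
For example, the following sketch creates a Cloud Armor security policy and adds a rule that denies traffic from one IP range, using the google-cloud-compute Python client; the policy name and address range are placeholders, and the policy still has to be attached to a backend service to take effect.

```python
from google.cloud import compute_v1


def create_edge_policy(project_id: str) -> None:
    """Create a Cloud Armor policy and add a deny rule for one source range."""
    policies = compute_v1.SecurityPoliciesClient()

    policy = compute_v1.SecurityPolicy(
        name="edge-policy", description="Baseline edge protection"
    )
    policies.insert(project=project_id, security_policy_resource=policy).result()

    rule = compute_v1.SecurityPolicyRule(
        priority=1000,
        action="deny(403)",
        match=compute_v1.SecurityPolicyRuleMatcher(
            versioned_expr="SRC_IPS_V1",
            config=compute_v1.SecurityPolicyRuleMatcherConfig(
                src_ip_ranges=["198.51.100.0/24"]
            ),
        ),
    )
    policies.add_rule(
        project=project_id, security_policy="edge-policy",
        security_policy_rule_resource=rule,
    ).result()
```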

Identity-Aware Proxy (IAP)

Identity-Aware Proxy (IAP) provides context-aware access controls to cloud-based applications and VMs that are running on Google Cloud or that are connected to Google Cloud using any of the hybrid networking technologies. IAP verifies the user identity and determines whether the user request originates from trusted sources, based on various contextual attributes. IAP also supports TCP tunneling for SSH/RDP access from enterprise users.

VPC Service Controls

VPC Service Controls helps you mitigate the risk of data exfiltration from Google Cloud services such as Cloud Storage and BigQuery. Using VPC Service Controls helps ensure that use of your Google Cloud services happens only from approved environments.

You can use VPC Service Controls to create perimeters that protect the resources and data of services that you specify by limiting access to specific cloud-native identity constructs like service accounts and VPC networks. After a perimeter has been created, access to the specified Google services is denied unless the request comes from within the perimeter.

Content delivery

The content delivery blocks optimize the delivery of applications and content.

Cloud CDN

Cloud CDN provides static content acceleration by using Google's global edge network to deliver content from a point closest to the user. This helps reduce latency for your websites and applications.

Media CDN

Media CDN is Google's media delivery solution and is built for high-throughput egress workloads.

Observability

The observability blocks give you visibility into your network and provide insights that you can use to troubleshoot, document, and investigate issues.

Network Intelligence Center

Network Intelligence Center comprises several products that address various aspects of network observability. Each product has a different focus and provides rich insights to inform administrators, architects, and practitioners about network health and issues.

Reference architectures

The following documents present reference architectures for different types of workloads: intra-cloud, internet-facing, and hybrid. These workload architectures are built on top of a cloud data plane that is realized using the building blocks and the architectural patterns that were outlined in earlier sections of this document.

You can use the reference architectures to design ways to migrate or build workloads in the cloud. Your workloads are then underpinned by the cloud data plane and use the architectures. Although these documents don't provide an exhaustive set of reference architectures, they do cover the most common scenarios.

As with the security architecture patterns that are described in Architectures for Protecting Cloud Data Planes, real-world services might use a combination of these designs. These documents discuss each workload type and the considerations for each security architecture.
