Mirrored pattern
The mirrored pattern is based on replicating the design of a certain existing environment or environments to a new environment or environments. Therefore, this pattern applies primarily to architectures that follow the environment hybrid pattern. In that pattern, you run your development and testing workloads in one environment while you run your staging and production workloads in another.
The mirrored pattern assumes that testing and production workloads aren't supposed to communicate directly with one another. However, it should be possible to manage and deploy both groups of workloads in a consistent manner.
If you use this pattern, connect the two computing environments in a way that aligns with the following requirements:
- Continuous integration/continuous deployment (CI/CD) can deploy and manage workloads across all computing environments or specific environments.
- Monitoring, configuration management, and other administrative systems should work across computing environments.
- Workloads can't communicate directly across computing environments. If necessary, communication has to be in a fine-grained and controlled fashion, as shown in the sketch after this list.
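The following sketch illustrates the third requirement with a VPC firewall rule that permits only a narrow, explicitly allowed flow between environments. It assumes the google-cloud-compute Python client library, and the project, network, tag, and CIDR values are hypothetical placeholders rather than names defined by this pattern.

```python
# A minimal sketch, assuming the google-cloud-compute client library is
# installed (pip install google-cloud-compute). Project, network, tag, and
# CIDR values are hypothetical placeholders.
from google.cloud import compute_v1


def allow_cicd_deploy_traffic(project_id: str, network: str) -> None:
    """Create a firewall rule that permits only CI/CD deployment traffic.

    All other cross-environment traffic stays blocked by an implied or
    explicit deny rule at a lower priority.
    """
    firewall = compute_v1.Firewall()
    firewall.name = "allow-cicd-to-dev-test-deploy"
    firewall.network = f"projects/{project_id}/global/networks/{network}"
    firewall.direction = "INGRESS"
    firewall.priority = 1000
    # Only allow HTTPS-based deployment calls from the CI/CD subnet range.
    firewall.source_ranges = ["10.10.0.0/24"]  # hypothetical CI/CD range
    firewall.target_tags = ["dev-test-workload"]  # hypothetical target tag
    allowed = compute_v1.Allowed()
    allowed.I_p_protocol = "tcp"
    allowed.ports = ["443"]
    firewall.allowed = [allowed]

    client = compute_v1.FirewallsClient()
    operation = client.insert(project=project_id, firewall_resource=firewall)
    operation.result()  # wait for the rule to be created


if __name__ == "__main__":
    allow_cicd_deploy_traffic("dev-test-host-project", "dev-test-shared-vpc")
```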
Architecture
The following architecture diagram shows a high-level reference architecture of this pattern that supports CI/CD, monitoring, configuration management, other administrative systems, and workload communication:
The description of the architecture in the preceding diagram is as follows:
- Workloads are distributed based on the functional environments (development, testing, CI/CD, and administrative tooling) across separate VPCs on the Google Cloud side.
- Shared VPC is used for development and testing workloads. An extra VPC is used for the CI/CD and administrative tooling. With Shared VPCs:
- The applications are managed by different teams per environment and per service project.
- The host project administers and controls the network communication and security controls between the development and test environments, as well as to outside the VPC.
- CI/CD VPC is connected to the network running the production workloads in your private computing environment.
- Firewall rules permit only allowed traffic.
- You might also use Cloud Next Generation Firewall Enterprise with intrusion prevention service (IPS) to implement deep packet inspection for threat prevention without changing the design or routing. Cloud Next Generation Firewall Enterprise works by creating Google-managed zonal firewall endpoints that use packet intercept technology to transparently inspect the workloads for the configured threat signatures. It also protects workloads against threats.
- VPC Network Peering enables communication among the peered VPCs using internal IP addresses.
- The peering in this pattern allows CI/CD and administrative systems to deploy and manage development and testing workloads (see the peering sketch after this list).
- Consider these general best practices.
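As a hedged illustration of the peering point above, the following sketch creates the VPC Network Peering between a CI/CD VPC and a development and testing Shared VPC by using the google-cloud-compute client library. The project and network names are hypothetical, and the peering must be configured from both networks before it becomes active.

```python
# A minimal sketch, assuming the google-cloud-compute client library and
# hypothetical project and network names. Peering must be created from both
# sides before it becomes ACTIVE.
from google.cloud import compute_v1


def create_peering(project_id: str, network: str,
                   peer_project_id: str, peer_network: str,
                   peering_name: str) -> None:
    """Peer one VPC with another so that CI/CD and administrative systems
    can reach development and testing workloads over internal IP addresses."""
    peering = compute_v1.NetworkPeering()
    peering.name = peering_name
    peering.network = (
        f"projects/{peer_project_id}/global/networks/{peer_network}"
    )
    peering.exchange_subnet_routes = True
    # Custom route exchange stays off so that hybrid (on-premises) routes
    # aren't propagated unless you explicitly need them.
    peering.export_custom_routes = False
    peering.import_custom_routes = False

    request = compute_v1.NetworksAddPeeringRequest(network_peering=peering)
    client = compute_v1.NetworksClient()
    operation = client.add_peering(
        project=project_id,
        network=network,
        networks_add_peering_request_resource=request,
    )
    operation.result()


if __name__ == "__main__":
    # CI/CD VPC side of the peering.
    create_peering("cicd-project", "cicd-vpc",
                   "dev-test-host-project", "dev-test-shared-vpc",
                   "cicd-to-dev-test")
    # Development and testing Shared VPC side of the peering.
    create_peering("dev-test-host-project", "dev-test-shared-vpc",
                   "cicd-project", "cicd-vpc",
                   "dev-test-to-cicd")
```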
You establish this CI/CD connection by using one of the discussed hybrid and multicloud networking connectivity options that meets your business and application requirements. To let you deploy and manage production workloads, this connection provides private network reachability between the different computing environments. All environments should have overlap-free RFC 1918 IP address space.
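One way to verify the overlap-free requirement before you provision connectivity is a quick check with Python's standard ipaddress module. The sketch below uses hypothetical environment names and CIDR ranges; substitute your own allocations.

```python
# A minimal sketch using only the Python standard library. The CIDR ranges
# are hypothetical; substitute the ranges of your own environments.
import ipaddress
from itertools import combinations

environment_ranges = {
    "on-prem-production": "10.0.0.0/16",
    "dev-test-shared-vpc": "10.1.0.0/16",
    "cicd-admin-vpc": "10.2.0.0/24",
}

# Compare every pair of environments and fail fast on any overlap.
for (name_a, cidr_a), (name_b, cidr_b) in combinations(
        environment_ranges.items(), 2):
    if ipaddress.ip_network(cidr_a).overlaps(ipaddress.ip_network(cidr_b)):
        raise ValueError(f"Overlapping ranges: {name_a} ({cidr_a}) "
                         f"and {name_b} ({cidr_b})")

print("No overlapping RFC 1918 ranges found.")
```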
If the instances in the development and testing environments require internet access, consider the following options:
- You can deploy Cloud NAT into the same Shared VPC host project network. Deploying into the same Shared VPC host project network helps to avoid making these instances directly accessible from the internet (see the Cloud NAT sketch after this list).
- For outbound web traffic, you can use Secure Web Proxy. The proxy offers several benefits.
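The following sketch shows the Cloud NAT option by creating a Cloud Router with a NAT configuration in the Shared VPC host project. It assumes the google-cloud-compute client library; the project, region, and network names are hypothetical.

```python
# A minimal sketch, assuming the google-cloud-compute client library and
# hypothetical project, region, and network names. The Cloud Router and its
# NAT configuration give development and testing instances outbound internet
# access without external IP addresses.
from google.cloud import compute_v1


def create_cloud_nat(project_id: str, region: str, network: str) -> None:
    nat = compute_v1.RouterNat()
    nat.name = "dev-test-nat"
    nat.nat_ip_allocate_option = "AUTO_ONLY"  # let Google allocate NAT IPs
    nat.source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

    router = compute_v1.Router()
    router.name = "dev-test-nat-router"
    router.network = f"projects/{project_id}/global/networks/{network}"
    router.nats = [nat]

    client = compute_v1.RoutersClient()
    operation = client.insert(
        project=project_id, region=region, router_resource=router
    )
    operation.result()


if __name__ == "__main__":
    create_cloud_nat("dev-test-host-project", "us-central1",
                     "dev-test-shared-vpc")
```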
For more information about the Google Cloud tools and capabilities that help you to build, test, and deploy in Google Cloud and across hybrid and multicloud environments, see the DevOps and CI/CD on Google Cloud explained blog.
Variations
To meet different design requirements, while still considering all communication requirements, the mirrored architecture pattern offers these options, which are described in the following sections:
- Shared VPC per environment
- Centralized application layer firewall
- Hub-and-spoke topology
- Microservices zero trust distributed architecture
Shared VPC per environment
The Shared VPC per environment design option allows for application- or service-level separation across environments, including CI/CD and administrative tools that might be required to meet certain organizational security requirements. These requirements limit communication, administrative domain, and access control for different services that also need to be managed by different teams.
This design achieves separation by providing network- and project-level isolation between the different environments, which enables more fine-grained communication and Identity and Access Management (IAM) access control.
From a management and operations perspective, this design provides the flexibility to manage the applications and workloads created by different teams per environment and per service project. VPC networking and its security features can be provisioned and managed by networking operations teams based on the following possible structures:
- One team manages all host projects across all environments.
- Different teams manage the host projects in their respective environments.
Decisions about managing host projects should be based on the team structure, security operations, and access requirements of each team. You can apply this design variation to the Shared VPC network for each environment landing zone design option. However, you need to consider the communication requirements of the mirrored pattern to define what communication is allowed between the different environments, including communication over the hybrid network.
You can also provision a Shared VPC network for each main environment, as illustrated in the following diagram:
Centralized application layer firewall
In some scenarios, the security requirements might mandate the consideration of application layer (Layer 7) and deep packet inspection with advanced firewalling mechanisms that exceed the capabilities of Cloud Next Generation Firewall. To meet the security requirements and standards of your organization, you can use an NGFW appliance hosted in a network virtual appliance (NVA). Several Google Cloud security partners offer options well suited to this task.
As illustrated in the following diagram, you can place the NVA in the network path between Virtual Private Cloud and the private computing environment by using multiple network interfaces.
This design can also be used with multiple Shared VPCs, as illustrated in the following diagram.
The NVA in this design acts as the perimeter security layer. It also serves as the foundation for enabling inline traffic inspection and enforcing strict access control policies.
For a robust multilayer security strategy that includes VPC firewall rules and intrusion prevention service capabilities, apply further traffic inspection and security controls to both east-west and north-south traffic flows.
Note: In supported cloud regions, and when technically feasible for your design, NVAs can be deployed without requiring multiple VPC networks or appliance interfaces. This deployment is based on using load balancing and policy-based routing capabilities. These capabilities enable a topology-independent, policy-driven mechanism for integrating NVAs into your cloud network. For more details, see Deploy network virtual appliances (NVAs) without multiple VPCs.

Hub-and-spoke topology
Another possible design variation is to use separate VPCs (including Shared VPCs) for your development and different testing stages. In this variation, as shown in the following diagram, all stage environments connect with the CI/CD and administrative VPC in a hub-and-spoke architecture. Use this option if you must separate the administrative domains and the functions in each environment. The hub-and-spoke communication model can help with the following requirements:
- Applications need to access a common set of services, like monitoring, configuration management tools, CI/CD, or authentication.
- A common set of security policies needs to be applied to inbound and outbound traffic in a centralized manner through the hub.
For more information about hub-and-spoke design options, see Hub-and-spoke topology with centralized appliances and Hub-and-spoke topology without centralized appliances.
As shown in the preceding diagram, the inter-VPC communication and hybrid connectivity all pass through the hub VPC. As part of this pattern, you can control and restrict the communication at the hub VPC to align with your connectivity requirements.
As part of the hub-and-spoke network architecture, the following are the primary connectivity options between the spoke VPCs and the hub VPC on Google Cloud:
- VPC Network Peering
- VPN
- Using a network virtual appliance (NVA)
- With multiple network interfaces
- With Network Connectivity Center (NCC)
For more information on which option you should consider in your design, see Hub-and-spoke network architecture. A key influencing factor for selecting VPN over VPC peering between the spokes and the hub VPC is whether traffic transitivity is required. Traffic transitivity means that traffic from a spoke can reach other spokes through the hub.
Microservices zero trust distributed architecture
Hybrid and multicloud architectures can require multiple clusters to achieve their technical and business objectives, including separating the production environment from the development and testing environments. Therefore, network perimeter security controls are important, especially when they're required to comply with certain security requirements.
Network perimeter security alone isn't enough to support the security requirements of current cloud-first distributed microservices architectures; you should also consider zero trust distributed architectures. The microservices zero trust distributed architecture supports your microservices architecture with microservice-level security policy enforcement, authentication, and workload identity. Trust is identity-based and enforced for each service.
By using a distributed proxy architecture, such as a service mesh, services can effectively validate callers and implement fine-grained access control policies for each request, enabling a more secure and scalable microservices environment. Cloud Service Mesh gives you the flexibility to have a common mesh that can span your Google Cloud and on-premises deployments. The mesh uses authorization policies to help secure service-to-service communications.
You might also incorporate Apigee Adapter for Envoy, which is a lightweight Apigee API gateway deployment within a Kubernetes cluster, with this architecture. Envoy is an open source edge and service proxy that's designed for cloud-first applications.
For more information about this topic, see the following articles:
- Zero Trust Distributed Architecture
- GKE Enterprise hybrid environment
- Connect to Google: Connect an on-premises GKE Enterprise cluster to a Google Cloud network.
- Set up a multicloud or hybrid mesh: Deploy Cloud Service Mesh across environments and clusters.
Mirrored pattern best practices
- The CI/CD systems required for deploying or reconfiguring production deployments must be highly available, meaning that all architecture components must be designed to provide the expected level of system availability. For more information, see Google Cloud infrastructure reliability.
- To eliminate configuration errors for repeated processes like code updates, automation is essential to standardize your builds, tests, and deployments.
- The integration of centralized NVAs in this design might require the incorporation of multiple segments with varying levels of security access controls.
- When designing a solution that includes NVAs, it's important to consider the high availability (HA) of the NVAs to avoid a single point of failure that could block all communication. Follow the HA and redundancy design and implementation guidance provided by your NVA vendor.
- By not exporting on-premises IP routes over VPC peering or VPN to the development and testing VPC, you can restrict network reachability from development and testing environments to the on-premises environment. For more information, see VPC Network Peering custom route exchange and the sketch after this list.
- For workloads with private IP addressing that can require access to Google APIs, you can expose Google APIs by using a Private Service Connect endpoint within a VPC network. For more information, see Gated ingress in this series.
- Review the general best practices for hybrid and multicloud networking architecture patterns.
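A hedged sketch of the route-export best practice above: using the google-cloud-compute client library, the custom route export flag on the CI/CD side of the hypothetical peering created earlier in this page can be switched off so that on-premises routes learned over the hybrid connection aren't propagated to the development and testing Shared VPC.

```python
# A minimal sketch, assuming the google-cloud-compute client library and the
# hypothetical peering names used earlier in this document. Disabling custom
# route export on the CI/CD VPC side keeps on-premises routes (learned over
# the hybrid connection) out of the development and testing Shared VPC.
from google.cloud import compute_v1


def disable_custom_route_export(project_id: str, network: str,
                                peering_name: str) -> None:
    peering = compute_v1.NetworkPeering()
    peering.name = peering_name
    peering.export_custom_routes = False

    request = compute_v1.NetworksUpdatePeeringRequest(network_peering=peering)
    client = compute_v1.NetworksClient()
    operation = client.update_peering(
        project=project_id,
        network=network,
        networks_update_peering_request_resource=request,
    )
    operation.result()


if __name__ == "__main__":
    disable_custom_route_export("cicd-project", "cicd-vpc",
                                "cicd-to-dev-test")
```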