
mfnerd/terraform-example-foundation

Shows how the CFT modules can be composed to build a secure cloud foundation

This example repository shows how the CFT Terraform modules can be composed to build a secure Google Cloud foundation, following the Google Cloud Enterprise Foundations Blueprint (previously called the Security Foundations Guide). The supplied structure and code are intended as a starting point for building your own foundation, with pragmatic defaults that you can customize to meet your own requirements.

The intended audience of this blueprint is large enterprise organizations with a dedicated platform team responsible for deploying and maintaining their GCP environment, one that is committed to separation of duties across multiple teams and to managing their environment solely through version-controlled Infrastructure as Code. Smaller organizations looking for a turnkey solution might prefer other options, such as Google Cloud Setup.

Intended usage and support

This repository is intended as an example to be forked, tweaked, and maintained in the user's own version-control system; the modules within this repository are not intended for use as remote references. Though this blueprint can help accelerate your foundation design and build, we assume that you have the engineering skills and teams to deploy and customize your own foundation based on your own requirements.

We will support:

  • Code is semantically valid, pinned to known good versions, and passes terraform validate and lint checks
  • All PRs to this repo must pass integration tests that deploy all resources into a test environment before being merged
  • Feature requests about ease of use of the code, or feature requests that generally apply to all users, are welcome

We will not support:

  • In-place upgrades from a foundation deployed with an earlier version to a more recent version, even for minor version changes. Repository maintainers do not have visibility into what resources a user deploys on top of their foundation or how the foundation was customized in deployment, so we make no guarantees about avoiding breaking changes.
  • Feature requests that are specific to a single user's requirement and not representative of general best practices

Overview

This repo contains several distinct Terraform projects, each in its own directory, which must be applied separately but in sequence. Stage 0-bootstrap is executed manually, and subsequent stages are executed using your preferred CI/CD tool.

Each of these Terraform projects is layered on top of the previous one, and they are run in the following order.

This stage executes the CFT Bootstrap module, which bootstraps an existing Google Cloud organization, creating all the required Google Cloud resources and permissions to start using the Cloud Foundation Toolkit (CFT). For CI/CD pipelines, you can use either Cloud Build (the default) or Jenkins. If you want to use Jenkins instead of Cloud Build, see README-Jenkins for how to use the Jenkins sub-module.

The bootstrap step includes:

  • The prj-b-seed project, which contains the following:
    • Terraform state bucket
    • Custom service accounts used by Terraform to create new resources in Google Cloud
  • The prj-b-cicd project, which contains the following:
    • A CI/CD pipeline implemented with either Cloud Build or Jenkins
    • If using Cloud Build, the following items:
      • Cloud Source Repository
      • Artifact Registry
    • If using Jenkins, the following items:
      • A Compute Engine instance configured as a Jenkins Agent
      • Custom service account to run Compute Engine instances for Jenkins Agents
      • VPN connection with on-prem (or wherever your Jenkins Controller is located)
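Later stages consume the bootstrap outputs by pointing their state at the bucket created in prj-b-seed. A minimal sketch of such a backend block, with a placeholder bucket name (use the actual value output by 0-bootstrap):

```hcl
# Hypothetical backend configuration for a later stage, storing state in the
# Terraform state bucket created in the prj-b-seed project during bootstrap.
terraform {
  backend "gcs" {
    bucket = "bkt-b-tfstate-example"  # placeholder; use the 0-bootstrap output
    prefix = "terraform/org/state"
  }
}
```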

It is a best practice to separate concerns by having two projects here: one for the Terraform state and one for the CI/CD tool.

  • The prj-b-seed project stores Terraform state and has the service accounts that can create or modify infrastructure.
  • The prj-b-cicd project holds the CI/CD tool (either Cloud Build or Jenkins) that coordinates the infrastructure deployment.

To further separate concerns at the IAM level, a distinct service account is created for each stage. The Terraform custom service accounts are granted the IAM permissions required to build the foundation. If using Cloud Build as the CI/CD tool, these service accounts are used directly in the pipeline to execute the pipeline steps (plan or apply). In this configuration, the baseline permissions of the CI/CD tool are unchanged.

If using Jenkins as the CI/CD tool, the service account of the Jenkins Agent (sa-jenkins-agent-gce@prj-b-cicd-xxxx.iam.gserviceaccount.com) is granted impersonation access so it can generate tokens for the Terraform custom service accounts. In this configuration, the baseline permissions of the CI/CD tool are limited.
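In Terraform, this impersonation pattern can be expressed directly in the provider block, so the agent's own credentials only ever mint short-lived tokens. A sketch, with an illustrative service account name:

```hcl
# Sketch: a Google provider that impersonates a stage's Terraform service
# account. The caller (e.g. the Jenkins Agent SA) only needs the
# roles/iam.serviceAccountTokenCreator binding on the target account.
provider "google" {
  impersonate_service_account = "sa-terraform-org@prj-b-seed-xxxx.iam.gserviceaccount.com"  # placeholder
}
```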

After executing this step, you will have the following structure:

example-organization/
└── fldr-bootstrap
    ├── prj-b-cicd
    └── prj-b-seed

When this step uses the Cloud Build submodule, it sets up the CI/CD project (prj-b-cicd) with Cloud Build and Cloud Source Repositories for each of the stages below. Triggers are configured to run a terraform plan for any non-environment branch and terraform apply when changes are merged to an environment branch (development, nonproduction, or production). Usage instructions are available in the 0-bootstrap README.
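The plan/apply split described above can be sketched as a pair of Cloud Build triggers keyed on branch name. This is an illustrative reconstruction, not the exact configuration the bootstrap module generates; repo and build filenames are placeholders:

```hcl
# Sketch: plan on any non-environment branch, apply on environment branches.
resource "google_cloudbuild_trigger" "plan" {
  project  = "prj-b-cicd-xxxx"  # placeholder CI/CD project
  filename = "cloudbuild-tf-plan.yaml"
  trigger_template {
    repo_name    = "gcp-org"
    branch_name  = "^(development|nonproduction|production)$"
    invert_regex = true  # fire on every branch EXCEPT the environment branches
  }
}

resource "google_cloudbuild_trigger" "apply" {
  project  = "prj-b-cicd-xxxx"
  filename = "cloudbuild-tf-apply.yaml"
  trigger_template {
    repo_name   = "gcp-org"
    branch_name = "^(development|nonproduction|production)$"
  }
}
```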

The purpose of this stage is to set up the common folder used to house projects that contain shared resources such as the Security Command Center notification, Cloud Key Management Service (KMS), org-level secrets, and org-level logging. This stage also sets up the network folder used to house network-related projects such as the DNS hub, Interconnect, network hubs, and base and restricted projects for each environment (development, nonproduction, and production). This will create the following folder and project structure:

example-organization
└── fldr-common
    ├── prj-c-logging
    ├── prj-c-billing-export
    ├── prj-c-scc
    ├── prj-c-kms
    └── prj-c-secrets
└── fldr-network
    ├── prj-net-hub-base
    ├── prj-net-hub-restricted
    ├── prj-net-dns
    ├── prj-net-interconnect
    ├── prj-d-shared-base
    ├── prj-d-shared-restricted
    ├── prj-n-shared-base
    ├── prj-n-shared-restricted
    ├── prj-p-shared-base
    └── prj-p-shared-restricted

Logs

Under the common folder, a project prj-c-logging is used as the destination for organization-wide sinks. This includes admin activity audit logs from all projects in your organization and the billing account.

Logs are collected into a logging bucket with a linked BigQuery dataset, which can be used for ad-hoc log investigations, querying, or reporting. Log sinks can also be configured to export to Pub/Sub for exporting to external systems or Cloud Storage for long-term storage.
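An organization-level sink of this kind can be sketched as follows; the org ID, sink name, and bucket name are placeholders, not the values the stage actually uses:

```hcl
# Sketch: route admin activity audit logs from the whole organization into a
# central logging bucket in prj-c-logging.
resource "google_logging_organization_sink" "audit_logs" {
  name             = "sk-c-logging-bkt"   # illustrative sink name
  org_id           = "123456789012"       # placeholder org ID
  include_children = true                 # capture logs from all child folders/projects
  destination      = "logging.googleapis.com/projects/prj-c-logging/locations/global/buckets/AggregatedLogs"
  filter           = "logName: \"/logs/cloudaudit.googleapis.com%2Factivity\""
}
```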

Notes:

  • Log export to a Cloud Storage bucket has optional object versioning support via log_export_storage_versioning.
  • The various audit log types being captured in BigQuery are retained for 30 days.
  • For billing data, a BigQuery dataset is created with permissions attached; however, you will need to configure a billing export manually, as there is no easy way to automate this at the moment.

Security Command Center notification

Another project created under the common folder. This project will host the Security Command Center notification resources at the organization level. It will contain a Pub/Sub topic, a Pub/Sub subscription, and a Security Command Center notification configured to send all new findings to the created topic. You can adjust the filter when deploying this step.
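The topic-plus-notification wiring can be sketched like this; the org ID, names, and filter are illustrative (the filter is exactly the knob you can adjust at deploy time):

```hcl
# Sketch: a Pub/Sub topic and an SCC notification config that streams all
# active findings to it.
resource "google_pubsub_topic" "scc_notification" {
  project = "prj-c-scc"
  name    = "top-scc-notification"  # illustrative topic name
}

resource "google_scc_notification_config" "all_findings" {
  config_id    = "scc-notify"
  organization = "123456789012"     # placeholder org ID
  description  = "All active SCC findings"
  pubsub_topic = google_pubsub_topic.scc_notification.id
  streaming_config {
    filter = "state = \"ACTIVE\""   # adjust to narrow which findings are sent
  }
}
```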

KMS

Another project created under the common folder. This project is allocated for Cloud Key Management for KMS resources shared by the organization.

Usage instructions are available for the org step in the README.

Secrets

Another project created under the common folder. This project is allocated for Secret Manager for secrets shared by the organization.
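A minimal sketch of an org-shared secret living in this project (the secret name is hypothetical; `auto {}` replication requires a reasonably recent google provider):

```hcl
# Sketch: a Secret Manager secret in the common secrets project, replicated
# automatically across regions.
resource "google_secret_manager_secret" "example" {
  project   = "prj-c-secrets"
  secret_id = "example-shared-secret"  # illustrative name
  replication {
    auto {}
  }
}
```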

Usage instructions are available for the org step in the README.

DNS hub

This project is created under the network folder. This project will host the DNS hub for the organization.

Interconnect

Another project created under the network folder. This project will host the Dedicated Interconnect connection for the organization. In the case of Partner Interconnect, this project is unused and the VLAN attachments will be placed directly into the corresponding hub projects.

Networking

Under the network folder, two projects, one for the base network and another for the restricted network, are created per environment (development, nonproduction, and production); each is intended to be used as a Shared VPC host project for all projects in that environment. This stage only creates the projects and enables the correct APIs; the subsequent network stages, 3-networks-dual-svpc and 3-networks-hub-and-spoke, create the actual Shared VPC networks.

The purpose of this stage is to set up the environments folders that contain shared projects for each environment. This will create the following folder and project structure:

example-organization
└── fldr-development
    ├── prj-d-kms
    └── prj-d-secrets
└── fldr-nonproduction
    ├── prj-n-kms
    └── prj-n-secrets
└── fldr-production
    ├── prj-p-kms
    └── prj-p-secrets

KMS

Under the environment folder, a project is created per environment (development, nonproduction, and production), which is intended to be used by Cloud Key Management for KMS resources shared by the environment.
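A minimal sketch of a key ring in such an environment KMS project; the ring name and location are illustrative, not the values the stage actually uses:

```hcl
# Sketch: a key ring in the development environment's KMS project, for keys
# shared across that environment.
resource "google_kms_key_ring" "env" {
  project  = "prj-d-kms"
  name     = "kr-d-shared"     # illustrative name
  location = "us-central1"     # illustrative region
}
```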

Usage instructions are available for the environments step in the README.

Secrets

Under the environment folder, a project is created per environment (development, nonproduction, and production), which is intended to be used by Secret Manager for secrets shared by the environment.

Usage instructions are available for the environments step in the README.

This step focuses on creating a Shared VPC per environment (development, nonproduction, and production) in a standard configuration with a reasonable security baseline. Currently, this includes:

  • (Optional) Example subnets for development, nonproduction, and production, inclusive of secondary ranges for those that want to use Google Kubernetes Engine.
  • Hierarchical firewall policy created to allow remote access to VMs through IAP, without needing public IPs.
  • Hierarchical firewall policy created to allow for load balancing health checks.
  • Hierarchical firewall policy created to allow Windows KMS activation.
  • Private service networking configured to enable workload-dependent resources like Cloud SQL.
  • Base Shared VPC with private.googleapis.com configured for base access to googleapis.com and gcr.io. A route is added for the VIP so no internet access is required to access APIs.
  • Restricted Shared VPC with restricted.googleapis.com configured for restricted access to googleapis.com and gcr.io. A route is added for the VIP so no internet access is required to access APIs.
  • Default routes to the internet removed, with a tag-based route egress-internet required on VMs in order to reach the internet.
  • (Optional) Cloud NAT configured for all subnets with logging and static outbound IPs.
  • Default Cloud DNS policy applied, with DNS logging and inbound query forwarding turned on.
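The tag-based egress design in the list above can be sketched as a single route: with the default 0.0.0.0/0 route deleted, only VMs carrying the egress-internet network tag get a path to the internet. Project and network names here are placeholders:

```hcl
# Sketch: internet egress only for VMs tagged "egress-internet"; all other
# VMs have no route to 0.0.0.0/0.
resource "google_compute_route" "egress_internet" {
  project          = "prj-d-shared-base"              # placeholder host project
  name             = "rt-d-shared-base-egress-internet"
  network          = "vpc-d-shared-base"              # placeholder network name
  dest_range       = "0.0.0.0/0"
  next_hop_gateway = "default-internet-gateway"
  tags             = ["egress-internet"]              # only tagged VMs match
  priority         = 1000
}
```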

Usage instructions are available for the networks step in the README.

This step configures the same network resources as the 3-networks-dual-svpc step, but it uses an architecture based on the hub-and-spoke reference network model.

Usage instructions are available for the networks step in the README.

This step focuses on creating service projects in a standard configuration that are attached to the Shared VPC created in the previous step, along with application infrastructure pipelines. Running this code as-is should generate a structure as shown below:

example-organization/
└── fldr-development
    └── fldr-development-bu1
        ├── prj-d-bu1-sample-floating
        ├── prj-d-bu1-sample-base
        ├── prj-d-bu1-sample-restrict
        └── prj-d-bu1-sample-peering
    └── fldr-development-bu2
        ├── prj-d-bu2-sample-floating
        ├── prj-d-bu2-sample-base
        ├── prj-d-bu2-sample-restrict
        └── prj-d-bu2-sample-peering
└── fldr-nonproduction
    └── fldr-nonproduction-bu1
        ├── prj-n-bu1-sample-floating
        ├── prj-n-bu1-sample-base
        ├── prj-n-bu1-sample-restrict
        └── prj-n-bu1-sample-peering
    └── fldr-nonproduction-bu2
        ├── prj-n-bu2-sample-floating
        ├── prj-n-bu2-sample-base
        ├── prj-n-bu2-sample-restrict
        └── prj-n-bu2-sample-peering
└── fldr-production
    └── fldr-production-bu1
        ├── prj-p-bu1-sample-floating
        ├── prj-p-bu1-sample-base
        ├── prj-p-bu1-sample-restrict
        └── prj-p-bu1-sample-peering
    └── fldr-production-bu2
        ├── prj-p-bu2-sample-floating
        ├── prj-p-bu2-sample-base
        ├── prj-p-bu2-sample-restrict
        └── prj-p-bu2-sample-peering
└── fldr-common
    ├── prj-c-bu1-infra-pipeline
    └── prj-c-bu2-infra-pipeline

The code in this step includes two options for creating projects. The first is the standard projects module, which creates a project per environment; the second creates a standalone project for one environment. If relevant for your use case, there are also two optional submodules that can be used to create a subnet per project and a dedicated private DNS zone per project.
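The "project per environment" pattern can be sketched with a simple for_each; note this is a hypothetical illustration using raw resources, whereas the repository itself drives project creation through CFT modules, and all IDs below are placeholders:

```hcl
# Sketch: one sample project per environment (d = development,
# n = nonproduction, p = production), each placed in its environment folder.
variable "billing_account" { type = string }
variable "env_folders"     { type = map(string) }  # e.g. { d = "folders/111", ... }

resource "google_project" "env" {
  for_each        = toset(["d", "n", "p"])
  project_id      = "prj-${each.key}-bu1-sample-base"  # placeholder naming
  name            = "prj-${each.key}-bu1-sample-base"
  folder_id       = var.env_folders[each.key]
  billing_account = var.billing_account
}
```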

Usage instructions are available for the projects step in the README.

The purpose of this step is to deploy a simple Compute Engine instance in one of the business unit projects using the infra pipeline set up in 4-projects.
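A minimal sketch of the kind of instance this step deploys; machine type, zone, image, and the Shared VPC subnetwork path are illustrative assumptions, not the repository's exact values:

```hcl
# Sketch: a small VM in a business unit service project, attached to a subnet
# in the environment's Shared VPC host project.
resource "google_compute_instance" "sample" {
  project      = "prj-d-bu1-sample-base"   # placeholder service project
  name         = "vm-d-sample"
  machine_type = "e2-small"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    # Full path to a subnet in the Shared VPC host project (placeholder).
    subnetwork = "projects/prj-d-shared-base/regions/us-central1/subnetworks/sb-d-shared-base"
  }
}
```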

Usage instructions are available for the app-infra step in the README.

Final view

After all steps above have been executed, your Google Cloud organization should represent the structure shown below, with projects being the lowest nodes in the tree.

example-organization
└── fldr-common
    ├── prj-c-logging
    ├── prj-c-billing-export
    ├── prj-c-scc
    ├── prj-c-kms
    ├── prj-c-secrets
    ├── prj-c-bu1-infra-pipeline
    └── prj-c-bu2-infra-pipeline
└── fldr-network
    ├── prj-net-hub-base
    ├── prj-net-hub-restricted
    ├── prj-net-dns
    ├── prj-net-interconnect
    ├── prj-d-shared-base
    ├── prj-d-shared-restricted
    ├── prj-n-shared-base
    ├── prj-n-shared-restricted
    ├── prj-p-shared-base
    └── prj-p-shared-restricted
└── fldr-development
    ├── prj-d-kms
    ├── prj-d-secrets
    └── fldr-development-bu1
        ├── prj-d-bu1-sample-floating
        ├── prj-d-bu1-sample-base
        ├── prj-d-bu1-sample-restrict
        └── prj-d-bu1-sample-peering
    └── fldr-development-bu2
        ├── prj-d-bu2-sample-floating
        ├── prj-d-bu2-sample-base
        ├── prj-d-bu2-sample-restrict
        └── prj-d-bu2-sample-peering
└── fldr-nonproduction
    ├── prj-n-kms
    ├── prj-n-secrets
    └── fldr-nonproduction-bu1
        ├── prj-n-bu1-sample-floating
        ├── prj-n-bu1-sample-base
        ├── prj-n-bu1-sample-restrict
        └── prj-n-bu1-sample-peering
    └── fldr-nonproduction-bu2
        ├── prj-n-bu2-sample-floating
        ├── prj-n-bu2-sample-base
        ├── prj-n-bu2-sample-restrict
        └── prj-n-bu2-sample-peering
└── fldr-production
    ├── prj-p-kms
    ├── prj-p-secrets
    └── fldr-production-bu1
        ├── prj-p-bu1-sample-floating
        ├── prj-p-bu1-sample-base
        ├── prj-p-bu1-sample-restrict
        └── prj-p-bu1-sample-peering
    └── fldr-production-bu2
        ├── prj-p-bu2-sample-floating
        ├── prj-p-bu2-sample-base
        ├── prj-p-bu2-sample-restrict
        └── prj-p-bu2-sample-peering
└── fldr-bootstrap
    ├── prj-b-cicd
    └── prj-b-seed

Branching strategy

There are three main named branches: development, nonproduction, and production, which reflect the corresponding environments. These branches should be protected. When the CI/CD pipeline (Jenkins or Cloud Build) runs on a particular named branch (for instance, development), only the corresponding environment (development) is applied. An exception is the shared environment, which is only applied when triggered on the production branch. This is because any changes in the shared environment may affect resources in other environments and can have adverse effects if not validated correctly.

Development happens on feature and bug fix branches (which can be named feature/new-foo, bugfix/fix-bar, etc.), and when complete, a pull request (PR) or merge request (MR) can be opened targeting the development branch. This will trigger the CI/CD pipeline to perform a plan and validate against all environments (development, nonproduction, shared, and production). After the code review is complete and changes are validated, this branch can be merged into development. This will trigger a CI/CD pipeline that applies the latest changes in the development branch to the development environment.

After changes are validated in development, they can be promoted to nonproduction by opening a PR or MR targeting the nonproduction branch and merging them. Similarly, changes can be promoted from nonproduction to production.

Policy validation

This repo uses the terraform-tools component of the gcloud CLI to validate the Terraform plans against a library of Google Cloud policies.

The Scorecard bundle was used to create the policy-library folder, with one extra constraint added.

See the policy-library documentation if you need to add more constraints from the samples folder to your configuration, based on your type of workload.

Step 1-org has instructions on the creation of the shared repository to host these policies.

Optional Variables

Some variables used to deploy the steps have default values; check those before deployment to ensure they match your requirements. For more information, there are tables of inputs and outputs for the Terraform modules, each with a detailed description of their variables. Look for variables marked as not required in the Inputs section of these READMEs.

Errata summary

Refer to the errata summary for an overview of the delta between the example foundation repository and the Google Cloud security foundations guide.

Contributing

Refer to the contribution guidelines for information on contributing to this module.
