refactor: one default common perimeter#1429


Open
mariammartins wants to merge 120 commits into terraform-google-modules:main
base: main
from mariammartins:refactor/one-default-common-perimeter

Conversation

@mariammartins
Contributor

This PR refactors VPC Service Controls to create a single common perimeter containing all projects except the Cloud Build project.
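For orientation, a perimeter of this shape might look roughly like the sketch below. This is a minimal illustration, not the module's actual interface: the access_policy_id and project_numbers variables and the restricted-services list are assumptions.

resource "google_access_context_manager_service_perimeter" "common" {
  parent = "accessPolicies/${var.access_policy_id}"
  name   = "accessPolicies/${var.access_policy_id}/servicePerimeters/common"
  title  = "common"

  status {
    # Every project except the Cloud Build project joins the perimeter.
    resources           = [for p in var.project_numbers : "projects/${p}"]
    restricted_services = ["storage.googleapis.com", "bigquery.googleapis.com"]
  }
}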

@mariammartins
Contributor · Author

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request refactors the VPC Service Controls to use a single, common perimeter, which is a positive architectural simplification. The changes are extensive, touching documentation, Terraform configurations, and helper scripts to align with this new model. However, the implementation introduces some significant issues. There's a critical conflict in the Cloud Build private pool networking where both a managed Cloud NAT and a self-managed NAT VM are configured, which needs to be resolved. More importantly, the new perimeter configuration relies on placeholders in the Terraform code that must be manually updated using scripts in a later step. This approach is brittle, error-prone, and undermines the declarative nature of Infrastructure as Code. I've raised a critical issue to address this architectural flaw. Additionally, there are medium-severity concerns regarding hardcoded IP ranges and the use of a git branch reference instead of a specific tag or commit for a module source.
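As a point of comparison for the placeholder concern: a placeholder-and-script pattern can usually be replaced with plain Terraform inputs, so that plan/apply remains the single source of truth. A minimal sketch, assuming a hypothetical perimeter_projects variable that is not part of this PR's actual interface:

variable "perimeter_projects" {
  description = "Project numbers to place inside the common perimeter."
  type        = list(string)
}

locals {
  # Rendered declaratively at plan time, instead of a script pass
  # rewriting a REPLACE_ME placeholder after the fact.
  perimeter_resources = [for p in var.perimeter_projects : "projects/${p}"]
}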

Comment on lines +215 to 323
resource"google_compute_address""cloud_build_nat" {
project=var.project_id
address_type="EXTERNAL"
name="cloud-build-nat"
network_tier="PREMIUM"
region="us-central1"

depends_on=[module.peered_network]
}

allow= [{
protocol="all"
ports=null
}]
resource"google_compute_instance""vm-proxy" {
project=var.project_id
name="cloud-build-nat-vm"
machine_type="e2-medium"
zone="us-central1-a"

deny= []
tags=["direct-gateway-access","nat-gateway"]

log_config= {
metadata="INCLUDE_ALL_METADATA"
boot_disk {
initialize_params {
image="debian-cloud/debian-12"
}
}]
}

network_interface {
network=local.peered_network_name
subnetwork="sb-b-cbpools-${var.private_worker_pool.region}"
subnetwork_project=var.project_id

}

can_ip_forward=true

// This script configures the VM to do IP Forwarding
metadata_startup_script="sysctl -w net.ipv4.ip_forward=1 && iptables -t nat -A POSTROUTING -o $(ip addr show scope global | head -1 | awk -F: '{print $2}') -j MASQUERADE"

service_account {
scopes=["cloud-platform"]
}

depends_on=[
resource.google_compute_router_nat.cb-nat,
module.peered_network
]
}

# This route will route packets to the NAT VM
resource"google_compute_route""through-nat" {
name="through-nat-range1"
project=var.project_id
dest_range="0.0.0.0/1"
network=local.peered_network_name
next_hop_instance=google_compute_instance.vm-proxy.id
priority=10
}

resource"google_compute_route""through-nat2" {
name="through-nat-range2"
project=var.project_id
dest_range="128.0.0.0/1"
network=local.peered_network_name
next_hop_instance=google_compute_instance.vm-proxy.id
priority=10
}

# This route allow the NAT VM to reach the internet with it's external IP address

resource"google_compute_route""direct-to-gateway" {
name="direct-to-gateway-range1"
project=var.project_id
dest_range="0.0.0.0/1"
network=local.peered_network_name
next_hop_gateway="default-internet-gateway"
tags=["direct-gateway-access"]
priority=5
}

resource"google_compute_route""direct-to-gateway2" {
name="direct-to-gateway-range2"
project=var.project_id
dest_range="128.0.0.0/1"
network=local.peered_network_name
next_hop_gateway="default-internet-gateway"
tags=["direct-gateway-access"]
priority=5
}

# Cloud Router
resource"google_compute_router""cb-router" {
name="cb-cloud-router"
network=local.peered_network_name
region="us-central1"
project=var.project_id
}

# Cloud NAT
resource"google_compute_router_nat""cb-nat" {
project=var.project_id
name="cb-cloud-nat"
router=google_compute_router.cb-router.name
region=google_compute_router.cb-router.region
nat_ip_allocate_option="AUTO_ONLY"
source_subnetwork_ip_ranges_to_nat="ALL_SUBNETWORKS_ALL_IP_RANGES"

log_config {
enable=true
filter="ALL"
}
}


critical

This configuration sets up both a managed Cloud NAT (google_compute_router_nat) and a self-managed NAT instance (google_compute_instance.vm-proxy) for the same VPC. This is redundant, increases complexity and cost, and could lead to unpredictable egress routing. The managed Cloud NAT is the recommended solution for providing NAT services in GCP. I strongly suggest removing the self-managed NAT VM resources (google_compute_address.cloud_build_nat, google_compute_instance.vm-proxy, and the associated google_compute_route resources) and relying solely on the managed Cloud NAT.
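Concretely, adopting this suggestion would mean deleting the address, instance, and route resources and keeping only the managed pair already present in the hunk. A sketch of the retained egress configuration, with the region hardcoded to us-central1 as in the original:

# Managed egress only: the NAT VM, its external address, and the
# custom routes are gone; Cloud NAT handles all outbound traffic.
resource "google_compute_router" "cb-router" {
  name    = "cb-cloud-router"
  network = local.peered_network_name
  region  = "us-central1"
  project = var.project_id
}

resource "google_compute_router_nat" "cb-nat" {
  project                            = var.project_id
  name                               = "cb-cloud-nat"
  router                             = google_compute_router.cb-router.name
  region                             = google_compute_router.cb-router.region
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  log_config {
    enable = true
    filter = "ALL"
  }
}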

Comment on lines +20 to +38
nat_proxy_vm_ip_range = "10.1.1.0/24"

single_project_network = {
  subnet_name           = "eab-develop-us-central1"
  subnet_ip             = "10.1.20.0/24"
  subnet_region         = var.subnet_region
  subnet_private_access = true
}

single_project_secondary = {
  "eab-develop-us-central1" = [
    {
      range_name    = "eab-develop-us-central1-secondary-01"
      ip_cidr_range = "192.168.0.0/18"
    },
    {
      range_name    = "eab-develop-us-central1-secondary-02"
      ip_cidr_range = "192.168.64.0/18"
    },
  ]
}


medium

The IP ranges for nat_proxy_vm_ip_range, single_project_network, and single_project_secondary are hardcoded within the locals block. This reduces the module's flexibility and reusability. It's recommended to define these as input variables, with the current values set as defaults, to allow for easier customization without modifying the module's core logic.
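One way to implement this suggestion, sketched under the assumption that the surrounding module already declares subnet_region; the variable names and the region-derived subnet name are illustrative:

variable "nat_proxy_vm_ip_range" {
  description = "CIDR range for the NAT proxy VM subnet."
  type        = string
  default     = "10.1.1.0/24"
}

variable "single_project_subnet_ip" {
  description = "Primary CIDR range for the single-project subnet."
  type        = string
  default     = "10.1.20.0/24"
}

locals {
  # Variable defaults cannot reference other variables,
  # so the region-dependent fields are assembled here.
  single_project_network = {
    subnet_name           = "eab-develop-${var.subnet_region}"
    subnet_ip             = var.single_project_subnet_ip
    subnet_region         = var.subnet_region
    subnet_private_access = true
  }
}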


Reviewers

@eeaton · Awaiting requested review from eeaton (code owner)

@rjerrems · Awaiting requested review from rjerrems (code owner)

@sleighton2022 · Awaiting requested review from sleighton2022 (code owner)

1 more reviewer

@gemini-code-assist[bot] left review comments

Reviewers whose approvals may not affect merge requirements

At least 1 approving review is required to merge this pull request.

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

1 participant

@mariammartins
