terraform-google-modules/terraform-google-kubernetes-engine

 
 


This module handles opinionated Google Cloud Platform Kubernetes Engine cluster creation and configuration with Node Pools, IP MASQ, Network Policy, etc. The resources/services/activations/deletions that this module will create/trigger are:

  • Create a GKE cluster with the provided addons
  • Create GKE Node Pool(s) with provided configuration and attach to cluster
  • Replace the default kube-dns configmap if stub_domains are provided
  • Activate network policy if network_policy is true
  • Add ip-masq-agent configmap with provided non_masquerade_cidrs if network_policy is true
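
The DNS and masquerading behaviour above is driven entirely by module inputs. As a rough sketch (the domain, resolver IPs, and CIDR below are placeholders, and the required arguments are abbreviated; see the full Usage example below):

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  # ... required arguments (project_id, name, region, network, subnetwork,
  # ip_range_pods, ip_range_services) as in the Usage example below ...

  network_policy       = true
  non_masquerade_cidrs = ["10.0.0.0/8"]                         # placeholder CIDR

  stub_domains = {
    "example.com" = ["10.254.154.11", "10.254.154.12"]          # placeholder resolvers
  }
}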

Usage

There are multiple examples included in the examples folder but simple usage is as follows:

module"gke" {source="terraform-google-modules/kubernetes-engine/google"project_id="<PROJECT ID>"name="gke-test-1"region="us-central1"zones=["us-central1-a","us-central1-b","us-central1-f"]network="vpc-01"subnetwork="us-central1-01"ip_range_pods="us-central1-01-gke-01-pods"ip_range_services="us-central1-01-gke-01-services"http_load_balancing=falsehorizontal_pod_autoscaling=truekubernetes_dashboard=truenetwork_policy=truenode_pools=[    {      name="default-node-pool"      machine_type="n1-standard-2"      min_count=1      max_count=100      disk_size_gb=100      disk_type="pd-standard"      image_type="COS"      auto_repair=true      auto_upgrade=true      service_account="project-service-account@<PROJECT ID>.iam.gserviceaccount.com"      preemptible=false      initial_node_count=80    },  ]node_pools_labels={    all= {}    default-node-pool= {      default-node-pool="true"    }  }node_pools_metadata={    all= {}    default-node-pool= {      node-pool-metadata-custom-value="my-node-pool"    }  }node_pools_taints={    all= []    default-node-pool= [      {        key="default-node-pool"        value="true"        effect="PREFER_NO_SCHEDULE"      },    ]  }node_pools_tags={    all= []    default-node-pool= ["default-node-pool",    ]  }}

Then perform the following commands on the root folder:

  • terraform init to get the plugins
  • terraform plan to see the infrastructure plan
  • terraform apply to apply the infrastructure build
  • terraform destroy to destroy the built infrastructure

Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the disable-legacy-endpoints metadata field to all node pools. This metadata is required by GKE and determines whether the /0.1/ and /v1beta1/ paths are available in the nodes' metadata server. If your applications do not require access to the node's metadata server, you can leave the default value of true provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to false to allow your applications access to the above metadata server paths.

In either case, upgrading to module version v1.0.0 will trigger a recreation of all node pools in the cluster.
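
If your workloads do need the legacy paths, the module exposes this through the disable_legacy_metadata_endpoints input (see Inputs below). A minimal sketch, with all other arguments as in the Usage example:

module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 1.0"

  # ... all other arguments as in the Usage example above ...

  # Keep the /0.1/ and /v1beta1/ metadata server paths available on the nodes.
  disable_legacy_metadata_endpoints = "false"
}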

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| description | The description of the cluster | string | "" | no |
| disable_legacy_metadata_endpoints | Disable the /0.1/ and /v1beta1/ metadata server endpoints on the node. Changing this value will cause all node pools to be recreated. | string | "true" | no |
| horizontal_pod_autoscaling | Enable horizontal pod autoscaling addon | string | "true" | no |
| http_load_balancing | Enable HTTP load balancer addon | string | "true" | no |
| ip_masq_link_local | Whether to masquerade traffic to the link-local prefix (169.254.0.0/16) | string | "false" | no |
| ip_masq_resync_interval | The interval at which the agent attempts to sync its ConfigMap file from the disk | string | "60s" | no |
| ip_range_pods | The name of the secondary subnet IP range to use for pods | string | n/a | yes |
| ip_range_services | The name of the secondary subnet range to use for services | string | n/a | yes |
| kubernetes_dashboard | Enable Kubernetes dashboard addon | string | "false" | no |
| kubernetes_version | The Kubernetes version of the masters. If set to 'latest' it will pull the latest available version in the selected region. | string | "latest" | no |
| logging_service | The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none | string | "logging.googleapis.com" | no |
| maintenance_start_time | Time window specified for daily maintenance operations in RFC3339 format | string | "05:00" | no |
| master_authorized_networks_config | The desired configuration options for master authorized networks. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists). Example format: master_authorized_networks_config = [{ cidr_blocks = [{ cidr_block = "10.0.0.0/8" display_name = "example_network" }], }] | list | <list> | no |
| monitoring_service | The monitoring service that the cluster should write metrics to. Automatically sends metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none | string | "monitoring.googleapis.com" | no |
| name | The name of the cluster (required) | string | n/a | yes |
| network | The VPC network to host the cluster in (required) | string | n/a | yes |
| network_policy | Enable network policy addon | string | "false" | no |
| network_project_id | The project ID of the shared VPC's host (for shared VPC support) | string | "" | no |
| node_pools | List of maps containing node pools | list | <list> | no |
| node_pools_labels | Map of maps containing node labels by node-pool name | map | <map> | no |
| node_pools_metadata | Map of maps containing node metadata by node-pool name | map | <map> | no |
| node_pools_tags | Map of lists containing node network tags by node-pool name | map | <map> | no |
| node_pools_taints | Map of lists containing node taints by node-pool name | map | <map> | no |
| node_version | The Kubernetes version of the node pools. Defaults to the kubernetes_version (master) variable and can be overridden for individual node pools by setting the version key on them. Must be empty or set the same as the master at cluster creation. | string | "" | no |
| non_masquerade_cidrs | List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading | list | <list> | no |
| project_id | The project ID to host the cluster in (required) | string | n/a | yes |
| region | The region to host the cluster in (required) | string | n/a | yes |
| regional | Whether this is a regional cluster (zonal cluster if set to false. WARNING: changing this after cluster creation is destructive!) | string | "true" | no |
| remove_default_node_pool | Remove the default node pool while setting up the cluster | string | "false" | no |
| service_account | The service account to run nodes as if not overridden in node_pools. Defaults to the Compute Engine default service account. May also specify create to automatically create a cluster-specific service account | string | "" | no |
| stub_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map | <map> | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| zones | The zones to host the cluster in (optional if regional cluster / required if zonal) | list | <list> | no |

Outputs

| Name | Description |
|------|-------------|
| ca_certificate | Cluster ca certificate (base64 encoded) |
| endpoint | Cluster endpoint |
| horizontal_pod_autoscaling_enabled | Whether horizontal pod autoscaling enabled |
| http_load_balancing_enabled | Whether http load balancing enabled |
| kubernetes_dashboard_enabled | Whether kubernetes dashboard enabled |
| location | Cluster location (region if regional cluster, zone if zonal cluster) |
| logging_service | Logging service used |
| master_authorized_networks_config | Networks from which access to master is permitted |
| master_version | Current master kubernetes version |
| min_master_version | Minimum master kubernetes version |
| monitoring_service | Monitoring service used |
| name | Cluster name |
| network_policy_enabled | Whether network policy enabled |
| node_pools_names | List of node pools names |
| node_pools_versions | List of node pools versions |
| region | Cluster region |
| service_account | The service account to default running nodes as if not overridden in node_pools |
| type | Cluster type (regional / zonal) |
| zones | List of zones in which the cluster resides |
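
A common pattern, not prescribed by this module, is to feed the endpoint and ca_certificate outputs into a Kubernetes provider so Kubernetes resources can be managed from the same configuration; a hedged sketch:

data "google_client_config" "default" {}

provider "kubernetes" {
  host                   = "https://${module.gke.endpoint}"
  token                  = "${data.google_client_config.default.access_token}"
  cluster_ca_certificate = "${base64decode(module.gke.ca_certificate)}"
}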

Requirements

Before this module can be used on a project, you must ensure that the following pre-requisites are fulfilled:

  1. Terraform and kubectl are installed on the machine where Terraform is executed.
  2. The Service Account you execute the module with has the right permissions.
  3. The Compute Engine and Kubernetes Engine APIs are active on the project you will launch the cluster in.
  4. If you are using a Shared VPC, the APIs must also be activated on the Shared VPC host project and your service account needs the proper permissions there.
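
For the Shared VPC case, the module is pointed at the host project's network via the network_project_id input. A rough sketch (all project and network names are placeholders):

module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  # ... other arguments as in the Usage example above ...

  project_id         = "service-project-id"        # placeholder: project that owns the cluster
  network_project_id = "host-project-id"           # placeholder: Shared VPC host project
  network            = "shared-vpc-network"        # placeholder
  subnetwork         = "shared-vpc-subnetwork"     # placeholder
  ip_range_pods      = "shared-vpc-pods-range"     # placeholder secondary range name
  ip_range_services  = "shared-vpc-services-range" # placeholder secondary range name
}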

The project factory can be used to provision projects with the correct APIs active and the necessary Shared VPC connections.

Software Dependencies

Kubectl

Terraform and Plugins

Configure a Service Account

In order to execute this module you must have a Service Account with the following project roles:

  • roles/compute.viewer
  • roles/container.clusterAdmin
  • roles/container.developer
  • roles/iam.serviceAccountAdmin
  • roles/iam.serviceAccountUser
  • roles/resourcemanager.projectIamAdmin (only required if service_account is set to create)
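
If you manage that service account's permissions with Terraform, the roles above can be granted with google_project_iam_member resources. This is a sketch only; the project ID and service account email are placeholders:

variable "gke_module_roles" {
  type = "list"

  default = [
    "roles/compute.viewer",
    "roles/container.clusterAdmin",
    "roles/container.developer",
    "roles/iam.serviceAccountAdmin",
    "roles/iam.serviceAccountUser",
  ]
}

# Grant each role to the service account that will run this module.
resource "google_project_iam_member" "gke_module" {
  count   = "${length(var.gke_module_roles)}"
  project = "my-project"                                                      # placeholder project ID
  role    = "${var.gke_module_roles[count.index]}"
  member  = "serviceAccount:terraform-gke@my-project.iam.gserviceaccount.com" # placeholder SA email
}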

Enable APIs

In order to operate with the Service Account you must activate the following APIs on the project where the Service Account was created:

  • Compute Engine API - compute.googleapis.com
  • Kubernetes Engine API - container.googleapis.com
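
The APIs can be enabled in the Cloud Console, with gcloud, or with Terraform itself via the google_project_service resource; a minimal sketch (the project ID is a placeholder):

resource "google_project_service" "compute" {
  project = "my-project"              # placeholder project ID
  service = "compute.googleapis.com"
}

resource "google_project_service" "container" {
  project = "my-project"              # placeholder project ID
  service = "container.googleapis.com"
}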

File structure

The project has the following folders and files:

  • /: root folder
  • /examples: examples for using this module
  • /helpers: Helper scripts
  • /scripts: Scripts for specific tasks on module (see Infrastructure section on this file)
  • /test: Folders with files for testing the module (see Testing section on this file)
  • /main.tf: main file for this module, contains all the resources to create
  • /variables.tf: all the variables for the module
  • /output.tf: the outputs of the module
  • /readme.MD: this file

Templating

To more cleanly handle cases where desired functionality would require complex duplication of Terraform resources (i.e. PR 51), this repository is largely generated from the autogen directory.

The root module is generated by running make generate. Changes to this repository should be made in the autogen directory where appropriate.

Testing

Requirements

Autogeneration of documentation from .tf files

Run

make generate_docs

Integration test

Integration tests are run through test-kitchen, kitchen-terraform, and InSpec.

Six test-kitchen instances are defined:

  • deploy-service
  • node-pool
  • shared-vpc
  • simple-regional
  • simple-zonal
  • stub-domains

The test-kitchen instances in test/fixtures/ wrap identically-named examples in the examples/ directory.

Setup

  1. Configure the test fixtures.
  2. Download a Service Account key with the necessary permissions and put it in the module's root directory with the name credentials.json.
    • Requires the permissions to run the module
    • Requires roles/compute.networkAdmin to create the test suite's networks
    • Requires roles/resourcemanager.projectIamAdmin since service account creation is tested
  3. Build the Docker container for testing:

make docker_build_kitchen_terraform

  4. Run the testing container in interactive mode:

make docker_run

The module root directory will be loaded into the Docker container at /cft/workdir/.

  5. Run kitchen-terraform to test the infrastructure:

  1. kitchen create creates Terraform state and downloads modules, if applicable.
  2. kitchen converge creates the underlying resources. Run kitchen converge <INSTANCE_NAME> to create resources for a specific test case.
  3. Run kitchen converge again. This is necessary due to an oddity in how networkPolicyConfig is handled by the upstream API. (See #72 for details.)
  4. kitchen verify tests the created infrastructure. Run kitchen verify <INSTANCE_NAME> to run a specific test case.
  5. kitchen destroy tears down the underlying resources created by kitchen converge. Run kitchen destroy <INSTANCE_NAME> to tear down resources for a specific test case.

Alternatively, you can simply run make test_integration_docker to run all the test steps non-interactively.

If you wish to parallelize running the test suites, it is also possible to offload the work onto Concourse to run each test suite for you using the command make test_integration_concourse. The .concourse directory will be created and contain all of the logs from the running test suites.

When running tests locally, you will need to use your own test project environment. You can configure your environment by setting all of the following variables:

export COMPUTE_ENGINE_SERVICE_ACCOUNT="<EXISTING_SERVICE_ACCOUNT>"
export PROJECT_ID="<PROJECT_TO_USE>"
export REGION="<REGION_TO_USE>"
export ZONES='["<LIST_OF_ZONES_TO_USE>"]'
export SERVICE_ACCOUNT_JSON="$(cat "<PATH_TO_SERVICE_ACCOUNT_JSON>")"
export CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE="<PATH_TO_SERVICE_ACCOUNT_JSON>"
export GOOGLE_APPLICATION_CREDENTIALS="<PATH_TO_SERVICE_ACCOUNT_JSON>"

Test configuration

Each test-kitchen instance is configured with a variables.tfvars file in the test fixture directory, e.g. test/fixtures/node_pool/terraform.tfvars. For convenience, since all of the variables are project-specific, these files have been symlinked to test/fixtures/shared/terraform.tfvars. Similarly, each test fixture has a variables.tf to define these variables, and an outputs.tf to facilitate providing the necessary information for inspec to locate and query against the created resources.
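
The shared tfvars file carries the same project-specific values as the environment variables above; a purely illustrative sketch (variable names and values are hypothetical, not copied from the fixtures):

project_id                     = "my-test-project"                                    # placeholder
region                         = "us-central1"                                        # placeholder
zones                          = ["us-central1-a", "us-central1-b"]                   # placeholder
compute_engine_service_account = "gke-test@my-test-project.iam.gserviceaccount.com"   # placeholder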

Each test-kitchen instance creates a GCP Network and Subnetwork fixture to house resources, and may create any other necessary fixture data as needed.
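
The cluster module itself expects such a network to already exist: a VPC subnetwork with two secondary ranges whose names are passed as ip_range_pods and ip_range_services. A rough sketch of that prerequisite (names and CIDRs are placeholders, not the fixture's actual values):

resource "google_compute_network" "gke" {
  name                    = "vpc-01"                  # placeholder
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "gke" {
  name          = "us-central1-01"                    # placeholder
  ip_cidr_range = "10.0.0.0/17"                       # placeholder primary range
  region        = "us-central1"
  network       = "${google_compute_network.gke.self_link}"

  secondary_ip_range {
    range_name    = "us-central1-01-gke-01-pods"      # passed to the module as ip_range_pods
    ip_cidr_range = "192.168.0.0/18"                  # placeholder
  }

  secondary_ip_range {
    range_name    = "us-central1-01-gke-01-services"  # passed to the module as ip_range_services
    ip_cidr_range = "192.168.64.0/18"                 # placeholder
  }
}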


Linting

The makefile in this project will lint or sometimes just format any shell, Python, golang, Terraform, or Dockerfiles. The linters will only be run if the makefile finds files with the appropriate file extension.

All of the linter checks are in the default make target, so you just have to run

make -s

The -s is for 'silent'. Successful output looks like this:

Running shellcheck
Running flake8
Running go fmt and go vet
Running terraform validate
Running hadolint on Dockerfiles
Checking for required files
Testing the validity of the header check
..
----------------------------------------------------------------------
Ran 2 tests in 0.026s

OK
Checking file headers
The following lines have trailing whitespace

The linters are as follows:

  • Shell - shellcheck. Can be found in homebrew
  • Python - flake8. Can be installed with 'pip install flake8'
  • Golang - gofmt. gofmt comes with the standard golang installation. golang is a compiled language so there is no standard linter.
  • Terraform - terraform has a built-in linter in the 'terraform validate' command.
  • Dockerfiles - hadolint. Can be found in homebrew
