thecodejunkie/terraform-google-kubernetes-engine

A Terraform module for configuring GKE clusters.

This module handles opinionated Google Cloud Platform Kubernetes Engine cluster creation and configuration with Node Pools, IP MASQ, Network Policy, etc. The resources/services/activations/deletions that this module will create/trigger are:

  • Create a GKE cluster with the provided addons
  • Create GKE Node Pool(s) with the provided configuration and attach them to the cluster
  • Replace the default kube-dns configmap if stub_domains are provided
  • Activate network policy if network_policy is true
  • Add the ip-masq-agent configmap with the provided non_masquerade_cidrs if configure_ip_masq is true (see the sketch after this list)
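
As a sketch of how the DNS and IP-masquerading behaviour above is driven by module inputs (variable names come from the Inputs table below; the domain, resolver addresses, and CIDR ranges are placeholders, and the other required arguments are elided as in the Usage example):

```hcl
module "gke" {
  source = "terraform-google-modules/kubernetes-engine/google"

  # ... project, network, and node pool arguments as in the Usage example below ...

  # Activate the network policy addon.
  network_policy = true

  # Replace the default kube-dns configmap: forward queries for example.com
  # (placeholder domain) to external resolvers (placeholder addresses).
  stub_domains = {
    "example.com" = ["8.8.8.8", "8.8.4.4"]
  }

  # Install the ip-masq-agent configmap with custom non-masqueraded ranges.
  configure_ip_masq    = "true"
  non_masquerade_cidrs = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
```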

Sub modules are provided for creating private clusters, beta private clusters, and beta public clusters as well. Beta sub modules allow for the use of various GKE beta features. See the modules directory for the various sub modules.
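
For example, a private cluster could be created through the corresponding sub module. The sketch below assumes the sub module lives at modules/private-cluster and that it accepts the private-cluster inputs shown; check the modules directory for the exact names:

```hcl
module "gke_private" {
  # Assumed sub module path; see the modules directory for the exact name.
  source = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"

  project_id        = "<PROJECT ID>"
  name              = "gke-private-1"
  region            = "us-central1"
  network           = "vpc-01"
  subnetwork        = "us-central1-01"
  ip_range_pods     = "us-central1-01-gke-01-pods"
  ip_range_services = "us-central1-01-gke-01-services"

  # Private-cluster settings (input names assumed, not taken from this README).
  enable_private_nodes    = true
  enable_private_endpoint = false
  master_ipv4_cidr_block  = "172.16.0.0/28"
}
```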

Compatibility

This module is meant for use with Terraform 0.12. If you haven't upgraded and need a Terraform 0.11.x-compatible version of this module, the last released version intended for Terraform 0.11.x is 3.0.0.
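
If you are still on Terraform 0.11.x, you can pin the module to that last compatible release via the registry version argument; a minimal sketch (remaining arguments as in the Usage example below):

```hcl
module "gke" {
  source  = "terraform-google-modules/kubernetes-engine/google"
  # Last release intended for Terraform 0.11.x (see the note above).
  version = "3.0.0"

  # ... project, network, and node pool arguments as in the Usage example below ...
}
```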

Usage

There are multiple examples included in the examples folder, but simple usage is as follows:

module"gke" {source="terraform-google-modules/kubernetes-engine/google"project_id="<PROJECT ID>"name="gke-test-1"region="us-central1"zones=["us-central1-a","us-central1-b","us-central1-f"]network="vpc-01"subnetwork="us-central1-01"ip_range_pods="us-central1-01-gke-01-pods"ip_range_services="us-central1-01-gke-01-services"http_load_balancing=falsehorizontal_pod_autoscaling=truekubernetes_dashboard=truenetwork_policy=truenode_pools=[    {      name="default-node-pool"      machine_type="n1-standard-2"      min_count=1      max_count=100      disk_size_gb=100      disk_type="pd-standard"      image_type="COS"      auto_repair=true      auto_upgrade=true      service_account="project-service-account@<PROJECT ID>.iam.gserviceaccount.com"      preemptible=false      initial_node_count=80    },  ]node_pools_oauth_scopes={    all= []    default-node-pool= ["https://www.googleapis.com/auth/cloud-platform",    ]  }node_pools_labels={    all= {}    default-node-pool= {      default-node-pool=true    }  }node_pools_metadata={    all= {}    default-node-pool= {      node-pool-metadata-custom-value="my-node-pool"    }  }node_pools_taints={    all= []    default-node-pool= [      {        key="default-node-pool"        value=true        effect="PREFER_NO_SCHEDULE"      },    ]  }node_pools_tags={    all= []    default-node-pool= ["default-node-pool",    ]  }}

Then perform the following commands in the root folder:

  • terraform init to get the plugins
  • terraform plan to see the infrastructure plan
  • terraform apply to apply the infrastructure build
  • terraform destroy to destroy the built infrastructure

Upgrade to v3.0.0

v3.0.0 is a breaking release. Refer to the Upgrading to v3.0 guide for details.

Upgrade to v2.0.0

v2.0.0 is a breaking release. Refer to the Upgrading to v2.0 guide for details.

Upgrade to v1.0.0

Version 1.0.0 of this module introduces a breaking change: adding the disable-legacy-endpoints metadata field to all node pools. This metadata is required by GKE and determines whether the /0.1/ and /v1beta1/ paths are available in the nodes' metadata server. If your applications do not require access to the node's metadata server, you can leave the default value of true provided by the module. If your applications require access to the metadata server, be sure to read the linked documentation to see if you need to set the value for this field to false to allow your applications access to the above metadata server paths.

In either case, upgrading to module version v1.0.0 will trigger a recreation of all node pools in the cluster.
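
If your applications do need those paths, the module exposes this behaviour through the disable_legacy_metadata_endpoints input listed in the table below; a minimal sketch (placeholder values reused from the Usage example):

```hcl
module "gke" {
  source     = "terraform-google-modules/kubernetes-engine/google"
  project_id = "<PROJECT ID>"
  name       = "gke-test-1"
  # ... other arguments as in the Usage example above ...

  # Defaults to true. Set to false only if workloads must reach the
  # /0.1/ and /v1beta1/ metadata server paths; changing this recreates node pools.
  disable_legacy_metadata_endpoints = false
}
```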

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-------:|:--------:|
| basic_auth_password | The password to be used with Basic Authentication. | string | `""` | no |
| basic_auth_username | The username to be used with Basic Authentication. An empty value will disable Basic Authentication, which is the recommended configuration. | string | `""` | no |
| cluster_ipv4_cidr | The IP address range of the kubernetes pods in this cluster. Default is an automatically assigned CIDR. | string | `""` | no |
| cluster_resource_labels | The GCE resource labels (a map of key/value pairs) to be applied to the cluster | map(string) | `<map>` | no |
| configure_ip_masq | Enables the installation of IP masquerading, which is usually no longer required when using aliased IP addresses. IP masquerading uses a kubectl call, so when you have a private cluster, you will need access to the API server. | string | `"false"` | no |
| create_service_account | Defines if the service account specified to run nodes should be created. | bool | `"true"` | no |
| description | The description of the cluster | string | `""` | no |
| disable_legacy_metadata_endpoints | Disable the /0.1/ and /v1beta1/ metadata server endpoints on the node. Changing this value will cause all node pools to be recreated. | bool | `"true"` | no |
| grant_registry_access | Grants the created cluster-specific service account the storage.objectViewer role. | bool | `"false"` | no |
| horizontal_pod_autoscaling | Enable horizontal pod autoscaling addon | bool | `"true"` | no |
| http_load_balancing | Enable HTTP load balancer addon | bool | `"true"` | no |
| initial_node_count | The number of nodes to create in this cluster's default node pool. | number | `"0"` | no |
| ip_masq_link_local | Whether to masquerade traffic to the link-local prefix (169.254.0.0/16). | bool | `"false"` | no |
| ip_masq_resync_interval | The interval at which the agent attempts to sync its ConfigMap file from the disk. | string | `"60s"` | no |
| ip_range_pods | The name of the secondary subnet IP range to use for pods | string | n/a | yes |
| ip_range_services | The name of the secondary subnet range to use for services | string | n/a | yes |
| issue_client_certificate | Issues a client certificate to authenticate to the cluster endpoint. To maximize the security of your cluster, leave this option disabled. Client certificates don't automatically rotate and aren't easily revocable. WARNING: changing this after cluster creation is destructive! | bool | `"false"` | no |
| kubernetes_dashboard | Enable kubernetes dashboard addon | bool | `"false"` | no |
| kubernetes_version | The Kubernetes version of the masters. If set to 'latest' it will pull the latest available version in the selected region. | string | `"latest"` | no |
| logging_service | The logging service that the cluster should write logs to. Available options include logging.googleapis.com, logging.googleapis.com/kubernetes (beta), and none | string | `"logging.googleapis.com"` | no |
| maintenance_start_time | Time window specified for daily maintenance operations in RFC3339 format | string | `"05:00"` | no |
| master_authorized_networks_config | The desired configuration options for master authorized networks. The object format is {cidr_blocks = list(object({cidr_block = string, display_name = string}))}. Omit the nested cidr_blocks attribute to disallow external access (except the cluster node IPs, which GKE automatically whitelists). See the example after this table. | object | `<list>` | no |
| monitoring_service | The monitoring service that the cluster should write metrics to. Automatically sends metrics from pods in the cluster to the Google Cloud Monitoring API. VM metrics will be collected by Google Compute Engine regardless of this setting. Available options include monitoring.googleapis.com, monitoring.googleapis.com/kubernetes (beta), and none | string | `"monitoring.googleapis.com"` | no |
| name | The name of the cluster (required) | string | n/a | yes |
| network | The VPC network to host the cluster in (required) | string | n/a | yes |
| network_policy | Enable network policy addon | bool | `"false"` | no |
| network_policy_provider | The network policy provider. | string | `"CALICO"` | no |
| network_project_id | The project ID of the shared VPC's host (for Shared VPC support) | string | `""` | no |
| node_pools | List of maps containing node pools | list(map(string)) | `<list>` | no |
| node_pools_labels | Map of maps containing node labels by node-pool name | map(map(string)) | `<map>` | no |
| node_pools_metadata | Map of maps containing node metadata by node-pool name | map(map(string)) | `<map>` | no |
| node_pools_oauth_scopes | Map of lists containing node oauth scopes by node-pool name | map(list(string)) | `<map>` | no |
| node_pools_tags | Map of lists containing node network tags by node-pool name | map(list(string)) | `<map>` | no |
| node_version | The Kubernetes version of the node pools. Defaults to the kubernetes_version (master) variable and can be overridden for individual node pools by setting the version key on them. Must be empty or set to the same version as the master at cluster creation. | string | `""` | no |
| non_masquerade_cidrs | List of strings in CIDR notation that specify the IP address ranges that do not use IP masquerading. | list(string) | `<list>` | no |
| project_id | The project ID to host the cluster in (required) | string | n/a | yes |
| region | The region to host the cluster in (optional if zonal cluster / required if regional) | string | `"null"` | no |
| regional | Whether this is a regional cluster (zonal cluster if set to false. WARNING: changing this after cluster creation is destructive!) | bool | `"true"` | no |
| registry_project_id | Project holding the Google Container Registry. If empty, we use the cluster project. If grant_registry_access is true, the storage.objectViewer role is assigned on this project. | string | `""` | no |
| remove_default_node_pool | Remove the default node pool while setting up the cluster | bool | `"false"` | no |
| service_account | The service account to run nodes as if not overridden in node_pools. The create_service_account variable default value (true) will cause a cluster-specific service account to be created. | string | `""` | no |
| skip_provisioners | Flag to skip all local-exec provisioners. It breaks the stub_domains and upstream_nameservers variables' functionality. | bool | `"false"` | no |
| stub_domains | Map of stub domains and their resolvers to forward DNS queries for a certain domain to an external DNS server | map(list(string)) | `<map>` | no |
| subnetwork | The subnetwork to host the cluster in (required) | string | n/a | yes |
| upstream_nameservers | If specified, the values replace the nameservers taken by default from the node's /etc/resolv.conf | list | `<list>` | no |
| zones | The zones to host the cluster in (optional if regional cluster / required if zonal) | list(string) | `<list>` | no |
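
As a worked example of the object format documented for master_authorized_networks_config above (the CIDR block and display name are placeholders):

```hcl
module "gke" {
  # ... other arguments as in the Usage example ...

  master_authorized_networks_config = [
    {
      cidr_blocks = [
        {
          # Only this range (placeholder) may reach the cluster master externally.
          cidr_block   = "203.0.113.0/24"
          display_name = "office"
        },
      ]
    },
  ]
}
```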

Outputs

| Name | Description |
|------|-------------|
| ca_certificate | Cluster ca certificate (base64 encoded) |
| endpoint | Cluster endpoint |
| horizontal_pod_autoscaling_enabled | Whether horizontal pod autoscaling is enabled |
| http_load_balancing_enabled | Whether http load balancing is enabled |
| kubernetes_dashboard_enabled | Whether the kubernetes dashboard is enabled |
| location | Cluster location (region if regional cluster, zone if zonal cluster) |
| logging_service | Logging service used |
| master_authorized_networks_config | Networks from which access to the master is permitted |
| master_version | Current master kubernetes version |
| min_master_version | Minimum master kubernetes version |
| monitoring_service | Monitoring service used |
| name | Cluster name |
| network_policy_enabled | Whether network policy is enabled |
| node_pools_names | List of node pool names |
| node_pools_versions | List of node pool versions |
| region | Cluster region |
| service_account | The service account to default running nodes as if not overridden in node_pools. |
| type | Cluster type (regional / zonal) |
| zones | List of zones in which the cluster resides |
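
A common follow-up is to point the Kubernetes provider at the new cluster using the endpoint and ca_certificate outputs above; a sketch assuming the Google provider is already configured and a pre-2.0 Kubernetes provider (which accepts load_config_file):

```hcl
# Fetch an OAuth access token for the credentials running Terraform.
data "google_client_config" "default" {}

provider "kubernetes" {
  load_config_file       = false
  host                   = "https://${module.gke.endpoint}"
  token                  = data.google_client_config.default.access_token
  cluster_ca_certificate = base64decode(module.gke.ca_certificate)
}
```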

Requirements

Before this module can be used on a project, you must ensure that the following prerequisites are fulfilled:

  1. Terraform and kubectl are installed on the machine where Terraform is executed.
  2. The Service Account you execute the module with has the right permissions.
  3. The Compute Engine and Kubernetes Engine APIs are active on the project you will launch the cluster in.
  4. If you are using a Shared VPC, the APIs must also be activated on the Shared VPC host project and your service account needs the proper permissions there.

The project factory can be used to provision projects with the correct APIs active and the necessary Shared VPC connections.

Software Dependencies

Kubectl

Terraform and Plugins

Configure a Service Account

In order to execute this module you must have a Service Account with the following project roles (a Terraform sketch of these grants follows below):

  • roles/compute.viewer
  • roles/container.clusterAdmin
  • roles/container.developer
  • roles/iam.serviceAccountAdmin
  • roles/iam.serviceAccountUser
  • roles/resourcemanager.projectIamAdmin (only required if service_account is set to create)

Additionally, if service_account is set to create and grant_registry_access is requested, the service account requires the following role on the registry_project_id project:

  • roles/resourcemanager.projectIamAdmin
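
If you manage these grants with Terraform as well, a sketch using the standard google_project_iam_member resource might look like the following (the service account email and project ID are placeholders):

```hcl
resource "google_project_iam_member" "gke_module_roles" {
  # Base roles listed above; add roles/resourcemanager.projectIamAdmin
  # only if service_account is set to create.
  for_each = toset([
    "roles/compute.viewer",
    "roles/container.clusterAdmin",
    "roles/container.developer",
    "roles/iam.serviceAccountAdmin",
    "roles/iam.serviceAccountUser",
  ])

  project = "<PROJECT ID>"
  role    = each.value
  # Placeholder identity used to run Terraform.
  member  = "serviceAccount:terraform@<PROJECT ID>.iam.gserviceaccount.com"
}
```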

Enable APIs

In order to operate with the Service Account you must activate the following APIs on the project where the Service Account was created (a Terraform sketch follows the list):

  • Compute Engine API - compute.googleapis.com
  • Kubernetes Engine API - container.googleapis.com
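
If you prefer to enable these APIs with Terraform rather than through the console, a minimal sketch using the standard google_project_service resource (project ID is a placeholder):

```hcl
resource "google_project_service" "gke_apis" {
  for_each = toset([
    "compute.googleapis.com",   # Compute Engine API
    "container.googleapis.com", # Kubernetes Engine API
  ])

  project = "<PROJECT ID>"
  service = each.value

  # Leave the APIs enabled if this resource is ever destroyed.
  disable_on_destroy = false
}
```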

File structure

The project has the following folders and files:

  • /: root folder
  • /examples: Examples for using this module and sub module.
  • /helpers: Helper scripts.
  • /scripts: Scripts for specific tasks on the module (see the Infrastructure section in this file).
  • /test: Folders with files for testing the module (see the Testing section in this file).
  • /main.tf: Main file for the public module; contains all the resources to create.
  • /variables.tf: Variables for the public cluster module.
  • /output.tf: The outputs for the public cluster module.
  • /README.MD: This file.
  • /modules: Private and beta sub modules.

