
A terraform configuration to create an AWS EKS / DigitalOcean Kubernetes cluster and connect to GitLab [PoC]

lazy-orange/staging.lazyorange.xyz


A terraform configuration to create an AWS EKS cluster that can be connected to your group clusters and then used across projects with the Auto DevOps feature.

Under the hood, this module uses the awesome AWS EKS module by CloudPosse ☁️ and many other modules that make this project possible ❤️.

Motivation

Managing a Kubernetes cluster and its dependencies can be hard. Once you are familiar with the main features provided by GitLab — its integration with AWS EKS and Google Kubernetes Engine and the Auto DevOps feature, via the GitLab UI — you will want to manage the cluster's dependencies, such as the NGINX Ingress Controller, CertManager, GitLab Runner, etc., through GitOps, with tools that you love and use on a daily basis, such as Helm and helmfile, and with custom values and settings that fit your needs.

Think about this project as a collection of the services you would need to deploy on top of your Kubernetes cluster to enable logging, monitoring, certificate management, automatic discovery of Kubernetes resources via public DNS servers and other common infrastructure needs.

It is assumed that you will use a separate terraform configuration per environment (e.g. staging, production). This module was designed to be used within the staging environment. To use it for a production environment, clone this repo, rename it to production.your_domain, change a few terraform variables, and push the changes to a remote origin.

The module supports the following:

  • The module creates an AWS EKS cluster and adds it to the group Kubernetes clusters
  • IAM Role with a specified policy for the cluster-autoscaler service account in the kube-system namespace
  • Dedicated node group for GitLab Runner with a scale-to-zero node group configuration
  • A collection of helmfiles to set up GitLab Runner, Cluster AutoScaler, CertManager, and the NGINX Ingress Controller

Other benefits:

  • GitLab Runner, CertManager, and other helm charts can be enabled or disabled via environment variables
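As an illustration, toggling a release via an environment variable in helmfile could look like the sketch below. The repository URL, release name, and layout are assumptions for the example, not this repo's actual helmfiles:

```yaml
# Hypothetical helmfile.yaml sketch -- not this project's exact layout
repositories:
  - name: gitlab
    url: https://charts.gitlab.io

releases:
  - name: gitlab-runner
    namespace: gitlab-runner
    chart: gitlab/gitlab-runner
    # enabled or disabled by the GITLAB_RUNNER_INSTALLED environment variable
    installed: {{ env "GITLAB_RUNNER_INSTALLED" | default "false" }}
```

helmfile's `installed:` field uninstalls the release when it evaluates to `false`, which is what makes a single CI/CD variable enough to switch a component on or off.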

Note: the Helm chart used by default in Auto DevOps pipelines does not support Kubernetes 1.16+ yet.

Cold start

You should perform the manual steps described below before running the CI pipeline.

Prerequisites

Local development

AWS AMI image with preconfigured tools (built using Packer)

This image is a good fit when you are going to run Kubernetes on the AWS cloud. See the commands below to build your own AMI image.

Currently, building the image is supported only in the eu-central-1 region.

```
set -a && source .env.example && set +a
packer build packer.json
```

Then run an instance using the image that you have just built, clone this repo and start building your infrastructure.

DNS requirements

In addition to the requirements listed above, a domain name is also required for setting up Ingress endpoints to services running in the cluster.The specified domain name can be a top-level domain (TLD) or a subdomain.In either case, you have to manually set up the NS records for the specified TLD or subdomain so as to delegate DNS resolution queries to an Amazon Route 53 hosted zone. This is required in order to generate valid TLS certificates.

Requirements

  • clone this repo to your development machine, then create a GitLab repo that fits your domain name and environment
  • create .env from .env.example and fill TF_VAR_root_gitlab_project with the project id from the previous step
  • create a gitlab token that will be used to connect an AWS EKS cluster to your project and fill the GITLAB_TOKEN variable in the .env file (you can provide this variable in another way)
  • you may need to add GITLAB_BASE_URL and other variables to properly set up the GitLab Terraform Provider
  • add AWS_DEFAULT_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, KUBE_INGRESS_BASE_DOMAIN, and GITLAB_TOKEN to the GitLab CI environment variables, then mark them as protected and masked
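Taken together, a filled-in .env might look like the sketch below. Every value is a placeholder, not a real credential; only the variable names come from the requirements above:

```shell
# Hypothetical .env sketch -- all values below are placeholders; substitute your own
cat > .env <<'EOF'
TF_VAR_root_gitlab_project=12345678
GITLAB_TOKEN=glpat-REPLACE_ME
GITLAB_BASE_URL=https://gitlab.com/api/v4
AWS_DEFAULT_REGION=eu-central-1
KUBE_INGRESS_BASE_DOMAIN=staging.example.com
EOF

# Export every variable from .env into the current shell, the same
# set -a / set +a pattern used for the Packer build above
set -a
source ./.env
set +a
```

After sourcing, the variables are visible to terraform (via the TF_VAR_ prefix), to the GitLab provider, and to the AWS CLI alike.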

Installation and setup

Step 1: Setup terraform backend

1.1. Change ./eu-central-1.tfvars according to your requirements; at a minimum, change the namespace and stage variables to your own.

```
cd ./terraform/aws-eks
cp remote-state.tf.example remote-state.tf
terraform init
terraform apply -var-file ./eu-central-1.tfvars -target=module.terraform_state_backend
```

Output:

```
Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Outputs:

terraform_backend_config = terraform {
  required_version = ">= 0.12.2"

  backend "s3" {
    region         = "eu-central-1"
    bucket         = "lazyorange-staging-terraform-state"
    key            = "terraform.tfstate"
    dynamodb_table = "lazyorange-staging-terraform-state-lock"
    profile        = ""
    role_arn       = ""
    encrypt        = "true"
  }
}
```

1.2. Add the terraform backend config to remote-state.tf:

```
terraform output terraform_backend_config >> remote-state.tf
```

1.3. Re-run terraform init to copy the existing state to the remote backend.

```
terraform init
```

Commit the changes to the repo and push them to the remote origin.

Step 2: Setup AWS Route53

To make it possible to use the NGINX Ingress Controller (or another Ingress Controller), you should create a hosted zone in AWS Route 53; it is also required by GitLab for the Auto DevOps feature.
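For reference, a minimal hosted zone can be declared in terraform roughly as follows. The zone name is an example, and this project may create the zone differently:

```hcl
# Hypothetical sketch: a Route 53 hosted zone for the cluster's base domain
resource "aws_route53_zone" "ingress_base" {
  name = "staging.example.com"
}

# These name servers must be delegated to manually (NS records in the
# parent domain), as described in the DNS requirements section above
output "name_servers" {
  value = aws_route53_zone.ingress_base.name_servers
}
```

The `name_servers` output gives you the values to paste into the NS records of the parent domain so that DNS queries are delegated to Route 53.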

Step 3: Run CI pipeline from master branch

You can set up other environment variables, such as GITLAB_RUNNER_INSTALLED to install GitLab Runner from the helm chart, and then run the CI pipeline from the CI/CD dashboard to apply the changes.

Components

Gitlab Runner

To install GitLab Runner from the helm chart, add the GITLAB_RUNNER_INSTALLED environment variable to the GitLab CI/CD variables (it should be set to true) or add it at the top of your .gitlab-ci.yml.
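For example, setting the variable at the top of .gitlab-ci.yml looks like this:

```yaml
# Enable the GitLab Runner installation for every pipeline of this project
variables:
  GITLAB_RUNNER_INSTALLED: "true"
```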

By default, GitLab Runner is started in privileged mode so it can use the Docker-in-Docker feature to build Docker images, with CPU and RAM limits applied.

The GitLab Runner Manager will place its pods on nodes matching the following node affinity condition:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: purpose
              operator: NotIn
              values:
                - gitlab-runner
```

By default, the AWS EKS cluster is provisioned with a dedicated node group for GitLab Runners, with its minimum size set to 0.
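A hypothetical sketch of such a node group in terraform is shown below. The module source, attribute names, instance type, and sizes are illustrative assumptions, not this project's exact configuration:

```hcl
# Hypothetical sketch -- not the exact node group definition used here
module "gitlab_runner_node_group" {
  source = "cloudposse/eks-node-group/aws"

  cluster_name   = module.eks_cluster.eks_cluster_id
  instance_types = ["t3.large"]

  # Scale-to-zero: no runner nodes exist until the Cluster AutoScaler
  # sees pending GitLab Runner job pods
  min_size     = 0
  desired_size = 0
  max_size     = 3

  # The label the affinity rule above keys on to keep the Runner
  # Manager off the dedicated job nodes
  kubernetes_labels = {
    purpose = "gitlab-runner"
  }
}
```

Pairing the `purpose = "gitlab-runner"` label with the NotIn affinity rule keeps long-lived manager pods on the regular nodes, so the dedicated group can scale all the way back to zero between jobs.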

Cluster AutoScaler

TBD

Ingress stack

  • NGINX Ingress Controller: a controller to satisfy requests for Ingress objects
  • CertManager: a Kubernetes add-on to automate the management and issuance of TLS certificates from various sources

Contributing

If you would like to become an active contributor to this project, please follow the instructions provided in the contribution guidelines.

