# devops-infra-demo

Growing repository of Infrastructure as Code demos (initially created for DevOps Wall Street).
This repository houses demo code for Modus Create's DevOps talks and meetups.
Originally this was targeted towards the DevOps Wall Street talk titled *Multi-Cloud Deployment with GitHub and Terraform*. See the branch `demo-20170303` for the code demonstrated at that event.

See the branch `demo-20180619` for the demo code for the NYC DevOps talk *Applying the CIS Baseline using Ansible & Packer*. Slides from this presentation are on SlideShare.

See the branch `demo-20180926` for the demo code for the Continuous Delivery NYC talk *Managing Expensive or Destructive Operations in Jenkins CI*. Slides from this presentation are on SlideShare.

See the branch `demo-20181205` for the demo code for the Ansible NYC talk *Ansible Image Bakeries: Best Practices & Pitfalls*. Slides from this presentation are on SlideShare.

See the branch `demo-20190130` for the demo code for the Big Apple DevOps talk *Monitoring and Alerting as code with Terraform and New Relic*. Slides from this presentation are on SlideShare.

See the branch `demo-20191109` for the demo code for the BSidesCT 2019 talk *Extensible DevSecOps pipelines with Jenkins, Docker, Terraform, and a kitchen sink full of scanners*. Slides from this presentation are on SlideShare.
To run the demo end to end, you will need:
- AWS Account
- Google Cloud Account
- Docker (tested with 18.05.0-ce)
- Packer (tested with 1.0.3)
- Terraform (tested with v0.11.7)
- JQ (tested with 1.3 and 1.5)
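A quick way to confirm the toolchain is available locally (the version numbers are simply the ones the demo was tested with):

```bash
docker --version      # tested with 18.05.0-ce
packer --version      # tested with 1.0.3
terraform --version   # tested with v0.11.7
jq --version          # tested with 1.3 and 1.5
```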
Optionally, you can use Vagrant to test Ansible playbooks locally and Jenkins to orchestrate creation of AMIs in conjunction with GitHub branches and pull requests.
You will also need to set a few environment variables. The method of doing so will vary from platform to platform.
- `AWS_PROFILE`
- `AWS_DEFAULT_PROFILE`
- `AWS_DEFAULT_REGION`
- `AWS_ACCESS_KEY_ID`
- `AWS_SECRET_ACCESS_KEY`
- `GOOGLE_CLOUD_KEYFILE_JSON`
- `GOOGLE_PROJECT`
- `GOOGLE_REGION`
- `PACKER_AWS_VPC_ID`
- `PACKER_AWS_SUBNET_ID`

A sample file is provided as a template to customize:
```bash
cp env.sh.sample env.sh
vim env.sh
. env.sh
```

The AWS profile IAM user should have full control of EC2 in the account you are using.
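For illustration, a hypothetical `env.sh` might look like the following; every value here is a placeholder, so substitute your own accounts, credentials file path, and network IDs:

```bash
export AWS_PROFILE=terraform
export AWS_DEFAULT_PROFILE=terraform
export AWS_DEFAULT_REGION=us-east-1
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY can be exported instead of a profile
export GOOGLE_CLOUD_KEYFILE_JSON=/path/to/terraform-demo.json
export GOOGLE_PROJECT=my-demo-project
export GOOGLE_REGION=us-east1
export PACKER_AWS_VPC_ID=vpc-xxxxxxxx
export PACKER_AWS_SUBNET_ID=subnet-xxxxxxxx
```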
You will need to create an application in the Google developer console, create a set of service-to-service JSON credentials, and enable the Google Cloud Storage API in the referenced Google developer application for the Google integration to work. Alternatively, you may remove the `terraform/google.tf` file to run the demo without the Google part.
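If you prefer the command line, a rough sketch of that setup with the `gcloud` CLI might look like this (the service account name `terraform-demo` and project ID `my-demo-project` are placeholders, and your organization's policies may require additional steps):

```bash
gcloud iam service-accounts create terraform-demo --project my-demo-project
gcloud iam service-accounts keys create terraform-demo.json \
    --iam-account terraform-demo@my-demo-project.iam.gserviceaccount.com
gcloud services enable storage-api.googleapis.com --project my-demo-project
```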
Run `packer/bin/pack.sh` to initiate a Packer run. This will provision a machine on EC2, configure it using Ansible, and scan it using OpenSCAP and Gauntlt. The results from the scan will end up in `packer/build`.
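For example, assuming you have already customized and sourced your environment file:

```bash
. env.sh
packer/bin/pack.sh
ls packer/build   # OpenSCAP and Gauntlt scan results land here
```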
In order to make developing the Ansible playbooks faster, a Vagrantfile is provided to provision a VM locally.
Install Vagrant. Change directory into the root of the repository at the command line and issue the command `vagrant up`. You can add or edit Ansible playbooks and support scripts, then re-run the provisioning with `vagrant provision` to refine the remediations. This is more efficient than re-running Packer and baking new AMIs for every change.
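The basic loop, from the repository root:

```bash
vagrant up          # create and provision the local VM
# ...edit playbooks or support scripts...
vagrant provision   # re-apply the Ansible playbooks to the running VM
vagrant destroy -f  # tear the VM down when finished
```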
This Terraform setup stores its state in Amazon S3 and uses DynamoDB for locking. There is a bit of setup required to bootstrap that configuration. You can use this repository to do that bootstrap process with Terraform. The `backend.tfvars` file in that repo should be modified as follows to work with this project:
(Replace us-east-1 and XXXXXXXXXXXX with the AWS region and your account ID)
```hcl
bucket         = "tf-state.devops-infra-demo.us-east-1.XXXXXXXXXXXX"
dynamodb_table = "TerraformStatelock-devops-infra-demo"
key            = "terraform.tfstate"
profile        = "terraform"
region         = "us-east-1"
```

You'll also need to modify the list of operators who can modify the object in the S3 bucket. Put the IAM user names into the `setup/variables.tf` file in that project. If your Jenkins instance uses an IAM role to grant access, give it a similar set of permissions to those granted to IAM users in the bucket policy.
These commands will then set up cloud resources using Terraform:

```bash
cd terraform
terraform init
terraform get
# Example with values from our environment (replace with values from your environment)
# terraform plan -var domain=modus.app -out tf.plan
terraform plan -out tf.plan -var 'domain=example.net'
terraform apply tf.plan
# check to see if everything worked - use the same variables here as above
terraform destroy -var 'domain=example.net'
```

Alternatively, use the wrapper script in `bin/terraform.sh`, which will work interactively or from CI:
```bash
bin/terraform.sh plan
bin/terraform.sh apply
bin/terraform.sh plan-destroy
bin/terraform.sh destroy
```
This assumes that you already have a Route 53 domain created in your AWS account. You need to either edit `variables.tf` to match your domain and AWS zone, or specify these values as command-line `-var` parameters.
The application loads an image from Google storage. To get it loading correctly, edit the `application/assets/css/main.css` file and replace `example-media-website-storage.storage.googleapis.com` with a DNS reference for your Google storage location.
The application in this demo uses an AWS Auto Scaling Group to dynamically change the number of servers deployed in response to load. Two policies guide how many instances are available: a CPU scaling policy that seeks to keep the average CPU load in the cluster below 40%, and a scheduled scaling policy that scales the entire cluster down to 0 instances at 02:00 UTC every night, to minimize the charges should you forget to destroy the cluster. If the cluster is scaled down to 0 instances, you will need to set the group sizes back to non-zero values through the console, the CLI, or an API call, for example:
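Here's a sketch using the AWS CLI; the group name `devops-infra-demo-asg` is a placeholder for whatever name your Terraform run actually produced:

```bash
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name devops-infra-demo-asg \
    --min-size 1 --max-size 2 --desired-capacity 1
```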
This demo allocates a single Classic ELB in order to load balance HTTP traffic among the running instances. This load balancer integrates with the auto scaling group and instances will join and leave the ELB automatically when created or destroyed.
The application enclosed in this demo is packaged and deployed using AWS CodeDeploy. The script `codedeploy/bin/build.sh` will package the application so that it can be deployed on the AMI built with Ansible and Packer.
The application contains both a simple HTML web site and a Python app with an API endpoint, `/api/spin`, that spins the CPU of the server, making it easier to test CPU-based auto scaling scale-out operations.
You must deploy the application at least once in order to begin testing the web server and spin service, as it starts the web server as part of its deployment process. New instances scaled out should automatically have a deployment triggered on them through an Auto Scaling Group hook.
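Once a deployment has run, you can exercise the spin endpoint by hand. A hypothetical check, assuming your stack answers at `demo.example.net`:

```bash
curl "http://demo.example.net/api/spin"   # burns CPU to trigger the scaling policy
```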
There's an explicit dependency between the CodeDeploy application and the Auto Scaling Group because the hook will not get created if the CodeDeploy application is created before the Auto Scaling Group; the dependency forces the group to be created first.
A JMeter test harness allows testing of the application at scale. This uses a Docker container to run JMeter, and has a Jenkins test harness to allow you to run JMeter through Jenkins and record its outputs. See `bin/jmeter.sh` and the JMeter test file `jmeter/api-spin.jmx`.
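A plausible local invocation, assuming `bin/jmeter.sh` takes the test plan as its argument (check the script itself for its actual interface):

```bash
bin/jmeter.sh jmeter/api-spin.jmx
```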
A `Jenkinsfile` is provided that will allow Jenkins to execute Packer and Terraform, package a CodeDeploy application, and even run JMeter performance tests. In order for Jenkins to do this, it needs to have AWS credentials set up, preferably through an IAM role, granting full control of EC2 and VPC resources in that account, and write access to the S3 bucket used for storing CodeDeploy applications. Packer needs this in order to create AMIs, key pairs, etc.; Terraform needs this to create a VPC and EC2 resources; and CodeDeploy needs this to store the artifact it creates. This could be pared down further through some careful logging and role work.
The Jenkins executor running this job needs to have both a recent Docker and the jq utility (version 1.3 or higher) installed.
The scripts here assume that Jenkins is running on EC2 and use instance data from the Jenkins executor to infer what VPC and subnet to launch the new EC2 instance into. The AWS profile IAM user associated with your Jenkins instance, or the Jenkins user's AWS credentials, should have full control of EC2 in the account you are using.
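For reference, this is roughly how a script can discover the executor's VPC and subnet from the EC2 instance metadata service (a sketch; the repo's scripts may do this differently):

```bash
MD=http://169.254.169.254/latest/meta-data
MAC=$(curl -s "$MD/network/interfaces/macs/" | head -n1)   # entries end with "/"
VPC_ID=$(curl -s "$MD/network/interfaces/macs/${MAC}vpc-id")
SUBNET_ID=$(curl -s "$MD/network/interfaces/macs/${MAC}subnet-id")
echo "VPC: $VPC_ID  Subnet: $SUBNET_ID"
```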
This script relies on Jenkins having a secret file containing the Google application credentials in JSON with the ID `terraform-demo.json`. You will need to add that to your Jenkins server's credentials.
After a successful build, Jenkins will archive the artifacts from the OpenSCAP and Gauntlt scans (if a Packer run has completed) and JMeter (if a JMeter run has completed).
Modus Create is a digital product consultancy. We use a distributed team of the best talent in the world to offer a full suite of digital product design-build services, ranging from consumer-facing apps to digital migration, agile development training, and business transformation.
This project is part of Modus Labs.

This project is MIT licensed.

The content in `application` is adapted from Dimension by https://html5up.net/ and is licensed under a Creative Commons Attribution 3.0 License.