















![Slide 17](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-17-2048.jpg&f=jpg&w=240)

```groovy
@Library('aaptivPipelineLib') _

nodePipeline(
    releaseBranch: 'master',
    releaseFamily: '1.3.x',
    scheme: 'internal',
    containerCounts: [
        'dev':     ['minimumCount': 1, 'desiredCount': 1, 'maximumCount': 4],
        'staging': ['minimumCount': 1, 'desiredCount': 1, 'maximumCount': 4],
        'prod':    ['minimumCount': 1, 'desiredCount': 1, 'maximumCount': 4],
    ],
    scaleOutThreshold: '1.0',
    scaleInThreshold: '0.7',
    cpu: 256,
    memory: 512,
    containerPort: 3000,
    healthCheckPath: '/health',
    healthCheckIntervalSeconds: 60,
    healthCheckTimeoutSeconds: 30,
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 10,
    agentLabel: 'nodejs'
)
```

It looks like this: a declarative file that defines what the runtime architecture should look like for dev, staging, and production. TL;DR: explain all the fields. Well… most of them. We have global GitHub webhooks set up to respond to opening and closing pull requests, so when an engineer opens a pull request, Jenkins receives a webhook notification.
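The `containerCounts` map follows a min/desired/max shape per environment. As a quick illustration only (the pipeline library's actual validation, if any, isn't shown on the slides), a sanity check for that shape might look like:

```python
def validate_container_counts(container_counts):
    """Check that each environment's counts satisfy min <= desired <= max.
    Illustrative sketch; not part of aaptivPipelineLib."""
    for env, counts in container_counts.items():
        lo, want, hi = (counts[k] for k in ("minimumCount", "desiredCount", "maximumCount"))
        if not (lo <= want <= hi):
            raise ValueError(f"{env}: need minimumCount <= desiredCount <= maximumCount")
    return True
```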


![Slide 20](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-20-2048.jpg&f=jpg&w=240)

```groovy
checkout([
    $class: 'GitSCM',
    branches: [[name: '*/master']],
    doGenerateSubmoduleConfigurations: false,
    extensions: [],
    submoduleCfg: [],
    userRemoteConfigs: []
])
```

And you can specify the branch you want from that repo. The engineers always want master, because that's where our production version is. But for those of us who work on it, it's a way to create a branch to iterate and test on before making changes to master and potentially affecting production. The soa-template contains our CloudFormation template.








![Slide 29](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-29-2048.jpg&f=jpg&w=240)

Earlier, you saw what this Jenkinsfile looks like (the same `nodePipeline` call as before): it's fairly declarative and specifies what the runtime configuration for the service should look like. The thing I'd like to draw your attention to is this: we import a Groovy library called aaptivPipelineLib. From that library, we call a function called nodePipeline. This executes the build steps necessary for a Node.js project. We currently have support for Python, and limited support for Lambda functions and Java projects.

![Slide 31](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-31-2048.jpg&f=jpg&w=240)

If we take a look at the aaptivPipelineLib project, you can see the Groovy file nodePipeline.groovy. So when the Jenkinsfile for our project calls the nodePipeline function, it's actually calling this Groovy file within the library.


![Slide 34](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-34-2048.jpg&f=jpg&w=240)

```groovy
stage('Configure build') {
    steps {
        script {
            slackChannel = args.slackChannel ?: 'jenkins-deployments'
            environment = 'dev'
            if (BRANCH_NAME == args.releaseBranch) {
                if (params.environment) {
                    environment = params.environment
                } else if (params.tag == null || params.tag.isEmpty()) {
                    environment = 'staging'
                }
            }
            if (args.proxyImage) {
                proxyRepoUri = getEcrRepo(args.proxyImage.projectName)
                proxyImageUrl = "${proxyRepoUri}:${gitHash}"
            }
            runTests = true
            if (params.tag) {
                runTests = false
            }
            runBuild = true
            if (params.tag || !(BRANCH_NAME == args.releaseBranch || BRANCH_NAME.startsWith("PR-"))) {
                runBuild = false
            }
            runDeploy = true
            if (!(BRANCH_NAME == args.releaseBranch || BRANCH_NAME.startsWith("PR-"))) {
                runDeploy = false
            }
            runTagRelease = false
            if (BRANCH_NAME == args.releaseBranch) {
                runTagRelease = true
            }
            if (args.customSecurityGroups) {
                customSecurityGroupId = args.customSecurityGroups[environment]
            }
        }
    }
}
```

Then we start running through the defined stages for the build. The first one is "Configure build", and I'm going to gloss over this because it just sets up some conditions for the following stages: mainly determining whether we are building a dev, staging, or production build, and setting some global variables that are used by the following stages.
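The stage's decisions boil down to a small pure function of the branch name and build parameters. Restated as an illustrative Python sketch (function and key names are mine, not the library's):

```python
def configure_build(branch_name, release_branch, tag=None, environment_param=None):
    """Mirror the 'Configure build' stage's decision logic."""
    environment = "dev"
    if branch_name == release_branch:
        if environment_param:
            environment = environment_param
        elif not tag:
            environment = "staging"
    # Only the release branch and PR branches are built and deployed.
    is_buildable = branch_name == release_branch or branch_name.startswith("PR-")
    return {
        "environment": environment,
        "run_tests": not tag,                    # tag builds redeploy an existing image
        "run_build": not tag and is_buildable,
        "run_deploy": is_buildable,
        "run_tag_release": branch_name == release_branch,
    }
```

For example, a `PR-42` branch gets a dev build with tests, build, and deploy enabled, while a tagged build on master skips tests and image building entirely.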

![Slide 36](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-36-2048.jpg&f=jpg&w=240)

```groovy
def call(def env, def project, def nodeVersion = "default", reporter = "true") {
    // Get a random, free port on the Jenkins build node (so we can run multiple Skyfit tests in parallel).
    def randomPort = getAvailablePort()

    // Runs npm tests
    sh """
        source ~/.bash_profile
        aaptivsecrets env_export --env ${env} ${project} --outfile env.properties
        source ${WORKSPACE}/env.properties
        nvm use ${nodeVersion}
        npm install
        PORT=$randomPort npm test
        if [ "${reporter}" == "true" ]; then
            node_modules/.bin/nyc report --reporter=cobertura --dir coverage
        fi
    """

    cobertura autoUpdateHealth: false, autoUpdateStability: false,
        coberturaReportFile: 'coverage/cobertura-coverage.xml',
        conditionalCoverageTargets: '70, 0, 0', failUnhealthy: false, failUnstable: false,
        lineCoverageTargets: '80, 0, 0', maxNumberOfBuilds: 0,
        methodCoverageTargets: '80, 0, 0', onlyStable: false,
        sourceEncoding: 'ASCII', zoomCoverageChart: false
}
```

If we take a look at that file, there are a couple of noteworthy things going on here. First, it's just executing some shell commands. Next, we have nvm installed, which allows the engineers to specify which version of Node.js their project uses. And then we run the npm test command. That's a pretty widely accepted convention for running tests in a Node.js project, so it makes it easy for us to run the tests without having to know the implementation details of testing. The remainder of the function publishes the test results and code coverage back to Jenkins, where they're displayed visually in the project.
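The slides don't show `getAvailablePort` itself. A common way to implement that kind of helper is to bind to port 0 and let the OS pick a free port; this Python sketch is an assumption about how it might work, not the library's actual code:

```python
import socket

def get_available_port() -> int:
    """Ask the OS for a free TCP port by binding to port 0.
    Illustrative stand-in for the library's getAvailablePort helper."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 means "pick any free port"
        return s.getsockname()[1]
```

Note there is a small race window with this approach: the port could be taken between releasing it here and `npm test` binding it, which is usually acceptable for parallel CI runs.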
![Slide 37](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-37-2048.jpg&f=jpg&w=240)

```groovy
def call(Map args) {
    if (params.tag == null || params.tag.isEmpty()) {
        println("No tag set, so build the image")
        def projectName = args.projectName
        def repoUri = args.repoUri
        def validChars = "[^\\w\\-\\._]"   // strip anything that isn't a word character, dash, dot, or underscore
        def cleanBranchName = "${BRANCH_NAME}".replaceAll(validChars, "")
        def gitHash = args.gitHash
        def buildPath = args.buildPath
        if (buildPath == null || buildPath.isEmpty()) {
            buildPath = "."
        }

        echo "Building ${projectName}"
        sh "source ~/.bash_profile"
        def dockerLogin = sh(script: '/usr/bin/aws ecr get-login --no-include-email --region us-east-1', returnStdout: true).trim()
        sh "${dockerLogin}"

        echo "Building Docker Image"
        sh "docker build -t ${projectName} ${buildPath}"
        sh "docker tag ${projectName} ${repoUri}:${cleanBranchName}"
        sh "docker tag ${projectName} ${repoUri}:${cleanBranchName}.${BUILD_NUMBER}"
        sh "docker tag ${projectName} ${repoUri}:${gitHash}"

        sh "echo 'Pushing branch ${cleanBranchName} build ${BUILD_NUMBER} to ECR'"
        sh "docker push ${repoUri}:${cleanBranchName}"
        sh "docker push ${repoUri}:${cleanBranchName}.${BUILD_NUMBER}"
        sh "docker push ${repoUri}:${gitHash}"

        sh "docker rmi ${repoUri}:${cleanBranchName}"
        sh "docker rmi ${repoUri}:${cleanBranchName}.${BUILD_NUMBER}"
        sh "docker rmi ${repoUri}:${gitHash}"
    }
}
```

Build Image is very similar: it calls a function called buildImage that contains this code. The main tasks performed here are building and tagging the image, then pushing it up to the AWS ECR repository. Which sounds redundant because it is, but I'm not sure how else to refer to it…
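The tagging scheme (a movable branch tag, an immutable branch.build tag, and the git hash) can be sketched independently of Jenkins. An illustrative Python version, with the branch-name sanitization written out as a regex (function names are mine):

```python
import re

def image_tags(repo_uri, branch_name, build_number, git_hash):
    """Compute the three ECR tags buildImage pushes for each build."""
    # Keep only word characters, dashes, dots, and underscores so the branch
    # name is a valid Docker tag (e.g. "feature/login" -> "featurelogin").
    clean_branch = re.sub(r"[^\w\-\._]", "", branch_name)
    return [
        f"{repo_uri}:{clean_branch}",                   # movable: latest build of the branch
        f"{repo_uri}:{clean_branch}.{build_number}",    # immutable: this specific build
        f"{repo_uri}:{git_hash}",                       # immutable: this specific commit
    ]
```

Tagging by git hash is what lets the deploy stage point the task definition at the exact image built from the commit under test.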


![Slide 40](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-40-2048.jpg&f=jpg&w=240)

```groovy
def getStackStatus(stackName) {
    def result = ""
    try {
        result = sh(script: """aws cloudformation describe-stacks --stack-name ${stackName} --query 'Stacks[0].StackStatus'""",
            returnStdout: true).trim().replace("\"", "")
        println("Stack status is: ${result}")
    } catch (ex) {
        println("Stack does not exist")
    }
    return result
}
```

Then we check to see if the stack exists. If it doesn't, we need to create it; this is commonly the case for PR branches. If the stack does exist, we need to update it to deploy the requested changes to it.
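The create-versus-update decision described above can be restated as a tiny function over `getStackStatus`'s return value; this is an illustrative sketch (the real pipeline shells out to the AWS CLI):

```python
def cloudformation_verb(stack_status: str) -> str:
    """Pick the CloudFormation action from the stack status string.
    getStackStatus returns "" when describe-stacks fails, i.e. no stack yet."""
    if stack_status == "":
        return "create-stack"   # typical for a fresh PR branch
    return "update-stack"       # stack exists: deploy the requested changes
```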


![Slide 43](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-43-2048.jpg&f=jpg&w=240)

```yaml
Resources:
  # Once we decide on how we do environments, this will need to change to a mapping, rather than an If
  ServiceDNSName:
    Type: "AWS::Route53::RecordSet"
    Properties:
      HostedZoneId: !If [EnvironmentIsProd, 'xxxxxxxx', 'xxxxxxxx']
      Name: !If
        - EnvironmentIsProd
        - !Join ['', [!Ref ProjectName, '.', 'aaptiv.com', '.']]
        - !If
          - EnvironmentIsStaging
          - !Join ['', [!Ref ProjectName, '.', 'aapdev.com', '.']]                       # Staging goes directly to aapdev
          - !Join ['', [!Ref ProjectName, '-', !Ref BranchName, '.', 'aapdev.com', '.']] # Other branches get prefixes
      TTL: '300'
      Type: 'CNAME'
      ResourceRecords:
        - !GetAtt LoadBalancer.DNSName
```

In the Resources section of our CloudFormation template, we define the DNS name for the deployed service. Again, this makes it easier for developers to find the URL for their deployed project, because it always follows the same naming convention.
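The naming convention encoded in that nested `!If` is simple enough to state directly; an illustrative Python sketch of the same rule:

```python
def service_hostname(project: str, branch: str, environment: str) -> str:
    """Hostname convention from the Route53 record: prod gets the production
    domain, staging maps straight to the dev domain, and every other branch
    gets a branch-suffixed name on the dev domain."""
    if environment == "prod":
        return f"{project}.aaptiv.com"
    if environment == "staging":
        return f"{project}.aapdev.com"
    return f"{project}-{branch}.aapdev.com"
```

So a PR branch `PR-42` of a service `skyfit` would be reachable at `skyfit-PR-42.aapdev.com`, which is exactly what makes deployed PR environments easy to find.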
![Slide 44](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-44-2048.jpg&f=jpg&w=240)

```yaml
LoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    Scheme: !If [InternetFacing, 'internet-facing', 'internal']
    IpAddressType: ipv4
    Tags:
      - Key: BranchName
        Value: !Ref BranchName
      - Key: Name
        Value: !Join
          - '-'
          - - !Ref Environment
            - !Ref ProjectName
            - !Ref BranchName
      - Key: Service
        Value: !Ref ProjectName
      - Key: Env
        Value: !Ref Environment
      - Key: Role
        Value: "Load Balancer"
      - Key: Team
        Value: !Ref Team
```

We define the load balancer, and by using a lookup, we can correctly provision the load balancer as either internet-facing or internal. One additional thing we do here is apply tags for the load balancer name, service, environment, role, and team. We use these for cost allocation, allowing us to break down our operating costs by each of these tags and control our expenses.
![Slide 45](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-45-2048.jpg&f=jpg&w=240)

```yaml
LoadBalancerListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Properties:
    DefaultActions:
      - TargetGroupArn: !Ref 'TargetGroup'
        Type: 'forward'
    LoadBalancerArn: !Ref 'LoadBalancer'
    Port: !If [InternetFacing, 443, 80]
    Protocol: !If [InternetFacing, 'HTTPS', 'HTTP']
    Certificates:
      - CertificateArn: !If
          - InternetFacing
          - !If
            - EnvironmentIsProd
            - 'arn:aws:acm:us-east-1:1234567890:certificate/xxxxxxxx'
            - 'arn:aws:acm:us-east-1:1234567890:certificate/xxxxxxxx'
          - !Ref "AWS::NoValue"

LoadBalancerRedirectListener:
  Type: AWS::ElasticLoadBalancingV2::Listener
  Condition: InternetFacing
  Properties:
    DefaultActions:
      - Type: 'redirect'
        RedirectConfig:
          Port: '443'
          Protocol: 'HTTPS'
          StatusCode: 'HTTP_301'
    LoadBalancerArn: !Ref 'LoadBalancer'
    Port: 80
    Protocol: 'HTTP'
```

The load balancer has to have listeners, so we define those as well. For internet-facing load balancers, we set up SSL, configure the certificate, and create an automatic HTTP 301 redirect to HTTPS for any traffic received over HTTP.
![Slide 46](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-46-2048.jpg&f=jpg&w=240)

```yaml
TaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    Family: !Join ['-', [!Ref Environment, !Ref ProjectName, !Ref BranchName]]
    Cpu: !Ref CPU
    Memory: !Ref Memory
    RequiresCompatibilities:
      - FARGATE
    Volumes:
      - Name: "aaptiv_logs"
    ContainerDefinitions:
      - Name: !Join ['-', [!Ref Environment, !Ref ProjectName, !Ref BranchName]]
        Cpu: !Ref CPU
        Memory: !Ref Memory
        Image: !Ref ImageUrl
        Essential: true
        Environment:
          - Name: ENV
            Value: !Ref Environment
          - Name: BRANCH_NAME
            Value: !Ref BranchName
          - Name: PROJECT_NAME
            Value: !Ref ProjectName
          - Name: NODE_ENV
            Value: "production"
          - Name: BRANCH_OVERRIDE
            Value: !If [EnvironmentIsDev, !Ref BranchOverride, ""]
        LogConfiguration:
          LogDriver: "awslogs"
          Options:
            "awslogs-group": !Join
              - '-'
              - - !Ref Environment
                - !Ref ProjectName
                - !Ref BranchName
            "awslogs-region": "us-east-1"
            "awslogs-stream-prefix": !Ref ProjectName
        PortMappings:
          - ContainerPort: !Ref ContainerPort
            HostPort: !Ref ContainerPort
            Protocol: 'TCP'
```

Then we create the ECS task definition. The task definition is the description of the ECS environment for this service. It includes the definition of the Docker containers you want to run, memory and CPU requirements, and whether your task runs on EC2 or Fargate. This is largely just a variable-substitution exercise, setting the parameters for the task based on the values supplied by the Jenkinsfile and the environment. Remember that we got these values into the CloudFormation template by writing them out to a JSON file in the Jenkins stage, then supplying that JSON file as a CLI argument when we called the CloudFormation command. Things like memory, CPU, and exposed port are specified by the Jenkinsfile. Environment, branch, and project name are calculated by the pipeline library.
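The "write parameters to a JSON file, then pass it on the CLI" step the notes describe can be sketched like this. The exact parameter key set is an assumption on my part (the template's parameter names are only partially visible on the slides), but the list-of-ParameterKey/ParameterValue shape is what the AWS CLI expects:

```python
import json

def write_cfn_parameters(path, jenkinsfile_args, environment, branch, project):
    """Write a CloudFormation parameter file combining Jenkinsfile values
    with pipeline-calculated values. Illustrative sketch only."""
    params = {
        "CPU": str(jenkinsfile_args["cpu"]),                      # from the Jenkinsfile
        "Memory": str(jenkinsfile_args["memory"]),
        "ContainerPort": str(jenkinsfile_args["containerPort"]),
        "Environment": environment,                               # calculated by the pipeline library
        "BranchName": branch,
        "ProjectName": project,
    }
    payload = [{"ParameterKey": k, "ParameterValue": v} for k, v in params.items()]
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return payload
```

The resulting file is what a command like `aws cloudformation create-stack --parameters file://params.json` consumes.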
![Slide 47](/image.pl?url=https%3a%2f%2fimage.slidesharecdn.com%2fawstofargatewithnotes-190124031544%2f75%2fBuild-an-Infra-Product-with-AWS-Fargate-47-2048.jpg&f=jpg&w=240)

```yaml
Service:
  Type: 'AWS::ECS::Service'
  DependsOn: LoadBalancerRule
  Properties:
    ServiceName: !Join ['-', [!Ref Environment, !Ref ProjectName, !Ref BranchName]]
    Cluster: !Ref ClusterName
    LaunchType: FARGATE
    DeploymentConfiguration:
      MaximumPercent: 200
      MinimumHealthyPercent: 50
    DesiredCount: !If [EnvironmentIsDev, 1, !Ref DesiredCount]
    HealthCheckGracePeriodSeconds: !Ref HealthCheckGracePeriodSeconds
    NetworkConfiguration:
      AwsvpcConfiguration:
        AssignPublicIp: DISABLED
        SecurityGroups:
          - !FindInMap ["SecurityGroupByEnvironment", !Ref Environment, 'base']  # Lets it talk to its own env
          - !If  # Extra security group only added for dev access to staging
            - RequiresCrossEnvAccess
            - !FindInMap ["SecurityGroupByEnvironment", !Ref Environment, 'crossEnvAccess']
            - !Ref "AWS::NoValue"
          - !If
            - UseCustomSecurityGroup
            - !Ref CustomSecurityGroupId
            - !Ref "AWS::NoValue"
        Subnets:  # Services should always be on the DMZ, regardless of whether the load balancer is internet-facing
          - !FindInMap ["SubnetByScheme", 'internal', 'AZ1SubnetId']
          - !FindInMap ["SubnetByScheme", 'internal', 'AZ2SubnetId']
          - !FindInMap ["SubnetByScheme", 'internal', 'AZ3SubnetId']
          - !FindInMap ["SubnetByScheme", 'internal', 'AZ4SubnetId']
    TaskDefinition: !Ref TaskDefinition
```

And then we define the service. In ECS, a service is a running instance of a task definition. One of the key things we do here is set the desired count, that is, the number of running tasks the service should have. For production environments, this defaults to a minimum of 3 to ensure there are no single-point-of-failure services. In dev, we always set it to 1. We also define the network configuration for the service here, with most of it being determined by the environment: dev, staging, or prod.
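The desired-count rule can be summarized in a few lines. The template itself only encodes the dev pin (`!If [EnvironmentIsDev, 1, !Ref DesiredCount]`); the production floor of 3 is applied somewhere upstream per the notes, so modeling it here is an assumption:

```python
def desired_count(environment: str, requested: int) -> int:
    """Sketch of the DesiredCount logic: dev is pinned to a single task;
    prod enforces a floor of 3 (assumption: enforced upstream of the
    template); staging honors the Jenkinsfile's requested count."""
    if environment == "dev":
        return 1
    if environment == "prod":
        return max(requested, 3)  # avoid single-point-of-failure services
    return requested
```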














The document discusses Aaptiv's migration from Heroku to AWS using Fargate, detailing considerations, successes, and failures of the process. The transition aimed to enhance infrastructure by breaking a monolith into microservices to accommodate company growth while maintaining ease of use. Aaptiv leveraged tools like Jenkins and CloudFormation to streamline deployments and support their developers effectively.




























