Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments
This document helps you plan and design a migration path from manual deployments to automated, containerized deployments in Google Cloud using cloud-native tools and Google Cloud managed services.
This document is part of the following multi-part series about migrating to Google Cloud:
- Migrate to Google Cloud: Get started
- Migrate to Google Cloud: Assess and discover your workloads
- Migrate to Google Cloud: Plan and build your foundation
- Migrate to Google Cloud: Transfer your large datasets
- Migrate to Google Cloud: Deploy your workloads
- Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments (this document)
- Migrate to Google Cloud: Optimize your environment
- Migrate to Google Cloud: Best practices for validating a migration plan
- Migrate to Google Cloud: Minimize costs
This document is useful if you're planning to modernize your deployment processes, if you're migrating from manual and legacy deployment processes to automated and containerized deployments, or if you're evaluating the opportunity to migrate and want to explore what it might look like.
Before starting this migration, you should evaluate the scope of the migration and the status of your current deployment processes, and set your expectations and goals. You choose the starting point according to how you're currently deploying your workloads:
- You're deploying your workloads manually.
- You're deploying your workloads with configuration management (CM) tools.
It's hard to move from manual deployments directly to fully automated and containerized deployments. Instead, we recommend the following migration steps:
- Migrate to container orchestration tools.
- Migrate to deployment automation.
This migration path is an ideal one, but you can stop earlier in the migration process if the costs of moving to the next step outweigh the benefits for your particular case. For example, if you don't plan to automatically deploy your workloads, you can stop after you deploy by using container orchestration tools. You can revisit this document in the future, when you're ready to continue on the journey.
When you move from one step of the migration to another, there is a transition phase where you might be using different deployment processes at the same time. In fact, you don't need to choose only one deployment option for all of your workloads. For example, you might have a hybrid environment where you deploy certain workloads using CM tools, while deploying other workloads with container orchestration tools.
For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started.
The following diagram illustrates the path of your migration journey.
You might migrate from your source environment to Google Cloud in a series of iterations. For example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework:
- Assess and discover your workloads and data.
- Plan and build a foundation on Google Cloud.
- Migrate your workloads and data to Google Cloud.
- Optimize your Google Cloud environment.
For more information about the phases of this framework, see Migrate to Google Cloud: Get started.
To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan.
Migrate to container orchestration tools
One of your first steps to move away from manual deployments is to deploy your workloads with container orchestration tools. In this step, you design and implement a deployment process to handle containerized workloads by using container orchestration tools, such as Kubernetes.
If your workloads aren't already containerized, you're going to spend significant effort containerizing them. Not all workloads are suitable for containerization. If a workload isn't cloud-ready or ready for containerization, it might not be worth the effort to containerize it. Some workloads can't support containerization at all for technical or licensing reasons.
Assess and discover your workloads
To scope your migration, you first need an inventory of the artifacts that you're producing and deploying, along with their dependencies on other systems and artifacts. To build this inventory, you need to use the expertise of the teams that designed and implemented your current artifact production and deployment processes. The Migrate to Google Cloud: Assess and discover your workloads document discusses how to assess your environment during a migration and how to build an inventory of apps.
For each artifact, you need to evaluate its test coverage. You should have proper test coverage for all your artifacts before moving on to the next step. If you have to manually test and validate each artifact, you don't benefit from the automation. Adopt a methodology that highlights the importance of testing, like test-driven development.
When you evaluate your processes, consider how many different versions of your artifacts you might have in production. For example, if the latest version of an artifact is several versions ahead of instances that you must still support, you have to design a model that supports both the latest version and the older ones.
Also consider the branching strategy that you use to manage your codebase. A branching strategy is only part of a collaboration model that you need to evaluate, and you need to assess the broader collaboration processes inside and outside your teams. For example, if you adopt a flexible branching strategy but don't adapt it to the communication process, the efficiency of those teams might be reduced.
In this assessment phase, you also determine how you can make the artifacts that you're producing more efficient and more suitable for containerization. One way to improve efficiency is to assess the following:
- Common parts: Assess what your artifacts have in common. For example, if you have common libraries and other runtime dependencies, consider consolidating them in one runtime environment.
- Runtime environment requirements: Assess whether you can streamline the runtime environments to reduce their variance. For example, if you're using different runtime environments to run all your workloads, consider starting from a common base to reduce the maintenance burden.
- Unnecessary components: Assess whether your artifacts contain unnecessary parts. For example, you might have utility tools, such as debugging and troubleshooting tools, that are not strictly needed.
- Configuration and secret injection: Assess how you're configuring your artifacts according to the requirements of your runtime environment. For example, your current configuration injection system might not support a containerized environment.
- Security requirements: Assess whether your container security model meets your requirements. For example, the security model of a containerized environment might clash with the requirement of a workload to have superuser privileges, direct access to system resources, or sole tenancy.
- Deployment logic requirements: Assess whether you need to implement advanced deployment processes. For example, if you need to implement a canary deployment process, you could determine whether the container orchestration tool supports that process.
Plan and build a foundation
In the plan and build phase, you provision and configure the infrastructure to do the following:
- Support your workloads in your Google Cloud environment.
- Connect your source environment and your Google Cloud environment to complete the migration.
The plan and build phase is composed of the following tasks:
- Build a resource hierarchy.
- Configure Google Cloud's Identity and Access Management (IAM).
- Set up billing.
- Set up network connectivity.
- Harden your security.
- Set up logging, monitoring, and alerting.
For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation.
To achieve the necessary flexibility to manage your Google Cloud resources, we recommend that you design a Google Cloud resource hierarchy that supports multiple environments, such as for development, testing, and production workloads.
When you're establishing user and service identities, for the best isolation you need at least one service account for each deployment process step. For example, if your process executes steps to produce the artifact and to manage the storage of that artifact in a repository, you need at least two service accounts. If you want to provision and configure development and testing environments for your deployment processes, you might need to create more service accounts. If you have a distinct set of service accounts per environment, you make the environments independent from each other. Although this configuration increases the complexity of your infrastructure and puts more burden on your operations team, it gives you the flexibility to independently test and validate each change to the deployment processes.
You also need to provision and configure the services and infrastructure to support your containerized workloads:
- Set up a registry to store your container images, like Artifact Registry. To isolate this registry and the related maintenance tasks, set it up in a dedicated Google Cloud project.
- Provision and configure the Kubernetes clusters that you need to support your workloads. Depending on your current environment and your goals, you can use services like Google Kubernetes Engine (GKE).
- Provision and configure persistent storage for your stateful workloads. For more information, see Google Kubernetes Engine storage overview.
By using container orchestration tools, you don't have to worry about provisioning your infrastructure when you deploy new workloads. For example, you can use Autopilot to automatically manage your GKE cluster configuration.
Deploy your artifacts with container orchestration tools
Based on the requirements that you gathered in the assessment phase and the foundation phase of this step, you do the following:
- Containerize your workloads.
- Implement deployment processes to handle your containerized workloads.
Containerizing your workloads is a nontrivial task. What follows is a generalized list of activities that you need to adapt and extend to containerize your workloads. Your goal is to cover your own needs, such as networking and traffic management, persistent storage, secret and configuration injection, and fault tolerance requirements. This document covers two activities: building a set of container images to use as a base, and building a set of container images for your workloads.
First, you automate the artifact production so that you don't have to manually produce a new image for each new deployment. The artifact building process should be automatically triggered each time the source code is modified, so that you have immediate feedback about each change.
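For example, you can configure a build trigger that starts the artifact production process on every push to the main branch. The following is a minimal sketch of a Cloud Build trigger definition that you could import with the gcloud builds triggers import command, assuming a GitHub repository that's connected to Cloud Build; the organization, repository, and build configuration file names are hypothetical placeholders.

```yaml
# trigger.yaml: start a build on every push to the main branch.
name: build-on-push-to-main
description: Build, test, and store a container image for each change
github:
  # Hypothetical repository owner and name.
  owner: my-org
  name: my-repo
  push:
    branch: ^main$
# The build steps are defined in the repository's build configuration file.
filename: cloudbuild.yaml
```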
You execute the following steps to produce each image:
- Build the image.
- Run the test suite.
- Store the image in a registry.
For example, you can use Cloud Build to build your artifacts, run the test suites against them, and, if the tests are successful, store the results in Artifact Registry.
You also need to establish rules and conventions for identifying your artifacts. When producing your images, label each one to make each execution of your processes repeatable. For example, a popular convention is to identify releases by using semantic versioning, where you tag your container images when producing a release. When you produce images that still need work before release, you can use an identifier that ties them to the point in the codebase from which your process produced them. For example, if you're using Git repositories, you can use the commit hash as the identifier for the container image that you produced when you pushed a commit to the main branch of your repository.
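The following is a minimal sketch of a Cloud Build configuration that follows this convention: it builds a container image tagged with the commit hash, runs the workload's test suite inside that image, and stores the image in Artifact Registry only if the tests pass. The Artifact Registry path and the make test command are hypothetical and depend on your workload.

```yaml
# cloudbuild.yaml: build, test, and store a container image.
steps:
  # Build the container image and tag it with the Git commit hash.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', '${_IMAGE}:${COMMIT_SHA}', '.']
  # Run the workload's test suite inside the freshly built image.
  # "make test" is a placeholder for your workload's actual test command.
  - name: '${_IMAGE}:${COMMIT_SHA}'
    entrypoint: 'make'
    args: ['test']
# Push the image to Artifact Registry only if all steps succeed.
images:
  - '${_IMAGE}:${COMMIT_SHA}'
substitutions:
  # Hypothetical Artifact Registry path: use your own region, project, and repository.
  _IMAGE: 'us-central1-docker.pkg.dev/my-project/containers/my-app'
```

The COMMIT_SHA substitution is populated automatically for builds that are started by a trigger.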
During the assessment phase of this step, you gathered information about your artifacts, their common parts, and their runtime requirements. With this information, you can design and build a set of base container images and another set of images for your workloads. You use the base images as a starting point to build the images for your workloads. The set of base images should be tightly controlled and supported to avoid proliferating unsupported runtime environments.
When producing container images from base images, remember to extend your test suites to cover the images, not only the workloads inside each image. You can use tools like InSpec to run compliance test suites against your runtime environments.
When you finish containerizing your workloads and implementing processes to automatically produce those container images, you implement the deployment processes that use container orchestration tools. Use the information about deployment logic requirements that you gathered in the assessment phase to design rich deployment processes. By using container orchestration tools, you can focus on composing the deployment logic with the provided mechanisms, instead of having to manually implement them. For example, you can use Cloud Deploy to implement your deployment processes.
When designing and implementing your deployment processes, consider how to inject configuration files and secrets into your workloads, and how to manage data for stateful workloads. Configuration file and secret injection are instrumental to producing immutable artifacts. By deploying immutable artifacts, you can do the following (a manifest sketch follows this list):
- You deploy the same artifact across your runtime environments. For example, you can deploy your artifacts in your development environment. Then, after testing and validating them, you move them to your quality assurance environment. Finally, you move them to the production environment.
- You lower the chances of issues in your production environments because the same artifact went through multiple testing and validation activities.
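As an illustration, the following sketch shows a Kubernetes Deployment for a containerized workload that references an immutable image by its commit hash tag and receives configuration and secrets from the environment instead of having them baked into the image. The Deployment, image, ConfigMap, and Secret names are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # Immutable artifact: the image tag is the commit hash produced by the build.
          image: us-central1-docker.pkg.dev/my-project/containers/my-app:3f4a9c1
          envFrom:
            # Configuration is injected from a ConfigMap that differs per environment.
            - configMapRef:
                name: my-app-config
            # Secrets are injected from a Kubernetes Secret, not baked into the image.
            - secretRef:
                name: my-app-secrets
```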
If your workloads are stateful, we suggest that you provision and configure the necessary persistent storage for your data. On Google Cloud, you have different options (a sketch of the first option follows this list):
- Persistent disks managed with GKE
- Fully managed database services like Cloud SQL, Firestore, and Spanner
- File storage services like Filestore
- Object storage services like Cloud Storage
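For example, the following is a minimal sketch of the first option: a PersistentVolumeClaim that asks GKE to provision a persistent disk for a stateful workload. The claim name and size are hypothetical.

```yaml
# A claim that GKE fulfills by provisioning a persistent disk.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Size of the disk to provision for the workload's data.
      storage: 50Gi
```

In the workload's Pod specification, you then reference the claim as a volume and mount it in the container with volumeMounts.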
Optimize your environment
After implementing your deployment processes with container orchestration tools, you can start optimizing those processes. For more information, see Migrate to Google Cloud: Optimize your environment.
The requirements of this optimization iteration are the following:
- Extend your monitoring system as needed.
- Extend the test coverage.
- Increase the security of your environment.
You extend your monitoring system to cover your new artifact production, your deployment processes, and all of your new runtime environments.
If you want to effectively monitor, automate, and codify your processes as much as possible, we recommend that you increase the coverage of your tests. In the assessment phase, you ensured that you had at least minimum end-to-end test coverage. During the optimization phase, you can expand your test suites to cover more use cases.
Finally, if you want to increase the security of your environments, you can configure Binary Authorization to allow only a set of signed images to be deployed in your clusters. You can also enable Artifact Analysis to scan container images stored in Artifact Registry for vulnerabilities.
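The following is a minimal sketch of a Binary Authorization policy that you could import with the gcloud container binauthz policy import command. It blocks the deployment of images that don't carry an attestation from a hypothetical attestor; the project ID and attestor name are placeholders.

```yaml
# policy.yaml: allow only attested images to be deployed to clusters in this project.
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    # Hypothetical attestor that signs images produced by your build process.
    - projects/my-project/attestors/built-by-trusted-pipeline
```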
Migrate to deployment automation
After migrating to container orchestration tools, you can move to full deployment automation, and you can extend the artifact production and deployment processes to automatically deploy your workloads.
Assess and discover your workloads
Building on the previous evaluation, you can now focus on the requirements of your deployment processes:
- Manual approval steps: Assess whether you need to support any manual steps in your deployment processes.
- Deployments per time unit: Assess how many deployments per time unit you need to support.
- Factors that cause a new deployment: Assess which external systems interact with your deployment processes.
If you need to support manual deployment steps, it doesn't mean that your process cannot be automated. In this case, you automate each step of the process, and place the manual approval gates where appropriate.
Supporting multiple deployments per day or per hour is more complex than supporting a few deployments per month or per year. However, if you don't deploy often, your agility and your ability to react to issues and to ship new features in your workloads might be reduced. For this reason, before designing and implementing a fully automated deployment process, it's a good idea to set your expectations and goals.
Also evaluate which factors trigger a new deployment in your runtime environments. For example, you might deploy each new release in your development environment, but deploy the release in your quality assurance environment only if it meets certain quality criteria.
Plan and build a foundation
To extend the foundation that you built in the previous step, you provision and configure services to support your automated deployment processes.
For each of your runtime environments, set up the necessary infrastructure to support your deployment processes. For example, if you provision and configure your deployment processes in your development, quality assurance, pre-production, and production environments, you have the freedom and flexibility to test changes to your processes. However, if you use a single infrastructure to deploy your runtime environments, your environments are simpler to manage, but less flexible when you need to change your processes.
When provisioning the service accounts and roles, consider isolating your environments and your workloads from each other by creating dedicated service accounts that don't share responsibilities. For example, don't reuse the same service accounts for your different runtime environments.
Deploy your artifacts with fully automated processes
In this phase, you configure your deployment processes to deploy your artifacts with no manual interventions, other than approval steps.
You can use tools like Cloud Deploy to implement your automated deployment processes, according to the requirements that you gathered in the assessment phase of this migration step.
For any given artifact, each deployment process should execute the following tasks (a pipeline sketch follows this list):
- Deploy the artifact in the target runtime environment.
- Inject the configuration files and secrets in the deployed artifact.
- Run the compliance test suite against the newly deployed artifact.
- Promote the artifact to the production environment.
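The following sketch shows how these tasks could map to a Cloud Deploy delivery pipeline: each target deploys to a GKE cluster, the quality assurance stage runs a post-deployment verification that you define in the release's Skaffold configuration, and promotion to the production target requires a manual approval. The pipeline, target, and cluster names are hypothetical.

```yaml
# clouddeploy.yaml: a delivery pipeline with two stages and its targets.
apiVersion: deploy.cloud.google.com/v1
kind: DeliveryPipeline
metadata:
  name: my-app-pipeline
serialPipeline:
  stages:
    - targetId: qa
      strategy:
        standard:
          # Run the verification defined in the Skaffold configuration
          # after the artifact is deployed to this target.
          verify: true
    - targetId: prod
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: qa
gke:
  cluster: projects/my-project/locations/us-central1/clusters/qa-cluster
---
apiVersion: deploy.cloud.google.com/v1
kind: Target
metadata:
  name: prod
# Manual approval gate: promotion to this target must be approved.
requireApproval: true
gke:
  cluster: projects/my-project/locations/us-central1/clusters/prod-cluster
```

You register this configuration with the gcloud deploy apply command and then create a release for each artifact that you want to roll out through the pipeline.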
Make sure that your deployment processes provide interfaces to trigger new deployments according to your requirements.
Code review is a necessary step when implementing automated deployment processes, because of the short feedback loop that's part of these processes by design. For example, if you deploy changes to your production environment without any review, you impact the stability and reliability of your production environment. An unreviewed, malformed, or malicious change might cause a service outage.
Optimize your environment
After automating your deployment processes, you can run another optimization iteration. The requirements of this iteration are the following:
- Extend your monitoring system to cover the infrastructure supporting your automated deployment processes.
- Implement more advanced deployment patterns.
- Implement a break glass process.
An effective monitoring system lets you plan further optimizations for your environment. When you measure the behavior of your environment, you can find any bottlenecks that are hindering your performance, and other issues, like unauthorized or accidental access and exploits. For example, you can configure your environment so that you receive alerts when the consumption of certain resources reaches a threshold.
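For example, the following is a minimal sketch of a Cloud Monitoring alerting policy, in the format that the Cloud Monitoring API accepts for policies created from a file, that fires when GKE node CPU utilization stays above a threshold. The display names are placeholders, and notification channels are omitted for brevity.

```yaml
# Alerting policy (sketch): notify when GKE node CPU utilization stays above
# 80% for five minutes.
displayName: High CPU on GKE nodes
combiner: OR
conditions:
  - displayName: Node CPU allocatable utilization above 80%
    conditionThreshold:
      filter: >-
        resource.type = "k8s_node" AND
        metric.type = "kubernetes.io/node/cpu/allocatable_utilization"
      comparison: COMPARISON_GT
      thresholdValue: 0.8
      duration: 300s
      aggregations:
        - alignmentPeriod: 60s
          perSeriesAligner: ALIGN_MEAN
```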
When you're able to efficiently orchestrate containers, you can implement advanced deployment patterns depending on your needs. For example, you can implement blue/green deployments to increase the reliability of your environment and reduce the impact of any issue for your users.
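As a simplified illustration of the blue/green pattern on Kubernetes, you can run the current and the new version of a workload as two separate Deployments and switch a Service's selector from one to the other; the labels and names below are hypothetical.

```yaml
# A Service that routes traffic to the "blue" Deployment of the workload.
# After the "green" Deployment is healthy, change the version label in the
# selector to green to cut traffic over; revert it to blue to roll back.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue
  ports:
    - port: 80
      targetPort: 8080
```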
What's next
- Optimize your environment.
- Learn when to find help for your migrations.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Contributors
Author: Marco Ferrari | Cloud Solutions Architect