Migrate from AWS to Google Cloud: Migrate from Amazon EC2 to Compute Engine
Google Cloud provides tools, products, guidance, and professional services to migrate virtual machines (VMs) along with their data from Amazon Elastic Compute Cloud (Amazon EC2) to Compute Engine. This document discusses how to design, implement, and validate a plan to migrate from Amazon EC2 to Compute Engine.
The discussion in this document is intended for cloud administrators who want details about how to plan and implement a migration process. It's also intended for decision-makers who are evaluating the opportunity to migrate and who want to explore what migration might look like.
This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents:
- Get started
- Migrate from Amazon EC2 to Compute Engine (this document)
- Migrate from Amazon S3 to Cloud Storage
- Migrate from Amazon EKS to Google Kubernetes Engine
- Migrate from Amazon RDS and Amazon Aurora for MySQL to Cloud SQL for MySQL
- Migrate from Amazon RDS and Amazon Aurora for PostgreSQL to Cloud SQL for PostgreSQL and AlloyDB for PostgreSQL
- Migrate from Amazon RDS for SQL Server to Cloud SQL for SQL Server
- Migrate from AWS Lambda to Cloud Run
For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started.
The following diagram illustrates the path of your migration journey.
You might migrate from your source environment to Google Cloud in a series of iterations—for example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework:
- Assess and discover your workloads and data.
- Plan and build a foundation on Google Cloud.
- Migrate your workloads and data to Google Cloud.
- Optimize your Google Cloud environment.
For more information about the phases of this framework, see Migrate to Google Cloud: Get started.
To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan.
Assess the source environment
In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud.
The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration.
The assessment phase consists of the following tasks:
- Build a comprehensive inventory of your workloads.
- Catalog your workloads according to their properties and dependencies.
- Train and educate your teams on Google Cloud.
- Build experiments and proofs of concept on Google Cloud.
- Calculate the total cost of ownership (TCO) of the target environment.
- Choose the migration strategy for your workloads.
- Choose your migration tools.
- Define the migration plan and timeline.
- Validate your migration plan.
For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document.
Build an inventory of your Amazon EC2 instances
To scope your migration, you create an inventory of your Amazon EC2 instances. You can then use the inventory to assess your deployment and operational processes for deploying workloads on those instances.
To build the inventory of your Amazon EC2 instances, we recommend that you use Migration Center, Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current environment to Google Cloud. Migration Center lets you run an inventory discovery on AWS.
The data that Migration Center and the Migration Center discovery client CLI provide might not fully capture the dimensions that you're interested in. In that case, you can integrate that data with the results from other data-collection mechanisms that you create that are based on AWS APIs, AWS developer tools, and the AWS command-line interface.
In addition to the data that you get from Migration Center and the Migration Center discovery client CLI, consider the following data points for each Amazon EC2 instance that you want to migrate:
- Deployment region and zone.
- Instance type and size.
- The Amazon Machine Image (AMI) that the instance is launching from.
- The instance hostname, and how other instances and workloads use this hostname to communicate with the instance.
- The instance tags as well as metadata and user data.
- The instance virtualization type.
- The instance purchase option, such as on-demand purchase or spot purchase.
- How the instance stores data, such as using instance stores and Amazon EBS volumes.
- The instance tenancy configuration.
- Whether the instance is in a specific placement group.
- Whether the instance is in a specific autoscaling group.
- The security groups that the instance belongs to.
- Any AWS Network Firewall configuration that involves the instance.
- Whether the workloads that run on the instance are protected by AWS Shield and AWS WAF.
- Whether you're controlling the processor state of your instance, and how the workloads that run on the instance depend on the processor state.
- The configuration of the instance I/O scheduler.
- How you're exposing workloads that run on the instance to clients that run in your AWS environment (such as other workloads) and to external clients.
For more information about collecting these data points, see Create an inventory of your EC2 instances.
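To supplement the Migration Center inventory, you can pull several of the data points listed above directly with the AWS CLI. The following is a minimal sketch; the region, output file names, and query fields are illustrative placeholders, and you would extend the `--query` expression to cover the other data points you care about.

```shell
# Sketch: collect instance-level inventory data points with the AWS CLI.
# Captures type, AMI, zone, tenancy, purchase lifecycle, security groups, and tags.
aws ec2 describe-instances \
  --region us-east-1 \
  --query 'Reservations[].Instances[].{ID:InstanceId,Type:InstanceType,AMI:ImageId,AZ:Placement.AvailabilityZone,Tenancy:Placement.Tenancy,Lifecycle:InstanceLifecycle,SGs:SecurityGroups[].GroupId,Tags:Tags}' \
  --output json > ec2-inventory.json

# Capture how instances store data. Only Amazon EBS volumes appear here;
# ephemeral instance stores must be inventoried from instance type specifications.
aws ec2 describe-volumes \
  --region us-east-1 \
  --query 'Volumes[].{ID:VolumeId,SizeGiB:Size,Type:VolumeType,AttachedTo:Attachments[].InstanceId}' \
  --output json > ebs-inventory.json
```

The JSON output can then be merged with Migration Center data or loaded into a spreadsheet for cataloging.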
Assess your deployment and operational processes
It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there.
Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else.
In addition to the artifact type, consider how you complete the following tasks:
- Develop your workloads. Assess the processes that development teams havein place to build your workloads. For example, how are your development teamsdesigning, coding, and testing your workloads?
- Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images, by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud.
- Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following:
  - Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment.
  - Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first.
  Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration.
- Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments.
- Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment.
- Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment.
- Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes.
- Authentication. Assess how you're authenticating against your source environment.
- Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment.
Plan and build your foundation
In the plan and build phase, you provision and configure the infrastructure to do the following:
- Support your workloads in your Google Cloud environment.
- Connect your source environment and your Google Cloud environment to complete the migration.
The plan and build phase is composed of the following tasks:
- Build a resource hierarchy.
- Configure Google Cloud's Identity and Access Management (IAM).
- Set up billing.
- Set up network connectivity.
- Harden your security.
- Set up logging, monitoring, and alerting.
For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation.
Migrate your workloads
To migrate your workloads from Amazon EC2 to Compute Engine, you do the following:
- Migrate VMs from Amazon EC2 to Compute Engine.
- Migrate your VM disks to Persistent Disk.
- Expose workloads that run on Compute Engine to clients.
- Refactor deployment and operational processes to target Google Cloud instead of targeting Amazon EC2.
The following sections provide details about each of these tasks.
Migrate your VMs to Compute Engine
To migrate VMs from Amazon EC2 to Compute Engine, we recommend that you use Migrate to Virtual Machines, which is a fully managed service. For more information, see Migration journey with Migrate to VMs.
As part of the migration, Migrate to VMs migrates Amazon EC2 instances in their current state, apart from required configuration changes. If your Amazon EC2 instances run customized Amazon EC2 AMIs, Migrate to VMs migrates these customizations to Compute Engine instances. However, if you want to make your infrastructure reproducible, you might need to apply equivalent customizations by building Compute Engine operating system images as part of your deployment and operational processes, as explained later in this document.
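If you decide to make your customizations reproducible, one approach is to capture a migrated VM's boot disk as a Compute Engine image that later deployments can reuse. The following gcloud sketch uses placeholder names for the image, disk, zone, image family, and project.

```shell
# Sketch: capture a migrated VM's customizations as a reusable Compute Engine image.
# Image, disk, zone, family, and project names are placeholders.
gcloud compute images create my-app-image-v1 \
  --source-disk=my-migrated-vm-boot-disk \
  --source-disk-zone=us-central1-a \
  --family=my-app-images

# Later deployments can create identical VMs from the image family,
# which always resolves to the newest non-deprecated image in the family.
gcloud compute instances create my-app-vm-2 \
  --zone=us-central1-a \
  --image-family=my-app-images \
  --image-project=my-project
```

Using an image family rather than a fixed image name lets your deployment processes pick up new image versions without changes.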
Migrate your VM disks to Persistent Disk
You can also use Migrate to VMs to migrate disks from your source Amazon EC2 VMs to Persistent Disk, with minimal interruptions to the workloads that are running on the Amazon EC2 VMs. For more information, see Migrate VM disks and attach them to a new VM.
For example, you can migrate a data disk attached to an Amazon EC2 VM to Persistent Disk, and attach it to a new Compute Engine VM.
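After the disk migration completes, attaching the migrated disk to a Compute Engine VM is a single gcloud command. In this sketch, the VM name, disk name, and zone are placeholders.

```shell
# Sketch: attach a migrated persistent disk to an existing Compute Engine VM.
# VM, disk, and zone names are placeholders.
gcloud compute instances attach-disk my-new-vm \
  --disk=my-migrated-data-disk \
  --zone=us-central1-a

# Inside the VM, the disk then appears under /dev/disk/by-id/ and can be
# mounted, for example:
#   sudo mount -o discard,defaults \
#     /dev/disk/by-id/google-my-migrated-data-disk /mnt/data
```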
Expose workloads that run on Compute Engine
After you migrate your Amazon EC2 instances to Compute Engine instances, you might need to provision and configure your Google Cloud environment to expose the workloads to clients.
Google Cloud offers secure and reliable services and products for exposing your workloads to clients. For workloads that run on your Compute Engine instances, you configure resources for the following categories:
- Firewalls
- Traffic load balancing
- DNS names, zones, and records
- DDoS protection and web application firewalls
For each of these categories, you can start by implementing a baseline configuration that's similar to how you configured AWS services and resources in the equivalent category. You can then iterate on the configuration and use additional features that are provided by Google Cloud services.
The following sections explain how to provision and configure Google Cloud resources in these categories, and how they map to AWS resources in similar categories.
Firewalls
If you configured AWS security groups and AWS Network Firewall policies and rules, you can configure Cloud Next Generation Firewall policies and rules. You can also provision VPC Service Controls rules to regulate network traffic inside your VPC. You can use VPC Service Controls to control outgoing traffic from your Compute Engine instances, and to help mitigate the risk of data exfiltration.
For example, if you use AWS security groups to allow or deny connections to your Amazon EC2 instances, you can configure similar Virtual Private Cloud (VPC) firewall rules that apply to your Compute Engine instances.
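As an illustration of such an equivalent rule, the following gcloud sketch allows inbound HTTPS to instances that carry a network tag, roughly matching a security group rule that allows port 443. The network name and tag are placeholders.

```shell
# Sketch: a VPC firewall rule roughly equivalent to an AWS security group
# rule that allows inbound HTTPS. Network and tag names are placeholders.
gcloud compute firewall-rules create allow-https-web \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=web-server
```

Note one design difference: AWS security groups attach to instances directly, while VPC firewall rules apply at the network level and select instances by tags or service accounts.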
If you use remote access protocols like SSH or RDP to connect to your Amazon EC2 VMs, you can remove the VM's public IP address and connect to the VM remotely with Identity-Aware Proxy (IAP). IAP TCP forwarding lets you establish an encrypted tunnel. You can use the tunnel to forward SSH, RDP, and other internet traffic to VMs without assigning your VMs public IP addresses. Because connections from the IAP service originate from a reserved public IP address range, you need to create matching VPC firewall rules. If you have Windows-based VMs and you turned on Windows Firewall, verify that the Windows Firewall isn't configured to block RDP connections from IAP. For more information, see Troubleshooting RDP.
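The setup above can be sketched in two commands: one firewall rule that admits IAP's documented source range (35.235.240.0/20) on the SSH and RDP ports, and one tunneled SSH connection. The network, VM, and zone names are placeholders.

```shell
# Sketch: allow IAP TCP forwarding to reach SSH and RDP ports.
# 35.235.240.0/20 is the reserved IAP source range; other names are placeholders.
gcloud compute firewall-rules create allow-iap-ingress \
  --network=my-vpc \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:22,tcp:3389 \
  --source-ranges=35.235.240.0/20

# Connect over SSH through an IAP tunnel, without a public IP on the VM.
gcloud compute ssh my-vm \
  --zone=us-central1-a \
  --tunnel-through-iap
```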
Traffic load balancing
If you've configured Elastic Load Balancing (ELB) in your AWS environment, you can configure Cloud Load Balancing to distribute network traffic to help improve the scalability of your workloads in Google Cloud. Cloud Load Balancing supports several global and regional load balancing products that work at different layers of the OSI model, such as at the transport layer and at the application layer. You can choose a load balancing product that's suitable for the requirements of your workloads.
Cloud Load Balancing also supports configuring Transport Layer Security (TLS) to encrypt network traffic. When you configure TLS for Cloud Load Balancing, you can use self-managed or Google-managed TLS certificates, depending on your requirements.
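If you opt for Google-managed certificates, provisioning one for a global external load balancer can be sketched with a single command. The certificate name and domain are placeholders; Google then handles issuance and renewal after the load balancer and DNS are in place.

```shell
# Sketch: create a Google-managed TLS certificate for a global external
# load balancer. Certificate name and domain are placeholders.
gcloud compute ssl-certificates create my-managed-cert \
  --domains=www.example.com \
  --global
```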
DNS names, zones, and records
If you use Amazon Route 53 in your AWS environment, you can use the following in Google Cloud:
- Cloud Domains to register your DNS domains.
- Cloud DNS to manage your public and private DNS zones and your DNS records.
For example, if you registered a domain by using Amazon Route 53, you can transfer the domain registration to Cloud Domains. Similarly, if you configured public and private DNS zones using Amazon Route 53, you can migrate that configuration to Cloud DNS.
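The Cloud DNS side of such a migration can be sketched as creating a managed zone (the analog of a Route 53 hosted zone) and then adding record sets. The zone name, domain, and IP address below are placeholders.

```shell
# Sketch: create a public Cloud DNS zone, analogous to a Route 53 hosted zone.
# Zone name, domain, and record data are placeholders.
gcloud dns managed-zones create example-zone \
  --dns-name=example.com. \
  --description="Public zone for example.com"

# Add an A record, analogous to a Route 53 record set.
gcloud dns record-sets create www.example.com. \
  --zone=example-zone \
  --type=A \
  --ttl=300 \
  --rrdatas=203.0.113.10
```

For bulk migrations, you can export records from Route 53 and import them into the zone instead of creating record sets one by one.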
DDoS protection and web application firewalls
If you configured AWS Shield and AWS WAF in your AWS environment, you can use Google Cloud Armor to help protect your Google Cloud workloads from DDoS attacks and from common exploits.
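A baseline Google Cloud Armor configuration, similar in intent to an AWS WAF web ACL, is a security policy with one or more rules. The following sketch uses a preconfigured WAF rule for SQL injection; the policy name and rule priority are placeholders.

```shell
# Sketch: a Cloud Armor security policy with a preconfigured WAF rule,
# similar in intent to an AWS WAF web ACL. Names and priority are placeholders.
gcloud compute security-policies create my-security-policy \
  --description="Baseline WAF policy"

# Deny requests that match the preconfigured SQL injection expression.
gcloud compute security-policies rules create 1000 \
  --security-policy=my-security-policy \
  --expression="evaluatePreconfiguredExpr('sqli-stable')" \
  --action=deny-403
```

You then attach the policy to the backend service of the load balancer that fronts your Compute Engine instances.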
Refactor deployment and operational processes
After you refactor your workloads, you refactor your deployment and operational processes to do the following:
- Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment.
- Build and configure workloads, and deploy them in your Google Cloud environment instead of deploying them in your source environment.
You gathered information about these processes during the assessment phase earlier in this process.
The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following:
- You might have implemented these processes in your source environment and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager.
- You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment.
- A combination of the previous approaches.
Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project.
For more information about how to design and implement deployment processes on Google Cloud, see:
- Migrate to Google Cloud: Deploy your workloads
- Migrate to Google Cloud: Migrate from manual deployments to automated, containerized deployments
This document focuses on the deployment processes that produce the artifacts to deploy, and that deploy them in the target runtime environment. The refactoring strategy highly depends on the complexity of these processes. The following list outlines a possible, general refactoring strategy:
- Provision artifact repositories on Google Cloud. For example, you can use Artifact Registry to store artifacts and build dependencies.
- Refactor your build processes to store artifacts both in your source environment and in Artifact Registry.
- Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud.
- Refactor your build processes to store artifacts in Artifact Registry only.
- If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry.
- Decommission the repositories in your source environment when you no longer require them.
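The first two steps of this strategy can be sketched with gcloud and Docker. The project, region, repository, and image names below are placeholders.

```shell
# Sketch: provision a Docker repository in Artifact Registry.
# Project, location, repository, and image names are placeholders.
gcloud artifacts repositories create my-repo \
  --repository-format=docker \
  --location=us-central1

# Configure Docker to authenticate to the regional Artifact Registry endpoint.
gcloud auth configure-docker us-central1-docker.pkg.dev

# Tag an image that your existing build process produced, and push it,
# so that it's stored in both the source registry and Artifact Registry.
docker tag my-app:v1 us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1
```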
To facilitate eventual rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only.
Although it might not be crucial for the success of a migration, you might need to migrate earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry.
If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts.
Optimize your Google Cloud environment
Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows:
- Assess your current environment, teams, and optimization loop.
- Establish your optimization requirements and goals.
- Optimize your environment and your teams.
- Tune the optimization loop.
You repeat this sequence until you've achieved your optimization goals.
For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Well-Architected Framework: Performance optimization.
What's next
- Read about other AWS to Google Cloud migration journeys.
- Learn how to compare AWS and Azure services to Google Cloud.
- Learn when to find help for your migrations.
- For more reference architectures, diagrams, and best practices, explore the Cloud Architecture Center.
Contributors
Author: Marco Ferrari | Cloud Solutions Architect
Last updated 2024-11-20 UTC.