Migrate from AWS to Google Cloud: Migrate from AWS Lambda to Cloud Run

Last reviewed 2025-12-31 UTC

Google Cloud provides tools, products, guidance, and professional services to assist in migrating serverless workloads from Amazon Web Services (AWS) Lambda to Google Cloud. Although Google Cloud provides several services on which you can develop and deploy serverless applications, this document focuses on migrating to Cloud Run, a serverless runtime environment. Both AWS Lambda and Cloud Run share similarities such as automatic resource provisioning, scaling by the cloud provider, and a pay-per-use pricing model.

This document helps you to design, implement, and validate a plan to migrate serverless workloads from AWS Lambda to Cloud Run. Additionally, it offers guidance for those evaluating the potential benefits and process of such a migration.

This document is part of a multi-part series about migrating from AWS to Google Cloud that includes the following documents:

For more information about picking the right serverless runtime environment for your business logic, see Select a managed container runtime environment. For a comprehensive mapping between AWS and Google Cloud services, see Compare AWS and Azure services to Google Cloud services.

For this migration to Google Cloud, we recommend that you follow the migration framework described in Migrate to Google Cloud: Get started.

The following diagram illustrates the path of your migration journey.

Migration path with four phases.

You might migrate from your source environment to Google Cloud in a series of iterations. For example, you might migrate some workloads first and others later. For each separate migration iteration, you follow the phases of the general migration framework:

  1. Assess and discover your workloads and data.
  2. Plan and build a foundation on Google Cloud.
  3. Migrate your workloads and data to Google Cloud.
  4. Optimize your Google Cloud environment.

For more information about the phases of this framework, see Migrate to Google Cloud: Get started.

To design an effective migration plan, we recommend that you validate each step of the plan, and ensure that you have a rollback strategy. To help you validate your migration plan, see Migrate to Google Cloud: Best practices for validating a migration plan.

Migrating serverless workloads often extends beyond just moving functions from one cloud provider to another. Because cloud-based applications rely on an interconnected web of services, migrating from AWS to Google Cloud might require replacing dependent AWS services with Google Cloud services. For example, consider a scenario in which your Lambda function interacts with Amazon SQS and Amazon SNS. To migrate this function, you will likely need to adopt Pub/Sub and Cloud Tasks to achieve similar functionality.
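
For example, the following sketch shows how a call that sends a message to Amazon SQS might be replaced with a call that publishes to a Pub/Sub topic. It assumes the boto3 and google-cloud-pubsub libraries; the queue URL, project ID, and topic name are placeholders.

```python
# Sketch only: the queue URL, project ID, and topic name are placeholders.
import json

import boto3                        # AWS SDK for Python
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub


def send_to_sqs(payload: dict) -> None:
    """Original AWS-side call: enqueue a message on Amazon SQS."""
    sqs = boto3.client("sqs")
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        MessageBody=json.dumps(payload),
    )


def publish_to_pubsub(payload: dict) -> None:
    """Refactored call: publish the same payload to a Pub/Sub topic."""
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("my-project", "my-topic")
    future = publisher.publish(topic_path, json.dumps(payload).encode("utf-8"))
    future.result()  # Block until Pub/Sub acknowledges the message.
```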

Migration also presents a valuable chance for you to thoroughly review your serverless application's architecture and design decisions. Through this review, you might discover opportunities to do the following:

  • Optimize with Google Cloud built-in features: Explore whether Google Cloud services offer unique advantages or better align with your application's requirements.
  • Simplify your architecture: Assess whether streamlining is possible by consolidating functionality or using services differently within Google Cloud.
  • Improve cost-efficiency: Evaluate the potential cost differences of running your refactored application on the infrastructure that is provided on Google Cloud.
  • Improve code efficiency: Refactor your code alongside the migration process.

Plan your migration strategically. Don't view your migration as a rehost (lift and shift) exercise. Use your migration as a chance to enhance the overall design and code quality of your serverless application.

Assess the source environment

In the assessment phase, you determine the requirements and dependencies to migrate your source environment to Google Cloud.

The assessment phase is crucial for the success of your migration. You need to gain deep knowledge about the workloads you want to migrate, their requirements, their dependencies, and about your current environment. You need to understand your starting point to successfully plan and execute a Google Cloud migration.

The assessment phase consists of the following tasks:

  1. Build a comprehensive inventory of your workloads.
  2. Catalog your workloads according to their properties and dependencies.
  3. Train and educate your teams on Google Cloud.
  4. Build experiments and proofs of concept on Google Cloud.
  5. Calculate the total cost of ownership (TCO) of the target environment.
  6. Choose the migration strategy for your workloads.
  7. Choose your migration tools.
  8. Define the migration plan and timeline.
  9. Validate your migration plan.

For more information about the assessment phase and these tasks, see Migrate to Google Cloud: Assess and discover your workloads. The following sections are based on information in that document.

Build an inventory of your AWS Lambda workloads

To define the scope of your migration, you create an inventory and collect information about your AWS Lambda workloads.

To build the inventory of your AWS Lambda workloads, we recommend that you do the following:

  1. To discover your AWS Lambda assets, use Migration Center. Migration Center is Google Cloud's unified platform that helps you accelerate your end-to-end cloud journey from your current environment to Google Cloud.
  2. To refine the inventory of your AWS Lambda workloads, use the AWS command-line interface (CLI). For example, you can use the AWS CLI to get a list of AWS Lambda functions. Then, for each function, get its configuration details and its triggers (see the sketch after this list).

  3. To review recent AWS Lambda invocation logs and durations, concurrent executions, and errors, use CloudWatch. CloudWatch provides several types of metrics for AWS Lambda. For more information about viewing AWS Lambda metrics, see Viewing metrics for AWS Lambda functions.

  4. To assess your AWS Lambda workloads inventory, use Gemini CLI. For example, you can add the inventory files that list the AWS Lambda workloads to the Gemini CLI context and then prompt Gemini to help you assess those objects. For more information, see Gemini-powered migrations to Google Cloud.
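
For example, the following sketch builds a basic inventory by using the AWS SDK for Python (boto3) instead of the raw AWS CLI. The output file name and the fields that are collected are illustrative; adapt them to the aspects that are listed in the following sections.

```python
# Sketch only: the output file name and collected fields are illustrative.
import json

import boto3  # AWS SDK for Python

lambda_client = boto3.client("lambda")
inventory = []

# Equivalent AWS CLI commands: `aws lambda list-functions`,
# `aws lambda get-function-configuration`, and
# `aws lambda list-event-source-mappings`.
paginator = lambda_client.get_paginator("list_functions")
for page in paginator.paginate():
    for function in page["Functions"]:
        name = function["FunctionName"]
        config = lambda_client.get_function_configuration(FunctionName=name)
        mappings = lambda_client.list_event_source_mappings(FunctionName=name)
        inventory.append(
            {
                "name": name,
                "runtime": config.get("Runtime"),
                "memory_mb": config.get("MemorySize"),
                "timeout_s": config.get("Timeout"),
                "environment": config.get("Environment", {}).get("Variables", {}),
                "event_source_mappings": mappings.get("EventSourceMappings", []),
            }
        )

# Save the inventory so that you can review it or add it to the Gemini CLI context.
with open("lambda-inventory.json", "w") as f:
    json.dump(inventory, f, indent=2, default=str)
```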

After you build your inventory, we recommend that you gather information about each AWS Lambda workload in the inventory. For each workload, focus on aspects that help you anticipate potential friction. Also, analyze that workload to understand how you might need to modify the workload and its dependencies before you migrate to Cloud Run. We recommend that you start by collecting data about the following aspects of each AWS Lambda workload:

  • The use case and design
  • The source code repository
  • The deployment artifacts
  • The invocation, triggers, and outputs
  • The runtime and execution environments
  • The workload configuration
  • The access controls and permissions
  • The compliance and regulatory requirements
  • The deployment and operational processes

Use case and design

Gathering information about the use case and design of the workloads helps in identifying a suitable migration strategy. This information also helps you to understand whether you need to modify your workloads and their dependencies before the migration. For each workload, we recommend that you do the following:

  • Gain insights into the specific use case that the workload serves, and identify any dependencies with other systems, resources, or processes.
  • Analyze the workload's design and architecture.
  • Assess the workload's latency requirements.

Source code repository

Inventorying the source code of your AWS Lambda functions helps if you need to refactor your AWS Lambda workloads for compatibility with Cloud Run. Creating this inventory involves tracking the codebase, which is typically stored in version control systems like Git or in development platforms such as GitHub or GitLab. The inventory of your source code is essential for your DevOps processes, such as continuous integration and continuous delivery (CI/CD) pipelines, because these processes will also need to be updated when you migrate to Cloud Run.

Deployment artifacts

Knowing which deployment artifacts the workload needs also helps you understand whether you might need to refactor your AWS Lambda workloads. To identify the deployment artifacts that the workload needs, gather the following information:

  • The type of deployment package to deploy the workload.
  • Any AWS Lambda layer that contains additional code, such as libraries and other dependencies.
  • Any AWS Lambda extensions that the workload depends on.
  • The qualifiers that you configured to specify versions and aliases.
  • The deployed workload version.

Invocation, triggers, and outputs

AWS Lambda supports several invocation mechanisms, such as triggers, and different invocation models, such as synchronous invocation and asynchronous invocation. For each AWS Lambda workload, we recommend that you gather the following information that is related to triggers and invocations:

  • The triggers and event source mappings that invoke the workload.
  • Whether the workload supports synchronous and asynchronous invocations.
  • The workload URLs and HTTP(S) endpoints.

Your AWS Lambda workloads can interact with other resources and systems. You need to know what resources consume the outputs of your AWS Lambda workloads and how those resources consume those outputs. This knowledge helps you to determine whether you need to modify anything that might depend on those outputs, such as other systems or workloads. For each AWS Lambda workload, we recommend that you gather the following information about other resources and systems:

  • The destination resources that the workload might send events to.
  • The destinations that receive information records for asynchronous invocations.
  • The format for the events that the workload processes.
  • How your AWS Lambda workload and its extensions interact with AWS Lambda APIs, or other AWS APIs.

In order to function, your AWS Lambda workloads might store persistent data and interact with other AWS services. For each AWS Lambda workload, we recommend that you gather the following information about data and other services:

  • Whether the workload accesses virtual private clouds (VPCs) or other private networks.
  • How the workload stores persistent data, such as by using ephemeral data storage and Amazon Elastic File System (EFS).

Runtime and execution environments

AWS Lambda supports several execution environments for your workloads. To correctly map AWS Lambda execution environments to Cloud Run execution environments, we recommend that you assess the following for each AWS Lambda workload:

  • The execution environment of the workload.
  • The instruction set architecture of the computer processor on which the workload runs.

If your AWS Lambda workloads run in language-specific runtime environments, consider the following for each AWS Lambda workload:

  • The type, version, and unique identifier of the language-specific runtime environment.
  • Any modifications that you applied to the runtime environment.

Workload configuration

In order to configure your workloads as you migrate them from AWS Lambda to Cloud Run, we recommend that you assess how you configured each AWS Lambda workload.

For each AWS Lambda workload, gather information about the following configuration settings:

  • The concurrency controls.
  • The scalability settings.
  • The configuration of the instances of the workload, in terms of the amount of memory available and the maximum execution time allowed.
  • Whether the workload is using AWS Lambda SnapStart, reserved concurrency, or provisioned concurrency to reduce latency (see the sketch after this list).
  • The environment variables that you configured, as well as the ones that AWS Lambda configures and the workload depends on.
  • The tags and attribute-based access control.
  • The state machine to handle exceptional conditions.
  • The base images and configuration files (such as the Dockerfile) for deployment packages that use container images.
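
For example, the following sketch shows one way to capture the concurrency-related settings of a single function by using boto3; the function name is a placeholder.

```python
# Sketch only: the function name is a placeholder.
import boto3

lambda_client = boto3.client("lambda")
function_name = "my-function"

# Reserved concurrency, if configured (maps loosely to Cloud Run maximum instances).
reserved = lambda_client.get_function_concurrency(FunctionName=function_name)
print("Reserved concurrency:", reserved.get("ReservedConcurrentExecutions", "not set"))

# Provisioned concurrency, if configured (maps loosely to Cloud Run minimum instances).
provisioned = lambda_client.list_provisioned_concurrency_configs(
    FunctionName=function_name
)
for config in provisioned.get("ProvisionedConcurrencyConfigs", []):
    print(
        config["FunctionArn"],
        config.get("AllocatedProvisionedConcurrentExecutions"),
    )
```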

Access controls and permissions

As part of your assessment, we recommend that you assess the security requirements of your AWS Lambda workloads and their configuration in terms of access controls and management. This information is critical if you need to implement similar controls in your Google Cloud environment. For each workload, consider the following:

  • The execution role and permissions.
  • The identity and access management configuration that the workload and its layers use to access other resources.
  • The identity and access management configuration that other accounts and services use to access the workload.
  • The governance controls.

Compliance and regulatory requirements

For each AWS Lambda workload, make sure that you understand its compliance and regulatory requirements by doing the following:

  • Assess any compliance and regulatory requirements that the workload needs to meet.
  • Determine whether the workload is meeting these requirements.
  • Determine whether there are any future requirements that will need to be met.

Compliance and regulatory requirements might be independent from the cloud provider that you're using, and these requirements might have an impact on the migration as well. For example, you might need to ensure that data and network traffic stay within the boundaries of certain geographies, such as the European Union (EU).

Assess your deployment and operational processes

It's important to have a clear understanding of how your deployment and operational processes work. These processes are a fundamental part of the practices that prepare and maintain your production environment and the workloads that run there.

Your deployment and operational processes might build the artifacts that your workloads need to function. Therefore, you should gather information about each artifact type. For example, an artifact can be an operating system package, an application deployment package, an operating system image, a container image, or something else.

In addition to the artifact type, consider how you complete the following tasks:

  • Develop your workloads. Assess the processes that development teams have in place to build your workloads. For example, how are your development teams designing, coding, and testing your workloads?
  • Generate the artifacts that you deploy in your source environment. To deploy your workloads in your source environment, you might be generating deployable artifacts, such as container images or operating system images, or you might be customizing existing artifacts, such as third-party operating system images, by installing and configuring software. Gathering information about how you're generating these artifacts helps you to ensure that the generated artifacts are suitable for deployment in Google Cloud.
  • Store the artifacts. If you produce artifacts that you store in an artifact registry in your source environment, you need to make the artifacts available in your Google Cloud environment. You can do so by employing strategies like the following:

    • Establish a communication channel between the environments: Make the artifacts in your source environment reachable from the target Google Cloud environment.
    • Refactor the artifact build process: Complete a minor refactor of your source environment so that you can store artifacts in both the source environment and the target environment. This approach supports your migration by building infrastructure like an artifact repository before you have to implement artifact build processes in the target Google Cloud environment. You can implement this approach directly, or you can build on the previous approach of establishing a communication channel first.

    Having artifacts available in both the source and target environments lets you focus on the migration without having to implement artifact build processes in the target Google Cloud environment as part of the migration.

  • Scan and sign code. As part of your artifact build processes, you might be using code scanning to help you guard against common vulnerabilities and unintended network exposure, and code signing to help you ensure that only trusted code runs in your environments.

  • Deploy artifacts in your source environment. After you generate deployable artifacts, you might be deploying them in your source environment. We recommend that you assess each deployment process. The assessment helps ensure that your deployment processes are compatible with Google Cloud. It also helps you to understand the effort that will be necessary to eventually refactor the processes. For example, if your deployment processes work with your source environment only, you might need to refactor them to target your Google Cloud environment.

  • Inject runtime configuration. You might be injecting runtime configuration for specific clusters, runtime environments, or workload deployments. The configuration might initialize environment variables and other configuration values such as secrets, credentials, and keys. To help ensure that your runtime configuration injection processes work on Google Cloud, we recommend that you assess how you're configuring the workloads that run in your source environment.

  • Logging, monitoring, and profiling. Assess the logging, monitoring, and profiling processes that you have in place to monitor the health of your source environment, the metrics of interest, and how you're consuming data provided by these processes.

  • Authentication. Assess how you're authenticating against your source environment.

  • Provision and configure your resources. To prepare your source environment, you might have designed and implemented processes that provision and configure resources. For example, you might be using Terraform along with configuration management tools to provision and configure resources in your source environment. If you're using Terraform, you can build your foundations and landing zone on Google Cloud by using the following tools, depending on your preferred starting point:

Complete the assessment

After you build the inventories from your AWS Lambda environment, complete the rest of the activities of the assessment phase as described in Migrate to Google Cloud: Assess and discover your workloads.

Plan and build your foundation

In the plan and build phase, you provision and configure the infrastructure to do the following:

  • Support your workloads in your Google Cloud environment.
  • Connect your source environment and your Google Cloud environment to complete the migration.

The plan and build phase is composed of the following tasks:

  1. Build a resource hierarchy.
  2. Configure Google Cloud's Identity and Access Management (IAM).
  3. Set up billing.
  4. Set up network connectivity.
  5. Harden your security.
  6. Set up logging, monitoring, and alerting.

For more information about each of these tasks, see Migrate to Google Cloud: Plan and build your foundation.

Migrate your AWS Lambda workloads

To migrate your workloads from AWS Lambda to Cloud Run, do the following:

  1. Design, provision, and configure your Cloud Run environment.
  2. If needed, refactor your AWS Lambda workloads to make them compatible with Cloud Run.
  3. Refactor your deployment and operational processes to deploy and observe your workloads on Cloud Run.
  4. Migrate the data that is needed by your AWS Lambda workloads.
  5. Validate the migration results in terms of functionality, performance,and cost.

To help you avoid issues during the migration, and to help estimate the effort that is needed for the migration, we recommend that you evaluate how AWS Lambda features compare to similar Cloud Run features. AWS Lambda and Cloud Run features might look similar when you compare them. However, differences in the design and implementation of the features in the two cloud providers can have significant effects on your migration from AWS Lambda to Cloud Run. These differences can influence both your design and refactoring decisions, as highlighted in the following sections.

Design, provision, and configure your Cloud Run environment

The first step of the migrate phase is to design your Cloud Run environment so that it can support the workloads that you are migrating from AWS Lambda.

In order to correctly design your Cloud Run environment, use the data that you gathered during the assessment phase about each AWS Lambda workload. This data helps you to do the following:

  1. Choose the right Cloud Run resources to deploy your workload.
  2. Design your Cloud Run resources configuration.
  3. Provision and configure the Cloud Run resources.

Choose the right Cloud Run resources

For each AWS Lambda workload that you migrate, choose the right Cloud Run resource to deploy it. Cloud Run supports the following main resources:

  • Cloud Run services: a resource that hosts a containerized runtime environment, exposes a unique endpoint, and automatically scales the underlying infrastructure according to demand.
  • Cloud Run jobs: a resource that executes one or more containers to completion.

The following table summarizes how AWS Lambda resources map to these main Cloud Run resources:

AWS Lambda resource | Cloud Run resource
AWS Lambda function that gets triggered by an event, such as those used for websites and web applications, APIs and microservices, streaming data processing, and event-driven architectures. | Cloud Run service that you can invoke with triggers.
AWS Lambda function that has been scheduled to run, such as those for background tasks and batch jobs. | Cloud Run job that runs to completion.

Beyond services and jobs, Cloud Run provides additional resources that extend these main resources. For more information about all of the available Cloud Run resources, see Cloud Run resource model.
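
For example, a scheduled, batch-style AWS Lambda function might map to a Cloud Run job. The following sketch creates such a job by using the google-cloud-run client library; the project, region, job name, and container image are placeholders, and the exact client calls might differ depending on your library version.

```python
# Sketch only: project, region, job name, and container image are placeholders.
from google.cloud import run_v2  # pip install google-cloud-run

client = run_v2.JobsClient()

job = run_v2.Job(
    template=run_v2.ExecutionTemplate(
        task_count=1,
        template=run_v2.TaskTemplate(
            containers=[
                run_v2.Container(
                    image="us-central1-docker.pkg.dev/my-project/my-repo/nightly-batch:latest"
                )
            ],
            max_retries=3,
        ),
    )
)

operation = client.create_job(
    parent="projects/my-project/locations/us-central1",
    job=job,
    job_id="nightly-batch",
)
print("Created job:", operation.result().name)
```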

Design your Cloud Run resources configuration

Before you provision and configure your Cloud Run resources, you design their configuration. Certain AWS Lambda configuration options, such as resource limits and request timeouts, are comparable to similar Cloud Run configuration options. The following sections describe the configuration options that are available in Cloud Run for service triggers and job execution, resource configuration, and security. These sections also highlight AWS Lambda configuration options that are comparable to those in Cloud Run.

Cloud Run service triggers and job execution

Service triggers and job execution are the main design decisions that you need to consider when you migrate your AWS Lambda workloads. Cloud Run provides a variety of options to trigger and run the event-based workloads that are used in AWS Lambda. In addition, Cloud Run provides options that you can use for streaming workloads and scheduled jobs.

When you migrate your workloads, it is often useful to first understand the triggers and mechanisms that are available in Cloud Run. This information helps you understand how Cloud Run works. You can then determine which Cloud Run features are comparable to AWS Lambda features and which Cloud Run features you could use when you refactor those workloads.

To learn more about the service triggers that Cloud Run provides, use the following resources:

To learn more about the job execution mechanisms that Cloud Run provides, use the following resources:

To help you understand which Cloud Run invocation or execution mechanisms are comparable to AWS Lambda invocation mechanisms, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right invocation or execution mechanism.

AWS Lambda feature | Cloud Run feature
HTTPS trigger (function URLs) | HTTPS invocation
HTTP/2 trigger (partially supported using an external API gateway) | HTTP/2 invocation (supported natively)
WebSockets (supported using an external API gateway) | WebSockets (supported natively)
N/A (gRPC connections not supported) | gRPC connections
Asynchronous invocation | Cloud Tasks integration
Scheduled invocation | Cloud Scheduler integration
Event-based trigger in a proprietary event format | Event-based invocation in CloudEvents format
Amazon SQS and Amazon SNS integration | Pub/Sub integration
AWS Step Functions | Workflows integration
Note: This document focuses on migrating AWS Lambda workloads to Cloud Run. If you use AWS Step Functions and you need to migrate them as well, see Migrate from AWS Step Functions to Workflows.
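
To illustrate one of the mappings in the preceding table, the following sketch enqueues an HTTP task with Cloud Tasks that targets a Cloud Run service URL, as a rough replacement for an asynchronous AWS Lambda invocation. The project, location, queue name, and service URL are placeholders.

```python
# Sketch only: project, location, queue name, and service URL are placeholders.
import json

from google.cloud import tasks_v2  # pip install google-cloud-tasks

client = tasks_v2.CloudTasksClient()
parent = client.queue_path("my-project", "us-central1", "my-queue")

task = tasks_v2.Task(
    http_request=tasks_v2.HttpRequest(
        http_method=tasks_v2.HttpMethod.POST,
        url="https://my-service-abc123-uc.a.run.app/process",
        headers={"Content-Type": "application/json"},
        body=json.dumps({"order_id": "12345"}).encode(),
    )
)

response = client.create_task(request={"parent": parent, "task": task})
print("Enqueued task:", response.name)
```

Unlike an asynchronous AWS Lambda invocation, the retry and rate-limiting behavior is configured on the Cloud Tasks queue rather than on the workload itself.
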
Cloud Run resource configuration

To supplement the design decisions that you made for triggering and running your migrated workloads, Cloud Run supports several configuration options that let you fine-tune several aspects of the runtime environment. These configuration options apply to both services and jobs.

As mentioned earlier, you can better understand how Cloud Run works by first developing an understanding of all of the configuration options that are available in Cloud Run. This understanding then helps you to compare AWS Lambda features to similar Cloud Run features, and helps you determine how to refactor your workloads.

To learn more about the configurations that Cloud Run services provide, use the following resources:

To learn more about the jobs that Cloud Run provides, use the following resources:

To help you understand which Cloud Run configuration options are comparable to AWS Lambda configuration options, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right configuration options.

AWS Lambda feature | Cloud Run feature
Provisioned concurrency | Minimum instances
Reserved concurrency per instance (the concurrency pool is shared across AWS Lambda functions in your AWS account) | Maximum instances per service
N/A (not supported, one request maps to one instance) | Concurrent requests per instance
N/A (depends on memory allocation) | CPU allocation
Scalability settings | Instance autoscaling for services, and parallelism for jobs
Instance configuration (CPU, memory) | CPU and memory limits
Maximum execution time | Request timeout for services, and task timeout for jobs
AWS Lambda SnapStart | Startup CPU boost
Environment variables | Environment variables
Ephemeral data storage | In-memory volume mounts
Amazon Elastic File System connections | NFS volume mounts
N/A (S3 volume mounts are not supported) | Cloud Storage volume mounts
AWS Secrets Manager in AWS Lambda workloads | Secrets
Workload URLs and HTTP(S) endpoints | Auto-assigned URLs, and Cloud Run integrations with Google Cloud products
Sticky sessions (using an external load balancer) | Session affinity
Qualifiers | Revisions

In addition to the features that are listed in the previous table, you should also consider the differences between how AWS Lambda and Cloud Run provision instances of the execution environment. AWS Lambda provisions a single instance for each concurrent request. However, Cloud Run lets you set the number of concurrent requests that an instance can serve. That is, the provisioning behavior of AWS Lambda is equivalent to setting the maximum number of concurrent requests per instance to 1 in Cloud Run. Setting the maximum number of concurrent requests to more than 1 can significantly reduce costs because the CPU and memory of the instance are shared by the concurrent requests, but the instance is billed only once. Most HTTP frameworks are designed to handle requests in parallel.

Cloud Run security and access control

When you design your Cloud Run resources, you also need to decide on the security and access controls that you need for your migrated workloads. Cloud Run supports several configuration options to help you secure your environment, and to set roles and permissions for your Cloud Run workloads.

This section highlights the security and access controls that are available in Cloud Run. This information helps you both understand how your migrated workloads will function in Cloud Run and identify those Cloud Run options that you might need if you refactor those workloads.

To learn more about the authentication mechanisms that Cloud Run provides, use the following resources:

To learn more about the security features that Cloud Run provides, use the following resources:

To help you understand which Cloud Run security and access controls are comparable to those that are available in AWS Lambda, use the following table. For each Cloud Run resource that you need to provision, make sure to choose the right access controls and security features.

AWS Lambda feature | Cloud Run feature
Access control with AWS Identity and Access Management (IAM) | Access control with Google Cloud's IAM
Execution role | Google Cloud's IAM role
Permission boundaries | Google Cloud's IAM permissions and custom audiences
Governance controls | Organization Policy Service
Code signing | Binary Authorization
Full VPC access | Granular VPC egress access controls

Provision and configure Cloud Run resources

After you choose the Cloud Run resources to deploy your workloads, you provision and configure those resources. For more information about provisioning Cloud Run resources, see the following:
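
As a starting point, the following sketch provisions a Cloud Run service by using the google-cloud-run client library. The project, region, service name, container image, resource limits, and concurrency value are placeholders; replace them with the values that you collected during the assessment phase.

```python
# Sketch only: project, region, service name, image, limits, and concurrency
# are placeholders; replace them with the values from your assessment.
from google.cloud import run_v2  # pip install google-cloud-run

client = run_v2.ServicesClient()

service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[
            run_v2.Container(
                image="us-central1-docker.pkg.dev/my-project/my-repo/my-workload:latest",
                resources=run_v2.ResourceRequirements(
                    limits={"cpu": "1", "memory": "512Mi"}
                ),
            )
        ],
        # Unlike AWS Lambda's one-request-per-instance model, a Cloud Run
        # instance can serve multiple concurrent requests.
        max_instance_request_concurrency=80,
    )
)

operation = client.create_service(
    parent="projects/my-project/locations/us-central1",
    service=service,
    service_id="my-workload",
)
print("Created service:", operation.result().uri)
```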

Refactor AWS Lambda workloads

To migrate your AWS Lambda workloads to Cloud Run, you might need to refactor them. For example, if an event-based workload accepts triggers that contain Amazon CloudWatch events, you might need to refactor that workload to make it accept events in the CloudEvents format.
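
For example, the following sketch shows a Cloud Run service that parses incoming CloudEvents by using Flask and the cloudevents SDK; the route, the attributes that are read, and the process function are illustrative.

```python
# Sketch only: the route, attributes read, and process() are illustrative.
from cloudevents.http import from_http  # pip install cloudevents
from flask import Flask, request        # pip install flask

app = Flask(__name__)


@app.route("/", methods=["POST"])
def handle_event():
    # Parse the CloudEvent from the HTTP headers and body.
    event = from_http(request.headers, request.get_data())
    print("Received event:", event["id"], event["type"], event["source"])
    process(event.data)  # Hand the payload to your existing business logic.
    return ("", 204)


def process(data):
    """Placeholder for the business logic that the Lambda handler used to run."""
    print("Processing:", data)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```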

To help you with the refactoring of your AWS Lambda workloads, we recommend that you use Gemini Code Assist and Gemini CLI. For more information, see Gemini-powered migrations to Google Cloud.

There are several factors that may influence the amount of refactoring that you need for each AWS Lambda workload, such as the following:

  • Architecture. Consider how the workload is designed in terms of architecture. For example, AWS Lambda workloads that have clearly separated the business logic from the logic to access AWS-specific APIs might require less refactoring as compared to workloads where the two are mixed (see the sketch after this list).
  • Idempotency. Consider whether the workload is idempotent in regard to its inputs. A workload that is idempotent to inputs might require less refactoring as compared to workloads that need to maintain state about which inputs they've already processed.
  • State. Consider whether the workload is stateless. A stateless workload might require less refactoring as compared to workloads that maintain state. For more information about the services that Cloud Run supports to store data, see Cloud Run storage options.
  • Runtime environment. Consider whether the workload makes any assumptions about its runtime environment. For these types of workloads, you might need to satisfy the same assumptions in the Cloud Run runtime environment, or refactor the workload if you can't assume the same for the Cloud Run runtime environment. For example, if a workload requires certain packages or libraries to be available, you need to install them in the Cloud Run runtime environment that is going to host that workload.
  • Configuration injection. Consider whether the workload supports using environment variables and secrets to inject (set) its configuration. A workload that supports this type of injection might require less refactoring as compared to workloads that support other configuration injection mechanisms.
  • APIs. Consider whether the workload interacts with AWS Lambda APIs. A workload that interacts with these APIs might need to be refactored to use Cloud APIs and Cloud Run APIs.
  • Error reporting. Consider whether the workload reports errors using standard output and error streams. A workload that does such error reporting might require less refactoring as compared to workloads that report errors using other mechanisms.
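
To illustrate the first factor, the following sketch shows a minimal refactor of an AWS Lambda handler into an HTTP service that can run on Cloud Run. The handler and route are hypothetical; the point is that keeping the business logic separate means that only the entry point changes.

```python
# Sketch only: the handler and route are hypothetical.
from flask import Flask, jsonify, request


def make_greeting(name: str) -> str:
    """Business logic that stays the same in both environments."""
    return f"Hello, {name}!"


# Original AWS Lambda entry point, for reference:
# def handler(event, context):
#     return {"statusCode": 200, "body": make_greeting(event["name"])}

# Cloud Run entry point: an HTTP server that listens on port 8080.
app = Flask(__name__)


@app.route("/", methods=["POST"])
def greet():
    payload = request.get_json(silent=True) or {}
    return jsonify({"message": make_greeting(payload.get("name", "world"))})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

You can then package this service as a container image, for example with a Dockerfile, before you deploy it to Cloud Run.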

For more information about developing and optimizing Cloud Run workloads, see the following resources:

Refactor deployment and operational processes

After you refactor your workloads, you refactor your deployment and operational processes to do the following:

  • Provision and configure resources in your Google Cloud environment instead of provisioning resources in your source environment.
  • Build and configure workloads, and deploy them in your Google Cloud environment instead of deploying them in your source environment.

You gathered information about these processes during the assessment phase earlier in this process.

The type of refactoring that you need to consider for these processes depends on how you designed and implemented them. The refactoring also depends on what you want the end state to be for each process. For example, consider the following:

  • You might have implemented these processes in your source environment and you intend to design and implement similar processes in Google Cloud. For example, you can refactor these processes to use Cloud Build, Cloud Deploy, and Infrastructure Manager.
  • You might have implemented these processes in another third-party environment outside your source environment. In this case, you need to refactor these processes to target your Google Cloud environment instead of your source environment.
  • A combination of the previous approaches.

Refactoring deployment and operational processes can be complex and can require significant effort. If you try to perform these tasks as part of your workload migration, the workload migration can become more complex, and it can expose you to risks. After you assess your deployment and operational processes, you likely have an understanding of their design and complexity. If you estimate that you require substantial effort to refactor your deployment and operational processes, we recommend that you consider refactoring these processes as part of a separate, dedicated project.

For more information about how to design and implement deployment processes on Google Cloud, see:

This document focuses on the deployment processes that produce the artifacts to deploy, and that deploy them in the target runtime environment. The refactoring strategy depends heavily on the complexity of these processes. The following list outlines a possible general refactoring strategy:

  1. Provision artifact repositories on Google Cloud. For example, you can use Artifact Registry to store artifacts and build dependencies.
  2. Refactor your build processes to store artifacts both in your source environment and in Artifact Registry.
  3. Refactor your deployment processes to deploy your workloads in your target Google Cloud environment. For example, you can start by deploying a small subset of your workloads in Google Cloud, using artifacts stored in Artifact Registry. Then, you gradually increase the number of workloads deployed in Google Cloud, until all the workloads to migrate run on Google Cloud.
  4. Refactor your build processes to store artifacts in Artifact Registry only.
  5. If necessary, migrate earlier versions of the artifacts to deploy from the repositories in your source environment to Artifact Registry. For example, you can copy container images to Artifact Registry.
  6. Decommission the repositories in your source environment when you no longer require them.

To facilitate eventual rollbacks due to unanticipated issues during the migration, you can store container images both in your current artifact repositories and in Google Cloud while the migration to Google Cloud is in progress. Finally, as part of the decommissioning of your source environment, you can refactor your container image building processes to store artifacts in Google Cloud only.

Although it might not be crucial for the success of a migration, you might need to migrate your earlier versions of your artifacts from your source environment to your artifact repositories on Google Cloud. For example, to support rolling back your workloads to arbitrary points in time, you might need to migrate earlier versions of your artifacts to Artifact Registry. For more information, see Migrate images from a third-party registry.

If you're using Artifact Registry to store your artifacts, we recommend that you configure controls to help you secure your artifact repositories, such as access control, data exfiltration prevention, vulnerability scanning, and Binary Authorization. For more information, see Control access and protect artifacts.

Refactor operational processes

As part of your migration to Cloud Run, we recommend that you refactor your operational processes to constantly and effectively monitor your Cloud Run environment.

Cloud Run integrates with the following operational services:

Migrate data

The assessment phase earlier in this process should have helped you determine whether the AWS Lambda workloads that you're migrating either depend on or produce data that resides in your AWS environment. For example, you might have determined that you need to migrate data from Amazon S3 to Cloud Storage, or from Amazon RDS and Aurora to Cloud SQL and AlloyDB for PostgreSQL. For more information about migrating data from AWS to Google Cloud, see Migrate from AWS to Google Cloud: Get started.
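
For small datasets, an object-by-object copy from Amazon S3 to Cloud Storage might be enough, as in the following sketch that uses boto3 and the google-cloud-storage library; the bucket and object names are placeholders. For larger datasets, consider a managed service such as Storage Transfer Service.

```python
# Sketch only: bucket and object names are placeholders.
import boto3
from google.cloud import storage  # pip install google-cloud-storage

s3 = boto3.client("s3")
gcs = storage.Client()

source_bucket = "my-aws-bucket"
destination_bucket = gcs.bucket("my-gcs-bucket")
object_key = "exports/report.csv"

# Download the object from Amazon S3, then upload it to Cloud Storage.
body = s3.get_object(Bucket=source_bucket, Key=object_key)["Body"].read()
destination_bucket.blob(object_key).upload_from_string(body)
print(f"Copied {object_key} to gs://{destination_bucket.name}/{object_key}")
```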

As with refactoring deployment and operational processes, migrating data from AWS to Google Cloud can be complex and can require significant effort. If you try to perform these data migration tasks as part of the migration of your AWS Lambda workloads, the migration can become complex and can expose you to risks. After you analyze the data to migrate, you'll likely have an understanding of the size and complexity of the data. If you estimate that you require substantial effort to migrate this data, we recommend that you consider migrating data as part of a separate, dedicated project.

Validate the migration results

Validating your workload migration isn't a one-time event but a continuous process. You need to maintain focus on testing and validation before, during, and after migrating from AWS Lambda to Cloud Run.

To help ensure a successful migration with optimal performance and minimal disruptions, we recommend that you use the following process to validate the workloads that you're migrating from AWS Lambda to Cloud Run:

  • Before you start the migration phase, refactor your existing test cases to take into account the target Google Cloud environment.
  • During the migration, validate test results at each migration milestone and conduct thorough integration tests.
  • After the migration, do the following testing:
    • Baseline testing: Establish performance benchmarks of the original workload on AWS Lambda, such as execution time, resource usage, and error rates under different loads. Replicate these tests on Cloud Run to identify discrepancies that could point to migration or configuration problems.
    • Functional testing: Ensure that the core logic of your workloads remains consistent by creating and executing test cases that cover various input and expected output scenarios in both environments (see the sketch after this list).
    • Load testing: Gradually increase traffic to evaluate the performance and scalability of Cloud Run under real-world conditions. To help ensure a seamless migration, address discrepancies such as errors and resource limitations.
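
For example, a functional comparison test might send the same request to the original AWS Lambda function URL and to the migrated Cloud Run service and compare the responses, as in the following sketch; the URLs and payload are placeholders, and authentication is omitted.

```python
# Sketch only: URLs and payload are placeholders; authentication is omitted.
import requests

LAMBDA_URL = "https://abc123.lambda-url.us-east-1.on.aws/"
CLOUD_RUN_URL = "https://my-service-abc123-uc.a.run.app/"


def test_responses_match():
    payload = {"name": "integration-test"}
    lambda_response = requests.post(LAMBDA_URL, json=payload, timeout=30)
    cloud_run_response = requests.post(CLOUD_RUN_URL, json=payload, timeout=30)

    assert lambda_response.status_code == cloud_run_response.status_code
    assert lambda_response.json() == cloud_run_response.json()


if __name__ == "__main__":
    test_responses_match()
    print("Responses match.")
```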

Optimize your Google Cloud environment

Optimization is the last phase of your migration. In this phase, you iterate on optimization tasks until your target environment meets your optimization requirements. The steps of each iteration are as follows:

  1. Assess your current environment, teams, and optimization loop.
  2. Establish your optimization requirements and goals.
  3. Optimize your environment and your teams.
  4. Tune the optimization loop.

You repeat this sequence until you've achieved your optimization goals.

For more information about optimizing your Google Cloud environment, see Migrate to Google Cloud: Optimize your environment and Google Cloud Well-Architected Framework: Performance optimization.

What's next
