| Developer(s) | Amazon.com |
| --- | --- |
| Initial release | November 13, 2014; 10 years ago (2014-11-13) |
| Operating system | Cross-platform |
| Available in | English |
| Website | aws |
AWS Lambda is an event-driven, serverless Function as a Service (FaaS) provided by Amazon as a part of Amazon Web Services. It is designed to enable developers to run code without provisioning or managing servers. It executes code in response to events and automatically manages the computing resources required by that code. It was introduced on November 13, 2014.[1]
Each AWS Lambda instance runs within a lightweight, isolated environment powered by Firecracker microVMs. These microVMs are initialized with a runtime environment based on Amazon Linux (Amazon Linux AMI or Amazon Linux 2), a custom Linux distribution developed by AWS. Firecracker provides hardware-virtualization-based isolation, aiming to achieve near-bare-metal performance with minimal overhead. AWS claims that, unlike traditional virtual machines, these microVMs launch in milliseconds, enabling rapid and secure function execution with a minimal memory footprint. The Amazon Linux AMI is specifically optimized for cloud-native and serverless workloads, aiming to provide a lightweight, secure, and performant runtime environment.[2][3][4]
As of 2025, AWS Lambda supports Node.js, Python, Java, Go, .NET, Ruby, and custom runtimes.[5]
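Regardless of language, a Lambda function exposes a handler with a fixed two-argument signature that AWS invokes with the event payload and invocation metadata. A minimal Python sketch (the handler and field names below are illustrative; the handler name is configured per function):

```python
# Minimal sketch of a Python Lambda handler. AWS invokes whatever handler
# the function's configuration names; "lambda_handler" is just a convention.
def lambda_handler(event, context):
    # `event` is the deserialized JSON payload from the invoking service;
    # `context` carries invocation metadata (request ID, remaining time, etc.).
    name = (event or {}).get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

Invoking the function with `{"name": "Lambda"}` would return a body of `hello, Lambda`.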
Rust and Go generally exhibit lower cold start times in AWS Lambda compared to Java and C#[6]because they compile to native static binaries, eliminating the need for a virtual machine (JVM or .NET CLR) and reducing runtime initialization overhead. Go has some minimal runtime initialization, including garbage collection and goroutine management, but its impact on cold start time is relatively low. Rust, which is fully ahead-of-time (AOT) compiled and does not require a runtime, often achieves the lowest cold start latency among supported languages.[7][8][9][10][11][12]
Java and C# run on managed runtime environments, introducing additional cold start latency due to runtime initialization and Just-In-Time (JIT) compilation. However, modern optimizations have mitigated some of these challenges. .NET 7 and .NET 8 support Ahead-of-Time (AOT) compilation, reducing cold start times by precompiling code.[13][14] Additionally, AWS Lambda SnapStart for Java 11 and 17 pre-warms and snapshots execution state, significantly decreasing cold start overhead for Java-based functions.[15][16] Despite these optimizations, Rust and Go typically maintain lower cold start times due to their minimal runtime dependencies.[7][8]
In long-running workloads, JIT compilation in Java and .NET may improve execution speed through dynamic optimizations. However, this benefit is workload-dependent, and Rust’s AOT compilation often provides better performance consistency, particularly for CPU-bound tasks.[17] For short-lived Lambda invocations, Rust and Go generally maintain more predictable performance, as JIT optimizations may not have sufficient time to take effect.[8][18]
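The cold start costs discussed above come largely from one-time initialization. This split can be made visible inside a handler: in the sketch below (Python, with hypothetical names), module-level code runs once per execution environment, so its cost is paid on cold starts but not on warm invocations:

```python
import time

# Module-level code executes once per execution environment (cold start).
# Heavy imports and SDK client construction placed here are paid only
# when a new environment is initialized.
_INIT_START = time.monotonic()
# ... expensive setup (clients, config loading) would go here ...
_INIT_MS = (time.monotonic() - _INIT_START) * 1000.0

def handler(event, context):
    # Warm invocations reuse the environment, so _INIT_MS is not re-paid.
    return {"init_ms": _INIT_MS}
```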
Historically, Rust and Go required additional effort in deployment due to cross-compilation and static linking challenges. Rust, in particular, often necessitates MUSL-based static linking for AWS Lambda compatibility. However, advancements in deployment tooling, including AWS Serverless Application Model (AWS SAM), GitHub Actions, and Lambda container images, have simplified this process. Go benefits from native static linking support, making its deployment process comparatively straightforward. AWS Lambda's support for container images further reduces runtime compatibility concerns, enabling the use of custom runtimes and dependencies.[7][8][19][20][21][22]
In 2019, at the AWS annual cloud computing conference (AWS re:Invent), the AWS Lambda team announced "Provisioned Concurrency", a feature that "keeps functions initialized and hyper-ready to respond in double-digit milliseconds."[23] The Lambda team described Provisioned Concurrency as "ideal for implementing interactive services, such as web and mobile backends, latency-sensitivemicroservices, or synchronous APIs."[24]
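Provisioned Concurrency can be declared in infrastructure-as-code. A minimal AWS SAM fragment might look like the following (resource, handler, and alias names are hypothetical):

```yaml
Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler          # hypothetical module.function
      Runtime: python3.12
      AutoPublishAlias: live        # provisioned concurrency attaches to an alias/version
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 10
```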
The Lambda Function URL gives Lambda a unique and permanent URL which can be accessed by authenticated and non-authenticated users alike.[25]
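Function URL requests are delivered to the handler in the same payload shape used by API Gateway HTTP APIs (format 2.0), so the HTTP method is found under `requestContext.http.method`. A minimal handling sketch, assuming that event shape:

```python
import json

# Sketch: handling a Lambda Function URL request. Function URLs deliver
# events in API Gateway payload format 2.0, so the HTTP method lives
# under requestContext -> http -> method.
def handler(event, context):
    method = event.get("requestContext", {}).get("http", {}).get("method", "GET")
    if method == "GET":
        return {"statusCode": 200, "body": json.dumps({"ok": True})}
    return {"statusCode": 405, "body": "method not allowed"}
```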
AWS Lambda layer is a ZIP archive containing libraries, frameworks or custom code that can be added to AWS Lambda functions.[26] As of December 2024, AWS Lambda layers have significant limitations.[8][27]
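For a Python runtime, a layer's archive must place code under a top-level `python/` directory, which Lambda mounts onto the module search path (under `/opt/python`). A minimal packaging sketch, using a stand-in module in place of a real vendored dependency:

```python
import os
import zipfile

# Create the directory layout a Python-runtime layer expects.
os.makedirs("python", exist_ok=True)
with open("python/mylib.py", "w") as f:   # stand-in for a vendored dependency
    f.write("VALUE = 42\n")

# Zip it so archive paths keep the leading "python/" prefix, which Lambda
# requires in order to expose the contents on the import path.
with zipfile.ZipFile("layer.zip", "w") as zf:
    for root, _, files in os.walk("python"):
        for name in files:
            zf.write(os.path.join(root, name))
```

The resulting `layer.zip` can then be published with `aws lambda publish-layer-version` and attached to any function using a compatible runtime.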
Migration from AWS Lambda to other AWS compute services, such as Amazon ECS, presents challenges due to tight integration with AWS Lambda's APIs, often referred to as service lock-in.[28][29] Tools like AWS Lambda Web Adapter offer a pathway for portability by enabling developers to build web applications using familiar frameworks under a monolithic Lambda design pattern.[28][29] However, this approach introduces limitations, including coarser-grained alerting and access controls, potential cold start delays with large dependencies, and limited suitability for non-HTTP APIs.[8]
AWS Lambda Powertools is an open-source library developed by AWS that provides utilities for observability, tracing, and logging in AWS Lambda functions.[30] It includes structured logging, metrics, and tracing tools for Python, Java, and TypeScript. As of March 2025, it also includes data masking support for Python.[31][32]
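The structured (JSON-per-line) logging that Powertools provides can be approximated with the standard library alone. The sketch below is a stdlib stand-in, not the Powertools API, and the service name is hypothetical:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object: a stdlib
    approximation of Powertools' structured Logger, not its actual API."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "service": "orders",  # hypothetical service name
        })

logger = logging.getLogger("fn")
_handler = logging.StreamHandler()
_handler.setFormatter(JsonFormatter())
logger.addHandler(_handler)
logger.setLevel(logging.INFO)
logger.info("request handled")  # emits one JSON line
```

Emitting one JSON object per line lets log aggregators (such as CloudWatch Logs Insights) query fields directly instead of parsing free-form text.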
As of March 2025, AWS Lambda supports limited vertical scaling by increasing the number of virtual central processing units (vCPUs) through memory allocation. However, it does not allow an increase in single-thread performance, as clock speed remains fixed. When a function is allocated only one vCPU, multiple threads share the same core, resulting in context switching rather than true parallel execution. As a result, multi-threading in single-vCPU environments is primarily beneficial for input/output (I/O)-bound workloads rather than computationally intensive tasks.[33][34][35][36][37]
Allocating additional memory in AWS Lambda enables multiple vCPUs, allowing for parallel execution. However, the clock speed per core remains unchanged, limiting individual thread performance. This configuration makes AWS Lambda suitable for workloads that scale horizontally or leverage parallelism but less optimal for applications that require high single-thread performance.[33][34][35][36][37]
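The I/O-bound versus CPU-bound distinction can be demonstrated locally: threads overlap time spent waiting, so concurrent sleeps finish in roughly the time of one, even when only a single core is available. A small sketch (durations chosen arbitrarily):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_task():
    time.sleep(0.2)  # simulated network/disk wait; the core is idle meanwhile

def run_threaded(task, n=4):
    # Run n copies of `task` concurrently and return the wall-clock time.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=n) as pool:
        for _ in range(n):
            pool.submit(task)
    return time.monotonic() - start

# Four 0.2 s waits overlap, so wall time stays near 0.2 s rather than 0.8 s.
# Four CPU-bound loops on a single vCPU would instead take roughly 4x one loop,
# since the threads merely context-switch on the shared core.
elapsed = run_threaded(io_task)
```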
In April 2022, researchers found cryptomining malware, named "Denonia", that targeted AWS Lambda.[38][39][40]