Add telemetry for Lambda Managed Instances #13466
Conversation
    try:
        invocation_result = self.version_manager.invoke(invocation=invocation)
        function_config = self.version_manager.function_version.config
        function_counter.labels(
Given the counter in the finally clause, this counter seems to produce double counting 🤔
Great catch! I agree with your thoughts in the PR description - we should have a single point for the telemetry instead of having it split up; that would have prevented this double counting.
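To make the double-counting failure mode concrete, here is a minimal self-contained sketch (a toy dict-based counter stands in for the real labeled metric; the function names are illustrative, not LocalStack code). Incrementing in the try body and again in a finally clause records every successful invocation twice; funnelling all increments through a single telemetry point fixes it:

```python
from collections import Counter

# Toy metrics registry standing in for a labeled Prometheus-style counter
# (assumption: this is not the actual LocalStack metrics API).
metrics: Counter = Counter()

def invoke_buggy(fn):
    # Buggy pattern from the diff discussion: the success path increments
    # inside the try body...
    try:
        result = fn()
        metrics["invoke,status=success"] += 1
        return result
    finally:
        # ...and the finally clause increments again for every outcome,
        # so a successful invocation is counted twice.
        metrics["invoke,status=success"] += 1

def invoke_fixed(fn):
    # Single telemetry point: the label is decided once, and the counter
    # is incremented exactly once per invocation, success or error.
    status = "success"
    try:
        return fn()
    except Exception:
        status = "error"
        raise
    finally:
        metrics[f"invoke,status={status}"] += 1
```

With the fixed variant, one successful call yields exactly one `invoke,status=success` increment, and a raising call yields exactly one `invoke,status=error` increment.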
github-actions bot commented Dec 4, 2025 • edited
Test Results (amd64) - Integration, Bootstrap

5 files, 5 suites, 2h 32m 25s ⏱️
Results for commit 44105d8.

♻️ This comment has been updated with latest results.
dfangl left a comment
LGTM in principle, but I believe there are some errors in how we construct the telemetry labels now, which we should fix before merging!
localstack-core/localstack/services/lambda_/invocation/event_manager.py (outdated; resolved)
dfangl left a comment
Thanks for addressing those comments!
vittoriopolverino left a comment • edited
praise: thank you for taking the time to thoroughly consider the metric design and for suggesting initialization_type instead of the flag. It significantly improves clarity 🚀
note: the schema bump shouldn't cause any major difficulties when querying the raw data because we've only added a new attribute. For historical data (before version v4.12.0), you will simply see empty values for the initialization_type attribute.
note: the lambda.function metric counter has now hit its soft limit of 6 labels.
note: I have already updated the metrics dictionary ✅
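To illustrate why a soft label limit matters, here is a small sketch (the label names and value sets below are assumptions for illustration, not the exact lambda.function schema): each additional label multiplies the worst-case number of distinct time series a counter can emit.

```python
from itertools import product

# Hypothetical label value sets for a lambda.function-style counter;
# the real schema and value sets may differ.
label_values = {
    "status": ["success", "error"],
    "invocation_type": ["request-response", "event"],
    "runtime": ["python3.12", "nodejs20.x"],
    "package_type": ["Zip", "Image"],
    "architecture": ["x86_64", "arm64"],
    "initialization_type": ["on-demand", "lambda-managed-instances"],
}

# Worst-case series count is the product of the value-set sizes:
# adding one more binary label doubles it.
series = len(list(product(*label_values.values())))
print(series)  # 2**6 = 64 distinct label combinations
```

This is why metric backends treat label count as a cardinality budget: the cost grows multiplicatively, not additively, with each label.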
Merged 1224e92 into main
Motivation
We're currently missing runtime telemetry for Lambda Managed Instances (LMI) to track errors and usage of the feature beyond the control plane API.
Changes
- Add a new label initialization_type. This maps to an AWS-defined property and is more extensible than a simple boolean flag uses_capacity_provider.

Limitations
- provisioned-concurrency is currently not taken into account because it is not easily accessible in the context (might require enriching the invocation result).
- lambda.Invoke: however, this would require some refactorings and API enrichment (e.g., passing ESM metadata through lambda.Invoke via custom fields or headers).

Tests
The following tests can be used to validate that the metrics work as expected:
- tests.aws.services.lambda_.test_lambda_managed_instances.TestLambdaManagedInstances.test_lifecycle should generate a create and invoke event with the new initialization_type=lambda-managed-instances.
- tests.aws.services.lambda_.test_lambda.TestLambdaFeatures.test_invocation_type_event should generate a create and invoke event with the initialization_type=on-demand and the invocation_type=event for each execution (node & python). Validate that the invoke event is not recorded twice.

Related
Part of DRG-36
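For illustration, a minimal sketch of how the initialization_type label value could be derived from a feature flag (the helper name and parameter are assumptions, not the merged implementation; provisioned-concurrency is deliberately absent, matching the limitation noted above):

```python
def initialization_type(uses_lambda_managed_instances: bool) -> str:
    # Maps the feature flag onto the AWS-defined initialization_type
    # values used as the telemetry label. As noted in the PR description,
    # provisioned-concurrency is not covered because it is not easily
    # accessible in this context.
    return "lambda-managed-instances" if uses_lambda_managed_instances else "on-demand"
```

The string form of the label keeps the schema open for further values (such as provisioned-concurrency later), which a boolean flag would not.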