feat(multi-runner)!: support running the scale-down lambda once for every runner group #4858
Draft
iainlane wants to merge 3 commits into github-aws-runners:main from iainlane:iainlane/multi-runner-scale-down-once
feat(multi-runner)!: support running the scale-down lambda once for every runner group

Iterating the list of active runners in the GitHub API can be slow and expensive in terms of rate limit consumption. It's a paginated API, returning up to 100 runners per page. With several thousand runners across many runner groups, running `scale-down` once per runner group can quickly eat up large portions of the rate limit.

Here we break the Terraform `scale-down` module into its own sub-module, so that `multi-runner` can create one instance of the Lambda function instead of the `runner` module managing it. A flag is added to the `runner` module to disable the `scale-down` function creation in the `multi-runner` case.

Then the Lambda's code is modified to accept a list of configurations, and process them all (sketched below).

With this, we only need to fetch the list of runners once for all runner groups.

BREAKING CHANGE: When using the `multi-runner` module, the per-group `scale_down_schedule_expression` is no longer supported. One instance of `scale-down` will now handle all runner groups. Migration is only needed if you are using the `multi-runner` module:

1. Remove any `scale_down_schedule_expression` settings from your `multi_runner_config` runner configs.
2. To customise the frequency of the consolidated `scale-down` function, set the `scale_down_schedule_expression` variable on the `multi-runner` module itself.
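To make the shape of that change concrete, here is a minimal sketch of how a consolidated handler could process a list of configurations while only listing runners once per owner. Every name, type and field below is hypothetical and chosen for illustration, not taken from the actual implementation:

```typescript
// Hypothetical types and helpers, illustrative only.
interface ScaleDownConfig {
  owner: string; // org (or org/repo) whose runners this group manages
  runnerLabels: string[]; // labels identifying this runner group's instances
  idleTimeoutMinutes: number; // per-group idle rule
}

interface GitHubRunner {
  id: number;
  busy: boolean;
  labels: { name: string }[];
}

async function listAllRunners(owner: string): Promise<GitHubRunner[]> {
  // Placeholder for the paginated GitHub API call (up to 100 runners per page).
  return [];
}

async function terminateIdleRunners(runners: GitHubRunner[], config: ScaleDownConfig): Promise<void> {
  // Placeholder for the per-group termination logic.
}

export async function scaleDownAll(configs: ScaleDownConfig[]): Promise<void> {
  // Group configurations by owner so each owner's runner list is fetched once.
  const byOwner = new Map<string, ScaleDownConfig[]>();
  for (const config of configs) {
    byOwner.set(config.owner, [...(byOwner.get(config.owner) ?? []), config]);
  }

  for (const [owner, ownerConfigs] of byOwner) {
    const runners = await listAllRunners(owner); // one paginated listing, shared by all groups

    for (const config of ownerConfigs) {
      // Select this group's runners by label and apply its idle rules.
      const groupRunners = runners.filter((runner) =>
        config.runnerLabels.every((label) => runner.labels.some((l) => l.name === label)),
      );
      await terminateIdleRunners(groupRunners, config);
    }
  }
}
```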
Now that we're potentially running multiple configurations in one scale-down invocation, if we continue to use the environment we could start to hit size limits: on Lambda, environment variables are limited to 4 KB in total.

Adopt the approach we use elsewhere and switch to SSM Parameter Store for config. Here we add all the necessary IAM permissions, arrange to store the config in the store and then read it back in `scale-down` (sketched below).

A more strict parser is also introduced, ensuring that we detect more invalid configurations and reject them with clear error messages.
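As a rough sketch of the Parameter Store side of this, the Lambda might read the configuration back roughly like the following; the parameter name, environment variable and JSON layout are assumptions for illustration only:

```typescript
import { GetParameterCommand, SSMClient } from '@aws-sdk/client-ssm';

const ssm = new SSMClient({});

export async function loadRawScaleDownConfig(): Promise<unknown> {
  // Hypothetical parameter name; the Terraform module would decide the real path
  // and pass it to the Lambda, e.g. via a (small) environment variable.
  const name = process.env.CONFIG_SSM_PARAMETER ?? '/github-runners/scale-down/config';

  const result = await ssm.send(new GetParameterCommand({ Name: name, WithDecryption: true }));

  // The parameter holds the full list of runner-group configurations as JSON,
  // sidestepping the 4 KB limit on Lambda environment variables.
  return JSON.parse(result.Parameter?.Value ?? '[]');
}
```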
Note: this is mainly an idea / proof of concept right now and I’ve not actually tried running it!
Iterating the list of active runners in the GitHub API can be slow and expensive in terms of rate limit consumption. It's a paginated API, returning up to 100 runners per page. With several thousand runners across many runner groups, running `scale-down` once per runner group can quickly eat up large portions of the rate limit.

Here we break the Terraform `scale-down` module into its own sub-module, so that `multi-runner` can create one instance of the Lambda function instead of the `runner` module managing it. A flag is added to the `runner` module to disable the `scale-down` function creation in the `multi-runner` case.

Then the Lambda's code is modified to accept a list of configurations, and process them all.
With this, we only need to fetch the list of runners once for all runner groups.
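For an illustrative sense of scale (numbers invented for the example): with 5,000 runners, one full listing is 50 paginated requests; if 20 runner groups each list the runners independently, that is roughly 1,000 requests per scale-down cycle, versus the same 50 requests when a single consolidated invocation fetches the list once and reuses it.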
Now that we're potentially running multiple configurations in one scale-down invocation, if we continue to use the environment to pass runner config to the lambda we could start to hit size limits: on Lambda, environment variables are limited to 4 KB in total.

Adopt the approach we use elsewhere and switch to SSM Parameter Store for config. Here we add all the necessary IAM permissions, arrange to store the config in the store and then read it back in `scale-down`.

A more strict parser is also introduced, ensuring that we detect more invalid configurations and reject them with clear error messages.
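For instance, the stricter parse step could look something like the sketch below; zod is used purely for illustration and the schema fields are hypothetical, so the real parser may well differ:

```typescript
import { z } from 'zod';

// Hypothetical schema; the real configuration has more fields.
const scaleDownConfigSchema = z
  .object({
    owner: z.string().min(1),
    runnerLabels: z.array(z.string().min(1)).nonempty(),
    idleTimeoutMinutes: z.number().int().positive(),
  })
  .strict(); // reject unknown keys instead of silently ignoring them

const scaleDownConfigListSchema = z.array(scaleDownConfigSchema);

export function parseScaleDownConfigs(raw: unknown) {
  const result = scaleDownConfigListSchema.safeParse(raw);
  if (!result.success) {
    // Fail fast with a clear, aggregated message rather than later at runtime.
    throw new Error(`Invalid scale-down configuration: ${result.error.message}`);
  }
  return result.data;
}
```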
BREAKING CHANGE: When using the `multi-runner` module, the per-group `scale_down_schedule_expression` is no longer supported. One instance of `scale-down` will now handle all runner groups.

Migration is only needed if you are using the `multi-runner` module:

1. Remove any `scale_down_schedule_expression` settings from your `multi_runner_config` runner configs.
2. To customise the frequency of the consolidated `scale-down` function, set the `scale_down_schedule_expression` variable on the `multi-runner` module itself.