feat(multi-runner)!: support running the scale-down lambda once for every runner group #4858


Draft
iainlane wants to merge 3 commits into github-aws-runners:main from iainlane:iainlane/multi-runner-scale-down-once

Conversation

@iainlane (Contributor)

Note: this is mainly an idea / proof of concept right now and I’ve not actually tried running it!

Iterating over the list of active runners in the GitHub API can be slow and expensive in terms of rate limit consumption. It's a paginated API, returning up to 100 runners per page. With several thousand runners across many runner groups, running `scale-down` once per runner group can quickly eat up large portions of the rate limit.

Here we break the Terraform `scale-down` module out into its own sub-module, so that `multi-runner` can create one instance of the Lambda function instead of the `runner` module managing it. A flag is added to the `runner` module to disable creation of the `scale-down` function in the `multi-runner` case.
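For illustration only, here's a rough sketch of how the wiring could look after the split. The sub-module path, the `enable_scale_down` flag and the other attribute names are hypothetical placeholders, not the exact interface added by this PR:

```hcl
# Sketch, assuming hypothetical names: inside the multi-runner module.

# Each runner group still gets its own `runners` module instance, but the
# per-group scale-down Lambda is switched off.
module "runners" {
  source   = "../runners"
  for_each = var.multi_runner_config

  prefix            = each.key
  enable_scale_down = false # hypothetical flag: skip per-group scale-down
  # ... remaining per-group configuration ...
}

# A single scale-down sub-module serves every runner group.
module "scale_down" {
  source = "../runners/scale-down" # hypothetical sub-module path

  prefix                         = var.prefix
  scale_down_schedule_expression = var.scale_down_schedule_expression
  # ... Lambda packaging, IAM role, SSM config location, etc. ...
}
```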

Then the Lambda's code is modified to accept a list of configurations, and process them all.

With this, we only need to fetch the list of runners once for all runner groups.

Now that we're potentially running multiple configurations in one `scale-down` invocation, continuing to use the environment to pass runner config to the Lambda could start to hit size limits: Lambda environment variables are limited to 4 KB in total.

Adopt the approach we use elsewhere and switch to SSM Parameter Store for config. Here we add all the necessary IAM permissions, arrange to store the config in Parameter Store, and then read it back in `scale-down`.
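As a hedged sketch of that approach (the AWS provider resources are real, but the parameter path, the stored fields and the role reference are illustrative):

```hcl
# Store the per-group scale-down configuration as one JSON document in SSM
# Parameter Store, instead of packing it into a Lambda environment variable.
resource "aws_ssm_parameter" "scale_down_config" {
  name = "/github-runners/scale-down/config" # illustrative path
  type = "SecureString"

  # In practice only the fields scale-down needs would be stored here.
  value = jsonencode({
    for key, cfg in var.multi_runner_config : key => cfg
  })
}

# Allow the scale-down Lambda's role to read the configuration back.
resource "aws_iam_role_policy" "scale_down_read_config" {
  name = "scale-down-read-config"
  role = aws_iam_role.scale_down.id # illustrative role reference

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ssm:GetParameter"]
      Resource = [aws_ssm_parameter.scale_down_config.arn]
    }]
  })
}
```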

A stricter parser is also introduced, ensuring that more invalid configurations are detected and rejected with clear error messages.

BREAKING CHANGE: When using the `multi-runner` module, the per-group `scale_down_schedule_expression` is no longer supported.

The following migration is only needed if you are using the `multi-runner` module.

One instance of `scale-down` will now handle all runner groups.

  1. Remove any `scale_down_schedule_expression` settings from your `multi_runner_config` runner configs.
  2. To customise the frequency of the consolidated `scale-down` function, set the `scale_down_schedule_expression` variable on the `multi-runner` module itself (see the sketch after this list).
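A hedged before/after sketch of that migration; the layout of `multi_runner_config` is abbreviated and the module source is elided:

```hcl
module "multi-runner" {
  source = "..." # multi-runner module source

  multi_runner_config = {
    "linux-x64" = {
      runner_config = {
        # Before: per-group schedule (no longer supported, remove it).
        # scale_down_schedule_expression = "cron(*/5 * * * ? *)"
        # ...
      }
      # ...
    }
  }

  # After: one schedule for the single, consolidated scale-down Lambda.
  scale_down_schedule_expression = "cron(*/5 * * * ? *)"
}
```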

iainlane and others added 3 commits on October 21, 2025
