feat(multi-runner)!: support running the scale-down lambda once for every runner group (#4858)
Draft
iainlane wants to merge 3 commits into github-aws-runners:main

Conversation
This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed if no further activity occurs. Thank you for your contributions.
Note: this is mainly an idea / proof of concept right now and I’ve not actually tried running it!
Iterating the list of active runners in the GitHub API can be slow and expensive in terms of rate limit consumption. It's a paginated API, returning up to 100 runners per page. With several thousand runners across many runner groups, running `scale-down` once per runner group can quickly eat up large portions of the rate limit.

Here we break the Terraform `scale-down` module into its own sub-module, so that `multi-runner` can create one instance of the Lambda function instead of the `runner` module managing it. A flag is added to the `runner` module to disable the `scale-down` function creation in the `multi-runner` case. Then the Lambda's code is modified to accept a list of configurations and process them all.

With this, we only need to fetch the list of runners once for all runner groups.
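
Conceptually, the new handler could look something like the sketch below. This is illustration only: the `ScaleDownConfig` shape, the label-based matching of runners to a group, and the use of Octokit's pagination helper are assumptions, not the PR's actual code.

```typescript
// Sketch: process a list of runner-group configurations in a single
// scale-down invocation, fetching the GitHub runner list once up front.
import { Octokit } from '@octokit/rest';

// Hypothetical per-group config shape; the real config carries more settings.
interface ScaleDownConfig {
  environment: string; // runner group / environment name
  minimumRunningTimeInMinutes: number;
}

export async function scaleDown(octokit: Octokit, org: string, configs: ScaleDownConfig[]): Promise<void> {
  // One paginated listing (up to 100 runners per page), shared by every
  // runner group, instead of one full listing per group.
  const allRunners = await octokit.paginate(octokit.rest.actions.listSelfHostedRunnersForOrg, {
    org,
    per_page: 100,
  });

  for (const config of configs) {
    // Each group only considers its own runners from the shared list;
    // matching by label name is an assumption made for this sketch.
    const groupRunners = allRunners.filter((runner) =>
      runner.labels.some((label) => label.name === config.environment),
    );
    console.log(`[${config.environment}] evaluating ${groupRunners.length} runners for scale-down`);
    // ...the existing per-group idle / termination logic would run here...
  }
}
```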
Now that we're potentially running multiple configurations in one `scale-down` invocation, if we continue to use the environment to pass runner config to the Lambda we could start to hit size limits: on Lambda, environment variables are limited to 4 KB in total.

Adopt the approach we use elsewhere and switch to SSM Parameter Store for config. Here we add all the necessary IAM permissions, arrange to store the config in the store, and then read it back in `scale-down`. A stricter parser is also introduced, ensuring that we detect more invalid configurations and reject them with clear error messages.
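
As a rough sketch of the SSM side (the parameter name, config shape, and error wording here are assumptions; the real change wires this through the module's existing config plumbing), reading the consolidated config and validating it strictly could look like:

```typescript
// Sketch: read the consolidated scale-down configuration from SSM Parameter
// Store and validate it strictly, rejecting malformed or unknown entries with
// a clear error instead of silently accepting them.
import { SSMClient, GetParameterCommand } from '@aws-sdk/client-ssm';

// Hypothetical config shape, kept minimal for illustration.
interface ScaleDownConfig {
  environment: string;
  minimumRunningTimeInMinutes: number;
}

function parseScaleDownConfigs(raw: string): ScaleDownConfig[] {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error('scale-down config is not valid JSON');
  }
  if (!Array.isArray(parsed)) {
    throw new Error('scale-down config must be a JSON array of runner-group configs');
  }
  return parsed.map((entry, i) => {
    if (typeof entry !== 'object' || entry === null) {
      throw new Error(`config[${i}] must be an object`);
    }
    const { environment, minimumRunningTimeInMinutes, ...rest } = entry as Record<string, unknown>;
    if (typeof environment !== 'string' || environment.length === 0) {
      throw new Error(`config[${i}].environment must be a non-empty string`);
    }
    if (typeof minimumRunningTimeInMinutes !== 'number' || !Number.isFinite(minimumRunningTimeInMinutes)) {
      throw new Error(`config[${i}].minimumRunningTimeInMinutes must be a number`);
    }
    const unknownKeys = Object.keys(rest);
    if (unknownKeys.length > 0) {
      throw new Error(`config[${i}] has unknown keys: ${unknownKeys.join(', ')}`);
    }
    return { environment, minimumRunningTimeInMinutes };
  });
}

export async function loadScaleDownConfigs(parameterName: string): Promise<ScaleDownConfig[]> {
  const ssm = new SSMClient({});
  const response = await ssm.send(
    new GetParameterCommand({ Name: parameterName, WithDecryption: true }),
  );
  if (!response.Parameter?.Value) {
    throw new Error(`SSM parameter ${parameterName} is empty or missing`);
  }
  return parseScaleDownConfigs(response.Parameter.Value);
}
```

Rejecting unknown keys is what turns a silent misconfiguration into an immediate, explicit error.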
BREAKING CHANGE: When using the `multi-runner` module, the per-group `scale_down_schedule_expression` is no longer supported. One instance of `scale-down` will now handle all runner groups.

Migration (only needed if you are using the `multi-runner` module):

1. Remove any `scale_down_schedule_expression` settings from your `multi_runner_config` runner configs.
2. To customise the frequency of the consolidated `scale-down` function, set the `scale_down_schedule_expression` variable on the `multi-runner` module itself.