chore(docs): add external provisioner configuration for prebuilds #20305
_For example, the `ami` attribute of the AWS EC2 instance resource has [`ForceNew`](https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/ec2/ec2_instance.go#L75-L81) set, since the AMI cannot be changed in-place._
### Preventing prebuild queue contention (recommended)

The section [Managing prebuild provisioning queues](#managing-prebuild-provisioning-queues) covers how to recover when prebuilds have already overwhelmed the provisioner queue.
This section outlines a **best-practice configuration** that prevents that situation by isolating prebuild jobs in a dedicated provisioner pool.
This setup is optional and requires only minor template changes.

Coder supports [external provisioners and provisioner tags](../../provisioners/index.md), which allow you to route jobs to provisioners with matching tags.
By creating external provisioners with a special tag (e.g., `is_prebuild=true`) and updating the template to conditionally add that tag to prebuild jobs,
all prebuild work is handled by the prebuild pool.
This keeps the other provisioners available for user-initiated jobs.

#### Setup

1) Create a provisioner key with a prebuild tag (e.g., `is_prebuild=true`).
> **Reviewer:** How does this tag interact with other tags? Say I already have provisioners with separate tags so that one pool processes only AWS jobs and another only GKE jobs, or a provisioner deployed in each region. Would I need to add a new provisioner key for each of these sets, but with the prebuild tag added?
>
> **Author:** It depends on the tags the user has defined in their template. I didn't go into much detail here because that could make it harder to understand. Basically, the provisioner key needs to include all the tags already specified in the template, plus this new one. In your example, to update the provisioners deployed in AWS and GKE, you would need to create two new provisioner keys with those same tags plus the additional prebuild tag. For instance, the dogfood template already defines two static tags; for provisioners to handle prebuild jobs for that template, the key must be created with those same tags as well.
>
> **Reviewer (Contributor):** I get your point; being overly verbose would make it hard to understand. On the other hand, I don't know that this will be obvious to readers without additional context. Don't feel pressured to change anything at my insistence, but perhaps consider whether there is a way to illustrate this interaction simply.
   Provisioner keys are org-scoped, and their tags are inferred automatically by the provisioner daemons that use the key.
   See [Scoped Key](../../provisioners/index.md#scoped-key-recommended) for instructions on how to create a provisioner key.
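   As a sketch of this step, the key can be created with the Coder CLI. This requires a running deployment; the key and organization names below are placeholders, and the exact flags may differ by version, so check `coder provisioner keys create --help` first.

   ```shell
   # Create an org-scoped provisioner key whose daemons will carry the
   # is_prebuild=true tag (names are illustrative).
   coder provisioner keys create prebuilds-key \
     --org default \
     --tag is_prebuild=true
   ```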
2) Deploy a separate provisioner pool using that key (for example, via the [Helm coder-provisioner chart](https://github.com/coder/coder/pkgs/container/chart%2Fcoder-provisioner)).

   Daemons in this pool will only execute jobs that include all of the tags specified in their provisioner key.
   See [External provisioners](../../provisioners/index.md) for environment-specific deployment examples.
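   A minimal sketch of this step on Kubernetes with Helm. The release name, namespace, secret names, and the `provisionerDaemon.keySecretName`/`provisionerDaemon.keySecretKey` values are assumptions about the coder-provisioner chart; verify them against the chart's `values.yaml` for your version.

   ```shell
   # Store the provisioner key from step 1 in a secret, then point the
   # chart at it (value names are assumptions; verify against the chart).
   kubectl create secret generic coder-provisioner-key \
     --namespace coder \
     --from-literal=provisioner-key=<key-from-step-1>

   helm install coder-provisioner-prebuilds coder-v2/coder-provisioner \
     --namespace coder \
     --set provisionerDaemon.keySecretName=coder-provisioner-key \
     --set provisionerDaemon.keySecretKey=provisioner-key
   ```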
3) Update the template to conditionally add the prebuild tag for prebuild jobs.

   ```hcl
   data "coder_workspace_tags" "prebuilds" {
     count = data.coder_workspace_owner.me.name == "prebuilds" ? 1 : 0
     tags = {
       "is_prebuild" = "true"
     }
   }
   ```
   Prebuild workspaces are a special type of workspace owned by the system user `prebuilds`.
   The value `data.coder_workspace_owner.me.name` returns the name of the workspace owner; for prebuild workspaces, this value is `"prebuilds"`.
   Because the condition evaluates based on the workspace owner, provisioning or deprovisioning a prebuild automatically applies the prebuild tag, whereas regular jobs (such as workspace creation or template import) do not carry it.
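   If the template already defines static routing tags, the prebuild tag is added alongside them, and the prebuild pool's provisioner key must then include the static tags as well as `is_prebuild=true`. The sketch below uses an illustrative `cloud = "aws"` tag and assumes that tags from multiple `coder_workspace_tags` data sources are merged:

   ```hcl
   # Existing static routing tags (illustrative value).
   data "coder_workspace_tags" "static" {
     tags = {
       "cloud" = "aws"
     }
   }

   # Added only for jobs owned by the "prebuilds" system user.
   data "coder_workspace_tags" "prebuilds" {
     count = data.coder_workspace_owner.me.name == "prebuilds" ? 1 : 0
     tags = {
       "is_prebuild" = "true"
     }
   }
   ```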
> [!NOTE]
> The prebuild provisioner pool can still accept non-prebuild jobs.
> To achieve a fully isolated setup, add an additional tag (`is_prebuild=false`) to your standard provisioners, ensuring a clean separation between prebuild and non-prebuild workloads.
> See [Provisioner Tags](../../provisioners/index.md#provisioner-tags) for further details.
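For the fully isolated setup described in the note, the standard pool would use its own key carrying the opposite tag value. A hedged sketch with placeholder names, assuming the same CLI flags as above:

```shell
# Key for the standard (non-prebuild) pool; flag names are assumptions,
# check `coder provisioner keys create --help` on your version.
coder provisioner keys create standard-key \
  --org default \
  --tag is_prebuild=false
```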
#### Validation

To confirm that prebuild jobs are correctly routed to the new provisioner pool, use the Provisioner Jobs dashboard or the [`coder provisioner jobs list`](../../../reference/cli/provisioner_jobs_list.md) CLI command to inspect job metadata and tags.

Follow these steps:

1) Publish the new template version.
2) Wait for the prebuilds reconciliation loop to run.
> **Reviewer:** What are the failure scenarios for this approach?
>
> **Member:** The jobs will remain in a pending state, as there will be no provisioner to pick them up. My understanding is that the reconciliation loop should detect that there are in-progress jobs and not completely spam the queue (ref: https://github.com/coder/coder/blob/main/coderd/prebuilds/preset_snapshot.go#L394-L410), but do correct me if I'm not understanding correctly.
>
> **Author:** What @johnstcn said: the prebuild jobs would remain in a pending state, waiting for a provisioner daemon with matching tags to become available. The reconciliation loop already accounts for pending prebuilds (jobs in the queue) and subtracts them from the desired count, so this wouldn't impact the loop itself. The only side effect is that the queue of prebuild-related jobs could continue to grow if new template versions are imported or existing prebuilds are claimed. Should I add another step here to validate the status of the provisioners from Coder's perspective, for example by checking the Provisioners page or via the CLI?
>
> **Member:** I think that would be a good idea, but I'll leave the decision up to you.
   The loop frequency is controlled by the configuration value [`CODER_WORKSPACE_PREBUILDS_RECONCILIATION_INTERVAL`](../../../reference/cli/server.md#--workspace-prebuilds-reconciliation-interval).
   When the loop runs, it will provision prebuilds for the new template version and deprovision prebuilds for the previous version.
   Both provisioning and deprovisioning jobs for prebuilds should display the tag `is_prebuild=true`.
3) Create a new workspace from a preset.

   Whether or not the preset uses prebuilds, the resulting job should not include the `is_prebuild=true` tag.
   This confirms that only prebuild-related jobs are routed to the dedicated prebuild provisioner pool.
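The checks above can also be scripted. The JSON field names (`tags`, `status`) in the `jq` filter below are assumptions about the CLI's JSON output shape; inspect `coder provisioner jobs list --output json` on your deployment first.

```shell
# List provisioner jobs and show only those tagged as prebuild work
# (field names in the jq filter are assumptions about the output shape).
coder provisioner jobs list --output json \
  | jq '.[] | select(.tags.is_prebuild == "true") | {id, status, tags}'
```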
### Monitoring and observability

#### Available metrics