# chore(docs): add external provisioner configuration for prebuilds (#20305)


Merged
_For example, the [`ami`](https://registry.terraform.io/providers/hashicorp/aws/l) has [`ForceNew`](https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/ec2/ec2_instance.go#L75-L81) set, since the AMI cannot be changed in-place._

### Preventing prebuild queue contention (recommended)

The section [Managing prebuild provisioning queues](#managing-prebuild-provisioning-queues) covers how to recover when prebuilds have already overwhelmed the provisioner queue.
This section outlines a **best-practice configuration** to prevent that situation by isolating prebuild jobs to a dedicated provisioner pool.
This setup is optional and requires minor template changes.

Coder supports [external provisioners and provisioner tags](../../provisioners/index.md), which allow you to route jobs to provisioners with matching tags.
By creating external provisioners with a special tag (e.g., `is_prebuild=true`) and updating the template to conditionally add that tag for prebuild jobs,
all prebuild work is handled by the prebuild pool.
This keeps other provisioners available to handle user-initiated jobs.

#### Setup

1) Create a provisioner key with a prebuild tag (e.g., `is_prebuild=true`).
> **Reviewer (Contributor):** How does this tag interact with other tags? Let's say I already have provisioners with separate tags to process only AWS jobs in one pool and only GKE in another. Or perhaps I have a provisioner deployed in each region.
>
> Would I need to add a new provisioner key for each of these sets but with the `is_prebuild` flag included?

> **Reviewer (Member):** Multiple `coder_workspace_tags` are cumulative: https://github.com/coder/coder/blob/main/coderd/dynamicparameters/tags_internal_test.go#L200-L242
>
> Multiple instances of the `coder_workspace_tags` data source cannot clobber existing tag values.

> **Author (Contributor):**
>
> *How does this tag interact with other tags?*
>
> It depends on the tags that the user has defined in their template. I didn't go into much detail here because that could make it harder to understand. Basically, the provisioner key needs to include all the tags already specified in the template, plus this new one.
>
> *Would I need to add a new provisioner key for each of these sets but with the `is_prebuild` flag included?*
>
> In your example, if you want to update your provisioners deployed in AWS and GKE, you would need to create two new provisioner keys, each with those same tags plus the additional `is_prebuild` tag.
>
> For instance, in the dogfood template we already have two static tags:
>
> ```hcl
> data "coder_workspace_tags" "tags" {
>   tags = {
>     "cluster" = "dogfood-v2"
>     "env"     = "gke"
>   }
> }
> ```
>
> For the provisioners to handle prebuild jobs for this template, the key must be created with: `--tag cluster=dogfood-v2 --tag env=gke --tag is_prebuild=true`

> **@SasSwart (Contributor), Oct 15, 2025 (edited):** I get your point. If we're being overly verbose it will be hard to understand. On the other hand, I don't know that what you said above will be obvious to readers without additional context.
>
> Don't feel pressured to change anything at my insistence, but perhaps consider whether there is a way to illustrate this interaction simply.

Provisioner keys are org-scoped and their tags are inferred automatically by provisioner daemons that use the key.
See [Scoped Key](../../provisioners/index.md#scoped-key-recommended) for instructions on how to create a provisioner key.
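
As a sketch, creating such a key from the CLI might look like the following; `prebuild-pool` and `my-org` are placeholder names, and the exact flags should be checked against your Coder CLI version:

```sh
# Create an org-scoped provisioner key whose daemons will carry the prebuild tag.
# If the template already sets other tags (e.g. cluster/env), include them here too.
coder provisioner keys create prebuild-pool \
  --org my-org \
  --tag is_prebuild=true

# The key secret is printed once; save it for the daemon deployment in the next step.
```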

2) Deploy a separate provisioner pool using that key (for example, via the [Helm coder-provisioner chart](https://github.com/coder/coder/pkgs/container/chart%2Fcoder-provisioner)).
Jobs tagged `is_prebuild=true` will only be picked up by daemons whose provisioner key includes that tag (along with any other tags the template sets).
See [External provisioners](../../provisioners/index.md) for environment-specific deployment examples.
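
For Kubernetes, a minimal deployment sketch follows; the secret and release names are placeholders, and the `provisionerDaemon.keySecretName`/`provisionerDaemon.keySecretKey` chart values are assumptions to verify against your chart version:

```sh
# Store the provisioner key from step 1 in a Kubernetes secret (placeholder names).
kubectl create secret generic coder-provisioner-keys \
  --namespace coder \
  --from-literal=prebuild-pool='<key-from-step-1>'

# Deploy a dedicated provisioner pool that authenticates with that key.
helm install coder-provisioner-prebuilds coder-v2/coder-provisioner \
  --namespace coder \
  --set coder.env[0].name=CODER_URL \
  --set coder.env[0].value=https://coder.example.com \
  --set provisionerDaemon.keySecretName=coder-provisioner-keys \
  --set provisionerDaemon.keySecretKey=prebuild-pool
```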

3) Update the template to conditionally add the prebuild tag for prebuild jobs.

```hcl
data "coder_workspace_tags" "prebuilds" {
  count = data.coder_workspace_owner.me.name == "prebuilds" ? 1 : 0
  tags = {
    "is_prebuild" = "true"
  }
}
```

Prebuild workspaces are a special type of workspace owned by the system user `prebuilds`.
The expression `data.coder_workspace_owner.me.name` returns the name of the workspace owner; for prebuild workspaces, this value is `"prebuilds"`.
Because the condition evaluates based on the workspace owner, provisioning or deprovisioning prebuilds automatically applies the prebuild tag, whereas regular jobs (like workspace creation or template import) do not receive it.

> [!NOTE]
> The prebuild provisioner pool can still accept non-prebuild jobs.
> To achieve a fully isolated setup, add an additional tag (`is_prebuild=false`) to your standard provisioners, ensuring a clean separation between prebuild and non-prebuild workloads.
> See [Provisioner Tags](../../provisioners/index.md#provisioner-tags) for further details.
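
As a sketch of that fully isolated setup, the standard pool's key would carry the opposite tag value (same assumed CLI as in step 1; names are placeholders):

```sh
# Key for the standard (non-prebuild) pool: note is_prebuild=false.
coder provisioner keys create standard-pool \
  --org my-org \
  --tag is_prebuild=false
```

Depending on how your templates set tags, non-prebuild jobs may also need to emit `is_prebuild=false` (for example, by having the template's conditional tag emit `"false"` for non-prebuild owners) so that they route to the standard pool rather than remaining untagged.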

#### Validation

To confirm that prebuild jobs are correctly routed to the new provisioner pool, use the Provisioner Jobs dashboard or the [`coder provisioner jobs list`](../../../reference/cli/provisioner_jobs_list.md) CLI command to inspect job metadata and tags.
Follow these steps (a CLI sketch follows the list):

1) Publish the new template version.

2) Wait for the prebuilds reconciliation loop to run.
> **Reviewer (Contributor):** What are the failure scenarios for this approach?
> If prebuild-specific provisioners fail to deploy for a specific tagset, will those jobs remain pending? Will that result in any kind of back pressure where the pending jobs impact the reconciliation loop?

> **@johnstcn (Member), Oct 15, 2025 (edited):** The jobs will remain in a pending state as there will be no provisioner to pick them up. My understanding is that the reconciliation loop should detect that there are in-progress jobs and not completely spam the queue (ref: https://github.com/coder/coder/blob/main/coderd/prebuilds/preset_snapshot.go#L394-L410), but do correct me if I'm not understanding correctly.

> **Author (Contributor):** As @johnstcn said, the prebuild jobs would remain in a pending state, waiting for a provisioner daemon with matching tags to become available. The reconciliation loop already accounts for pending prebuilds (jobs in the queue) and subtracts them from the desired count, so this wouldn't impact the loop itself.
>
> The only side effect is that the queue for prebuild-related jobs could continue to grow if new template versions are imported or existing prebuilds are claimed.
>
> Should I add another step here to validate the status of the provisioners from Coder's perspective, for example, by checking the Provisioners page or running the `coder provisioner list` CLI command?

> **Reviewer (Contributor):** I think that would be a good idea, but I'll leave the decision up to you.

The loop frequency is controlled by the configuration value [`CODER_WORKSPACE_PREBUILDS_RECONCILIATION_INTERVAL`](../../../reference/cli/server.md#--workspace-prebuilds-reconciliation-interval).
When the loop runs, it will provision prebuilds for the new template version and deprovision prebuilds for the previous version.
Both provisioning and deprovisioning jobs for prebuilds should display the tag `is_prebuild=true`.

3) Create a new workspace from a preset.
Whether or not the preset uses prebuilds, the resulting job should not include the `is_prebuild=true` tag.
This confirms that only prebuild-related jobs are routed to the dedicated prebuild provisioner pool.
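
As a sketch, the CLI side of this check could look like the following (`my-template` and `my-org` are placeholders; columns and flags may vary by Coder version):

```sh
# Step 1: publish the new template version.
coder templates push my-template

# Steps 2-3: after the reconciliation loop has run, inspect job tags.
# Expected: prebuild provisioning/deprovisioning jobs carry is_prebuild=true,
# while jobs for user-created workspaces and template imports do not.
coder provisioner jobs list --org my-org
```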

### Monitoring and observability

#### Available metrics