chore(docs): tweak replica verbiage on reference architectures #16076
@@ -12,9 +12,9 @@ tech startups, educational units, or small to mid-sized enterprises.

### Coderd nodes

| Users       | Node capacity       | Replicas                 | GCP             | AWS        | Azure             |
|-------------|---------------------|--------------------------|-----------------|------------|-------------------|
| Up to 1,000 | 2 vCPU, 8 GB memory | 1-2 nodes, 1 coderd each | `n1-standard-2` | `t3.large` | `Standard_D2s_v3` |
Is it technically possible to run more than 1 coderd on each node? If yes, does this benefit any of the use cases or customers? Why would someone run multiple coderd instances on a single node?
Yes, this can happen automatically during a rollout or during node unavailability.
As far as I'm aware, the main reason to do this would be redundancy in case one or more pods become unavailable for whatever reason. The only other reason I could imagine for running multiple replicas on a single node is to spread connections across more coderd replicas and minimize the user-facing impact of a single pod failing. However, this won't protect against a failure of the underlying node. I'll defer to @spikecurtis to weigh in on the pros and cons of running multiple replicas per node.

[1] https://github.com/coder/coder/blob/main/helm/coder/values.yaml#L223-L237

In any reference architecture we should always recommend having 1 coderd per node. There are generally two reasons for running multiple replicas: fault tolerance and scale.

For fault tolerance, you want the replicas spread out into different failure domains. Having all replicas on the same node means you aren't tolerant of node-level faults. There might still be some residual value in being tolerant to replica-level faults (e.g. software crashes, OOM), but most people would rather have the higher fault tolerance.

For scale, coderd is written to take advantage of multiple CPU cores in one process, so there is no scale advantage to putting multiple coderd instances on a single node. In fact, it's likely bad for scale, since you have multiple processes competing for resources plus the extra overhead of coderd-to-coderd communication.
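For illustration, here is a minimal sketch of a pod anti-affinity rule in the spirit of the values referenced in [1], asking the Kubernetes scheduler to prefer keeping coderd replicas on separate nodes. The `coder.affinity` key and the label selector are assumptions based on the linked values.yaml and may need adjusting for a specific deployment:

```yaml
# Sketch of a Helm values override (assumed key: coder.affinity) that asks the
# scheduler to avoid co-locating coderd pods on the same node. This uses the
# standard Kubernetes podAntiAffinity API; the label selector below is an
# assumption and should match the labels your coderd pods actually carry.
coder:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname   # spread across distinct nodes
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io/instance
                  operator: In
                  values:
                    - coder
```

Because this is a `preferred` rule, the scheduler will still co-locate replicas when no other node is available, which matches the rollout/node-unavailability behavior described above; a `requiredDuringSchedulingIgnoredDuringExecution` rule would enforce one coderd per node at the cost of pods staying unschedulable when nodes are short.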
**Footnotes**:

@@ -23,19 +23,19 @@ tech startups, educational units, or small to mid-sized enterprises.

### Provisioner nodes

| Users       | Node capacity        | Replicas                      | GCP              | AWS          | Azure             |
|-------------|----------------------|-------------------------------|------------------|--------------|-------------------|
| Up to 1,000 | 8 vCPU, 32 GB memory | 2 nodes, 30 provisioners each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:

- An external provisioner is deployed as a Kubernetes pod (see the sketch below).
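As a rough sketch of what the table above implies, the provisioner pool could be scaled with a Helm values override similar to the following; the chart layout and field names here are assumptions for illustration, not the documented interface:

```yaml
# Hypothetical values override for the external provisioner Deployment.
# Field names are assumptions; the intent is simply that provisioners run as
# ordinary pods whose count is set by a replica value.
coder:
  replicaCount: 60   # 2 nodes x 30 provisioners each, per the table above
```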
### Workspace nodes

| Users       | Node capacity        | Replicas                     | GCP              | AWS          | Azure             |
|-------------|----------------------|------------------------------|------------------|--------------|-------------------|
| Up to 1,000 | 8 vCPU, 32 GB memory | 64 nodes, 16 workspaces each | `t2d-standard-8` | `t3.2xlarge` | `Standard_D8s_v3` |
**Footnotes**:

@@ -48,4 +48,4 @@ tech startups, educational units, or small to mid-sized enterprises.

| Users       | Node capacity       | Replicas | Storage | GCP                | AWS           | Azure             |
|-------------|---------------------|----------|---------|--------------------|---------------|-------------------|
| Up to 1,000 | 2 vCPU, 8 GB memory | 1 node   | 512 GB  | `db-custom-2-7680` | `db.t3.large` | `Standard_D2s_v3` |