@@ -4,53 +4,76 @@ The 1,000 users architecture is designed to cover a wide range of workflows.
 Examples of subjects that might utilize this architecture include medium-sized
 tech startups, educational units, or small to mid-sized enterprises.
 
-**Target load**: API: up to 180 RPS
+The recommendations on this page apply to deployments with up to the following limits. If your needs
+exceed any of these limits, consider increasing deployment resources or moving to the [next-higher
+architectural tier](./2k-users).
 
-**High Availability**: non-essential for small deployments
+| Users | Concurrent Running Workspaces | Concurrent Builds |
+|-------|-------------------------------|-------------------|
+| 1,000 | 600                           | 60                |
 
 ## Hardware recommendations
 
-### Coderd nodes
+### Coderd
 
-| Users       | Node capacity       | Replicas                 | GCP             | AWS        | Azure             |
-|-------------|---------------------|--------------------------|-----------------|------------|-------------------|
-| Up to 1,000 | 2 vCPU, 8 GB memory | 1-2 nodes, 1 coderd each | `n1-standard-2` | `m5.large` | `Standard_D2s_v3` |
+| vCPU | Memory | Replicas |
+|------|--------|----------|
+| 2    | 8 GB   | 3        |
 
-**Footnotes**:
+**Notes**:
 
+- "General purpose" virtual machines, such as the N4-series in GCP or the M8-series in AWS, work well.
+- If deploying on Kubernetes:
+  - Set the CPU request and limit to `2000m`.
+  - Set the memory request and limit to `8Gi`.
+- Coderd does not typically benefit from high-performance disks such as SSDs (unless you are co-locating provisioners).
 - For small deployments (ca. 100 users, 10 concurrent workspace builds), it is
-  acceptable to deploy provisioners on `coderd` nodes.
+  acceptable to deploy provisioners on `coderd` replicas.
+- Coderd instances should be deployed in the same region as the database.
+
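+If you run Coder on Kubernetes, the request/limit figures above correspond to a container `resources`
+block along these lines (a minimal sketch of the relevant fragment only, not a complete Helm values
+file; the exact key path depends on your chart version):
+
+```yaml
+# Per-replica resources for the coderd container (3 replicas total).
+# Setting requests equal to limits keeps the pod in the "Guaranteed" QoS class.
+resources:
+  requests:
+    cpu: "2000m"
+    memory: "8Gi"
+  limits:
+    cpu: "2000m"
+    memory: "8Gi"
+```
+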
+### Provisioners
+
+| vCPU | Memory | Replicas |
+|------|--------|----------|
+| 0.1  | 256 MB | 60       |
 
-### Provisioner nodes
+**Notes**:
 
-| Users       | Node capacity        | Replicas                      | GCP              | AWS          | Azure             |
-|-------------|----------------------|-------------------------------|------------------|--------------|-------------------|
-| Up to 1,000 | 8 vCPU, 32 GB memory | 2 nodes, 30 provisioners each | `t2d-standard-8` | `c5.2xlarge` | `Standard_D8s_v3` |
+- "General purpose" virtual machines, such as the N4-series in GCP or the M8-series in AWS, work well.
+- If deploying on Kubernetes:
+  - Set the CPU request and limit to `100m`.
+  - Set the memory request and limit to `256Mi`.
+- If deploying on virtual machines, stack up to 30 provisioners per machine with a commensurate amount of memory and CPU.
+- Provisioners benefit from high-performance disks such as SSDs.
+- For small deployments (ca. 100 users, 10 concurrent workspace builds), it is
+  acceptable to deploy provisioners on `coderd` nodes.
+- If deploying workspaces to multiple clouds or multiple Kubernetes clusters, divide the provisioner replicas among the
+  clouds or clusters according to expected usage.
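+
+For example, the 60 provisioner replicas might be split proportionally to expected build volume. A
+hypothetical sketch (the cluster names and the `provisionerReplicas` key are illustrative, not a real
+chart schema):
+
+```yaml
+# Split of the 60 total provisioner replicas across two clusters,
+# weighted by where builds are expected to run (hypothetical values).
+clusters:
+  - name: us-east          # ~2/3 of expected builds
+    provisionerReplicas: 40
+  - name: eu-west          # ~1/3 of expected builds
+    provisionerReplicas: 20
+```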
 
-**Footnotes**:
+### Database
 
-- An external provisioner is deployed as Kubernetes pod.
+| vCPU | Memory | Replicas |
+|------|--------|----------|
+| 8    | 30 GB  | 1        |
 
-### Workspace nodes
+**Notes**:
 
-| Users       | Node capacity        | Replicas                     | GCP              | AWS          | Azure             |
-|-------------|----------------------|------------------------------|------------------|--------------|-------------------|
-| Up to 1,000 | 8 vCPU, 32 GB memory | 64 nodes, 16 workspaces each | `t2d-standard-8` | `m5.2xlarge` | `Standard_D8s_v3` |
+- "General purpose" virtual machines, such as the M8-series in AWS, work well.
+- Deploy the database in the same region as `coderd`.
 
-**Footnotes**:
+### Workspaces
 
-- Assumed that a workspace user needs at minimum 2 GB memory to perform. We
-  recommend against over-provisioning memory for developer workloads, as this my
-  lead to OOMKiller invocations.
-- Maximum number of Kubernetes workspace pods per node: 256
+Workspace sizing depends very heavily on the exact use case, even down to project size and programming
+language for development workloads.
 
-### Database nodes
+The following resource requirements are for the Coder Workspace Agent, which runs alongside your end users'
+work, and as such should be interpreted as the _bare minimum_ requirements for a Coder workspace.
 
-| Users       | Node capacity       | Replicas | Storage | GCP                | AWS           | Azure             |
-|-------------|---------------------|----------|---------|--------------------|---------------|-------------------|
-| Up to 1,000 | 2 vCPU, 8 GB memory | 1 node   | 512 GB  | `db-custom-2-7680` | `db.m5.large` | `Standard_D2s_v3` |
+| vCPU | Memory |
+|------|--------|
+| 0.1  | 128 MB |
 
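+
+If workspaces run as Kubernetes pods, the agent's share can be expressed as a baseline resource request,
+with each user's actual workload needs added on top (an illustrative fragment, not an official workspace
+template):
+
+```yaml
+# Bare-minimum reservation for the Coder Workspace Agent itself.
+# Size real workspaces by adding the workload's needs to this baseline.
+resources:
+  requests:
+    cpu: "100m"
+    memory: "128Mi"
+```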
-**Footnotes for AWS instance types**:
+## Footnotes for AWS instance types
 
 - For production deployments, we recommend using non-burstable instance types,
   such as `m5` or `c5`, instead of burstable instances, such as `t3`.