docs: create WIP 10k scale doc #20213


Open

spikecurtis wants to merge 1 commit into main from spike/internal-1025-wip-10k-scale

docs/admin/infrastructure/validated-architectures/10k-users.md

@@ -0,0 +1,96 @@
# Reference Architecture: up to 10,000 users

> [!CAUTION]
> This page is a work in progress.
>
> We are actively testing different load profiles for this user target and will be updating
> recommendations. Use these recommendations as a starting point, but monitor your cluster resource
> utilization and adjust.

The 10,000 users architecture targets large-scale enterprises with globally-distributed development

Contributor: I think it'd be nice to detail out the numbers we are expecting when we say 10k users - 600 concurrent builds, 6000 concurrent workspaces, etc.

teams in multiple geographic regions.

Contributor:

Suggested change:
teams in multiple geographic regions.
teams.

Either `globally-distributed` or `multiple geographic regions` has to go.


**Geographic Distribution**: For these tests we deploy on 3 Cloud-managed Kubernetes clusters in
different regions.

1. USA - Primary (also contains the PostgreSQL database deployment).
2. Europe - Workspace Proxies
3. Asia - Workspace Proxies

Comment on lines +13 to +18

Contributor:

Suggested change:
**Geographic Distribution**: For these tests we deploy on 3 Cloud-managed Kubernetes clusters in different regions.
1. USA - Primary (also contains the PostgreSQL database deployment).
2. Europe - Workspace Proxies
3. Asia - Workspace Proxies
**Geographic Distribution**: For these tests we deploy on 3 cloud-managed Kubernetes clusters in the following regions:
1. USA - Primary - Coderd collocated with the PostgreSQL database deployment.
2. Europe - Workspace Proxies
3. Asia - Workspace Proxies


**High Availability**: Typically, such scale requires a fully-managed HA
PostgreSQL service, and all Coder observability features enabled for operational
purposes.

**Observability**: Deploy monitoring solutions to gather Prometheus metrics and
visualize them with Grafana to gain detailed insights into infrastructure and
application behavior. This allows operators to respond quickly to incidents and
continuously improve the reliability and performance of the platform.
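
To make the observability recommendation above concrete, here is a minimal sketch of enabling Prometheus metrics through the coder Helm chart's environment values. The variable names and listen address are assumptions based on the standard Coder configuration, not part of this PR; verify them against your Coder release.

```yaml
# Illustrative sketch only (not part of this PR): enable Prometheus metrics on
# coderd via Helm values. Variable names and the listen address are assumptions
# to verify against your Coder release and chart version.
coder:
  env:
    - name: CODER_PROMETHEUS_ENABLE
      value: "true"
    - name: CODER_PROMETHEUS_ADDRESS
      value: "0.0.0.0:2112" # scrape this port from your Prometheus deployment
```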

## Load Types

**Workspace Network Traffic**: 6000 concurrent workspaces (2000 per region), all sending 10 kB/s application traffic
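
For a rough sense of scale, the aggregate application traffic implied by this profile (a back-of-the-envelope figure, not a measured result) is:

```math
6000 \times 10\ \text{kB/s} = 60\ \text{MB/s} \approx 480\ \text{Mbit/s in total, i.e. } 2000 \times 10\ \text{kB/s} = 20\ \text{MB/s per region}
```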

**API**: TBD

## Hardware recommendations

### Coderd

These are deployed in the Primary region only.

| vCPU Limit | Memory Limit | Replicas | GCP Node Pool Machine Type |
|----------------|--------------|----------|----------------------------|
| 4 vCPU (4000m) | 12 GiB | 10 | `c2d-standard-16` |
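
As a rough translation of this table into deployment configuration, the sketch below expresses the Coderd sizing as Helm values. The key layout (`coder.replicaCount`, `coder.resources`) assumes the upstream coder Helm chart and is not taken from this PR; node pool selection is handled separately at the cluster level.

```yaml
# Illustrative sketch only: Coderd sizing from the table above as Helm values.
# Key names assume the upstream coder Helm chart; verify against your chart
# version. Resource requests are not specified in this document and are omitted.
coder:
  replicaCount: 10
  resources:
    limits:
      cpu: "4"       # 4 vCPU (4000m)
      memory: 12Gi
```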

### Provisioners

These are deployed in each of the 3 regions.

| vCPU Limit | Memory Limit | Replicas | GCP Node Pool Machine Type |
|-----------------|--------------|----------|----------------------------|
| 0.1 vCPU (100m) | 1 GiB | 200 | `c2d-standard-16` |

**Footnotes**:

- Each provisioner handles a single concurrent build, so this configuration imples 200 concurrent

Contributor:

Suggested change:
- Each provisioner handles a single concurrent build, so this configuration imples 200 concurrent
- Each provisioner handles a single concurrent build, so this configuration implies 200 concurrent

workspace builds per region.
- Provisioners are run as a separate Kubernetes Deployment from Coderd, although they may
  share the same node pool (a bare-bones Deployment sketch follows this list).
- Separate provisioners into different namespaces to support zero-trust or
  multi-cloud deployments.
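
As referenced in the footnote above, the provisioner fleet is typically its own Kubernetes Deployment. The sketch below is a bare-bones illustration sized per the table (200 replicas, 100m CPU / 1 GiB limits, per region); the namespace, labels, image tag, and startup flags are placeholders, and provisioner authentication is omitted entirely.

```yaml
# Illustrative sketch only: a per-region provisioner Deployment sized from the
# table above. Namespace, labels, image tag, and args are placeholders;
# provisioner authentication (key/token) is intentionally omitted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coder-provisioner
  namespace: coder-provisioners # placeholder namespace
spec:
  replicas: 200 # one concurrent build per provisioner => 200 builds per region
  selector:
    matchLabels:
      app: coder-provisioner
  template:
    metadata:
      labels:
        app: coder-provisioner
    spec:
      containers:
        - name: provisioner
          image: ghcr.io/coder/coder:latest # placeholder tag
          # startup args/env for the provisioner daemon and its auth are
          # deployment-specific and omitted here
          resources:
            limits:
              cpu: 100m
              memory: 1Gi
```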

### Workspace Proxies

These are deployed in the non-Primary regions only.

| vCPU Limit | Memory Limit | Replicas | GCP Node Pool Machine Type |
|----------------|--------------|----------|----------------------------|
| 4 vCPU (4000m) | 12 GiB | 10 | `c2d-standard-16` |

**Footnotes**:

- Our testing suggests this is somewhat overspecced for the loads we have tried; we are in the process of revising these numbers.

### Workspaces

Thse numbers are for each of the 3 regions. We recommend that you use a separate node pool for user Workspaces.

Contributor:

Suggested change:
Thse numbers are for each of the 3 regions. We recommend that you use a separate node pool for user Workspaces.
These numbers are for each of the 3 regions. We recommend that you use a separate node pool for user Workspaces.


| Users | Node capacity | Replicas | GCP | AWS | Azure |
|-------------|----------------------|-------------------------------|------------------|--------------|-------------------|
| Up to 3,000 | 8 vCPU, 32 GB memory | 256 nodes, 12 workspaces each | `t2d-standard-8` | `m5.2xlarge` | `Standard_D8s_v3` |

**Footnotes**:

- Assumed that a workspace user needs 2 GB memory to perform

Contributor: How realistic is that 2 GB? Is that the request for the workspace pod, or both the request and the limit?

- Maximum number of Kubernetes workspace pods per node: 256
- As workspace nodes can be distributed between regions, on-premises networks
  and cloud areas, consider different namespaces to support zero-trust or
  multi-cloud deployments (an illustrative workspace pod spec follows this list).
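
As an illustration of the 2 GB assumption discussed above, a workspace pod template might set container resources along the following lines. Whether 2 GB should be a request, a limit, or both is exactly the open question in the review comment, so the values below are an example, not a recommendation.

```yaml
# Illustrative sketch only: per-workspace container resources consistent with
# the "2 GB memory per user" assumption above. Whether this is a request, a
# limit, or both is still an open question in this PR's review.
apiVersion: v1
kind: Pod
metadata:
  name: workspace-example # hypothetical name; real workspaces are created by templates
spec:
  containers:
    - name: dev
      image: codercom/enterprise-base:ubuntu # example workspace image
      resources:
        requests:
          memory: 2Gi # 12 workspaces x 2 GiB fits a 32 GB node with headroom
        limits:
          memory: 2Gi
```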

### Database nodes

We conducted our test using the `db-custom-16-61440` tier (16 vCPUs, 61,440 MB memory) on Google Cloud SQL.

**Footnotes**:

- This database tier was only just able to keep up with 600 concurrent builds in our tests.

@@ -220,6 +220,8 @@ For sizing recommendations, see the below reference architectures:

- [Up to 3,000 users](3k-users.md)

- DRAFT: [Up to 10,000 users](10k-users.md)

### AWS Instance Types

For production AWS deployments, we recommend using non-burstable instance types,

5 changes: 5 additions & 0 deletions docs/manifest.json

@@ -391,6 +391,11 @@
"title": "Up to 3,000 Users",
"description": "Enterprise-scale architecture recommendations for Coder deployments that support up to 3,000 users",
"path": "./admin/infrastructure/validated-architectures/3k-users.md"
},
{
"title": "Up to 10,000 Users",
"description": "Enterprise-scale architecture recommendations for Coder deployments that support up to 10,000 users",
"path": "./admin/infrastructure/validated-architectures/10k-users.md"
}
]
},