Commit d0f434b

feat(docs): add bridge documentation for early access (coder#20188)

1 parent be22c38 · commit d0f434b

8 files changed, +1725 -0 lines changed

‎docs/ai-coder/ai-bridge.md‎

Lines changed: 268 additions & 0 deletions
# AI Bridge

> [!NOTE]
> AI Bridge is currently an _experimental_ feature.

![AI bridge diagram](../images/aibridge/aibridge_diagram.png)

Bridge is a smart proxy for AI. It acts as a man-in-the-middle between your users' coding agents/IDEs and providers like OpenAI and Anthropic. By intercepting all the AI traffic between these clients and the upstream APIs, Bridge can record user prompts, token usage, and tool invocations.

Bridge solves three key problems:

1. **Centralized authn/z management**: no more issuing and managing API tokens for OpenAI/Anthropic usage. Users authenticate with `coderd` (the Coder control plane) using their Coder session or API tokens, and `coderd` communicates securely with the upstream APIs on their behalf. Use a single key for all users.
2. **Auditing and attribution**: all interactions with AI services, whether autonomous or human-initiated, are audited and attributed back to a user.
3. **Centralized MCP administration**: define a set of approved MCP servers and tools that your users may use, and prevent users from bringing their own.

## When to use AI Bridge

As the library of LLMs and their associated tools grows, administrators are under pressure to provide auditing, measure adoption, provide tools through MCP, and track token spend. Disparate SaaS platforms provide _some_ of these capabilities for _some_ tools, but there is no centralized, secure solution to these challenges.

If you are an administrator or DevOps leader looking to:

- Measure AI tooling adoption across teams or projects
- Provide an LLM audit trail to security administrators
- Manage token spend in a central dashboard
- Investigate opportunities for AI automation
- Uncover high-leverage use cases from experienced engineers

we advise trying Bridge as a self-hosted proxy to monitor LLM usage agnostically across AI-powered IDEs like Cursor and headless agents like Claude Code.

## Setup

Bridge runs inside the Coder control plane, requiring no separate compute to deploy or scale. Once enabled, `coderd` hosts the bridge in-memory and brokers traffic to your configured AI providers on behalf of authenticated users.

**Required**:

1. A **premium** licensed Coder deployment
1. The feature must be [enabled](#activation) using the server flag
1. One or more [provider](#providers) API keys must be configured

### Activation

To enable this feature, activate the `aibridge` experiment using an environment variable or a CLI flag. Additionally, you will need to enable Bridge explicitly:

```sh
CODER_EXPERIMENTS="aibridge" CODER_AIBRIDGE_ENABLED=true coder server
# or
coder server --experiments=aibridge --aibridge-enabled=true
```

_If you have other experiments enabled, separate them with commas._
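For example, to run Bridge alongside the `oauth2` and `mcp-server-http` experiments referenced later on this page, pass a comma-separated list:

```sh
coder server --experiments=aibridge,oauth2,mcp-server-http --aibridge-enabled=true
```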
### Providers

Bridge currently supports the OpenAI and Anthropic APIs.

**API Key**:

The single key used to authenticate all requests from Bridge to the OpenAI/Anthropic APIs.

- `CODER_AIBRIDGE_OPENAI_KEY` or `--aibridge-openai-key`
- `CODER_AIBRIDGE_ANTHROPIC_KEY` or `--aibridge-anthropic-key`

**Base URL**:

The API to which Bridge will relay requests.

- `CODER_AIBRIDGE_OPENAI_BASE_URL` or `--aibridge-openai-base-url`, defaults to `https://api.openai.com/v1/`
- `CODER_AIBRIDGE_ANTHROPIC_BASE_URL` or `--aibridge-anthropic-base-url`, defaults to `https://api.anthropic.com/`

Bridge is also compatible with _[Google Vertex AI](https://cloud.google.com/vertex-ai?hl=en)_, _[AWS Bedrock](https://aws.amazon.com/bedrock/)_, and other LLM brokers: point the base URL(s) above at the appropriate API endpoint for your provider.
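For instance, here is a minimal sketch of relaying OpenAI-shaped traffic through a self-hosted OpenAI-compatible broker such as LiteLLM. The proxy address and the `LITELLM_MASTER_KEY` variable are hypothetical placeholders for your own broker's endpoint and key:

```sh
# Hypothetical broker endpoint and key; substitute your own values.
CODER_AIBRIDGE_OPENAI_BASE_URL=https://litellm.internal.example.com/v1/ \
CODER_AIBRIDGE_OPENAI_KEY="$LITELLM_MASTER_KEY" \
coder server --experiments=aibridge --aibridge-enabled=true
```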
---

> [!NOTE]
> See the [Supported APIs](#supported-apis) section below for a comprehensive list.

## Collected Data

Bridge collects:

- The last `user` prompt of each request
- All token usage (associated with each prompt)
- Every tool invocation

All of these records are associated with an "interception" record, which maps 1:1 with requests received from clients but may involve several interactions with upstream providers. Interceptions are associated with a Coder identity, allowing you to map consumption and cost to teams or individuals in your organization:

![User Prompt logging](../images/aibridge/grafana_user_prompts_logging.png)

These logs can be used to determine usage patterns, track costs, and evaluate tooling adoption.

This data is currently accessible through the API and CLI (experimental); we advise administrators to export it to their observability platform of choice. Internally, we use a Grafana dashboard to track Claude Code usage:
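Before wiring up a full dashboard, you can sanity-check the recorded data directly in the Coder database. This is a sketch under assumptions: it requires direct read access via `psql`, the table names are those required by the example Grafana dashboard, and the schema is experimental and may change:

```shell
# Read-only sanity check: count recorded interceptions and user prompts.
# CODER_PG_CONNECTION_URL is your Coder Postgres connection string.
psql "$CODER_PG_CONNECTION_URL" -c 'SELECT count(*) AS interceptions FROM aibridge_interceptions;'
psql "$CODER_PG_CONNECTION_URL" -c 'SELECT count(*) AS user_prompts FROM aibridge_user_prompts;'
```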
![User Leaderboard](../images/aibridge/grafana_user_leaderboard.png)

We provide an example Grafana dashboard that you can import as a starting point for your tooling adoption metrics. See [here](../examples/monitoring/dashboards/grafana/aibridge/README.md).

## Implementation Details

`coderd` runs an in-memory instance of `aibridged`, whose logic is mostly contained in <https://github.com/coder/aibridge>. In future releases we will support running external instances for higher throughput and complete memory isolation from `coderd`.
<details>
<summary>See a diagram of how Bridge interception works</summary>

```mermaid
sequenceDiagram
    actor User
    participant Client
    participant Bridge

    User->>Client: Issues prompt
    activate Client

    Note over User, Client: Coder session key used<br>as AI token
    Client-->>Bridge: Sends request

    activate Bridge
    Note over Client, Bridge: Coder session key<br>passed along

    Note over Bridge: Authenticate
    Note over Bridge: Parse request

    alt Rejected
        Bridge-->>Client: Send response
        Client->>User: Display response
    end

    Note over Bridge: If first request, establish<br>connection(s) with MCP server(s)<br>and list tools

    Note over Bridge: Inject MCP tools

    Bridge-->>AIProvider: Send modified request

    activate AIProvider

    AIProvider-->>Bridge: Send response

    Note over Client: Client is unaware of injected<br>tools and invocations,<br>just receives one long response

    alt Has injected tool calls
        loop
            Note over Bridge: Invoke injected tool
            Bridge-->>AIProvider: Send tool result
            AIProvider-->>Bridge: Send response
        end
    end

    deactivate AIProvider

    Bridge-->>Client: Relay response
    deactivate Bridge

    Client->>User: Display response
    deactivate Client
```

</details>
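As an illustration of the flow above, a client such as Claude Code can be pointed at Bridge by overriding its Anthropic endpoint and token. This is a sketch under assumptions: `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` are standard Claude Code environment variables, but the Bridge base URL shown is a placeholder; use whatever base path your Coder deployment exposes for Bridge's Anthropic routes.

```shell
# Placeholder URL: substitute the Bridge base path for your deployment.
export ANTHROPIC_BASE_URL="https://coder.example.com/<bridge-anthropic-base-path>"
# The Coder session key is used in place of an Anthropic API key.
export ANTHROPIC_AUTH_TOKEN="$CODER_SESSION_TOKEN"
claude
```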
## MCP

[Model Context Protocol (MCP)](https://modelcontextprotocol.io/docs/getting-started/intro) is a mechanism for connecting AI applications to external systems.

Bridge can connect to MCP servers and inject tools automatically, enabling you to centrally manage the list of tools you wish to grant your users.

> [!NOTE]
> Only MCP servers which support OAuth2 Authorization are currently supported. In future releases we will support [optional authorization](https://modelcontextprotocol.io/specification/2025-06-18/basic/authorization#protocol-requirements).
>
> [_Streamable HTTP_](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#streamable-http) is currently the only supported transport. In future releases we will support the (now deprecated) [_Server-Sent Events_](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#backwards-compatibility) transport.

Bridge makes use of [External Auth](../admin/external-auth/index.md) applications, as they define OAuth2 connections to upstream services. If your External Auth application hosts a remote MCP server, you can configure Bridge to connect to it, retrieve its tools, and inject them into requests automatically, all while using each individual user's access token.

For example, GitHub has a [remote MCP server](https://github.com/github/github-mcp-server?tab=readme-ov-file#remote-github-mcp-server), which we can use as follows:

```bash
CODER_EXTERNAL_AUTH_0_TYPE=github
CODER_EXTERNAL_AUTH_0_CLIENT_ID=...
CODER_EXTERNAL_AUTH_0_CLIENT_SECRET=...
# Tell Bridge where it can find this service's remote MCP server.
CODER_EXTERNAL_AUTH_0_MCP_URL=https://api.githubcopilot.com/mcp/
```

See the diagram in [Implementation Details](#implementation-details) for more information.

You can also control which tools are injected by applying an allow and/or a deny regular expression to the tool names:

```bash
CODER_EXTERNAL_AUTH_0_MCP_TOOL_ALLOW_REGEX=(.+_gist.*)
CODER_EXTERNAL_AUTH_0_MCP_TOOL_DENY_REGEX=(create_gist)
```

In the above example, all tools containing `_gist` in their name are allowed, but `create_gist` is denied.

The logic works as follows:

- If neither the allow nor the deny pattern is defined, all tools are injected.
- The deny pattern takes precedence.
- If only a deny pattern is defined, all tools are injected except those explicitly denied.
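The allow/deny matching can be simulated locally with `grep -E`. The tool names below are hypothetical, and Bridge itself applies these patterns server-side; this sketch only illustrates the matching logic:

```shell
ALLOW='(.+_gist.*)'   # same patterns as the example above
DENY='(create_gist)'

for tool in update_gist list_gists create_gist list_repos; do
  if printf '%s' "$tool" | grep -Eq "$DENY"; then
    echo "$tool: denied"        # deny takes precedence
  elif printf '%s' "$tool" | grep -Eq "$ALLOW"; then
    echo "$tool: injected"
  else
    echo "$tool: not injected"  # no allow match
  fi
done
```

Running this prints `update_gist: injected`, `list_gists: injected`, `create_gist: denied`, and `list_repos: not injected`.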
In the above example, if you prompted your AI model with "list your available github tools by name", it would reply with something like:

> Certainly! Here are the GitHub-related tools that I have available:
>
> 1. `bmcp_github_update_gist`
> 2. `bmcp_github_list_gists`

Bridge marks automatically injected tools with the prefix `bmcp_` ("bridged MCP"). It also namespaces all tool names with the ID of their associated External Auth application (in this case, `github`).
## Tool Injection

If a model decides to invoke a tool that has a `bmcp_` prefix, and Bridge has a connection with the related MCP server, Bridge will invoke the tool itself. The tool result is passed back to the upstream AI provider, and this loops until the model has all of its required data. These inner loops are not relayed back to the client; all it sees is the final result of the loop. See [Implementation Details](#implementation-details).

In contrast, tools which are defined by the client (e.g. the [`Bash` tool](https://docs.claude.com/en/docs/claude-code/settings#tools-available-to-claude) defined by _Claude Code_) cannot be invoked by Bridge; the tool call from the model is relayed to the client, which then invokes the tool itself.

If you have the `oauth2` and `mcp-server-http` experiments enabled, Coder's own [internal MCP tools](mcp-server.md) will be injected automatically.

### Troubleshooting

- **Too many tools**: should you receive an error like `Invalid 'tools': array too long. Expected an array with maximum length 128, but got an array with length 132 instead`, you can reduce the number of tools by filtering them with the allow/deny patterns documented in the [MCP](#mcp) section.

- **Coder MCP tools not being injected**: for Coder MCP tools to be injected, the internal MCP server needs to be active. Follow the instructions on the [MCP Server](mcp-server.md) page to enable it.

- **External Auth tools not being injected**: this is generally due to the requesting user not being authenticated against the External Auth app; when this is the case, no attempt is made to connect to the MCP server.

## Known Issues / Limitations

- Codex CLI currently does not work with Bridge due to a JSON marshaling issue: <https://github.com/coder/aibridge/issues/19>
- Claude Code web searches do not report correctly: <https://github.com/coder/aibridge/issues/11>
## Supported APIs

API support is broken down into two categories:

- **Intercepted**: requests are intercepted, audited, and augmented - full Bridge functionality
- **Passthrough**: requests are proxied directly to the upstream; no auditing or augmentation takes place

Where relevant, both streaming and non-streaming requests are supported.

### OpenAI

**Intercepted**:

- [`/v1/chat/completions`](https://platform.openai.com/docs/api-reference/chat/create)

**Passthrough**:

- [`/v1/models(/*)`](https://platform.openai.com/docs/api-reference/models/list)
- [`/v1/responses`](https://platform.openai.com/docs/api-reference/responses/create) _(interception support coming in **Beta**)_

### Anthropic

**Intercepted**:

- [`/v1/messages`](https://docs.claude.com/en/api/messages)

**Passthrough**:

- [`/v1/models(/*)`](https://docs.claude.com/en/api/models-list)
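For example, an intercepted OpenAI-style request can be exercised with `curl`. This is a sketch under assumptions: `BRIDGE_OPENAI_BASE_URL` is a placeholder for whatever base URL your deployment exposes for Bridge's OpenAI routes, the model name is illustrative, and the Coder session token stands in for a provider API key:

```shell
# Placeholder base URL; the body is a standard OpenAI chat completion request.
curl -sS "$BRIDGE_OPENAI_BASE_URL/v1/chat/completions" \
  -H "Authorization: Bearer $CODER_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```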
## Troubleshooting

To report a bug, file a feature request, or view a list of known issues, please visit our [GitHub repository for Bridge](https://github.com/coder/aibridge). If you encounter issues with Bridge during early access, please reach out to us via [Discord](https://discord.gg/coder).
Three binary image files added (179 KB, 50.7 KB, 217 KB).

‎docs/manifest.json‎

Lines changed: 7 additions & 0 deletions
```diff
@@ -907,6 +907,13 @@
       "description": "Connect to agents Coder with a MCP server",
       "path": "./ai-coder/mcp-server.md",
       "state": ["beta"]
+    },
+    {
+      "title": "AI Bridge",
+      "description": "Centralized LLM and MCP proxy for platform teams",
+      "path": "./ai-coder/ai-bridge.md",
+      "icon_path": "./images/icons/api.svg",
+      "state": ["premium", "early access"]
     }
   ]
 },
```
‎examples/monitoring/dashboards/grafana/aibridge/README.md‎

Lines changed: 39 additions & 0 deletions

# AI Bridge Grafana Dashboard

![AI Bridge example Grafana Dashboard](./grafana_dashboard.png)

A sample Grafana dashboard for monitoring AI Bridge token usage, costs, and cache hit rates in Coder.

The dashboard includes four main sections with multiple visualization panels:

**Usage Leaderboards** - Track token consumption across your organization:

- Bar chart showing input, output, cache read, and cache write tokens per user
- Total usage statistics with breakdowns by token type

**Approximate Cost Table** - Estimate AI spending by joining token usage with live pricing data from LiteLLM:

- Per-provider and per-model cost breakdown
- Input, output, cache read, and cache write costs
- Total cost calculations with footer summaries

**Interceptions** - Monitor AI API calls over time:

- Time-series bar chart of interceptions by user
- Total interception count

**Prompts & Tool Calls Details** - Inspect actual AI interactions:

- User Prompts table showing all prompts sent to AI models with timestamps
- Tool Calls table displaying MCP tool invocations, inputs, and errors (color-coded for failures)

All panels support filtering by time range, username, provider (Anthropic, OpenAI, etc.), and model using regex patterns.

## Setup

1. **Install the Infinity plugin**: `grafana-cli plugins install yesoreyeram-infinity-datasource`

2. **Configure data sources**:
   - **PostgreSQL datasource** (`coder-observability-ro`): connect to your Coder database with read access to `aibridge_interceptions`, `aibridge_token_usages`, `aibridge_user_prompts`, `aibridge_tool_usages`, and `users`
   - **Infinity datasource** (`litellm-pricing-data`): point to `https://raw.githubusercontent.com/BerriAI/litellm/refs/heads/main/model_prices_and_context_window.json` for model pricing data

3. **Import**: Download [`dashboard.json`](https://raw.githubusercontent.com/coder/coder/main/examples/monitoring/dashboards/grafana/aibridge/dashboard.json) from this directory, then in Grafana navigate to **Dashboards** → **Import** → **Upload JSON file**, and map the data sources when prompted.

## Features

- Token usage leaderboards by user, provider, and model
- Filterable by time range, username, provider, and model (regex supported)
