docs/ai-coder/ai-bridge.md (1 addition, 1 deletion)
@@ -24,6 +24,7 @@ Bridge solves 3 key problems:
As the library of LLMs and their associated tools grows, administrators are pressured to provide auditing, measure adoption, provide tools through MCP, and track token spend. Disparate SaaS platforms provide _some_ of these for _some_ tools, but there is no centralized, secure solution for these challenges.
If you are an administrator or DevOps leader looking to:
+
- Measure AI tooling adoption across teams or projects
- Provide an LLM audit trail to security administrators
- Manage token spend in a central dashboard
@@ -92,7 +93,6 @@ All of these records are associated to an "interception" record, which maps 1:1
These logs can be used to determine usage patterns, track costs, and evaluate tooling adoption.
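As a rough illustration of that kind of analysis, the sketch below rolls interception records up into per-user, per-model request counts, token totals, and estimated spend. The record fields (`initiator`, `model`, `input_tokens`, `output_tokens`) and the pricing table are illustrative assumptions, not the actual schema or rates:

```python
from collections import defaultdict

# Illustrative record shape; the real interception schema may differ.
records = [
    {"initiator": "alice", "model": "claude-sonnet-4", "input_tokens": 1200, "output_tokens": 450},
    {"initiator": "bob",   "model": "gpt-4o",          "input_tokens": 800,  "output_tokens": 300},
    {"initiator": "alice", "model": "claude-sonnet-4", "input_tokens": 600,  "output_tokens": 200},
]

# Hypothetical per-1K-token pricing (input, output) in USD, for this sketch only.
PRICING = {"claude-sonnet-4": (0.003, 0.015), "gpt-4o": (0.0025, 0.01)}

usage = defaultdict(lambda: {"requests": 0, "input": 0, "output": 0, "cost": 0.0})
for r in records:
    in_price, out_price = PRICING.get(r["model"], (0.0, 0.0))
    bucket = usage[(r["initiator"], r["model"])]
    bucket["requests"] += 1
    bucket["input"] += r["input_tokens"]
    bucket["output"] += r["output_tokens"]
    bucket["cost"] += r["input_tokens"] / 1000 * in_price + r["output_tokens"] / 1000 * out_price

# Print a simple adoption/spend summary per (user, model) pair.
for (user, model), stats in sorted(usage.items()):
    print(f"{user:<8} {model:<16} reqs={stats['requests']} "
          f"in={stats['input']} out={stats['output']} est_cost=${stats['cost']:.4f}")
```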
This data is currently accessible through the API and CLI (experimental), which we advise administrators to export to their observability platform of choice. We've configured a Grafana dashboard to display Claude Code usage internally, which can be imported as a starting point for your tooling adoption metrics.
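As a minimal sketch of that export workflow, the snippet below pulls interception records over HTTP and writes them out as CSV for an observability pipeline to ingest. The endpoint path, query parameter, and response fields here are assumptions for illustration only; consult the API reference for the actual routes and schema.

```python
import csv
import json
import os
import sys
import urllib.request

CODER_URL = os.environ["CODER_URL"]                  # e.g. https://coder.example.com
SESSION_TOKEN = os.environ["CODER_SESSION_TOKEN"]

# Hypothetical endpoint; the real API route and pagination scheme may differ.
req = urllib.request.Request(
    f"{CODER_URL}/api/v2/aibridge/interceptions?limit=100",
    headers={"Coder-Session-Token": SESSION_TOKEN},
)
with urllib.request.urlopen(req) as resp:
    interceptions = json.load(resp).get("results", [])

# Flatten to CSV on stdout so a collector job (or a Grafana data source script)
# can pick it up; adjust the columns to whatever the API actually returns.
writer = csv.writer(sys.stdout)
writer.writerow(["id", "initiator", "model", "input_tokens", "output_tokens", "started_at"])
for it in interceptions:
    writer.writerow([
        it.get("id"),
        it.get("initiator_id"),
        it.get("model"),
        it.get("input_tokens"),
        it.get("output_tokens"),
        it.get("started_at"),
    ])
```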