IBM/mcp-context-forge

A Model Context Protocol (MCP) Gateway & Registry. Serves as a central management point for tools, resources, and prompts that can be accessed by MCP-compatible LLM applications. Converts REST API endpoints to MCP, composes virtual MCP servers with added security and observability, and converts between protocols (stdio, SSE, Streamable HTTP).

Model Context Protocol gateway & proxy - unify REST, MCP, and A2A with federation, virtual servers, retries, security, and an optional admin UI.


ContextForge MCP Gateway is a feature-rich gateway, proxy and MCP Registry that federates MCP and REST services - unifying discovery, auth, rate-limiting, observability, virtual servers, multi-transport protocols, and an optional Admin UI into one clean endpoint for your AI clients. It runs as a fully compliant MCP server, deployable via PyPI or Docker, and scales to multi-cluster environments on Kubernetes with Redis-backed federation and caching.

MCP Gateway


🚀 Overview & Goals

ContextForge is a gateway, registry, and proxy that sits in front of any Model Context Protocol (MCP) server, A2A server, or REST API, exposing a unified endpoint for all your AI clients. See the project roadmap for more details.

It currently supports:

  • Federation across multiple MCP and REST services
  • A2A (Agent-to-Agent) integration for external AI agents (OpenAI, Anthropic, custom)
  • gRPC-to-MCP translation via automatic reflection-based service discovery
  • Virtualization of legacy APIs as MCP-compliant tools and servers
  • Transport over HTTP, JSON-RPC, WebSocket, SSE (with configurable keepalive), stdio and streamable-HTTP
  • An Admin UI for real-time management, configuration, and log monitoring (with airgapped deployment support)
  • Built-in auth, retries, and rate-limiting with user-scoped OAuth tokens and unconditional X-Upstream-Authorization header support
  • OpenTelemetry observability with Phoenix, Jaeger, Zipkin, and other OTLP backends
  • Scalable deployments via Docker or PyPI, Redis-backed caching, and multi-cluster federation

MCP Gateway Architecture

For a list of upcoming features, check out the ContextForge Roadmap.

Note on Multi‑Tenancy (v0.7.0): A comprehensive multi‑tenant architecture with email authentication, teams, RBAC, and resource visibility is available since v0.7.0. If upgrading from an older version, see the Migration Guide and Changelog for details.

⚠️ Important: See SECURITY.md for more details.


🔌 Gateway Layer with Protocol Flexibility
  • Sits in front of any MCP server or REST API
  • Lets you choose your MCP protocol version (e.g., 2025-03-26)
  • Exposes a single, unified interface for diverse backends
🌐 Federation of Peer Gateways (MCP Registry)
  • Auto-discovers or configures peer gateways (via mDNS or manual)
  • Performs health checks and merges remote registries transparently
  • Supports Redis-backed syncing and fail-over
🧩 Virtualization of REST/gRPC Services
  • Wraps non-MCP services as virtual MCP servers
  • Registers tools, prompts, and resources with minimal configuration
  • gRPC-to-MCP translation via server reflection protocol
  • Automatic service discovery and method introspection
🔁 REST-to-MCP Tool Adapter
  • Adapts REST APIs into tools with:

    • Automatic JSON Schema extraction
    • Support for headers, tokens, and custom auth
    • Retry, timeout, and rate-limit policies
🧠 Unified Registries
  • Prompts: Jinja2 templates, multimodal support, rollback/versioning
  • Resources: URI-based access, MIME detection, caching, SSE updates
  • Tools: Native or adapted, with input validation and concurrency controls
📈 Admin UI, Observability & Dev Experience
  • Admin UI built with HTMX + Alpine.js
  • Real-time log viewer with filtering, search, and export capabilities
  • Auth: Basic, JWT, or custom schemes
  • Structured logs, health endpoints, metrics
  • 400+ tests, Makefile targets, live reload, pre-commit hooks
🔍 OpenTelemetry Observability
  • Vendor-agnostic tracing with OpenTelemetry (OTLP) protocol support
  • Multiple backend support: Phoenix (LLM-focused), Jaeger, Zipkin, Tempo, DataDog, New Relic
  • Distributed tracing across federated gateways and services
  • Automatic instrumentation of tools, prompts, resources, and gateway operations
  • LLM-specific metrics: Token usage, costs, model performance
  • Zero-overhead when disabled with graceful degradation
  • Easy configuration via environment variables

Quick start with Phoenix (LLM observability):

# Start Phoenix
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest

# Configure gateway
export OTEL_ENABLE_OBSERVABILITY=true
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Run gateway - traces automatically sent to Phoenix
mcpgateway

See the Observability Documentation for detailed setup with other backends.


Quick Start - PyPI

ContextForge is published on PyPI as mcp-contextforge-gateway.


TL;DR (single command using uv)

# Quick start with environment variables
BASIC_AUTH_PASSWORD=pass \
MCPGATEWAY_UI_ENABLED=true \
MCPGATEWAY_ADMIN_API_ENABLED=true \
PLATFORM_ADMIN_EMAIL=admin@example.com \
PLATFORM_ADMIN_PASSWORD=changeme \
PLATFORM_ADMIN_FULL_NAME="Platform Administrator" \
uvx --from mcp-contextforge-gateway mcpgateway --host 0.0.0.0 --port 4444

# Or better: use the provided .env.example
cp .env.example .env
# Edit .env to customize your settings
uvx --from mcp-contextforge-gateway mcpgateway --host 0.0.0.0 --port 4444
📋 Prerequisites
  • Python ≥ 3.10 (3.11 recommended)
  • curl + jq - only for the last smoke-test step

1 - Install & run (copy-paste friendly)

# 1️⃣  Isolated env + install from PyPI
mkdir mcpgateway && cd mcpgateway
python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip
pip install mcp-contextforge-gateway

# 2️⃣  Copy and customize the configuration
# Download the example environment file
curl -O https://raw.githubusercontent.com/IBM/mcp-context-forge/main/.env.example
cp .env.example .env
# Edit .env to customize your settings (especially passwords!)

# Or set environment variables directly:
export MCPGATEWAY_UI_ENABLED=true
export MCPGATEWAY_ADMIN_API_ENABLED=true
export PLATFORM_ADMIN_EMAIL=admin@example.com
export PLATFORM_ADMIN_PASSWORD=changeme
export PLATFORM_ADMIN_FULL_NAME="Platform Administrator"

BASIC_AUTH_PASSWORD=pass JWT_SECRET_KEY=my-test-key \
  mcpgateway --host 0.0.0.0 --port 4444 &   # admin/pass

# 3️⃣  Generate a bearer token & smoke-test the API
export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token \
    --username admin@example.com --exp 10080 --secret my-test-key)
curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://127.0.0.1:4444/version | jq
Windows (PowerShell) quick-start
# 1️⃣  Isolated env + install from PyPI
mkdir mcpgateway ; cd mcpgateway
python3 -m venv .venv ; .\.venv\Scripts\Activate.ps1
pip install --upgrade pip
pip install mcp-contextforge-gateway

# 2️⃣  Copy and customize the configuration
# Download the example environment file
Invoke-WebRequest -Uri "https://raw.githubusercontent.com/IBM/mcp-context-forge/main/.env.example" -OutFile ".env.example"
Copy-Item .env.example .env
# Edit .env to customize your settings

# Or set environment variables (session-only)
$Env:MCPGATEWAY_UI_ENABLED = "true"
$Env:MCPGATEWAY_ADMIN_API_ENABLED = "true"
$Env:BASIC_AUTH_PASSWORD = "changeme"   # admin/changeme
$Env:JWT_SECRET_KEY = "my-test-key"
$Env:PLATFORM_ADMIN_EMAIL = "admin@example.com"
$Env:PLATFORM_ADMIN_PASSWORD = "changeme"
$Env:PLATFORM_ADMIN_FULL_NAME = "Platform Administrator"

# 3️⃣  Launch the gateway
mcpgateway.exe --host 0.0.0.0 --port 4444
#   Optional: background it
# Start-Process -FilePath "mcpgateway.exe" -ArgumentList "--host 0.0.0.0 --port 4444"

# 4️⃣  Bearer token and smoke-test
$Env:MCPGATEWAY_BEARER_TOKEN = python3 -m mcpgateway.utils.create_jwt_token `
    --username admin@example.com --exp 10080 --secret my-test-key
curl -s -H "Authorization: Bearer $Env:MCPGATEWAY_BEARER_TOKEN" `
     http://127.0.0.1:4444/version | jq
⚡ Alternative: uv (faster)
# 1️⃣  Isolated env + install from PyPI using uv
mkdir mcpgateway ; cd mcpgateway
uv venv
.\.venv\Scripts\activate
uv pip install mcp-contextforge-gateway
# Continue with steps 2️⃣-4️⃣ above...
More configuration

Copy .env.example to .env and tweak any of the settings (or use them as env variables).
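For example, a minimal .env for local development might look like the sketch below (values are illustrative; anything you leave out falls back to the defaults documented in the Configuration section further down):

# Minimal illustrative .env for local development (not for production)
HOST=0.0.0.0
PORT=4444
MCPGATEWAY_UI_ENABLED=true
MCPGATEWAY_ADMIN_API_ENABLED=true
BASIC_AUTH_USER=admin
BASIC_AUTH_PASSWORD=changeme
JWT_SECRET_KEY=my-test-key
PLATFORM_ADMIN_EMAIL=admin@example.com
PLATFORM_ADMIN_PASSWORD=changeme
DATABASE_URL=sqlite:///./mcp.db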

🚀 End-to-end demo (register a local MCP server)
# 1️⃣  Spin up the sample Go MCP time server using mcpgateway.translate & docker (replace docker with podman if needed)
python3 -m mcpgateway.translate \
     --stdio "docker run --rm -i ghcr.io/ibm/fast-time-server:latest -transport=stdio" \
     --expose-sse \
     --port 8003

# Or using the official mcp-server-git using uvx:
pip install uv   # to install uvx, if not already installed
python3 -m mcpgateway.translate --stdio "uvx mcp-server-git" --expose-sse --port 9000

# Alternative: running the local binary
# cd mcp-servers/go/fast-time-server; make build
# python3 -m mcpgateway.translate --stdio "./dist/fast-time-server -transport=stdio" --expose-sse --port 8002

# NEW: Expose via multiple protocols simultaneously!
python3 -m mcpgateway.translate \
     --stdio "uvx mcp-server-git" \
     --expose-sse \
     --expose-streamable-http \
     --port 9000
# Now accessible via both /sse (SSE) and /mcp (streamable HTTP) endpoints

# 2️⃣  Register it with the gateway
curl -s -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name":"fast_time","url":"http://localhost:8003/sse"}' \
     http://localhost:4444/gateways

# 3️⃣  Verify tool catalog
curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools | jq

# 4️⃣  Create a *virtual server* bundling those tools. Use the IDs of tools from the tool catalog (step 3) and pass them in the associatedTools list.
curl -s -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"server":{"name":"time_server","description":"Fast time tools","associated_tools":[<ID_OF_TOOLS>]}}' \
     http://localhost:4444/servers | jq

# Example curl
curl -s -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"server":{"name":"time_server","description":"Fast time tools","associated_tools":["6018ca46d32a4ac6b4c054c13a1726a2"]}}' \
     http://localhost:4444/servers | jq

# 5️⃣  List servers (should now include the UUID of the newly created virtual server)
curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/servers | jq

# 6️⃣  Client HTTP endpoint. Inspect it interactively with the MCP Inspector CLI (or use any MCP client)
npx -y @modelcontextprotocol/inspector
# Transport Type: Streamable HTTP, URL: http://localhost:4444/servers/UUID_OF_SERVER_1/mcp, Header Name: "Authorization", Bearer Token
🖧 Using the stdio wrapper (mcpgateway-wrapper)
export MCP_AUTH="Bearer${MCPGATEWAY_BEARER_TOKEN}"export MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcppython3 -m mcpgateway.wrapper# Ctrl-C to exit

You can also run it with uv or inside Docker/Podman - see the Containers section.

In MCP Inspector, define the MCP_AUTH and MCP_SERVER_URL env variables, select python3 as the Command, and -m mcpgateway.wrapper as Arguments.

echo $PWD/.venv/bin/python3   # Using the full python3 path ensures you have a working venv
export MCP_SERVER_URL='http://localhost:4444/servers/UUID_OF_SERVER_1/mcp'
export MCP_AUTH="Bearer ${MCPGATEWAY_BEARER_TOKEN}"
npx -y @modelcontextprotocol/inspector

or

Pass the url and auth as arguments (no need to set environment variables)

npx -y @modelcontextprotocol/inspector
# Command: python
# Arguments: -m mcpgateway.wrapper --url "http://localhost:4444/servers/UUID_OF_SERVER_1/mcp" --auth "Bearer <your token>"

When using an MCP client such as Claude with stdio:

{"mcpServers": {"mcpgateway-wrapper": {"command":"python","args": ["-m","mcpgateway.wrapper"],"env": {"MCP_AUTH":"Bearer your-token-here","MCP_SERVER_URL":"http://localhost:4444/servers/UUID_OF_SERVER_1","MCP_TOOL_CALL_TIMEOUT":"120"      }    }  }}

Quick Start - Containers

Use the official OCI image from GHCR with Docker or Podman. Please note: arm64 is currently not supported. If you are running on macOS, for example, install via PyPI instead.

🚀 Quick Start - Docker Compose

Get a full stack running with MariaDB and Redis in under 30 seconds:

# Clone and start the stack
git clone https://github.com/IBM/mcp-context-forge.git
cd mcp-context-forge

# Start with MariaDB (recommended for production)
docker compose up -d

# Or start with PostgreSQL
# Uncomment postgres in docker-compose.yml and comment out the mariadb section
# docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f gateway

# Access Admin UI: http://localhost:4444/admin (admin/changeme)

# Generate API token
docker compose exec gateway python3 -m mcpgateway.utils.create_jwt_token \
  --username admin@example.com --exp 10080 --secret my-test-key

What you get:

  • 🗄️ MariaDB 10.6 - Production-ready database with 36+ tables
  • 🚀 MCP Gateway - Full-featured gateway with Admin UI
  • 📊 Redis - High-performance caching and session storage
  • 🔧 Admin Tools - pgAdmin, Redis Insight for database management
  • 🌐 Nginx Proxy - Caching reverse proxy (optional)

☸️ Quick Start - Helm (Kubernetes)

Deploy to Kubernetes with enterprise-grade features:

# Add Helm repository (when available)
# helm repo add mcp-context-forge https://ibm.github.io/mcp-context-forge
# helm repo update

# For now, use the local chart
git clone https://github.com/IBM/mcp-context-forge.git
cd mcp-context-forge/charts/mcp-stack

# Install with MariaDB
helm install mcp-gateway . \
  --set mcpContextForge.secret.PLATFORM_ADMIN_EMAIL=admin@yourcompany.com \
  --set mcpContextForge.secret.PLATFORM_ADMIN_PASSWORD=changeme \
  --set mcpContextForge.secret.JWT_SECRET_KEY=your-secret-key \
  --set postgres.enabled=false \
  --set mariadb.enabled=true

# Or install with PostgreSQL (default)
helm install mcp-gateway . \
  --set mcpContextForge.secret.PLATFORM_ADMIN_EMAIL=admin@yourcompany.com \
  --set mcpContextForge.secret.PLATFORM_ADMIN_PASSWORD=changeme \
  --set mcpContextForge.secret.JWT_SECRET_KEY=your-secret-key

# Check deployment status
kubectl get pods -l app.kubernetes.io/name=mcp-context-forge

# Port forward to access the Admin UI
kubectl port-forward svc/mcp-gateway-mcp-context-forge 4444:80
# Access: http://localhost:4444/admin

# Generate API token
kubectl exec deployment/mcp-gateway-mcp-context-forge -- \
  python3 -m mcpgateway.utils.create_jwt_token \
  --username admin@yourcompany.com --exp 10080 --secret your-secret-key

Enterprise Features:

  • 🔄 Auto-scaling - HPA with CPU/memory targets
  • 🗄️ Database Choice - PostgreSQL, MariaDB, or MySQL
  • 📊 Observability - Prometheus metrics, OpenTelemetry tracing
  • 🔒 Security - RBAC, network policies, secret management
  • 🚀 High Availability - Multi-replica deployments with Redis clustering
  • 📈 Monitoring - Built-in Grafana dashboards and alerting

🐳 Docker (Single Container)

1 - Minimum viable run

docker run -d --name mcpgateway \
  -p 4444:4444 \
  -e MCPGATEWAY_UI_ENABLED=true \
  -e MCPGATEWAY_ADMIN_API_ENABLED=true \
  -e HOST=0.0.0.0 \
  -e JWT_SECRET_KEY=my-test-key \
  -e BASIC_AUTH_USER=admin \
  -e BASIC_AUTH_PASSWORD=changeme \
  -e AUTH_REQUIRED=true \
  -e PLATFORM_ADMIN_EMAIL=admin@example.com \
  -e PLATFORM_ADMIN_PASSWORD=changeme \
  -e PLATFORM_ADMIN_FULL_NAME="Platform Administrator" \
  -e DATABASE_URL=sqlite:///./mcp.db \
  -e SECURE_COOKIES=false \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1
# Note: when not running over SSL, use SECURE_COOKIES=false to prevent the browser denying access.

# Tail logs (Ctrl+C to quit)
docker logs -f mcpgateway

# Generate an API key
docker run --rm -it ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1 \
  python3 -m mcpgateway.utils.create_jwt_token --username admin@example.com --exp 0 --secret my-test-key

Browse to http://localhost:4444/admin (user admin / password changeme).

2 - Persist the SQLite database

mkdir -p $(pwd)/data
touch $(pwd)/data/mcp.db
sudo chown -R :docker $(pwd)/data
chmod 777 $(pwd)/data

docker run -d --name mcpgateway \
  --restart unless-stopped \
  -p 4444:4444 \
  -v $(pwd)/data:/data \
  -e MCPGATEWAY_UI_ENABLED=true \
  -e MCPGATEWAY_ADMIN_API_ENABLED=true \
  -e DATABASE_URL=sqlite:////data/mcp.db \
  -e HOST=0.0.0.0 \
  -e JWT_SECRET_KEY=my-test-key \
  -e BASIC_AUTH_USER=admin \
  -e BASIC_AUTH_PASSWORD=changeme \
  -e PLATFORM_ADMIN_EMAIL=admin@example.com \
  -e PLATFORM_ADMIN_PASSWORD=changeme \
  -e PLATFORM_ADMIN_FULL_NAME="Platform Administrator" \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1

SQLite now lives on the host at ./data/mcp.db.

3 - Local tool discovery (host network)

mkdir -p $(pwd)/data
touch $(pwd)/data/mcp.db
sudo chown -R :docker $(pwd)/data
chmod 777 $(pwd)/data

docker run -d --name mcpgateway \
  --network=host \
  -e MCPGATEWAY_UI_ENABLED=true \
  -e MCPGATEWAY_ADMIN_API_ENABLED=true \
  -e HOST=0.0.0.0 \
  -e PORT=4444 \
  -e DATABASE_URL=sqlite:////data/mcp.db \
  -e PLATFORM_ADMIN_EMAIL=admin@example.com \
  -e PLATFORM_ADMIN_PASSWORD=changeme \
  -e PLATFORM_ADMIN_FULL_NAME="Platform Administrator" \
  -v $(pwd)/data:/data \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1

Using --network=host gives the container access to the host network, so you can add MCP servers running on your host. See the Docker host network driver documentation for more details.

4 - Airgapped deployment (no internet access)

For environments without internet access, build a container with bundled UI assets:

# Build airgapped container (downloads CDN assets during build)
docker build -f Containerfile.lite -t mcpgateway:airgapped .

# Run in airgapped mode
docker run -d --name mcpgateway \
  -p 4444:4444 \
  -e MCPGATEWAY_UI_AIRGAPPED=true \
  -e MCPGATEWAY_UI_ENABLED=true \
  -e MCPGATEWAY_ADMIN_API_ENABLED=true \
  -e HOST=0.0.0.0 \
  -e JWT_SECRET_KEY=my-test-key \
  -e BASIC_AUTH_USER=admin \
  -e BASIC_AUTH_PASSWORD=changeme \
  -e AUTH_REQUIRED=true \
  -e DATABASE_URL=sqlite:///./mcp.db \
  mcpgateway:airgapped

The Admin UI will work completely offline with all CSS/JS assets (~932KB) served locally.


🦭 Podman (rootless-friendly)

1 - Basic run

podman run -d --name mcpgateway \
  -p 4444:4444 \
  -e HOST=0.0.0.0 \
  -e DATABASE_URL=sqlite:///./mcp.db \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1

2 - Persist SQLite

mkdir -p $(pwd)/data
touch $(pwd)/data/mcp.db
sudo chown -R :docker $(pwd)/data
chmod 777 $(pwd)/data

podman run -d --name mcpgateway \
  --restart=on-failure \
  -p 4444:4444 \
  -v $(pwd)/data:/data \
  -e DATABASE_URL=sqlite:////data/mcp.db \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1

3 - Host networking (rootless)

mkdir -p $(pwd)/data
touch $(pwd)/data/mcp.db
sudo chown -R :docker $(pwd)/data
chmod 777 $(pwd)/data

podman run -d --name mcpgateway \
  --network=host \
  -v $(pwd)/data:/data \
  -e DATABASE_URL=sqlite:////data/mcp.db \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1

✏️ Docker/Podman tips
  • .env files - Put all the -e FOO= lines into a file and replace them with --env-file .env (see the example after these tips). See the provided .env.example for reference.

  • Pinned tags - Use an explicit version (e.g. v0.9.0) instead of latest for reproducible builds.

  • JWT tokens - Generate one in the running container:

    docker exec mcpgateway python3 -m mcpgateway.utils.create_jwt_token --username admin@example.com --exp 10080 --secret my-test-key
  • Upgrades - Stop, remove, and rerun with the same -v $(pwd)/data:/data mount; your DB and config stay intact.
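For example, assuming the environment variables from the minimal run above live in a local .env file, the same container can be started with the sketch below (adjust the image tag and ports as needed):

docker run -d --name mcpgateway \
  -p 4444:4444 \
  --env-file .env \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1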


🚑 Smoke-test the running container
curl -s -H"Authorization: Bearer$MCPGATEWAY_BEARER_TOKEN" \     http://localhost:4444/health| jqcurl -s -H"Authorization: Bearer$MCPGATEWAY_BEARER_TOKEN" \     http://localhost:4444/tools| jqcurl -s -H"Authorization: Bearer$MCPGATEWAY_BEARER_TOKEN" \     http://localhost:4444/version| jq

🖧 Running the MCP Gateway stdio wrapper

The mcpgateway.wrapper lets you connect to the gateway over stdio while keeping JWT authentication. You should run this from the MCP client; the example below is just for testing.

# Set environment variables
export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token --username admin@example.com --exp 10080 --secret my-test-key)
export MCP_AUTH="Bearer ${MCPGATEWAY_BEARER_TOKEN}"
export MCP_SERVER_URL='http://localhost:4444/servers/UUID_OF_SERVER_1/mcp'
export MCP_TOOL_CALL_TIMEOUT=120
export MCP_WRAPPER_LOG_LEVEL=DEBUG   # or OFF to disable logging

docker run --rm -i \
  -e MCP_AUTH=$MCP_AUTH \
  -e MCP_SERVER_URL=http://host.docker.internal:4444/servers/UUID_OF_SERVER_1/mcp \
  -e MCP_TOOL_CALL_TIMEOUT=120 \
  -e MCP_WRAPPER_LOG_LEVEL=DEBUG \
  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1 \
  python3 -m mcpgateway.wrapper

Testing mcpgateway.wrapper by hand:

Because the wrapper speaks JSON-RPC over stdin/stdout, you can interact with it using nothing more than a terminal or pipes.

# Start the MCP Gateway wrapper
export MCP_AUTH="Bearer ${MCPGATEWAY_BEARER_TOKEN}"
export MCP_SERVER_URL=http://localhost:4444/servers/YOUR_SERVER_UUID
python3 -m mcpgateway.wrapper
Initialize the protocol
# Initialize the protocol
{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"demo","version":"0.0.1"}}}

# Then after the reply:
{"jsonrpc":"2.0","method":"notifications/initialized","params":{}}

# Get prompts
{"jsonrpc":"2.0","id":4,"method":"prompts/list"}
{"jsonrpc":"2.0","id":5,"method":"prompts/get","params":{"name":"greeting","arguments":{"user":"Bob"}}}

# Get resources
{"jsonrpc":"2.0","id":6,"method":"resources/list"}
{"jsonrpc":"2.0","id":7,"method":"resources/read","params":{"uri":"https://example.com/some.txt"}}

# Get / call tools
{"jsonrpc":"2.0","id":2,"method":"tools/list"}
{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"get_system_time","arguments":{"timezone":"Europe/Dublin"}}}
Expected responses from mcpgateway.wrapper
{"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2025-03-26","capabilities":{"experimental":{},"prompts":{"listChanged":false},"resources":{"subscribe":false,"listChanged":false},"tools":{"listChanged":false}},"serverInfo":{"name":"mcpgateway-wrapper","version":"0.9.0"}}}# When there's no tools{"jsonrpc":"2.0","id":2,"result":{"tools":[]}}# After you add some tools and create a virtual server{"jsonrpc":"2.0","id":2,"result":{"tools":[{"annotations":{"readOnlyHint":false,"destructiveHint":true,"idempotentHint":false,"openWorldHint":true},"description":"Convert time between different timezones","inputSchema":{"properties":{"source_timezone":{"description":"Source IANA timezone name","type":"string"},"target_timezone":{"description":"Target IANA timezone name","type":"string"},"time":{"description":"Time to convert in RFC3339 format or common formats like '2006-01-02 15:04:05'","type":"string"}},"required":["time","source_timezone","target_timezone"],"type":"object"},"name":"convert_time"},{"annotations":{"readOnlyHint":false,"destructiveHint":true,"idempotentHint":false,"openWorldHint":true},"description":"Get current system time in specified timezone","inputSchema":{"properties":{"timezone":{"description":"IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Defaults to UTC","type":"string"}},"type":"object"},"name":"get_system_time"}]}}# Running the time tool:{"jsonrpc":"2.0","id":3,"result":{"content":[{"type":"text","text":"2025-07-09T00:09:45+01:00"}]}}

🧩 Running from an MCP Client (mcpgateway.wrapper)

The mcpgateway.wrapper exposes everything your Gateway knows about over stdio, so any MCP client that can't (or shouldn't) open an authenticated SSE stream still gets full tool-calling power.

Remember to substitute your real Gateway URL (and server ID) for http://localhost:4444/servers/UUID_OF_SERVER_1/mcp. When inside Docker/Podman, that often becomes http://host.docker.internal:4444/servers/UUID_OF_SERVER_1/mcp (macOS/Windows) or the gateway container's hostname (Linux).


🐳 Docker / Podman
export MCP_AUTH="Bearer$MCPGATEWAY_BEARER_TOKEN"docker run -i --rm \  --network=host \  -e MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \  -e MCP_AUTH=${MCP_AUTH} \  -e MCP_TOOL_CALL_TIMEOUT=120 \  ghcr.io/ibm/mcp-context-forge:1.0.0-BETA-1 \  python3 -m mcpgateway.wrapper

📦 pipx (one-liner install & run)
# Install gateway package in its own isolated venv
pipx install --include-deps mcp-contextforge-gateway

# Run the stdio wrapper
MCP_AUTH="Bearer ${MCPGATEWAY_BEARER_TOKEN}" \
MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \
python3 -m mcpgateway.wrapper

# Alternatively with uv
uv run --directory . -m mcpgateway.wrapper

Claude Desktop JSON (uses the host Python that pipx injected):

{"mcpServers": {"mcpgateway-wrapper": {"command":"python3","args": ["-m","mcpgateway.wrapper"],"env": {"MCP_AUTH":"Bearer <your-token>","MCP_SERVER_URL":"http://localhost:4444/servers/UUID_OF_SERVER_1/mcp","MCP_TOOL_CALL_TIMEOUT":"120"      }    }  }}

⚡ uv / uvx (light-speed venvs)

1 - Install uv (uvx is an alias it provides)

# (a) official one-liner
curl -Ls https://astral.sh/uv/install.sh | sh

# (b) or via pipx
pipx install uv

2 - Create an on-the-spot venv & run the wrapper

# Create venv in ~/.venv/mcpgateway (or the current dir if you prefer)
uv venv ~/.venv/mcpgateway
source ~/.venv/mcpgateway/bin/activate

# Install the gateway package using uv
uv pip install mcp-contextforge-gateway

# Launch wrapper
MCP_AUTH="Bearer ${MCPGATEWAY_BEARER_TOKEN}" \
MCP_SERVER_URL=http://localhost:4444/servers/UUID_OF_SERVER_1/mcp \
uv run --directory . -m mcpgateway.wrapper
# Use this just for testing, as the client will run the uv command

Claude Desktop JSON (runs through uvx)

{"mcpServers": {"mcpgateway-wrapper": {"command":"uvx","args": ["run","--","python","-m","mcpgateway.wrapper"      ],"env": {"MCP_AUTH":"Bearer <your-token>","MCP_SERVER_URL":"http://localhost:4444/servers/UUID_OF_SERVER_1/mcp"    }  }}

🚀 Using with Claude Desktop (or any GUI MCP client)

  1. Edit Config: File ▸ Settings ▸ Developer ▸ Edit Config
  2. Paste one of the JSON blocks above (Docker / pipx / uvx).
  3. Restart the app so the new stdio server is spawned.
  4. Open logs in the same menu to verify mcpgateway-wrapper started and listed your tools.

Need help? See:


🚀 Quick Start: VS Code Dev Container

Spin up a fully-loaded dev environment (Python 3.11, Docker/Podman CLI, all project dependencies) in just two clicks.


📋 Prerequisites
🧰 Setup Instructions

1 - Clone & Open

git clone https://github.com/ibm/mcp-context-forge.git
cd mcp-context-forge
code .

VS Code will detect the .devcontainer and prompt: "Reopen in Container". Or run it manually: Ctrl/Cmd ⇧ P, then "Dev Containers: Reopen in Container".


2 - First-Time Build (Automatic)

The container build will:

  • Install system packages & Python 3.11
  • Run make install-dev to pull all dependencies
  • Execute tests to verify the toolchain

You'll land in /workspace, ready to develop.

🛠️ Daily Developer Workflow

Common tasks inside the container:

# Start dev server (hot reload)
make dev       # http://localhost:4444

# Run tests & linters
make test
make lint

Optional:

  • make bash - drop into an interactive shell
  • make clean - clear build artefacts & caches
  • Port forwarding is automatic (customize via .devcontainer/devcontainer.json)
☁️ GitHub Codespaces: 1-Click Cloud IDE

No local Docker? Use Codespaces:

  1. Go to the repo →Code ▸ Codespaces ▸ Create codespace on main
  2. Wait for the container image to build in the cloud
  3. Develop using the same workflow above

Quick Start (manual install)

Prerequisites

  • Python ≥ 3.10
  • GNU Make (optional, but all common workflows are available as Make targets)
  • Optional:Docker / Podman for containerized runs

One-liner (dev)

make venv install serve

What it does:

  1. Creates / activates a .venv in your home folder ~/.venv/mcpgateway
  2. Installs the gateway and necessary dependencies
  3. Launches Gunicorn (Uvicorn workers) on http://localhost:4444

For development, you can use:

make install-dev   # Install development dependencies, e.g. linters and test harness
make lint          # optional: run style checks (ruff, mypy, etc.)

Containerized (self-signed TLS)

Container Runtime Support

This project supports both Docker and Podman. The Makefile automatically detects which runtime is available and handles image naming differences.

Auto-detection

make container-build     # Uses podman if available, otherwise docker

You can also use docker or podman explicitly, e.g.:

make podman              # build production image
make podman-run-ssl      # run at https://localhost:4444
# or listen on port 4444 on your host directly, adds --network=host to podman
make podman-run-ssl-host

Smoke-test the API

curl -k -sX GET \
     -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     https://localhost:4444/tools | jq

You should receive [] until you register a tool.


Installation

Via Make

make venv install   # create .venv + install deps
make serve          # gunicorn on :4444

UV (alternative)

uv venv && source .venv/bin/activate
uv pip install -e '.[dev]'   # IMPORTANT: in zsh, quote to disable glob expansion!

pip (alternative)

python3 -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"

Optional (PostgreSQL adapter)

You can configure the gateway with SQLite, PostgreSQL (or any other compatible database) in .env.

When using PostgreSQL, you need to install the psycopg2 driver.

uv pip install psycopg2-binary   # dev convenience
# or
uv pip install psycopg2          # production build

Quick Postgres container

docker run --name mcp-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=mcp \
  -p 5432:5432 -d postgres

A make compose-up target is provided along with a docker-compose.yml file to make this process simpler.
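To point the gateway at that container, set DATABASE_URL in .env to a standard SQLAlchemy PostgreSQL URL matching the credentials above, for example:

# SQLAlchemy uses psycopg2 (installed above) for postgresql:// URLs
DATABASE_URL=postgresql://postgres:mysecretpassword@localhost:5432/mcp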


🔄 Upgrading to v0.7.0

⚠️ CRITICAL: Version 0.7.0 introduces comprehensive multi-tenancy and requires database migration.

Backup Your Data First

Before upgrading to v0.7.0, always back up your database and environment configuration, and export your settings:

# Backup database (SQLite example)
cp mcp.db mcp.db.backup.$(date +%Y%m%d_%H%M%S)

# Backup existing .env file
cp .env .env.bak

# Export configuration via Admin UI or API
curl -u admin:changeme "http://localhost:4444/admin/export/configuration" \
     -o config_backup_$(date +%Y%m%d_%H%M%S).json

Migration Process

  1. Update .env - Copy new settings: cp .env.example .env, then configure PLATFORM_ADMIN_EMAIL and other required multi-tenancy settings
  2. Run migration - Database schema updates run automatically: python3 -m mcpgateway.bootstrap_db
  3. Verify migration - Use the verification script: python3 scripts/verify_multitenancy_0_7_0_migration.py

If Migration Fails

If the database migration fails or you encounter issues:

  1. Restore the database backup: cp mcp.db.backup.YYYYMMDD_HHMMSS mcp.db
  2. Restore the .env backup: cp .env.bak .env
  3. Delete the corrupted database: rm mcp.db (if the migration partially completed)
  4. Restore configuration: Import your exported configuration via the Admin UI

Complete Migration Guide

For detailed upgrade instructions, troubleshooting, and rollback procedures, see:


Configuration (.env or env vars)

⚠️ If any required .env variable is missing or invalid, the gateway will fail fast at startup with a validation error via Pydantic.

You can get started by copying the provided .env.example to .env and making the necessary edits to fit your environment.

🔧 Environment Configuration Variables

Basic

Setting | Description | Default | Options
APP_NAME | Gateway / OpenAPI title | MCP Gateway | string
HOST | Bind address for the app | 127.0.0.1 | IPv4/IPv6
PORT | Port the server listens on | 4444 | 1-65535
DATABASE_URL | SQLAlchemy connection URL | sqlite:///./mcp.db | any SQLAlchemy dialect
APP_ROOT_PATH | Subpath prefix for app (e.g. /gateway) | (empty) | string
TEMPLATES_DIR | Path to Jinja2 templates | mcpgateway/templates | path
STATIC_DIR | Path to static files | mcpgateway/static | path
PROTOCOL_VERSION | MCP protocol version supported | 2025-03-26 | string
FORGE_CONTENT_TYPE | Content-Type for outgoing requests to Forge | application/json | application/json, application/x-www-form-urlencoded

💡 Use APP_ROOT_PATH=/foo if reverse-proxying under a subpath like https://host.com/foo/.
🔄 Use FORGE_CONTENT_TYPE=application/x-www-form-urlencoded to send URL-encoded form data instead of JSON.

Authentication

Setting | Description | Default | Options
BASIC_AUTH_USER | Username for Admin UI login and HTTP Basic authentication | admin | string
BASIC_AUTH_PASSWORD | Password for Admin UI login and HTTP Basic authentication | changeme | string
PLATFORM_ADMIN_EMAIL | Email for bootstrap platform admin user (auto-created with admin privileges) | admin@example.com | string
AUTH_REQUIRED | Require authentication for all API routes | true | bool
JWT_ALGORITHM | Algorithm used to sign the JWTs (HS256 is default, HMAC-based) | HS256 | PyJWT algs
JWT_SECRET_KEY | Secret key used to sign JWT tokens for API access | my-test-key | string
JWT_PUBLIC_KEY_PATH | If an asymmetric algorithm is used, a public key is required | (empty) | path to PEM
JWT_PRIVATE_KEY_PATH | If an asymmetric algorithm is used, a private key is required | (empty) | path to PEM
JWT_AUDIENCE | JWT audience claim for token validation | mcpgateway-api | string
JWT_AUDIENCE_VERIFICATION | Disables JWT audience verification (useful for DCR) | true | boolean
JWT_ISSUER | JWT issuer claim for token validation | mcpgateway | string
TOKEN_EXPIRY | Expiry of generated JWTs in minutes | 10080 | int > 0
REQUIRE_TOKEN_EXPIRATION | Require all JWT tokens to have expiration claims | false | bool
AUTH_ENCRYPTION_SECRET | Passphrase used to derive AES key for encrypting tool auth headers | my-test-salt | string
OAUTH_REQUEST_TIMEOUT | OAuth request timeout in seconds | 30 | int > 0
OAUTH_MAX_RETRIES | Maximum retries for OAuth token requests | 3 | int > 0
OAUTH_DEFAULT_TIMEOUT | Default OAuth token timeout in seconds | 3600 | int > 0

🔐 BASIC_AUTH_USER / BASIC_AUTH_PASSWORD are used for:

  • Logging into the web-based Admin UI
  • Accessing APIs via Basic Auth (e.g. curl -u admin:changeme)

🔑 JWT_SECRET_KEY is used to:

  • Sign JSON Web Tokens (Authorization: Bearer <token>)

  • Generate tokens via:

    export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token --username admin@example.com --exp 0 --secret my-test-key)
    echo $MCPGATEWAY_BEARER_TOKEN
  • Tokens allow non-interactive API clients to authenticate securely.

🧪 Set AUTH_REQUIRED=false during development if you want to disable all authentication (e.g. for local testing or open APIs) or for clients that don't support SSE authentication. In production, you should use the SSE-to-stdio mcpgateway-wrapper for tools that don't support authenticated SSE, while still keeping authentication enabled on the gateway.

🔐 AUTH_ENCRYPTION_SECRET is used to encrypt and decrypt tool authentication credentials (auth_value). You must set the same value across environments to decode previously stored encrypted auth values. Recommended: use a long, random string.
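If you switch JWT_ALGORITHM to an asymmetric algorithm such as RS256, the gateway needs a key pair on disk. A sketch using standard openssl commands (file paths are illustrative):

# Generate an RSA key pair (example paths)
openssl genrsa -out certs/jwt-private.pem 4096
openssl rsa -in certs/jwt-private.pem -pubout -out certs/jwt-public.pem

# .env: switch to asymmetric signing
JWT_ALGORITHM=RS256
JWT_PRIVATE_KEY_PATH=certs/jwt-private.pem
JWT_PUBLIC_KEY_PATH=certs/jwt-public.pem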

UI Features

Setting | Description | Default | Options
MCPGATEWAY_UI_ENABLED | Enable the interactive Admin dashboard | false | bool
MCPGATEWAY_ADMIN_API_ENABLED | Enable API endpoints for admin ops | false | bool
MCPGATEWAY_BULK_IMPORT_ENABLED | Enable bulk import endpoint for tools | true | bool
MCPGATEWAY_BULK_IMPORT_MAX_TOOLS | Maximum number of tools per bulk import request | 200 | int
MCPGATEWAY_BULK_IMPORT_RATE_LIMIT | Rate limit for bulk import endpoint (requests per minute) | 10 | int
MCPGATEWAY_UI_TOOL_TEST_TIMEOUT | Tool test timeout in milliseconds for the admin UI | 60000 | int
MCPCONTEXT_UI_ENABLED | Enable ContextForge UI features | true | bool

🖥️ Set both the UI and Admin API to false to disable the management UI and APIs in production.
📥 The bulk import endpoint allows importing up to 200 tools in a single request via /admin/tools/import.
⏱️ Increase MCPGATEWAY_UI_TOOL_TEST_TIMEOUT if your tools make multiple API calls or operate in high-latency environments.
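As a sketch, a bulk import is an authenticated POST of a JSON payload to that endpoint; the exact tool-definition schema is described in the project documentation, and tools.json here is a placeholder file:

curl -s -X POST \
     -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d @tools.json \
     http://localhost:4444/admin/tools/import | jq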

A2A (Agent-to-Agent) Features

Setting | Description | Default | Options
MCPGATEWAY_A2A_ENABLED | Enable A2A agent features | true | bool
MCPGATEWAY_A2A_MAX_AGENTS | Maximum number of A2A agents allowed | 100 | int
MCPGATEWAY_A2A_DEFAULT_TIMEOUT | Default timeout for A2A HTTP requests (seconds) | 30 | int
MCPGATEWAY_A2A_MAX_RETRIES | Maximum retry attempts for A2A calls | 3 | int
MCPGATEWAY_A2A_METRICS_ENABLED | Enable A2A agent metrics collection | true | bool

🤖 A2A Integration: Register external AI agents (OpenAI, Anthropic, custom) and expose them as MCP tools
📊 Metrics: Track agent performance, success rates, and response times
🔒 Security: Encrypted credential storage and configurable authentication
🎛️ Admin UI: Dedicated tab for agent management with test functionality

A2A Configuration Effects:

  • MCPGATEWAY_A2A_ENABLED=false: Completely disables A2A features (API endpoints return 404, admin tab hidden)
  • MCPGATEWAY_A2A_METRICS_ENABLED=false: Disables metrics collection while keeping functionality

ToolOps

ToolOps streamlines the entire workflow by enabling seamless tool enrichment, automated test case generation, and comprehensive tool validation.

SettingDescriptionDefaultOptions
TOOLOPS_ENABLEDEnable ToolOps functionalityfalsebool

LLM Chat MCP Client

The LLM Chat MCP Client allows you to interact with MCP servers using conversational AI from multiple LLM providers. This feature enables natural language interaction with tools, resources, and prompts exposed by MCP servers.

SettingDescriptionDefaultOptions
LLMCHAT_ENABLEDEnable LLM Chat functionalityfalsebool
LLM_PROVIDERLLM provider selectionazure_openaiazure_openai,openai,anthropic,aws_bedrock,ollama

Azure OpenAI Configuration:

SettingDescriptionDefaultOptions
AZURE_OPENAI_ENDPOINTAzure OpenAI endpoint URL(none)string
AZURE_OPENAI_API_KEYAzure OpenAI API key(none)string
AZURE_OPENAI_DEPLOYMENTAzure OpenAI deployment name(none)string
AZURE_OPENAI_API_VERSIONAzure OpenAI API version2024-02-15-previewstring
AZURE_OPENAI_TEMPERATURESampling temperature0.7float (0.0-2.0)
AZURE_OPENAI_MAX_TOKENSMaximum tokens to generate(none)int

OpenAI Configuration:

SettingDescriptionDefaultOptions
OPENAI_API_KEYOpenAI API key(none)string
OPENAI_MODELOpenAI model namegpt-4o-ministring
OPENAI_BASE_URLBase URL for OpenAI-compatible endpoints(none)string
OPENAI_TEMPERATURESampling temperature0.7float (0.0-2.0)
OPENAI_MAX_RETRIESMaximum number of retries2int

Anthropic Claude Configuration:

SettingDescriptionDefaultOptions
ANTHROPIC_API_KEYAnthropic API key(none)string
ANTHROPIC_MODELClaude model nameclaude-3-5-sonnet-20241022string
ANTHROPIC_TEMPERATURESampling temperature0.7float (0.0-1.0)
ANTHROPIC_MAX_TOKENSMaximum tokens to generate4096int
ANTHROPIC_MAX_RETRIESMaximum number of retries2int

AWS Bedrock Configuration:

SettingDescriptionDefaultOptions
AWS_BEDROCK_MODEL_IDBedrock model ID(none)string
AWS_BEDROCK_REGIONAWS region nameus-east-1string
AWS_BEDROCK_TEMPERATURESampling temperature0.7float (0.0-1.0)
AWS_BEDROCK_MAX_TOKENSMaximum tokens to generate4096int
AWS_ACCESS_KEY_IDAWS access key ID (optional)(none)string
AWS_SECRET_ACCESS_KEYAWS secret access key (optional)(none)string
AWS_SESSION_TOKENAWS session token (optional)(none)string

IBM WatsonX AI

SettingDescriptionDefaultOptions
WATSONX_URLwatsonx url(none)string
WATSONX_APIKEYAPI key(none)string
WATSONX_PROJECT_IDProject Id for WatsonX(none)string
WATSONX_MODEL_IDWatsonx model idibm/granite-13b-chat-v2string
WATSONX_TEMPERATUREtemperature (optional)0.7float (0.0-1.0)

Ollama Configuration:

SettingDescriptionDefaultOptions
OLLAMA_BASE_URLOllama base URLhttp://localhost:11434string
OLLAMA_MODELOllama model namellama3.2string
OLLAMA_TEMPERATURESampling temperature0.7float (0.0-2.0)

⚙️ ToolOps: Manage the complete tool workflow: enrich tools, generate test cases automatically, and validate them with ease.
🤖 LLM Chat Integration: Chat with MCP servers using natural language powered by Azure OpenAI, OpenAI, Anthropic Claude, AWS Bedrock, or Ollama
🔧 Flexible Providers: Switch between different LLM providers without changing your MCP integration
🔒 Security: API keys and credentials are securely stored and never exposed in responses
🎛️ Admin UI: Dedicated LLM Chat tab in the admin interface for interactive conversations

ToolOps Configuration Effects:

  • TOOLOPS_ENABLED=false (default): Completely disables ToolOps features (API endpoints return 404, admin tab hidden)
  • TOOLOPS_ENABLED=true: Enables ToolOps functionality in the UI

LLM Chat Configuration Effects:

  • LLMCHAT_ENABLED=false (default): Completely disables LLM Chat features (API endpoints return 404, admin tab hidden)
  • LLMCHAT_ENABLED=true: Enables LLM Chat functionality with the selected provider

Provider Requirements:

  • Azure OpenAI: Requires AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY, and AZURE_OPENAI_DEPLOYMENT
  • OpenAI: Requires OPENAI_API_KEY
  • Anthropic: Requires ANTHROPIC_API_KEY and pip install langchain-anthropic
  • AWS Bedrock: Requires AWS_BEDROCK_MODEL_ID and pip install langchain-aws boto3. Uses the AWS credential chain if explicit credentials are not provided.
  • IBM WatsonX AI: Requires WATSONX_URL, WATSONX_APIKEY, WATSONX_PROJECT_ID, WATSONX_MODEL_ID and pip install langchain-ibm.
  • Ollama: Requires a local Ollama instance running (default: http://localhost:11434)
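For example, a hypothetical .env fragment that enables LLM Chat against OpenAI (the API key is a placeholder; other providers follow the same pattern with their settings above):

LLMCHAT_ENABLED=true
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o-mini
OPENAI_TEMPERATURE=0.7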

Redis Configuration: For maintaining chat sessions in a multi-worker environment

SettingDescriptionDefaultOptions
LLMCHAT_SESSION_TTLSeconds for active_session key TTL300int
LLMCHAT_SESSION_LOCK_TTLSeconds for lock expiry30int
LLMCHAT_SESSION_LOCK_RETRIESHow many times to poll while waiting10int
LLMCHAT_SESSION_LOCK_WAITSeconds between polls0.2float
LLMCHAT_CHAT_HISTORY_TTLSeconds for chat history expiry3600int
LLMCHAT_CHAT_HISTORY_MAX_MESSAGESMaximum message history to store per user50int

Documentation:

LLM Settings (Internal API)

The LLM Settings feature enables MCP Gateway to act as a unified LLM provider with an OpenAI-compatible API. Configure multiple external LLM providers through the Admin UI and expose them through a single proxy endpoint.

SettingDescriptionDefaultOptions
LLM_API_PREFIXAPI prefix for internal LLM endpoints/v1string
LLM_REQUEST_TIMEOUTRequest timeout for LLM API calls (seconds)120int
LLM_STREAMING_ENABLEDEnable streaming responsestruebool
LLM_HEALTH_CHECK_INTERVALProvider health check interval (seconds)300int

Gateway Provider Settings (for LLM Chat with provider=gateway):

SettingDescriptionDefaultOptions
GATEWAY_MODELDefault model to usegpt-4ostring
GATEWAY_BASE_URLBase URL for gateway LLM API(auto)string
GATEWAY_TEMPERATURESampling temperature0.7float

Features:

  • OpenAI-Compatible API: Exposes /v1/chat/completions and /v1/models endpoints compatible with any OpenAI client
  • Multi-Provider Support: Configure OpenAI, Azure OpenAI, Anthropic, Ollama, Google, Mistral, Cohere, AWS Bedrock, Groq, and more
  • Admin UI Management: Add, edit, enable/disable, and test providers through the Admin UI (LLM Settings tab)
  • Model Discovery: Fetch available models from providers and sync them to the database
  • Health Monitoring: Automatic health checks with status indicators
  • Unified Interface: Route requests to any configured provider through a single API

API Endpoints:

# List available models
curl -H "Authorization: Bearer $TOKEN" http://localhost:4444/v1/models

# Chat completion
curl -X POST -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}' \
  http://localhost:4444/v1/chat/completions

🔧 Configuration: Providers are managed through the Admin UI under "LLM Settings > Providers"
📋 Models: View and manage models under "LLM Settings > Models"
⚡ Testing: Test models directly from the Admin UI with the "Test" feature

Email-Based Authentication & User Management

Setting | Description | Default | Options
EMAIL_AUTH_ENABLED | Enable email-based authentication system | true | bool
PLATFORM_ADMIN_EMAIL | Email for bootstrap platform admin user | admin@example.com | string
PLATFORM_ADMIN_PASSWORD | Password for bootstrap platform admin user | changeme | string
PLATFORM_ADMIN_FULL_NAME | Full name for bootstrap platform admin user | Platform Administrator | string
ARGON2ID_TIME_COST | Argon2id time cost (iterations) | 3 | int > 0
ARGON2ID_MEMORY_COST | Argon2id memory cost in KiB | 65536 | int > 0
ARGON2ID_PARALLELISM | Argon2id parallelism (threads) | 1 | int > 0
PASSWORD_MIN_LENGTH | Minimum password length | 8 | int > 0
PASSWORD_REQUIRE_UPPERCASE | Require uppercase letters in passwords | false | bool
PASSWORD_REQUIRE_LOWERCASE | Require lowercase letters in passwords | false | bool
PASSWORD_REQUIRE_NUMBERS | Require numbers in passwords | false | bool
PASSWORD_REQUIRE_SPECIAL | Require special characters in passwords | false | bool
MAX_FAILED_LOGIN_ATTEMPTS | Maximum failed login attempts before lockout | 5 | int > 0
ACCOUNT_LOCKOUT_DURATION_MINUTES | Account lockout duration in minutes | 30 | int > 0
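For example, a stricter (illustrative) password policy than the defaults in the table above could look like:

EMAIL_AUTH_ENABLED=true
PASSWORD_MIN_LENGTH=12
PASSWORD_REQUIRE_UPPERCASE=true
PASSWORD_REQUIRE_LOWERCASE=true
PASSWORD_REQUIRE_NUMBERS=true
PASSWORD_REQUIRE_SPECIAL=true
MAX_FAILED_LOGIN_ATTEMPTS=5
ACCOUNT_LOCKOUT_DURATION_MINUTES=30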

MCP Client Authentication

Setting | Description | Default | Options
MCP_CLIENT_AUTH_ENABLED | Enable JWT authentication for MCP client operations | true | bool
TRUST_PROXY_AUTH | Trust proxy authentication headers | false | bool
PROXY_USER_HEADER | Header containing authenticated username from proxy | X-Authenticated-User | string

🔐 MCP Client Auth: When MCP_CLIENT_AUTH_ENABLED=false, you must set TRUST_PROXY_AUTH=true if using a trusted authentication proxy. This is a security-sensitive setting.
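For example, a sketch of delegating MCP client authentication to a trusted reverse proxy that forwards the authenticated username in a header (only do this when the gateway is reachable exclusively through that proxy):

MCP_CLIENT_AUTH_ENABLED=false
TRUST_PROXY_AUTH=true
PROXY_USER_HEADER=X-Authenticated-User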

SSO (Single Sign-On) Configuration

SettingDescriptionDefaultOptions
SSO_ENABLEDMaster switch for Single Sign-On authenticationfalsebool
SSO_AUTO_CREATE_USERSAutomatically create users from SSO providerstruebool
SSO_TRUSTED_DOMAINSTrusted email domains (JSON array)[]JSON array
SSO_PRESERVE_ADMIN_AUTHPreserve local admin authentication when SSO enabledtruebool
SSO_REQUIRE_ADMIN_APPROVALRequire admin approval for new SSO registrationsfalsebool
SSO_ISSUERSOptional JSON array of issuer URLs for SSO providers(none)JSON array

GitHub OAuth:

SettingDescriptionDefaultOptions
SSO_GITHUB_ENABLEDEnable GitHub OAuth authenticationfalsebool
SSO_GITHUB_CLIENT_IDGitHub OAuth client ID(none)string
SSO_GITHUB_CLIENT_SECRETGitHub OAuth client secret(none)string
SSO_GITHUB_ADMIN_ORGSGitHub orgs granting admin privileges (JSON)[]JSON array
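For example, a hypothetical .env fragment enabling GitHub SSO with the settings above (client ID and secret come from your GitHub OAuth app; values are placeholders):

SSO_ENABLED=true
SSO_AUTO_CREATE_USERS=true
SSO_TRUSTED_DOMAINS=["example.com"]
SSO_GITHUB_ENABLED=true
SSO_GITHUB_CLIENT_ID=your-github-client-id
SSO_GITHUB_CLIENT_SECRET=your-github-client-secret
SSO_GITHUB_ADMIN_ORGS=["your-org"]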

Google OAuth:

SettingDescriptionDefaultOptions
SSO_GOOGLE_ENABLEDEnable Google OAuth authenticationfalsebool
SSO_GOOGLE_CLIENT_IDGoogle OAuth client ID(none)string
SSO_GOOGLE_CLIENT_SECRETGoogle OAuth client secret(none)string
SSO_GOOGLE_ADMIN_DOMAINSGoogle admin domains (JSON)[]JSON array

IBM Security Verify OIDC:

SettingDescriptionDefaultOptions
SSO_IBM_VERIFY_ENABLEDEnable IBM Security Verify OIDC authenticationfalsebool
SSO_IBM_VERIFY_CLIENT_IDIBM Security Verify client ID(none)string
SSO_IBM_VERIFY_CLIENT_SECRETIBM Security Verify client secret(none)string
SSO_IBM_VERIFY_ISSUERIBM Security Verify OIDC issuer URL(none)string

Keycloak OIDC:

SettingDescriptionDefaultOptions
SSO_KEYCLOAK_ENABLEDEnable Keycloak OIDC authenticationfalsebool
SSO_KEYCLOAK_BASE_URLKeycloak base URL(none)string
SSO_KEYCLOAK_REALMKeycloak realm namemasterstring
SSO_KEYCLOAK_CLIENT_IDKeycloak client ID(none)string
SSO_KEYCLOAK_CLIENT_SECRETKeycloak client secret(none)string
SSO_KEYCLOAK_MAP_REALM_ROLESMap Keycloak realm roles to gateway teamstruebool
SSO_KEYCLOAK_MAP_CLIENT_ROLESMap Keycloak client roles to gateway RBACfalsebool
SSO_KEYCLOAK_USERNAME_CLAIMJWT claim for usernamepreferred_usernamestring
SSO_KEYCLOAK_EMAIL_CLAIMJWT claim for emailemailstring
SSO_KEYCLOAK_GROUPS_CLAIMJWT claim for groups/rolesgroupsstring

Microsoft Entra ID OIDC:

SettingDescriptionDefaultOptions
SSO_ENTRA_ENABLEDEnable Microsoft Entra ID OIDC authenticationfalsebool
SSO_ENTRA_CLIENT_IDMicrosoft Entra ID client ID(none)string
SSO_ENTRA_CLIENT_SECRETMicrosoft Entra ID client secret(none)string
SSO_ENTRA_TENANT_IDMicrosoft Entra ID tenant ID(none)string

Generic OIDC Provider (Auth0, Authentik, etc.):

SettingDescriptionDefaultOptions
SSO_GENERIC_ENABLEDEnable generic OIDC provider authenticationfalsebool
SSO_GENERIC_PROVIDER_IDProvider ID (e.g., keycloak, auth0, authentik)(none)string
SSO_GENERIC_DISPLAY_NAMEDisplay name shown on login page(none)string
SSO_GENERIC_CLIENT_IDGeneric OIDC client ID(none)string
SSO_GENERIC_CLIENT_SECRETGeneric OIDC client secret(none)string
SSO_GENERIC_AUTHORIZATION_URLAuthorization endpoint URL(none)string
SSO_GENERIC_TOKEN_URLToken endpoint URL(none)string
SSO_GENERIC_USERINFO_URLUserinfo endpoint URL(none)string
SSO_GENERIC_ISSUEROIDC issuer URL(none)string
SSO_GENERIC_SCOPEOAuth scopes (space-separated)openid profile emailstring

Okta OIDC:

SettingDescriptionDefaultOptions
SSO_OKTA_ENABLEDEnable Okta OIDC authenticationfalsebool
SSO_OKTA_CLIENT_IDOkta client ID(none)string
SSO_OKTA_CLIENT_SECRETOkta client secret(none)string
SSO_OKTA_ISSUEROkta issuer URL(none)string

SSO Admin Assignment:

SettingDescriptionDefaultOptions
SSO_AUTO_ADMIN_DOMAINSEmail domains that automatically get admin privileges[]JSON array

OAuth 2.0 Dynamic Client Registration (DCR) & PKCE

ContextForge implements OAuth 2.0 Dynamic Client Registration (RFC 7591) and PKCE (RFC 7636) for seamless integration with OAuth-protected MCP servers and upstream API gateways like HyperMCP.

Key Features:

  • ✅ Automatic client registration with Authorization Servers (no manual credential configuration)
  • ✅ Authorization Server metadata discovery (RFC 8414)
  • ✅ PKCE (Proof Key for Code Exchange) enabled for all Authorization Code flows
  • ✅ Support for public clients (PKCE-only, no client secret)
  • ✅ Encrypted credential storage with Fernet encryption
  • ✅ Configurable issuer allowlist for security
SettingDescriptionDefaultOptions
DCR_ENABLEDEnable Dynamic Client Registration (RFC 7591)truebool
DCR_AUTO_REGISTER_ON_MISSING_CREDENTIALSAuto-register when gateway has issuer but no client_idtruebool
DCR_DEFAULT_SCOPESDefault OAuth scopes to request during DCR["mcp:read"]JSON array
DCR_ALLOWED_ISSUERSAllowlist of trusted issuer URLs (empty = allow any)[]JSON array
DCR_TOKEN_ENDPOINT_AUTH_METHODToken endpoint auth methodclient_secret_basicclient_secret_basic,client_secret_post,none
DCR_METADATA_CACHE_TTLAS metadata cache TTL in seconds3600int
DCR_CLIENT_NAME_TEMPLATETemplate for client_name in DCR requestsMCP Gateway ({gateway_name})string
OAUTH_DISCOVERY_ENABLEDEnable AS metadata discovery (RFC 8414)truebool
OAUTH_PREFERRED_CODE_CHALLENGE_METHODPKCE code challenge methodS256S256,plain
JWT_AUDIENCE_VERIFICATIONJWT audience verification (disable for DCR)truebool
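For example, an illustrative .env fragment that keeps DCR enabled but restricts it to a single trusted Authorization Server (the issuer URL is a placeholder):

DCR_ENABLED=true
DCR_AUTO_REGISTER_ON_MISSING_CREDENTIALS=true
DCR_ALLOWED_ISSUERS=["https://auth.example.com"]
DCR_DEFAULT_SCOPES=["mcp:read"]
DCR_TOKEN_ENDPOINT_AUTH_METHOD=client_secret_basic
OAUTH_PREFERRED_CODE_CHALLENGE_METHOD=S256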

Documentation:

Personal Teams Configuration

SettingDescriptionDefaultOptions
AUTO_CREATE_PERSONAL_TEAMSEnable automatic personal team creation for new userstruebool
PERSONAL_TEAM_PREFIXPersonal team naming prefixpersonalstring
MAX_TEAMS_PER_USERMaximum number of teams a user can belong to50int > 0
MAX_MEMBERS_PER_TEAMMaximum number of members per team100int > 0
INVITATION_EXPIRY_DAYSNumber of days before team invitations expire7int > 0
REQUIRE_EMAIL_VERIFICATION_FOR_INVITESRequire email verification for team invitationstruebool

MCP Server Catalog

🆕New in v0.7.0: The MCP Server Catalog allows you to define a catalog of pre-configured MCP servers in a YAML file for easy discovery and management via the Admin UI.

SettingDescriptionDefaultOptions
MCPGATEWAY_CATALOG_ENABLEDEnable MCP server catalog featuretruebool
MCPGATEWAY_CATALOG_FILEPath to catalog configuration filemcp-catalog.ymlstring
MCPGATEWAY_CATALOG_AUTO_HEALTH_CHECKAutomatically health check catalog serverstruebool
MCPGATEWAY_CATALOG_CACHE_TTLCatalog cache TTL in seconds3600int > 0
MCPGATEWAY_CATALOG_PAGE_SIZENumber of catalog servers per page12int > 0

Key Features:

  • 🔄 Refresh Button - Manually refresh catalog without page reload
  • 🔍 Debounced Search - Optimized search with 300ms debounce
  • 📝 Custom Server Names - Specify custom names when registering
  • 🔌 Transport Detection - Auto-detect SSE, WebSocket, or HTTP transports
  • 🔐 OAuth Support - Register OAuth servers and configure later
  • ⚡ Better Error Messages - User-friendly errors for common issues

Documentation:

Security

Setting | Description | Default | Options
SKIP_SSL_VERIFY | Skip upstream TLS verification | false | bool
ENVIRONMENT | Deployment environment (affects security defaults) | development | development/production
APP_DOMAIN | Domain for production CORS origins | localhost | string
ALLOWED_ORIGINS | CORS allow-list | Auto-configured by environment | JSON array
CORS_ENABLED | Enable CORS | true | bool
CORS_ALLOW_CREDENTIALS | Allow credentials in CORS | true | bool
SECURE_COOKIES | Force secure cookie flags | true | bool
COOKIE_SAMESITE | Cookie SameSite attribute | lax | strict/lax/none
SECURITY_HEADERS_ENABLED | Enable security headers middleware | true | bool
X_FRAME_OPTIONS | X-Frame-Options header value | DENY | DENY/SAMEORIGIN/""/null
X_CONTENT_TYPE_OPTIONS_ENABLED | Enable X-Content-Type-Options: nosniff header | true | bool
X_XSS_PROTECTION_ENABLED | Enable X-XSS-Protection header | true | bool
X_DOWNLOAD_OPTIONS_ENABLED | Enable X-Download-Options: noopen header | true | bool
HSTS_ENABLED | Enable HSTS header | true | bool
HSTS_MAX_AGE | HSTS max age in seconds | 31536000 | int
HSTS_INCLUDE_SUBDOMAINS | Include subdomains in HSTS header | true | bool
REMOVE_SERVER_HEADERS | Remove server identification | true | bool
DOCS_ALLOW_BASIC_AUTH | Allow Basic Auth for docs (in addition to JWT) | false | bool
MIN_SECRET_LENGTH | Minimum length for secret keys (JWT, encryption) | 32 | int
MIN_PASSWORD_LENGTH | Minimum length for passwords | 12 | int
REQUIRE_STRONG_SECRETS | Enforce strong secrets (fail startup on weak secrets) | false | bool

CORS Configuration: When ENVIRONMENT=development, CORS origins are automatically configured for common development ports (3000, 8080, and the gateway port). In production, origins are constructed from APP_DOMAIN (e.g., https://yourdomain.com, https://app.yourdomain.com). You can override this by explicitly setting ALLOWED_ORIGINS.
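For example, a production deployment served at https://yourdomain.com could rely on the automatic origins or pin them explicitly (a sketch; substitute your own domain):

ENVIRONMENT=production
APP_DOMAIN=yourdomain.com
# or override explicitly (must be unquoted, valid JSON - see the note below)
ALLOWED_ORIGINS=["https://yourdomain.com", "https://app.yourdomain.com"]
SECURE_COOKIES=true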

Security Headers: The gateway automatically adds configurable security headers to all responses including CSP, X-Frame-Options, X-Content-Type-Options, X-Download-Options, and HSTS (on HTTPS). All headers can be individually enabled/disabled. Sensitive server headers are removed.

Security Validation: Set REQUIRE_STRONG_SECRETS=true to enforce minimum lengths for JWT secrets and passwords at startup. This helps prevent weak credentials in production. The default is false for backward compatibility.

iframe Embedding: The gateway controls iframe embedding through both the X-Frame-Options header and the CSP frame-ancestors directive (both are automatically synced). Options:

  • X_FRAME_OPTIONS=DENY (default): Blocks all iframe embedding
  • X_FRAME_OPTIONS=SAMEORIGIN: Allows embedding from the same domain only
  • X_FRAME_OPTIONS="ALLOW-ALL": Allows embedding from all sources (sets frame-ancestors * file: http: https:)
  • X_FRAME_OPTIONS=null or none: Completely removes iframe restrictions (no headers sent)

Modern browsers prioritize CSP frame-ancestors over the legacy X-Frame-Options header. Both are kept in sync automatically.

Cookie Security: Authentication cookies are automatically configured with HttpOnly, Secure (in production), and SameSite attributes for CSRF protection.

Note: do not quote the ALLOWED_ORIGINS value; it needs to be valid JSON, such as: ALLOWED_ORIGINS=["http://localhost", "http://localhost:4444"]

Documentation endpoints (/docs, /redoc, /openapi.json) are always protected by authentication. By default, they require Bearer token authentication. Setting DOCS_ALLOW_BASIC_AUTH=true enables HTTP Basic Authentication as an additional method, using the same credentials as BASIC_AUTH_USER and BASIC_AUTH_PASSWORD.

Ed25519 Certificate Signing

MCP Gateway supports Ed25519 digital signatures for certificate validation and integrity verification. This cryptographic signing mechanism ensures that CA certificates used by the gateway are authentic and haven't been tampered with.

Setting | Description | Default | Options
ENABLE_ED25519_SIGNING | Enable Ed25519 signing for certificates | false | bool
ED25519_PRIVATE_KEY | Ed25519 private key for signing (PEM format) | (none) | string
PREV_ED25519_PRIVATE_KEY | Previous Ed25519 private key for key rotation | (none) | string

How It Works:

  1. Certificate Signing - When ENABLE_ED25519_SIGNING=true, the gateway signs the CA certificate of each MCP server/gateway using the Ed25519 private key.

  2. Certificate Validation - Before using a CA certificate for subsequent calls, the gateway validates its signature to ensure authenticity and integrity.

  3. Disabled Mode - When ENABLE_ED25519_SIGNING=false, certificates are neither signed nor validated (default behavior).

Key Generation:

# Generate a new Ed25519 key pair
python mcpgateway/utils/generate_keys.py

# Output will show:
# - Private key (set this to ED25519_PRIVATE_KEY)

Key Rotation:

To rotate keys without invalidating existing signed certificates:

  1. Move the current ED25519_PRIVATE_KEY value to PREV_ED25519_PRIVATE_KEY
  2. Generate a new key pair using the command above
  3. Set the new private key as ED25519_PRIVATE_KEY
  4. The gateway will automatically re-sign valid certificates at the point of key change

Example Configuration:

# Enable Ed25519 signing
ENABLE_ED25519_SIGNING=true

# Current signing key (PEM format)
ED25519_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIJ5pW... (your key here)
-----END PRIVATE KEY-----"

# Previous key for rotation (optional)
PREV_ED25519_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIOld... (old key here)
-----END PRIVATE KEY-----"

🔐Security Best Practices:

  • Store private keys securely (use secrets management tools like Vault, AWS Secrets Manager, etc.)
  • Rotate keys periodically (recommended: every 90-180 days)
  • Never commit private keys to version control
  • Use environment variables or encrypted config files

🔑Public Key Derivation:

  • Public keys are automatically derived from private keys
  • No need to configure public keys separately
  • Both ED25519_PUBLIC_KEY and PREV_ED25519_PUBLIC_KEY are computed at startup

Performance:

  • Ed25519 signing is extremely fast (~64 microseconds per signature)
  • Minimal impact on gateway performance
  • Recommended for production deployments requiring certificate integrity

Response Compression

MCP Gateway includes automatic response compression middleware that reduces bandwidth usage by 30-70% for text-based responses (JSON, HTML, CSS, JS). Compression is negotiated automatically based on the client's Accept-Encoding header with algorithm priority: Brotli (best compression) > Zstd (fastest) > GZip (universal fallback).

Setting | Description | Default | Options
--------|-------------|---------|--------
COMPRESSION_ENABLED | Enable response compression | true | bool
COMPRESSION_MINIMUM_SIZE | Minimum response size in bytes to compress | 500 | int (0=compress all)
COMPRESSION_GZIP_LEVEL | GZip compression level (1=fast, 9=best) | 6 | int (1-9)
COMPRESSION_BROTLI_QUALITY | Brotli quality (0-3=fast, 4-9=balanced, 10-11=max) | 4 | int (0-11)
COMPRESSION_ZSTD_LEVEL | Zstd level (1-3=fast, 4-9=balanced, 10+=slow) | 3 | int (1-22)

Compression Behavior:

  • Automatically negotiates the algorithm based on the client's Accept-Encoding header
  • Only compresses responses larger than COMPRESSION_MINIMUM_SIZE bytes (small responses are not worth the compression overhead)
  • Adds a Vary: Accept-Encoding header for proper cache behavior
  • No client changes required (browsers/clients handle decompression automatically)
  • Typical compression ratios: JSON responses 40-60%, HTML responses 50-70%

Performance Impact:

  • CPU overhead: <5% (balanced settings)
  • Bandwidth reduction: 30-70% for text responses
  • Latency impact: <10ms for typical responses

Testing Compression:

# Start server
make dev

# Test Brotli (best compression)
curl -H "Accept-Encoding: br" http://localhost:8000/openapi.json -v | grep -i "content-encoding"

# Test GZip (universal fallback)
curl -H "Accept-Encoding: gzip" http://localhost:8000/openapi.json -v | grep -i "content-encoding"

# Test Zstd (fastest)
curl -H "Accept-Encoding: zstd" http://localhost:8000/openapi.json -v | grep -i "content-encoding"

Tuning for Production:

# High-traffic (optimize for speed)
COMPRESSION_GZIP_LEVEL=4
COMPRESSION_BROTLI_QUALITY=3
COMPRESSION_ZSTD_LEVEL=1

# Bandwidth-constrained (optimize for size)
COMPRESSION_GZIP_LEVEL=9
COMPRESSION_BROTLI_QUALITY=11
COMPRESSION_ZSTD_LEVEL=9

Note: See the Scaling Guide for compression performance optimization at scale.

Logging

MCP Gateway provides flexible logging with stdout/stderr output by default and optional file-based logging. When file logging is enabled, it provides JSON formatting for structured logs and text formatting for console output.

Setting | Description | Default | Options
--------|-------------|---------|--------
LOG_LEVEL | Minimum log level | INFO | DEBUG...CRITICAL
LOG_FORMAT | Console log format | json | json, text
LOG_TO_FILE | Enable file logging | false | true, false
LOG_FILE | Log filename (when enabled) | null | mcpgateway.log
LOG_FOLDER | Directory for log files | null | logs, /var/log/gateway
LOG_FILEMODE | File write mode | a+ | a+ (append), w (overwrite)
LOG_ROTATION_ENABLED | Enable log file rotation | false | true, false
LOG_MAX_SIZE_MB | Max file size before rotation (MB) | 1 | Any positive integer
LOG_BACKUP_COUNT | Number of backup files to keep | 5 | Any non-negative integer
LOG_BUFFER_SIZE_MB | Size of in-memory log buffer (MB) | 1.0 | float > 0

Logging Behavior:

  • Default: Logs only to stdout/stderr with a human-readable text format
  • File Logging: When LOG_TO_FILE=true, logs to both the file (JSON format) and the console (text format)
  • Log Rotation: When LOG_ROTATION_ENABLED=true, files rotate at LOG_MAX_SIZE_MB with LOG_BACKUP_COUNT backup files (e.g., .log.1, .log.2)
  • Directory Creation: The log folder is automatically created if it doesn't exist
  • Centralized Service: All modules use the unified LoggingService for consistent formatting

Example Configurations:

# Default: stdout/stderr only (recommended for containers)
LOG_LEVEL=INFO
# No additional config needed - logs to stdout/stderr

# Optional: Enable file logging (no rotation)
LOG_TO_FILE=true
LOG_FOLDER=/var/log/mcpgateway
LOG_FILE=gateway.log
LOG_FILEMODE=a+

# Optional: Enable file logging with rotation
LOG_TO_FILE=true
LOG_ROTATION_ENABLED=true
LOG_MAX_SIZE_MB=10
LOG_BACKUP_COUNT=3
LOG_FOLDER=/var/log/mcpgateway
LOG_FILE=gateway.log

Default Behavior:

  • Logs are written only to stdout/stderr in human-readable text format
  • File logging is disabled by default (no files created)
  • Set LOG_TO_FILE=true to enable optional file logging with JSON format

Observability (OpenTelemetry)

MCP Gateway includes vendor-agnostic OpenTelemetry support for distributed tracing. It works with Phoenix, Jaeger, Zipkin, Tempo, DataDog, New Relic, and any OTLP-compatible backend.

Setting | Description | Default | Options
--------|-------------|---------|--------
OTEL_ENABLE_OBSERVABILITY | Master switch for observability | true | true, false
OTEL_SERVICE_NAME | Service identifier in traces | mcp-gateway | string
OTEL_SERVICE_VERSION | Service version in traces | 0.9.0 | string
OTEL_DEPLOYMENT_ENVIRONMENT | Environment tag (dev/staging/prod) | development | string
OTEL_TRACES_EXPORTER | Trace exporter backend | otlp | otlp, jaeger, zipkin, console, none
OTEL_RESOURCE_ATTRIBUTES | Custom resource attributes | (empty) | key=value,key2=value2

OTLP Configuration (for Phoenix, Tempo, DataDog, etc.):

Setting | Description | Default | Options
--------|-------------|---------|--------
OTEL_EXPORTER_OTLP_ENDPOINT | OTLP collector endpoint | (none) | http://localhost:4317
OTEL_EXPORTER_OTLP_PROTOCOL | OTLP protocol | grpc | grpc, http/protobuf
OTEL_EXPORTER_OTLP_HEADERS | Authentication headers | (empty) | api-key=secret,x-auth=token
OTEL_EXPORTER_OTLP_INSECURE | Skip TLS verification | true | true, false

Alternative Backends (optional):

Setting | Description | Default | Options
--------|-------------|---------|--------
OTEL_EXPORTER_JAEGER_ENDPOINT | Jaeger collector endpoint | http://localhost:14268/api/traces | URL
OTEL_EXPORTER_ZIPKIN_ENDPOINT | Zipkin collector endpoint | http://localhost:9411/api/v2/spans | URL

Performance Tuning:

Setting | Description | Default | Options
--------|-------------|---------|--------
OTEL_TRACES_SAMPLER | Sampling strategy | parentbased_traceidratio | always_on, always_off, traceidratio
OTEL_TRACES_SAMPLER_ARG | Sample rate (0.0-1.0) | 0.1 | float (0.1 = 10% sampling)
OTEL_BSP_MAX_QUEUE_SIZE | Max queued spans | 2048 | int > 0
OTEL_BSP_MAX_EXPORT_BATCH_SIZE | Max batch size for export | 512 | int > 0
OTEL_BSP_SCHEDULE_DELAY | Export interval (ms) | 5000 | int > 0

Quick Start with Phoenix:

# Start Phoenix for LLM observability
docker run -p 6006:6006 -p 4317:4317 arizephoenix/phoenix:latest

# Configure gateway
export OTEL_ENABLE_OBSERVABILITY=true
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

# Run gateway - traces automatically sent to Phoenix
mcpgateway

🔍What Gets Traced: Tool invocations, prompt rendering, resource fetching, gateway federation, health checks, plugin execution (if enabled)

🚀Zero Overhead: When OTEL_ENABLE_OBSERVABILITY=false, all tracing is disabled with no performance impact

📊View Traces: Phoenix UI at http://localhost:6006, Jaeger at http://localhost:16686, or your configured backend

Internal Observability & Tracing

The gateway includes built-in observability features for tracking HTTP requests, spans, and traces independent of OpenTelemetry. This provides database-backed trace storage and analysis directly in the Admin UI.

Setting | Description | Default | Options
--------|-------------|---------|--------
OBSERVABILITY_ENABLED | Enable internal observability tracing and metrics | false | bool
OBSERVABILITY_TRACE_HTTP_REQUESTS | Automatically trace HTTP requests | true | bool
OBSERVABILITY_TRACE_RETENTION_DAYS | Number of days to retain trace data | 7 | int (≥ 1)
OBSERVABILITY_MAX_TRACES | Maximum number of traces to retain | 100000 | int (≥ 1000)
OBSERVABILITY_SAMPLE_RATE | Trace sampling rate (0.0-1.0) | 1.0 | float (0.0-1.0)
OBSERVABILITY_EXCLUDE_PATHS | Paths to exclude from tracing (regex patterns) | /health,/healthz,/ready,/metrics,/static/.* | comma-separated
OBSERVABILITY_METRICS_ENABLED | Enable metrics collection | true | bool
OBSERVABILITY_EVENTS_ENABLED | Enable event logging within spans | true | bool

Key Features:

  • 📊Database-backed storage: Traces stored in SQLite/PostgreSQL for persistence
  • 🔍Admin UI integration: View traces, spans, and metrics in the diagnostics tab
  • 🎯Sampling control: Configure sampling rate to reduce overhead in high-traffic scenarios
  • 🕐Automatic cleanup: Old traces automatically purged based on retention settings
  • 🚫Path filtering: Exclude health checks and static resources from tracing

Configuration Effects:

  • OBSERVABILITY_ENABLED=false: Completely disables internal observability (no database writes, zero overhead)
  • OBSERVABILITY_SAMPLE_RATE=0.1: Traces 10% of requests (useful for high-volume production)
  • OBSERVABILITY_EXCLUDE_PATHS=/health,/metrics: Prevents noisy endpoints from creating traces
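As an illustrative .env sketch for a high-traffic deployment (the values shown are examples, not recommendations):

OBSERVABILITY_ENABLED=true
OBSERVABILITY_TRACE_HTTP_REQUESTS=true
OBSERVABILITY_SAMPLE_RATE=0.1                # keep roughly 10% of requests
OBSERVABILITY_TRACE_RETENTION_DAYS=7
OBSERVABILITY_MAX_TRACES=100000
OBSERVABILITY_EXCLUDE_PATHS=/health,/healthz,/ready,/metrics,/static/.*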

📝Note: This is separate from OpenTelemetry. You can use both systems simultaneously - internal observability for Admin UI visibility and OpenTelemetry for external systems like Phoenix/Jaeger.

🎛️Admin UI Access: When enabled, traces appear in the Admin → Diagnostics → Observability tab with filtering, search, and export capabilities

Prometheus Metrics

The gateway exposes Prometheus-compatible metrics at /metrics/prometheus for monitoring and alerting.

Setting | Description | Default | Options
--------|-------------|---------|--------
ENABLE_METRICS | Enable Prometheus metrics instrumentation | true | bool
METRICS_EXCLUDED_HANDLERS | Regex patterns for paths to exclude from metrics | (empty) | comma-separated
METRICS_NAMESPACE | Prometheus metrics namespace (prefix) | default | string
METRICS_SUBSYSTEM | Prometheus metrics subsystem (secondary prefix) | (empty) | string
METRICS_CUSTOM_LABELS | Static custom labels for app_info gauge | (empty) | key=value,...

Key Features:

  • 📊Standard metrics: HTTP request duration, response codes, active requests
  • 🏷️Custom labels: Add static labels (environment, region, team) for filtering in Prometheus/Grafana
  • 🚫Path exclusions: Prevent high-cardinality issues by excluding dynamic paths
  • 📈Namespace isolation: Group metrics by application or organization

Configuration Examples:

# Production deployment with custom labels
ENABLE_METRICS=true
METRICS_NAMESPACE=mycompany
METRICS_SUBSYSTEM=gateway
METRICS_CUSTOM_LABELS=environment=production,region=us-east-1,team=platform

# Exclude high-volume endpoints from metrics
METRICS_EXCLUDED_HANDLERS=/servers/.*/sse,/static/.*,.*health.*

# Disable metrics for development
ENABLE_METRICS=false

Metric Names:

  • With namespace + subsystem: mycompany_gateway_http_requests_total
  • Default (no namespace/subsystem): default_http_requests_total

⚠️High-Cardinality Warning: Never use high-cardinality values (user IDs, request IDs, timestamps) in METRICS_CUSTOM_LABELS. Only use low-cardinality static values (environment, region, cluster).

📊Prometheus Endpoint: Access metrics at GET /metrics/prometheus (requires authentication if AUTH_REQUIRED=true)

🎯Grafana Integration: Import metrics into Grafana dashboards using the configured namespace as a filter
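As a quick smoke test before wiring up a Prometheus scraper (assuming the gateway listens on :4444 with authentication enabled), the endpoint can be fetched manually:

curl -s -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/metrics/prometheus | head -n 20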

Transport

Setting | Description | Default | Options
--------|-------------|---------|--------
TRANSPORT_TYPE | Enabled transports | all | http, ws, sse, stdio, all
WEBSOCKET_PING_INTERVAL | WebSocket ping (secs) | 30 | int > 0
SSE_RETRY_TIMEOUT | SSE retry timeout (ms) | 5000 | int > 0
SSE_KEEPALIVE_ENABLED | Enable SSE keepalive events | true | bool
SSE_KEEPALIVE_INTERVAL | SSE keepalive interval (secs) | 30 | int > 0
USE_STATEFUL_SESSIONS | Stateful sessions (streamable HTTP) | false | bool
JSON_RESPONSE_ENABLED | JSON/SSE streams (streamable HTTP) | true | bool

💡 SSE Keepalive Events: The gateway sends periodic keepalive events to prevent connection timeouts with proxies and load balancers. Disable with SSE_KEEPALIVE_ENABLED=false if your client doesn't handle unknown event types. Common intervals: 30s (default), 60s (AWS ALB), 240s (Azure).
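For instance, a deployment behind AWS ALB might use a sketch like the following (values follow the interval guidance above and should be adjusted to your infrastructure):

TRANSPORT_TYPE=all
SSE_KEEPALIVE_ENABLED=true
SSE_KEEPALIVE_INTERVAL=60      # matches typical AWS ALB idle timeouts
SSE_RETRY_TIMEOUT=5000
WEBSOCKET_PING_INTERVAL=30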

Federation

Setting | Description | Default | Options
--------|-------------|---------|--------
FEDERATION_ENABLED | Enable federation | true | bool
FEDERATION_DISCOVERY | Auto-discover peers | false | bool
FEDERATION_PEERS | Comma-sep peer URLs | [] | JSON array
FEDERATION_TIMEOUT | Gateway timeout (secs) | 30 | int > 0
FEDERATION_SYNC_INTERVAL | Sync interval (secs) | 300 | int > 0
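A minimal federation sketch might look like this (the peer URL is a placeholder; per the table above, FEDERATION_PEERS is a JSON array):

FEDERATION_ENABLED=true
FEDERATION_DISCOVERY=false
FEDERATION_PEERS=["https://peer-gateway.example.com"]
FEDERATION_TIMEOUT=30
FEDERATION_SYNC_INTERVAL=300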

Resources

Setting | Description | Default | Options
--------|-------------|---------|--------
RESOURCE_CACHE_SIZE | LRU cache size | 1000 | int > 0
RESOURCE_CACHE_TTL | Cache TTL (seconds) | 3600 | int > 0
MAX_RESOURCE_SIZE | Max resource bytes | 10485760 | int > 0
ALLOWED_MIME_TYPES | Acceptable MIME types | see code | JSON array

Tools

Setting | Description | Default | Options
--------|-------------|---------|--------
TOOL_TIMEOUT | Tool invocation timeout (secs) | 60 | int > 0
MAX_TOOL_RETRIES | Max retry attempts | 3 | int ≥ 0
TOOL_RATE_LIMIT | Tool calls per minute | 100 | int > 0
TOOL_CONCURRENT_LIMIT | Concurrent tool invocations | 10 | int > 0
GATEWAY_TOOL_NAME_SEPARATOR | Tool name separator for gateway routing | - | -, --, _, .

Prompts

Setting | Description | Default | Options
--------|-------------|---------|--------
PROMPT_CACHE_SIZE | Cached prompt templates | 100 | int > 0
MAX_PROMPT_SIZE | Max prompt template size (bytes) | 102400 | int > 0
PROMPT_RENDER_TIMEOUT | Jinja render timeout (secs) | 10 | int > 0

Health Checks

Setting | Description | Default | Options
--------|-------------|---------|--------
HEALTH_CHECK_INTERVAL | Health poll interval (secs) | 60 | int > 0
HEALTH_CHECK_TIMEOUT | Health request timeout (secs) | 10 | int > 0
UNHEALTHY_THRESHOLD | Fail-count before peer deactivation. Set to -1 if deactivation is not needed. | 3 | int > 0
GATEWAY_VALIDATION_TIMEOUT | Gateway URL validation timeout (secs) | 5 | int > 0
MAX_CONCURRENT_HEALTH_CHECKS | Max concurrent health checks | 20 | int > 0
FILELOCK_NAME | File lock for leader election | gateway_service_leader.lock | string
DEFAULT_ROOTS | Default root paths for resources | [] | JSON array

Database

Setting | Description | Default | Options
--------|-------------|---------|--------
DB_POOL_SIZE | SQLAlchemy connection pool size | 200 | int > 0
DB_MAX_OVERFLOW | Extra connections beyond pool | 10 | int ≥ 0
DB_POOL_TIMEOUT | Wait for connection (secs) | 30 | int > 0
DB_POOL_RECYCLE | Recycle connections (secs) | 3600 | int > 0
DB_MAX_RETRIES | Max retry attempts | 3 | int > 0
DB_RETRY_INTERVAL_MS | Retry interval (ms) | 2000 | int > 0
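For illustration, a PostgreSQL deployment might combine DATABASE_URL with the pool settings above (the connection string and numbers are placeholders, not tuning advice):

DATABASE_URL=postgresql://postgres:changeme@localhost:5432/mcp
DB_POOL_SIZE=200
DB_MAX_OVERFLOW=10
DB_POOL_TIMEOUT=30
DB_POOL_RECYCLE=3600
DB_MAX_RETRIES=3
DB_RETRY_INTERVAL_MS=2000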

Cache Backend

Setting | Description | Default | Options
--------|-------------|---------|--------
CACHE_TYPE | Backend type | database | none, memory, database, redis
REDIS_URL | Redis connection URL | (none) | string or empty
CACHE_PREFIX | Key prefix | mcpgw: | string
SESSION_TTL | Session validity (secs) | 3600 | int > 0
MESSAGE_TTL | Message retention (secs) | 600 | int > 0
REDIS_MAX_RETRIES | Max retry attempts | 3 | int > 0
REDIS_RETRY_INTERVAL_MS | Retry interval (ms) | 2000 | int > 0

🧠 none disables caching entirely. Use memory for dev, database for local persistence, or redis for distributed caching across multiple instances.
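For a multi-instance deployment, a Redis-backed sketch might look like this (the Redis URL is a placeholder for your own instance):

CACHE_TYPE=redis
REDIS_URL=redis://localhost:6379/0
CACHE_PREFIX=mcpgw:
SESSION_TTL=3600
MESSAGE_TTL=600
REDIS_MAX_RETRIES=3
REDIS_RETRY_INTERVAL_MS=2000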

Database Management

MCP Gateway uses Alembic for database migrations. Common commands:

  • make db-current - Show current database version
  • make db-upgrade - Apply pending migrations
  • make db-migrate - Create new migration
  • make db-history - Show migration history
  • make db-status - Detailed migration status

Troubleshooting

Common Issues:

  • "No 'script_location' key found": Ensure you're running from the project root directory.

  • "Unknown SSE event: keepalive" warnings: Some MCP clients don't recognize keepalive events. These warnings are harmless and don't affect functionality. To disable:SSE_KEEPALIVE_ENABLED=false

  • Connection timeouts with proxies/load balancers: If experiencing timeouts, adjust keepalive interval to match your infrastructure:SSE_KEEPALIVE_INTERVAL=60 (AWS ALB) or240 (Azure).

Development

Setting | Description | Default | Options
--------|-------------|---------|--------
DEV_MODE | Enable dev mode | false | bool
RELOAD | Auto-reload on changes | false | bool
DEBUG | Debug logging | false | bool

Well-Known URI Configuration

Setting | Description | Default | Options
--------|-------------|---------|--------
WELL_KNOWN_ENABLED | Enable well-known URI endpoints (/.well-known/*) | true | bool
WELL_KNOWN_ROBOTS_TXT | robots.txt content | (blocks crawlers) | string
WELL_KNOWN_SECURITY_TXT | security.txt content (RFC 9116) | (empty) | string
WELL_KNOWN_CUSTOM_FILES | Additional custom well-known files (JSON) | {} | JSON object
WELL_KNOWN_CACHE_MAX_AGE | Cache control for well-known files (seconds) | 3600 | int > 0

🔍robots.txt: By default, blocks all crawlers for security. Customize for your needs.

🔐security.txt: Define security contact information per RFC 9116. Leave empty to disable.

📄Custom Files: Add arbitrary well-known files like ai.txt, dnt-policy.txt, etc.
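A hypothetical configuration adding a custom ai.txt might look like this (the file content is a placeholder; WELL_KNOWN_CUSTOM_FILES is assumed to map each filename to its content):

WELL_KNOWN_ENABLED=true
WELL_KNOWN_CUSTOM_FILES={"ai.txt": "Automated crawling for AI training is not permitted."}
WELL_KNOWN_CACHE_MAX_AGE=3600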

Header Passthrough Configuration

Setting | Description | Default | Options
--------|-------------|---------|--------
ENABLE_HEADER_PASSTHROUGH | Enable HTTP header passthrough feature (⚠️ Security implications) | false | bool
ENABLE_OVERWRITE_BASE_HEADERS | Enable overwriting of base headers (⚠️ Advanced usage) | false | bool
DEFAULT_PASSTHROUGH_HEADERS | Default headers to pass through (JSON array) | ["X-Tenant-Id", "X-Trace-Id"] | JSON array

⚠️Security Warning: Header passthrough is disabled by default for security. Only enable if you understand the implications and have reviewed which headers should be passed through to backing MCP servers. Authorization headers are not included in defaults.
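A sketch that forwards only tenant and tracing headers (the header names are illustrative):

ENABLE_HEADER_PASSTHROUGH=true
DEFAULT_PASSTHROUGH_HEADERS=["X-Tenant-Id", "X-Trace-Id"]
# Leave base-header overwriting off unless you have reviewed the implications
ENABLE_OVERWRITE_BASE_HEADERS=false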

Plugin Configuration

Setting | Description | Default | Options
--------|-------------|---------|--------
PLUGINS_ENABLED | Enable the plugin framework | false | bool
PLUGIN_CONFIG_FILE | Path to main plugin configuration file | plugins/config.yaml | string
PLUGINS_MTLS_CA_BUNDLE | (Optional) default CA bundle for external plugin mTLS | (empty) | string
PLUGINS_MTLS_CLIENT_CERT | (Optional) gateway client certificate for plugin mTLS | (empty) | string
PLUGINS_MTLS_CLIENT_KEY | (Optional) gateway client key for plugin mTLS | (empty) | string
PLUGINS_MTLS_CLIENT_KEY_PASSWORD | (Optional) password for plugin client key | (empty) | string
PLUGINS_MTLS_VERIFY | (Optional) verify remote plugin certificates (true/false) | true | bool
PLUGINS_MTLS_CHECK_HOSTNAME | (Optional) enforce hostname verification for plugins | true | bool
PLUGINS_CLI_COMPLETION | Enable auto-completion for plugins CLI | false | bool
PLUGINS_CLI_MARKUP_MODE | Set markup mode for plugins CLI | (none) | rich, markdown, disabled

HTTP Retry Configuration

Setting | Description | Default | Options
--------|-------------|---------|--------
RETRY_MAX_ATTEMPTS | Maximum retry attempts for HTTP requests | 3 | int > 0
RETRY_BASE_DELAY | Base delay between retries (seconds) | 1.0 | float > 0
RETRY_MAX_DELAY | Maximum delay between retries (seconds) | 60 | int > 0
RETRY_JITTER_MAX | Maximum jitter fraction of base delay | 0.5 | float 0-1
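For example, a more patient retry policy for flaky upstreams could be sketched as follows (values are illustrative):

RETRY_MAX_ATTEMPTS=5
RETRY_BASE_DELAY=1.0
RETRY_MAX_DELAY=60
RETRY_JITTER_MAX=0.5     # up to 50% of the base delay added as jitter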

Running

Makefile

make serve       # Run production Gunicorn server on :4444
make serve-ssl   # Run Gunicorn behind HTTPS on :4444 (uses ./certs)

Script helper

To run the development (uvicorn) server:

make dev
# or
./run.sh --reload --log debug --workers 2

run.sh is a wrapper around uvicorn that loads .env, supports reload, and passes arguments to the server.

Key flags:

Flag | Purpose | Example
-----|---------|--------
-e, --env FILE | load env-file | --env prod.env
-H, --host | bind address | --host 127.0.0.1
-p, --port | listen port | --port 8080
-w, --workers | gunicorn workers | --workers 4
-r, --reload | auto-reload | --reload
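For example, combining the flags above (prod.env is a placeholder env-file):

./run.sh --env prod.env --host 0.0.0.0 --port 8080 --workers 4
./run.sh --reload --port 8080     # auto-reload for local development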

Manual (Uvicorn)

uvicorn mcpgateway.main:app --host 0.0.0.0 --port 4444 --workers 4

Authentication examples

# Generate a JWT token using JWT_SECRET_KEY and export it as MCPGATEWAY_BEARER_TOKEN
# Note that the module needs to be installed. If running locally use:
export MCPGATEWAY_BEARER_TOKEN=$(JWT_SECRET_KEY=my-test-key python3 -m mcpgateway.utils.create_jwt_token)

# Use the JWT token in an API call
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools

☁️ AWS / Azure / OpenShift

Deployment details can be found in the GitHub Pages.

☁️ IBM Cloud Code Engine Deployment

This project supports deployment to IBM Cloud Code Engine using the ibmcloud CLI and the IBM Container Registry.


🔧 Prerequisites

  • Podman or Docker installed locally
  • IBM Cloud CLI (use make ibmcloud-cli-install to install)
  • An IBM Cloud API key with access to Code Engine & Container Registry
  • Code Engine and Container Registry services enabled in your IBM Cloud account

📦 Environment Variables

Create a .env file (or export the variables in your shell). The first block is required; the second provides tunable defaults you can override:

# ── Required ─────────────────────────────────────────────
IBMCLOUD_REGION=us-south
IBMCLOUD_RESOURCE_GROUP=default
IBMCLOUD_PROJECT=my-codeengine-project
IBMCLOUD_CODE_ENGINE_APP=mcpgateway
IBMCLOUD_IMAGE_NAME=us.icr.io/myspace/mcpgateway:latest
IBMCLOUD_IMG_PROD=mcpgateway/mcpgateway
IBMCLOUD_API_KEY=your_api_key_here   # Optional - omit to use interactive `ibmcloud login --sso`

# ── Optional overrides (sensible defaults provided) ──────
IBMCLOUD_CPU=1                        # vCPUs for the app
IBMCLOUD_MEMORY=4G                    # Memory allocation
IBMCLOUD_REGISTRY_SECRET=my-regcred   # Name of the Container Registry secret

Quick check: make ibmcloud-check-env


🚀 Make Targets

Target | Purpose
-------|--------
make ibmcloud-cli-install | Install IBM Cloud CLI and required plugins
make ibmcloud-login | Log in to IBM Cloud (API key or SSO)
make ibmcloud-ce-login | Select the Code Engine project & region
make ibmcloud-tag | Tag the local container image
make ibmcloud-push | Push the image to IBM Container Registry
make ibmcloud-deploy | Create or update the Code Engine application (uses CPU/memory/secret)
make ibmcloud-ce-status | Show current deployment status
make ibmcloud-ce-logs | Stream logs from the running app
make ibmcloud-ce-rm | Delete the Code Engine application

📝 Example Workflow

make ibmcloud-check-env
make ibmcloud-cli-install
make ibmcloud-login
make ibmcloud-ce-login
make ibmcloud-tag
make ibmcloud-push
make ibmcloud-deploy
make ibmcloud-ce-status
make ibmcloud-ce-logs

API Endpoints

You can test the API endpoints with curl or through the Swagger UI, and browse the detailed documentation on ReDoc:

Generate an API Bearer token, and test the various API endpoints.

🔐 Authentication & Health Checks
# Generate a bearer token using the configured secret key (use the same as your .env)
export MCPGATEWAY_BEARER_TOKEN=$(python3 -m mcpgateway.utils.create_jwt_token --username admin@example.com --secret my-test-key)
echo ${MCPGATEWAY_BEARER_TOKEN}

# Quickly confirm that authentication works and the gateway is healthy
curl -s -k -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" https://localhost:4444/health
# {"status":"healthy"}

# Quickly confirm the gateway version & DB connectivity
curl -s -k -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" https://localhost:4444/version | jq

🧱 Protocol APIs (MCP) /protocol
# Initialize MCP session
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "protocol_version":"2025-03-26",
           "capabilities":{},
           "client_info":{"name":"MyClient","version":"1.0.0"}
         }' \
     http://localhost:4444/protocol/initialize

# Ping (JSON-RPC style)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"ping"}' \
     http://localhost:4444/protocol/ping

# Completion for prompt/resource arguments (not implemented)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "ref":{"type":"ref/prompt","name":"example_prompt"},
           "argument":{"name":"topic","value":"py"}
         }' \
     http://localhost:4444/protocol/completion/complete

# Sampling (streaming) (not implemented)
curl -N -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "messages":[{"role":"user","content":{"type":"text","text":"Hello"}}],
           "maxTokens":16
         }' \
     http://localhost:4444/protocol/sampling/createMessage

🧠 JSON-RPC Utility /rpc
# Generic JSON-RPC calls (tools, gateways, roots, etc.)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"list_tools"}' \
     http://localhost:4444/rpc

Handles any method name: list_tools, list_gateways, prompts/get, or invokes a tool if the method matches a registered tool name.
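As a sketch, the clock_tool registered in the next section could be invoked through the same endpoint by using its name as the method (the exact params shape depends on the tool's input schema):

curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":2,"method":"clock_tool","params":{"timezone":"UTC"}}' \
     http://localhost:4444/rpc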


🔧 Tool Management /tools
# Register a new tool
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "name":"clock_tool",
           "url":"http://localhost:9000/rpc",
           "description":"Returns current time",
           "input_schema":{
             "type":"object",
             "properties":{"timezone":{"type":"string"}},
             "required":[]
           }
         }' \
     http://localhost:4444/tools

# List tools
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools

# Get tool by ID
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools/1

# Update tool
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{ "description":"Updated desc" }' \
     http://localhost:4444/tools/1

# Toggle active status
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/tools/1/toggle?activate=false
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/tools/1/toggle?activate=true

# Delete tool
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/tools/1

🤖 A2A Agent Management /a2a
# Register a new A2A agent
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "name":"hello_world_agent",
           "endpoint_url":"http://localhost:9999/",
           "agent_type":"jsonrpc",
           "description":"External AI agent for hello world functionality",
           "auth_type":"api_key",
           "auth_value":"your-api-key",
           "tags":["ai", "hello-world"]
         }' \
     http://localhost:4444/a2a

# List A2A agents
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/a2a

# Get agent by ID
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/a2a/agent-id

# Update agent
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{ "description":"Updated description" }' \
     http://localhost:4444/a2a/agent-id

# Test agent (direct invocation)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "parameters": {
             "method": "message/send",
             "params": {
               "message": {
                 "messageId": "test-123",
                 "role": "user",
                 "parts": [{"type": "text", "text": "Hello!"}]
               }
             }
           },
           "interaction_type": "test"
         }' \
     http://localhost:4444/a2a/agent-name/invoke

# Toggle agent status
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/a2a/agent-id/toggle?activate=false

# Delete agent
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/a2a/agent-id

# Associate agent with virtual server (agents become available as MCP tools)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "name":"AI Assistant Server",
           "description":"Virtual server with AI agents",
           "associated_a2a_agents":["agent-id"]
         }' \
     http://localhost:4444/servers

🤖A2A Integration: A2A agents are external AI agents that can be registered and exposed as MCP tools

🔄Protocol Detection: Gateway automatically detects JSONRPC vs custom A2A protocols

📊Testing: Built-in test functionality via Admin UI or the /a2a/{agent_id}/test endpoint

🎛️Virtual Servers: Associate agents with servers to expose them as standard MCP tools


🌐 Gateway Management /gateways
# Register an MCP server as a new gateway provider
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name":"peer_gateway","url":"http://peer:4444"}' \
     http://localhost:4444/gateways

# List gateways
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/gateways

# Get gateway by ID
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/gateways/1

# Update gateway
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"description":"New description"}' \
     http://localhost:4444/gateways/1

# Toggle active status
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/gateways/1/toggle?activate=false

# Delete gateway
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/gateways/1

📁 Resource Management /resources
# Register resource
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "uri":"config://app/settings",
           "name":"App Settings",
           "content":"key=value"
         }' \
     http://localhost:4444/resources

# List resources
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/resources

# Read a resource
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/resources/config://app/settings

# Update resource
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"content":"new=value"}' \
     http://localhost:4444/resources/config://app/settings

# Delete resource
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/resources/config://app/settings

# Subscribe to updates (SSE)
curl -N -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/resources/subscribe/config://app/settings

📝 Prompt Management /prompts
# Create prompt template
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "name":"greet",
           "template":"Hello, {{ user }}!",
           "argument_schema":{
             "type":"object",
             "properties":{"user":{"type":"string"}},
             "required":["user"]
           }
         }' \
     http://localhost:4444/prompts

# List prompts
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/prompts

# Get prompt (with args)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"user":"Alice"}' \
     http://localhost:4444/prompts/greet

# Get prompt (no args)
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/prompts/greet

# Update prompt
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"template":"Hi, {{ user }}!"}' \
     http://localhost:4444/prompts/greet

# Toggle active
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/prompts/5/toggle?activate=false

# Delete prompt
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/prompts/greet

🌲 Root Management /roots
# List roots
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/roots

# Add root
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"uri":"/data","name":"Data Root"}' \
     http://localhost:4444/roots

# Remove root
curl -X DELETE -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/roots/%2Fdata

# Subscribe to root changes (SSE)
curl -N -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/roots/changes

🖥️ Server Management /servers
# List servers
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/servers

# Get server
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/servers/UUID_OF_SERVER_1

# Create server
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"name":"db","description":"Database","associatedTools": ["1","2","3"]}' \
     http://localhost:4444/servers

# Update server
curl -X PUT -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{"description":"Updated"}' \
     http://localhost:4444/servers/UUID_OF_SERVER_1

# Toggle active
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" \
     http://localhost:4444/servers/UUID_OF_SERVER_1/toggle?activate=false

📊 Metrics /metrics
# Get aggregated metrics
curl -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/metrics

# Reset metrics (all or per-entity)
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/metrics/reset
curl -X POST -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/metrics/reset?entity=tool&id=1

📡 Events & Health
# SSE: all events
curl -N -H "Authorization: Bearer $MCPGATEWAY_BEARER_TOKEN" http://localhost:4444/events

# WebSocket
wscat -c ws://localhost:4444/ws \
      -H "Authorization: Basic $(echo -n admin:changeme | base64)"

# Health check
curl http://localhost:4444/health

Full Swagger UI at /docs.


🛠️ Sample Tool
uvicorn sample_tool.clock_tool:app --host 0.0.0.0 --port 9000
curl -X POST -H"Content-Type: application/json" \     -d'{"jsonrpc":"2.0","id":1,"method":"get_time","params":{"timezone":"UTC"}}' \     http://localhost:9000/rpc

Testing

make test   # Run unit tests
make lint   # Run lint tools

Doctest Coverage

ContextForge implements comprehensive doctest coverage to ensure all code examples in documentation are tested and verified:

make doctest            # Run all doctests
make doctest-verbose    # Run with detailed output
make doctest-coverage   # Generate coverage report
make doctest-check      # Check coverage percentage

Coverage Status:

  • Transport Modules: 100% (base, stdio, SSE, WebSocket, streamable HTTP)
  • Utility Functions: 100% (slug generation, JWT tokens, validation)
  • Configuration: 100% (settings, environment variables)
  • 🔄Service Classes: ~60% (in progress)
  • 🔄Complex Classes: ~40% (in progress)

Benefits:

  • All documented examples are automatically tested
  • Documentation stays accurate and up-to-date
  • Developers can run examples directly from docstrings
  • Regression prevention through automated verification

For detailed information, see the Doctest Coverage Guide.


Project Structure

📁 Directory and file structure for mcpgateway
# ────────── CI / Quality & Meta-files ──────────├── .bumpversion.cfg# Automated semantic-version bumps├── .coveragerc# Coverage.py settings├── .darglint# Doc-string linter rules├── .dockerignore# Context exclusions for image builds├── .editorconfig# Consistent IDE / editor behaviour├── .env# Local runtime variables (git-ignored)├── .env.ce# IBM Code Engine runtime env (ignored)├── .env.ce.example# Sample env for IBM Code Engine├── .env.example# Generic sample env file├── .env.gcr# Google Cloud Run runtime env (ignored)├── .eslintrc.json# ESLint rules for JS / TS assets├── .flake8# Flake-8 configuration├── .gitattributes# Git attributes (e.g. EOL normalisation)├── .github# GitHub settings, CI/CD workflows & templates│   ├── CODEOWNERS# Default reviewers│   └── workflows/# Bandit, Docker, CodeQL, Python Package, Container Deployment, etc.├── .gitignore# Git exclusion rules├── .hadolint.yaml# Hadolint rules for Dockerfiles├── .htmlhintrc# HTMLHint rules├── .markdownlint.json# Markdown-lint rules├── .pre-commit-config.yaml# Pre-commit hooks (ruff, black, mypy, ...)├── .pycodestyle# PEP-8 checker settings├── .pylintrc# Pylint configuration├── .pyspelling.yml# Spell-checker dictionary & filters├── .ruff.toml# Ruff linter / formatter settings├── .spellcheck-en.txt# Extra dictionary entries├── .stylelintrc.json# Stylelint rules for CSS├── .travis.yml# Legacy Travis CI config (reference)├── .whitesource# WhiteSource security-scanning config├── .yamllint# yamllint ruleset# ────────── Documentation & Guidance ──────────├── CHANGELOG.md# Version-by-version change log├── CODE_OF_CONDUCT.md# Community behaviour guidelines├── CONTRIBUTING.md# How to file issues & send PRs├── DEVELOPING.md# Contributor workflows & style guide├── LICENSE# Apache License 2.0├── README.md# Project overview & quick-start├── SECURITY.md# Security policy & CVE disclosure process├── TESTING.md# Testing strategy, fixtures & guidelines# ────────── Containerisation & Runtime ──────────├── Containerfile# OCI image build (Docker / Podman)├── Containerfile.lite# FROM scratch UBI-Micro production build├── docker-compose.yml# Local multi-service stack├── podman-compose-sonarqube.yaml# One-liner SonarQube stack├── run-gunicorn.sh# Opinionated Gunicorn startup script├── run.sh# Uvicorn shortcut with arg parsing# ────────── Build / Packaging / Tooling ──────────├── MANIFEST.in# sdist inclusion rules├── Makefile# Dev & deployment targets├── package-lock.json# Deterministic npm lock-file├── package.json# Front-end / docs tooling deps├── pyproject.toml# Poetry / PDM config & lint rules├── sonar-code.properties# SonarQube analysis settings├── uv.lock# UV resolver lock-file# ────────── Kubernetes & Helm Assets ──────────├── charts# Helm chart(s) for K8s / OpenShift│   ├── mcp-stack# Umbrella chart│   │   ├── Chart.yaml# Chart metadata│   │   ├── templates/...# Manifest templates│   │   └── values.yaml# Default values│   └── README.md# Install / upgrade guide├── k8s# Raw (non-Helm) K8s manifests│   └──*.yaml# Deployment, Service, PVC resources# ────────── Documentation Source ──────────├── docs# MkDocs site source│   ├── base.yml# MkDocs "base" configuration snippet (do not modify)│   ├── mkdocs.yml# Site configuration (requires base.yml)│   ├── requirements.txt# Python dependencies for the MkDocs site│   ├── Makefile# Make targets for building/serving the docs│   └── theme# Custom MkDocs theme assets│       └── logo.png# Logo for the documentation theme│   └── docs# Markdown documentation│       ├── architecture/# ADRs for the project│       
├── articles/# Long-form writeups│       ├── blog/# Blog posts│       ├── deployment/# Deployment guides (AWS, Azure, etc.)│       ├── development/# Development workflows & CI docs│       ├── images/# Diagrams & screenshots│       ├── index.md# Top-level docs landing page│       ├── manage/# Management topics (backup, logging, tuning, upgrade)│       ├── overview/# Feature overviews & UI documentation│       ├── security/# Security guidance & policies│       ├── testing/# Testing strategy & fixtures│       └── using/# User-facing usage guides (agents, clients, etc.)│       ├── media/# Social media, press coverage, videos & testimonials│       │   ├── press/# Press articles and blog posts│       │   ├── social/# Tweets, LinkedIn posts, YouTube embeds│       │   ├── testimonials/# Customer quotes & community feedback│       │   └── kit/# Media kit & logos for bloggers & press├── dictionary.dic# Custom dictionary for spell-checker (make spellcheck)# ────────── Application & Libraries ──────────├── agent_runtimes# Configurable agentic frameworks converted to MCP Servers├── mcpgateway# ← main application package│   ├── __init__.py# Package metadata & version constant│   ├── admin.py# FastAPI routers for Admin UI│   ├── cache│   │   ├── __init__.py│   │   ├── resource_cache.py# LRU+TTL cache implementation│   │   └── session_registry.py# Session ↔ cache mapping│   ├── config.py# Pydantic settings loader│   ├── db.py# SQLAlchemy models & engine setup│   ├── federation│   │   ├── __init__.py│   │   ├── discovery.py# Peer-gateway discovery│   │   ├── forward.py# RPC forwarding│   ├── handlers│   │   ├── __init__.py│   │   └── sampling.py# Streaming sampling handler│   ├── main.py# FastAPI app factory & startup events│   ├── mcp.db# SQLite fixture for tests│   ├── py.typed# PEP 561 marker (ships type hints)│   ├── schemas.py# Shared Pydantic DTOs│   ├── services│   │   ├── __init__.py│   │   ├── completion_service.py# Prompt / argument completion│   │   ├── gateway_service.py# Peer-gateway registry│   │   ├── logging_service.py# Central logging helpers│   │   ├── prompt_service.py# Prompt CRUD & rendering│   │   ├── resource_service.py# Resource registration & retrieval│   │   ├── root_service.py# File-system root registry│   │   ├── server_service.py# Server registry & monitoring│   │   └── tool_service.py# Tool registry & invocation│   ├── static│   │   ├── admin.css# Styles for Admin UI│   │   └── admin.js# Behaviour for Admin UI│   ├── templates│   │   └── admin.html# HTMX/Alpine Admin UI template│   ├── transports│   │   ├── __init__.py│   │   ├── base.py# Abstract transport interface│   │   ├── sse_transport.py# Server-Sent Events transport│   │   ├── stdio_transport.py# stdio transport for embedding│   │   └── websocket_transport.py# WS transport with ping/pong│   ├── models.py# Core enums / type aliases│   ├── utils│   │   ├── create_jwt_token.py# CLI & library for JWT generation│   │   ├── services_auth.py# Service-to-service auth dependency│   │   └── verify_credentials.py# Basic / JWT auth helpers│   ├── validation│   │   ├── __init__.py│   │   └── jsonrpc.py# JSON-RPC 2.0 validation│   └── version.py# Library version helper├── mcpgateway-wrapper# Stdio client wrapper (PyPI)│   ├── pyproject.toml│   ├── README.md│   └── src/mcpgateway_wrapper/│       ├── __init__.py│       └── server.py# Wrapper entry-point├── mcp-servers# Sample downstream MCP servers├── mcp.db# Default SQLite DB (auto-created)├── mcpgrid# Experimental grid client / PoC├── os_deps.sh# Installs system-level deps for CI# 
────────── Tests & QA Assets ──────────├── test_readme.py# Guard: README stays in sync├── tests│   ├── conftest.py# Shared fixtures│   ├── e2e/...# End-to-end scenarios│   ├── hey/...# Load-test logs & helper script│   ├── integration/...# API-level integration tests│   └── unit/...# Pure unit tests for business logic

API Documentation


Makefile targets

This project offers the following Makefile targets. Type make in the project root to show all targets.

🔧 Available Makefile targets
🐍 MCP CONTEXTFORGE  (An enterprise-ready Model Context Protocol Gateway)🔧 SYSTEM-LEVEL DEPENDENCIES (DEV BUILD ONLY)os-deps              - Install Graphviz, Pandoc, Trivy, SCC usedfor dev docs generation and security scan🌱 VIRTUAL ENVIRONMENT& INSTALLATIONvenv                 - Create a fresh virtual environment with uv& friendsactivate             - Activate the virtual environmentin the current shellinstall              - Install project into the venvinstall-dev          - Install project (incl. dev deps) into the venvinstall-db           - Install project (incl. postgres and redis) into venvupdate               - Update all installed deps inside the venvcheck-env            - Verify all required env varsin .env are present▶️ SERVE& TESTINGserve                - Run production Gunicorn server on :4444certs                - Generate self-signed TLS cert& keyin ./certs (won't overwrite)serve-ssl            - Run Gunicorn behind HTTPS on :4444 (uses ./certs)dev                  - Run fast-reload dev server (uvicorn)run                  - Execute helper script ./run.shtest                 - Run unit tests with pytesttest-curl            - Smoke-test API endpoints with curl scriptpytest-examples      - Run README / examples through pytest-examplesclean                - Remove caches, build artefacts, virtualenv, docs, certs, coverage, SBOM, etc.📊 COVERAGE & METRICScoverage             - Run tests with coverage, emit md/HTML/XML + badgepip-licenses         - Produce dependency license inventory (markdown)scc                  - Quick LoC/complexity snapshot with sccscc-report           - Generate HTML LoC & per-file metrics with scc📚 DOCUMENTATION & SBOMdocs                 - Build docs (graphviz + handsdown + images + SBOM)images               - Generate architecture & dependency diagrams🔍 LINTING & STATIC ANALYSISlint                 - Run the full linting suite (see targets below)black                - Reformat code with blackautoflake            - Remove unused imports / variables with autoflakeisort                - Organise & sort imports with isortflake8               - PEP-8 style & logical errorspylint               - Pylint static analysismarkdownlint         - Lint Markdown files with markdownlint (requires markdownlint-cli)mypy                 - Static type-checking with mypybandit               - Security scan with banditpydocstyle           - Docstring style checkerpycodestyle          - Simple PEP-8 checkerpre-commit           - Run all configured pre-commit hooksruff                 - Ruff linter + formatterty                   - Ty type checker from astralpyright              - Static type-checking with Pyrightradon                - Code complexity & maintainability metricspyroma               - Validate packaging metadataimportchecker        - Detect orphaned importsspellcheck           - Spell-check the codebasefawltydeps           - Detect undeclared / unused depswily                 - Maintainability reportpyre                 - Static analysis with Facebook Pyredepend               - List dependencies in ≈requirements formatsnakeviz             - Profile & visualise with snakevizpstats               - Generate PNG call-graph from cProfile statsspellcheck-sort      - Sort local spellcheck dictionarytox                  - Run tox across multi-Python versionssbom                 - Produce a CycloneDX SBOM and vulnerability scanpytype               - Flow-sensitive type checkercheck-manifest       - Verify sdist/wheel completenessyamllint            - Lint YAML files (uses 
.yamllint)jsonlint            - Validate every *.json file with jq (--exit-status)tomllint            - Validate *.toml files with tomlcheck🕸️  WEBPAGE LINTERS & STATIC ANALYSIS (HTML/CSS/JS lint + security scans + formatting)install-web-linters  - Install HTMLHint, Stylelint, ESLint, Retire.js & Prettier via npmlint-web             - Run HTMLHint, Stylelint, ESLint, Retire.js and npm auditformat-web           - Format HTML, CSS & JS files with Prettierosv-install          - Install/upgrade osv-scanner (Go)osv-scan-source      - Scan source & lockfiles for CVEsosv-scan-image       - Scan the built container image for CVEsosv-scan             - Run all osv-scanner checks (source, image, licence)📡 SONARQUBE ANALYSISsonar-deps-podman    - Install podman-compose + supporting toolssonar-deps-docker    - Install docker-compose + supporting toolssonar-up-podman      - Launch SonarQube with podman-composesonar-up-docker      - Launch SonarQube with docker-composesonar-submit-docker  - Run containerized Sonar Scanner CLI with Dockersonar-submit-podman  - Run containerized Sonar Scanner CLI with Podmanpysonar-scanner      - Run scan with Python wrapper (pysonar-scanner)sonar-info           - How to create a token & which env vars to export🛡️ SECURITY & PACKAGE SCANNINGtrivy                - Scan container image for CVEs (HIGH/CRIT). Needs podman socket enabledgrype-scan           - Scan container for security audit and vulnerability scanningdockle               - Lint the built container image via tarball (no daemon/socket needed)hadolint             - Lint Containerfile/Dockerfile(s) with hadolintpip-audit            - Audit Python dependencies for published CVEs📦 DEPENDENCY MANAGEMENTdeps-update          - Run update-deps.py to update all dependencies in pyproject.toml and docs/requirements.txtcontainerfile-update - Update base image in Containerfile to latest tag📦 PACKAGING & PUBLISHINGdist                 - Clean-build wheel *and* sdist into ./distwheel                - Build wheel onlysdist                - Build source distribution onlyverify               - Build + twine + check-manifest + pyroma (no upload)publish              - Verify, then upload to PyPI (needs TWINE_* creds)🦭 PODMAN CONTAINER BUILD & RUNpodman-dev           - Build development container imagepodman               - Build container imagepodman-prod          - Build production container image (using ubi-micro → scratch). Not supported on macOS.podman-run           - Run the container on HTTP  (port 4444)podman-run-shell     - Run the container on HTTP  (port 4444) and start a shellpodman-run-ssl       - Run the container on HTTPS (port 4444, self-signed)podman-run-ssl-host  - Run the container on HTTPS with --network=host (port 4444, self-signed)podman-stop          - Stop & remove the containerpodman-test          - Quick curl smoke-test against the containerpodman-logs          - Follow container logs (⌃C to quit)podman-stats         - Show container resource stats (if supported)podman-top           - Show live top-level process info in containerpodman-shell         - Open an interactive shell inside the Podman container🐋 DOCKER BUILD & RUNdocker-dev           - Build development Docker imagedocker               - Build production Docker imagedocker-prod          - Build production container image (using ubi-micro → scratch). 
Not supported on macOS.docker-run           - Run the container on HTTP  (port 4444)docker-run-ssl       - Run the container on HTTPS (port 4444, self-signed)docker-stop          - Stop & remove the containerdocker-test          - Quick curl smoke-test against the containerdocker-logs          - Follow container logs (⌃C to quit)docker-stats         - Show container resource usage stats (non-streaming)docker-top           - Show top-level process info in Docker containerdocker-shell         - Open an interactive shell inside the Docker container🛠️ COMPOSE STACK     - Build / start / stop the multi-service stackcompose-up           - Bring the whole stack up (detached)compose-restart      - Recreate changed containers, pulling / building as neededcompose-build        - Build (or rebuild) images defined in the compose filecompose-pull         - Pull the latest images onlycompose-logs         - Tail logs from all services (Ctrl-C to exit)compose-ps           - Show container status tablecompose-shell        - Open an interactive shell in the "gateway" containercompose-stop         - Gracefully stop the stack (keep containers)compose-down         - Stop & remove containers (keep named volumes)compose-rm           - Remove *stopped* containerscompose-clean        - ✨ Down **and** delete named volumes (data-loss ⚠)☁️ IBM CLOUD CODE ENGINEibmcloud-check-env          - Verify all required IBM Cloud env vars are setibmcloud-cli-install        - Auto-install IBM Cloud CLI + required plugins (OS auto-detected)ibmcloud-login              - Login to IBM Cloud CLI using IBMCLOUD_API_KEY (--sso)ibmcloud-ce-login           - Set Code Engine target project and regionibmcloud-list-containers    - List deployed Code Engine appsibmcloud-tag                - Tag container image for IBM Container Registryibmcloud-push               - Push image to IBM Container Registryibmcloud-deploy             - Deploy (or update) container image in Code Engineibmcloud-ce-logs            - Stream logs for the deployed applicationibmcloud-ce-status          - Get deployment statusibmcloud-ce-rm              - Delete the Code Engine application🧪 MINIKUBE LOCAL CLUSTERminikube-install      - Install Minikube (macOS, Linux, or Windows via choco)helm-install          - Install Helm CLI (macOS, Linux, or Windows)minikube-start        - Start local Minikube cluster with Ingress + DNS + metrics-serverminikube-stop         - Stop the Minikube clusterminikube-delete       - Delete the Minikube clusterminikube-image-load   - Build and load ghcr.io/ibm/mcp-context-forge:latest into Minikubeminikube-k8s-apply    - Apply Kubernetes manifests from deployment/k8s/minikube-status       - Show status of Minikube and ingress pods🛠️ HELM CHART TASKShelm-lint            - Lint the Helm chart (static analysis)helm-package         - Package the chart into dist/ as mcp-stack-<ver>.tgzhelm-deploy          - Upgrade/Install chart into Minikube (profile mcpgw)helm-delete          - Uninstall the chart release from Minikube🏠 LOCAL PYPI SERVERlocal-pypi-install   - Install pypiserver for local testinglocal-pypi-start     - Start local PyPI server on :8084 (no auth)local-pypi-start-auth - Start local PyPI server with basic auth (admin/admin)local-pypi-stop      - Stop local PyPI serverlocal-pypi-upload    - Upload existing package to local PyPI (no auth)local-pypi-upload-auth - Upload existing package to local PyPI (with auth)local-pypi-test      - Install package from local PyPIlocal-pypi-clean     - Full cycle: build → upload → install locally🏠 LOCAL 
DEVPI SERVERdevpi-install        - Install devpi server and clientdevpi-init           - Initialize devpi server (first time only)devpi-start          - Start devpi serverdevpi-stop           - Stop devpi serverdevpi-setup-user     - Create user and dev indexdevpi-upload         - Upload existing package to devpidevpi-test           - Install package from devpidevpi-clean          - Full cycle: build → upload → install locallydevpi-status         - Show devpi server statusdevpi-web            - Open devpi web interface

🔍 Troubleshooting

macOS: SQLite "disk I/O error" when running make serve

If the gateway fails on macOS with sqlite3.OperationalError: disk I/O error (but works on Linux/Docker), it's usually a filesystem/locking quirk rather than a schema bug.

Quick placement guidance (macOS):

  • Avoid cloning/running the repo under ~/Documents or ~/Desktop if iCloud "Desktop & Documents" sync is enabled.

  • A simple, safe choice is a project folder directly under your home directory:

    • mkdir -p "$HOME/mcp-context-forge" && cd "$HOME/mcp-context-forge"
    • If you keep the DB inside the repo, use a subfolder like data/ and an absolute path in .env:
      • mkdir -p "$HOME/mcp-context-forge/data"
      • DATABASE_URL=sqlite:////Users/$USER/mcp-context-forge/data/mcp.db
  • Use a safe, local APFS path for SQLite (avoid iCloud/Dropbox/OneDrive/Google Drive, network shares, or external exFAT/NAS):

    • Option A (system location): point the DB to Application Support (note spaces):
      • mkdir -p "$HOME/Library/Application Support/mcpgateway"
      • export DATABASE_URL="sqlite:////Users/$USER/Library/Application Support/mcpgateway/mcp.db"
    • Option B (project-local): keep the DB under ~/mcp-context-forge/data:
      • mkdir -p "$HOME/mcp-context-forge/data"
      • export DATABASE_URL="sqlite:////Users/$USER/mcp-context-forge/data/mcp.db"
  • Clean stale SQLite artifacts after any crash:

    • pkill -f mcpgateway || true && rm -f mcp.db-wal mcp.db-shm mcp.db-journal
  • Reduce startup concurrency to rule out multi-process contention:

    • GUNICORN_WORKERS=1 make serve (or use make dev, which runs single-process)
  • Run the diagnostic helper to verify the environment:

    • python3 scripts/test_sqlite.py --verbose
  • While debugging, consider lowering pool pressure and retry:

    • DB_POOL_SIZE=10 DB_MAX_OVERFLOW=0 DB_POOL_TIMEOUT=60 DB_MAX_RETRIES=10 DB_RETRY_INTERVAL_MS=5000
  • Optional: temporarily disable the file-lock leader path by using the in-process mode:

    • export CACHE_TYPE=none

If the error persists, update SQLite and ensure Python links against it:

  • brew install sqlite3 && brew link --force sqlite3
  • brew install python3 && /opt/homebrew/bin/python3 -c 'import sqlite3; print(sqlite3.sqlite_version)'

See the full migration guide's "SQLite Troubleshooting Guide" for deeper steps (WAL cleanup, integrity check, recovery): MIGRATION-0.7.0.md.

Port publishing on WSL2 (rootless Podman & Docker Desktop)

Diagnose the listener

# Inside your WSL distro
ss -tlnp | grep 4444        # Use ss
netstat -anp | grep 4444    # or netstat

Seeing :::4444 LISTEN rootlessport is normal - the IPv6 wildcard socket (::) also accepts IPv4 traffic when net.ipv6.bindv6only = 0 (the default on Linux).

Why localhost fails on Windows

WSL 2's NAT layer rewrites only the IPv6 side of the dual-stack listener. From Windows, http://127.0.0.1:4444 (or Docker Desktop's "localhost") therefore times out.

Fix (Podman rootless)

# Inside the WSL distro
echo "wsl" | sudo tee /etc/containers/podman-machine
systemctl --user restart podman.socket

ss should now show 0.0.0.0:4444 instead of :::4444, and the service becomes reachable from Windows and the LAN.

Fix (Docker Desktop > 4.19)

Docker Desktop adds a "WSL integration" switch per distro. Turn it on for your distro, restart Docker Desktop, then restart the container:

docker restart mcpgateway

Gateway starts but immediately exits ("Failed to read DATABASE_URL")

Copy .env.example to .env first:

cp .env.example .env

Then edit DATABASE_URL, JWT_SECRET_KEY, BASIC_AUTH_PASSWORD, etc. Missing or empty required vars cause a fast fail at startup.

Contributing

  1. Fork the repo, create a feature branch.
  2. Run make lint and fix any issues.
  3. Keep make test green and maintain 100% coverage.
  4. Open a PR - describe your changes clearly.

See CONTRIBUTING.md for more details.

Changelog

A complete changelog can be found here: CHANGELOG.md

License

Licensed under the Apache License 2.0 - see LICENSE

Core Authors and Maintainers

Special thanks to our contributors for helping us improve ContextForge:

Star History and Project Activity

Star History Chart
