# MCP Python SDK

The official Python SDK for Model Context Protocol servers and clients.
The Model Context Protocol allows applications to provide context for LLMs in a standardized way, separating the concerns of providing context from the actual LLM interaction. This Python SDK implements the full MCP specification, making it easy to:
- Build MCP clients that can connect to any MCP server
- Create MCP servers that expose resources, prompts and tools
- Use standard transports like stdio, SSE, and Streamable HTTP
- Handle all MCP protocol messages and lifecycle events
We recommend using uv to manage your Python projects.
If you haven't created a uv-managed project yet, create one:
```bash
uv init mcp-server-demo
cd mcp-server-demo
```
Then add MCP to your project dependencies:
uv add"mcp[cli]"
Alternatively, for projects using pip for dependencies:
pip install"mcp[cli]"
To run the mcp command with uv:
```bash
uv run mcp
```
Let's create a simple MCP server that exposes a calculator tool and some data:

```python
"""FastMCP quickstart example.

cd to the `examples/snippets/clients` directory and run:
    uv run server fastmcp_quickstart stdio
"""

from mcp.server.fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")


# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b


# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"


# Add a prompt
@mcp.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Generate a greeting prompt"""
    styles = {
        "friendly": "Please write a warm, friendly greeting",
        "formal": "Please write a formal, professional greeting",
        "casual": "Please write a casual, relaxed greeting",
    }

    return f"{styles.get(style, styles['friendly'])} for someone named {name}."
```

Full example: `examples/snippets/servers/fastmcp_quickstart.py`
You can install this server in Claude Desktop and interact with it right away by running:

```bash
uv run mcp install server.py
```
Alternatively, you can test it with the MCP Inspector:
```bash
uv run mcp dev server.py
```
The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through Resources (similar to GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through Tools (similar to POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through Prompts (reusable templates for LLM interactions)
- And more!
The FastMCP server is your core interface to the MCP protocol. It handles connection management, protocol compliance, and message routing:

```python
"""Example showing lifespan support for startup/shutdown with strong typing."""

from collections.abc import AsyncIterator
from contextlib import asynccontextmanager
from dataclasses import dataclass

from mcp.server.fastmcp import Context, FastMCP


# Mock database class for example
class Database:
    """Mock database class for example."""

    @classmethod
    async def connect(cls) -> "Database":
        """Connect to database."""
        return cls()

    async def disconnect(self) -> None:
        """Disconnect from database."""
        pass

    def query(self) -> str:
        """Execute a query."""
        return "Query result"


@dataclass
class AppContext:
    """Application context with typed dependencies."""

    db: Database


@asynccontextmanager
async def app_lifespan(server: FastMCP) -> AsyncIterator[AppContext]:
    """Manage application lifecycle with type-safe context."""
    # Initialize on startup
    db = await Database.connect()
    try:
        yield AppContext(db=db)
    finally:
        # Cleanup on shutdown
        await db.disconnect()


# Pass lifespan to server
mcp = FastMCP("My App", lifespan=app_lifespan)


# Access type-safe lifespan context in tools
@mcp.tool()
def query_db(ctx: Context) -> str:
    """Tool that uses initialized resources."""
    db = ctx.request_context.lifespan_context.db
    return db.query()
```

Full example: `examples/snippets/servers/lifespan_example.py`
Resources are how you expose data to LLMs. They're similar to GET endpoints in a REST API - they provide data but shouldn't perform significant computation or have side effects:
frommcp.server.fastmcpimportFastMCPmcp=FastMCP(name="Resource Example")@mcp.resource("file://documents/{name}")defread_document(name:str)->str:"""Read a document by name."""# This would normally read from diskreturnf"Content of{name}"@mcp.resource("config://settings")defget_settings()->str:"""Get application settings."""return"""{ "theme": "dark", "language": "en", "debug": false}"""
Full example:examples/snippets/servers/basic_resource.py
Tools let LLMs take actions through your server. Unlike resources, tools are expected to perform computation and have side effects:
frommcp.server.fastmcpimportFastMCPmcp=FastMCP(name="Tool Example")@mcp.tool()defsum(a:int,b:int)->int:"""Add two numbers together."""returna+b@mcp.tool()defget_weather(city:str,unit:str="celsius")->str:"""Get weather for a city."""# This would normally call a weather APIreturnf"Weather in{city}: 22degrees{unit[0].upper()}"
Full example:examples/snippets/servers/basic_tool.py
Tools return structured results by default if their return type annotation is compatible; otherwise, they return unstructured results.
Structured output supports these return types:

- Pydantic models (`BaseModel` subclasses)
- TypedDicts
- Dataclasses and other classes with type hints
- `dict[str, T]` (where T is any JSON-serializable type)
- Primitive types (`str`, `int`, `float`, `bool`, `bytes`, `None`) - wrapped in `{"result": value}`
- Generic types (`list`, `tuple`, `Union`, `Optional`, etc.) - wrapped in `{"result": value}`
Classes without type hints cannot be serialized for structured output. Only classes with properly annotated attributes will be converted to Pydantic models for schema generation and validation.

Structured results are automatically validated against the output schema generated from the annotation. This ensures the tool returns well-typed, validated data that clients can easily process.
Note: Unstructured results are also returned for backward compatibility with previous versions of the MCP specification, and they are quirks-compatible with previous versions of FastMCP in the current version of the SDK.
Note: In cases where a tool function's return type annotation causes the tool to be classified as structured and this is undesirable, the classification can be suppressed by passing `structured_output=False` to the `@tool` decorator.

```python
"""Example showing structured output with tools."""

from typing import TypedDict

from pydantic import BaseModel, Field

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Structured Output Example")


# Using Pydantic models for rich structured data
class WeatherData(BaseModel):
    """Weather information structure."""

    temperature: float = Field(description="Temperature in Celsius")
    humidity: float = Field(description="Humidity percentage")
    condition: str
    wind_speed: float


@mcp.tool()
def get_weather(city: str) -> WeatherData:
    """Get weather for a city - returns structured data."""
    # Simulated weather data
    return WeatherData(
        temperature=72.5,
        humidity=45.0,
        condition="sunny",
        wind_speed=5.2,
    )


# Using TypedDict for simpler structures
class LocationInfo(TypedDict):
    latitude: float
    longitude: float
    name: str


@mcp.tool()
def get_location(address: str) -> LocationInfo:
    """Get location coordinates"""
    return LocationInfo(latitude=51.5074, longitude=-0.1278, name="London, UK")


# Using dict[str, Any] for flexible schemas
@mcp.tool()
def get_statistics(data_type: str) -> dict[str, float]:
    """Get various statistics"""
    return {"mean": 42.5, "median": 40.0, "std_dev": 5.2}


# Ordinary classes with type hints work for structured output
class UserProfile:
    name: str
    age: int
    email: str | None = None

    def __init__(self, name: str, age: int, email: str | None = None):
        self.name = name
        self.age = age
        self.email = email


@mcp.tool()
def get_user(user_id: str) -> UserProfile:
    """Get user profile - returns structured data"""
    return UserProfile(name="Alice", age=30, email="alice@example.com")


# Classes WITHOUT type hints cannot be used for structured output
class UntypedConfig:
    def __init__(self, setting1, setting2):
        self.setting1 = setting1
        self.setting2 = setting2


@mcp.tool()
def get_config() -> UntypedConfig:
    """This returns unstructured output - no schema generated"""
    return UntypedConfig("value1", "value2")


# Lists and other types are wrapped automatically
@mcp.tool()
def list_cities() -> list[str]:
    """Get a list of cities"""
    return ["London", "Paris", "Tokyo"]  # Returns: {"result": ["London", "Paris", "Tokyo"]}


@mcp.tool()
def get_temperature(city: str) -> float:
    """Get temperature as a simple float"""
    return 22.5  # Returns: {"result": 22.5}
```

Full example: `examples/snippets/servers/structured_output.py`
Prompts are reusable templates that help LLMs interact with your server effectively:
```python
from mcp.server.fastmcp import FastMCP
from mcp.server.fastmcp.prompts import base

mcp = FastMCP(name="Prompt Example")


@mcp.prompt(title="Code Review")
def review_code(code: str) -> str:
    return f"Please review this code:\n\n{code}"


@mcp.prompt(title="Debug Assistant")
def debug_error(error: str) -> list[base.Message]:
    return [
        base.UserMessage("I'm seeing this error:"),
        base.UserMessage(error),
        base.AssistantMessage("I'll help debug that. What have you tried so far?"),
    ]
```

Full example: `examples/snippets/servers/basic_prompt.py`
FastMCP provides an `Image` class that automatically handles image data:

```python
"""Example showing image handling with FastMCP."""

from PIL import Image as PILImage

from mcp.server.fastmcp import FastMCP, Image

mcp = FastMCP("Image Example")


@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    return Image(data=img.tobytes(), format="png")
```

Full example: `examples/snippets/servers/images.py`
The Context object gives your tools and resources access to MCP capabilities:
frommcp.server.fastmcpimportContext,FastMCPmcp=FastMCP(name="Progress Example")@mcp.tool()asyncdeflong_running_task(task_name:str,ctx:Context,steps:int=5)->str:"""Execute a task with progress updates."""awaitctx.info(f"Starting:{task_name}")foriinrange(steps):progress= (i+1)/stepsawaitctx.report_progress(progress=progress,total=1.0,message=f"Step{i+1}/{steps}", )awaitctx.debug(f"Completed step{i+1}")returnf"Task '{task_name}' completed"
Full example:examples/snippets/servers/tool_progress.py
MCP supports providing completion suggestions for prompt arguments and resource template parameters. With the context parameter, servers can provide completions based on previously resolved values:
Client usage:

```python
"""cd to the `examples/snippets` directory and run:
    uv run completion-client
"""

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.types import PromptReference, ResourceTemplateReference

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="uv",  # Using uv to run the server
    args=["run", "server", "completion", "stdio"],  # Server with completion support
    env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)


async def run():
    """Run the completion client example."""
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # List available resource templates
            templates = await session.list_resource_templates()
            print("Available resource templates:")
            for template in templates.resourceTemplates:
                print(f"  - {template.uriTemplate}")

            # List available prompts
            prompts = await session.list_prompts()
            print("\nAvailable prompts:")
            for prompt in prompts.prompts:
                print(f"  - {prompt.name}")

            # Complete resource template arguments
            if templates.resourceTemplates:
                template = templates.resourceTemplates[0]
                print(f"\nCompleting arguments for resource template: {template.uriTemplate}")

                # Complete without context
                result = await session.complete(
                    ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate),
                    argument={"name": "owner", "value": "model"},
                )
                print(f"Completions for 'owner' starting with 'model': {result.completion.values}")

                # Complete with context - repo suggestions based on owner
                result = await session.complete(
                    ref=ResourceTemplateReference(type="ref/resource", uri=template.uriTemplate),
                    argument={"name": "repo", "value": ""},
                    context_arguments={"owner": "modelcontextprotocol"},
                )
                print(f"Completions for 'repo' with owner='modelcontextprotocol': {result.completion.values}")

            # Complete prompt arguments
            if prompts.prompts:
                prompt_name = prompts.prompts[0].name
                print(f"\nCompleting arguments for prompt: {prompt_name}")

                result = await session.complete(
                    ref=PromptReference(type="ref/prompt", name=prompt_name),
                    argument={"name": "style", "value": ""},
                )
                print(f"Completions for 'style' argument: {result.completion.values}")


def main():
    """Entry point for the completion client."""
    asyncio.run(run())


if __name__ == "__main__":
    main()
```

Full example: `examples/snippets/clients/completion_client.py`
Elicitation lets a server request additional information from users. This example shows an elicitation during a tool call:
```python
from pydantic import BaseModel, Field

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP(name="Elicitation Example")


class BookingPreferences(BaseModel):
    """Schema for collecting user preferences."""

    checkAlternative: bool = Field(description="Would you like to check another date?")
    alternativeDate: str = Field(
        default="2024-12-26",
        description="Alternative date (YYYY-MM-DD)",
    )


@mcp.tool()
async def book_table(
    date: str,
    time: str,
    party_size: int,
    ctx: Context,
) -> str:
    """Book a table with date availability check."""
    # Check if date is available
    if date == "2024-12-25":
        # Date unavailable - ask user for alternative
        result = await ctx.elicit(
            message=(f"No tables available for {party_size} on {date}. Would you like to try another date?"),
            schema=BookingPreferences,
        )

        if result.action == "accept" and result.data:
            if result.data.checkAlternative:
                return f"[SUCCESS] Booked for {result.data.alternativeDate}"
            return "[CANCELLED] No booking made"
        return "[CANCELLED] Booking cancelled"

    # Date available
    return f"[SUCCESS] Booked for {date} at {time}"
```

Full example: `examples/snippets/servers/elicitation.py`
The `elicit()` method returns an `ElicitationResult` with:

- `action`: "accept", "decline", or "cancel"
- `data`: The validated response (only when accepted)
- `validation_error`: Any validation error message
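The snippet below is a minimal sketch of branching on each possible `action` value; the tool, schema, and messages are hypothetical and only illustrate the fields listed above:

```python
from pydantic import BaseModel, Field

from mcp.server.fastmcp import Context, FastMCP

mcp = FastMCP(name="Elicitation Result Sketch")


class DateChoice(BaseModel):
    """Hypothetical schema for the follow-up question."""

    alternative_date: str = Field(description="Alternative date (YYYY-MM-DD)")


@mcp.tool()
async def reschedule(date: str, ctx: Context) -> str:
    """Illustrative only: handle every ElicitationResult action."""
    result = await ctx.elicit(message=f"{date} is unavailable. Pick another date?", schema=DateChoice)

    if result.action == "accept" and result.data:
        # result.data is a validated DateChoice instance
        return f"Rebooked for {result.data.alternative_date}"
    if result.action == "decline":
        return "Okay, leaving the original request unbooked."
    return "Cancelled."  # result.action == "cancel"
```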
Tools can interact with LLMs through sampling (generating text):
```python
from mcp.server.fastmcp import Context, FastMCP
from mcp.types import SamplingMessage, TextContent

mcp = FastMCP(name="Sampling Example")


@mcp.tool()
async def generate_poem(topic: str, ctx: Context) -> str:
    """Generate a poem using LLM sampling."""
    prompt = f"Write a short poem about {topic}"

    result = await ctx.session.create_message(
        messages=[
            SamplingMessage(
                role="user",
                content=TextContent(type="text", text=prompt),
            )
        ],
        max_tokens=100,
    )

    if result.content.type == "text":
        return result.content.text
    return str(result.content)
```

Full example: `examples/snippets/servers/sampling.py`
Tools can send logs and notifications through the context:
frommcp.server.fastmcpimportContext,FastMCPmcp=FastMCP(name="Notifications Example")@mcp.tool()asyncdefprocess_data(data:str,ctx:Context)->str:"""Process data with logging."""# Different log levelsawaitctx.debug(f"Debug: Processing '{data}'")awaitctx.info("Info: Starting processing")awaitctx.warning("Warning: This is experimental")awaitctx.error("Error: (This is just a demo)")# Notify about resource changesawaitctx.session.send_resource_list_changed()returnf"Processed:{data}"
Full example:examples/snippets/servers/notifications.py
Authentication can be used by servers that want to expose tools accessing protected resources.
`mcp.server.auth` implements OAuth 2.1 resource server functionality, where MCP servers act as Resource Servers (RS) that validate tokens issued by separate Authorization Servers (AS). This follows the MCP authorization specification and implements RFC 9728 (Protected Resource Metadata) for AS discovery.
MCP servers can use authentication by providing an implementation of the `TokenVerifier` protocol:

```python
"""Run from the repository root:
    uv run examples/snippets/servers/oauth_server.py
"""

from pydantic import AnyHttpUrl

from mcp.server.auth.provider import AccessToken, TokenVerifier
from mcp.server.auth.settings import AuthSettings
from mcp.server.fastmcp import FastMCP


class SimpleTokenVerifier(TokenVerifier):
    """Simple token verifier for demonstration."""

    async def verify_token(self, token: str) -> AccessToken | None:
        pass  # This is where you would implement actual token validation


# Create FastMCP instance as a Resource Server
mcp = FastMCP(
    "Weather Service",
    # Token verifier for authentication
    token_verifier=SimpleTokenVerifier(),
    # Auth settings for RFC 9728 Protected Resource Metadata
    auth=AuthSettings(
        issuer_url=AnyHttpUrl("https://auth.example.com"),  # Authorization Server URL
        resource_server_url=AnyHttpUrl("http://localhost:3001"),  # This server's URL
        required_scopes=["user"],
    ),
)


@mcp.tool()
async def get_weather(city: str = "London") -> dict[str, str]:
    """Get weather data for a city"""
    return {
        "city": city,
        "temperature": "22",
        "condition": "Partly cloudy",
        "humidity": "65%",
    }


if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

Full example: `examples/snippets/servers/oauth_server.py`
For a complete example with separate Authorization Server and Resource Server implementations, see `examples/servers/simple-auth/`.
Architecture:
- Authorization Server (AS): Handles OAuth flows, user authentication, and token issuance
- Resource Server (RS): Your MCP server that validates tokens and serves protected resources
- Client: Discovers AS through RFC 9728, obtains tokens, and uses them with the MCP server
See `TokenVerifier` for more details on implementing token validation.
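As an illustration only (not production-grade auth), a verifier could resolve bearer tokens from a static table. The `AccessToken` constructor arguments shown here are assumptions about the provider model, so check them against your installed SDK version:

```python
"""Illustrative sketch only: checks tokens against a hard-coded table."""

from mcp.server.auth.provider import AccessToken, TokenVerifier

# Hypothetical token table; a real verifier would instead call your Authorization
# Server (e.g. token introspection) or validate a JWT signature.
KNOWN_TOKENS: dict[str, AccessToken] = {
    "demo-token": AccessToken(token="demo-token", client_id="demo-client", scopes=["user"]),
}


class StaticTokenVerifier(TokenVerifier):
    """Resolve a bearer token to its metadata, or None if it is not recognized."""

    async def verify_token(self, token: str) -> AccessToken | None:
        return KNOWN_TOKENS.get(token)
```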
The fastest way to test and debug your server is with the MCP Inspector:
```bash
uv run mcp dev server.py

# Add dependencies
uv run mcp dev server.py --with pandas --with numpy

# Mount local code
uv run mcp dev server.py --with-editable .
```
Once your server is ready, install it in Claude Desktop:
```bash
uv run mcp install server.py

# Custom name
uv run mcp install server.py --name "My Analytics Server"

# Environment variables
uv run mcp install server.py -v API_KEY=abc123 -v DB_URL=postgres://...
uv run mcp install server.py -f .env
```
For advanced scenarios like custom deployments:

```python
"""Example showing direct execution of an MCP server.

This is the simplest way to run an MCP server directly.
cd to the `examples/snippets` directory and run:
    uv run direct-execution-server
or
    python servers/direct_execution.py
"""

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")


@mcp.tool()
def hello(name: str = "World") -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"


def main():
    """Entry point for the direct execution server."""
    mcp.run()


if __name__ == "__main__":
    main()
```

Full example: `examples/snippets/servers/direct_execution.py`
Run it with:
```bash
python servers/direct_execution.py
# or
uv run mcp run servers/direct_execution.py
```
Note that `uv run mcp run` and `uv run mcp dev` only support servers built with FastMCP, not the low-level server variant.
Note: Streamable HTTP transport is superseding SSE transport for production deployments.

```python
"""Run from the repository root:
    uv run examples/snippets/servers/streamable_config.py
"""

from mcp.server.fastmcp import FastMCP

# Stateful server (maintains session state)
mcp = FastMCP("StatefulServer")

# Other configuration options:
# Stateless server (no session persistence)
# mcp = FastMCP("StatelessServer", stateless_http=True)

# Stateless server (no session persistence, no sse stream with supported client)
# mcp = FastMCP("StatelessServer", stateless_http=True, json_response=True)


# Add a simple tool to demonstrate the server
@mcp.tool()
def greet(name: str = "World") -> str:
    """Greet someone by name."""
    return f"Hello, {name}!"


# Run server with streamable_http transport
if __name__ == "__main__":
    mcp.run(transport="streamable-http")
```

Full example: `examples/snippets/servers/streamable_config.py`
You can mount multiple FastMCP servers in a Starlette application:

```python
"""Run from the repository root:
    uvicorn examples.snippets.servers.streamable_starlette_mount:app --reload
"""

import contextlib

from starlette.applications import Starlette
from starlette.routing import Mount

from mcp.server.fastmcp import FastMCP

# Create the Echo server
echo_mcp = FastMCP(name="EchoServer", stateless_http=True)


@echo_mcp.tool()
def echo(message: str) -> str:
    """A simple echo tool"""
    return f"Echo: {message}"


# Create the Math server
math_mcp = FastMCP(name="MathServer", stateless_http=True)


@math_mcp.tool()
def add_two(n: int) -> int:
    """Tool to add two to the input"""
    return n + 2


# Create a combined lifespan to manage both session managers
@contextlib.asynccontextmanager
async def lifespan(app: Starlette):
    async with contextlib.AsyncExitStack() as stack:
        await stack.enter_async_context(echo_mcp.session_manager.run())
        await stack.enter_async_context(math_mcp.session_manager.run())
        yield


# Create the Starlette app and mount the MCP servers
app = Starlette(
    routes=[
        Mount("/echo", echo_mcp.streamable_http_app()),
        Mount("/math", math_mcp.streamable_http_app()),
    ],
    lifespan=lifespan,
)

# Note: Clients connect to http://localhost:8000/echo/mcp and http://localhost:8000/math/mcp

# To mount at the root of each path (e.g., /echo instead of /echo/mcp):
# echo_mcp.settings.streamable_http_path = "/"
# math_mcp.settings.streamable_http_path = "/"
```

Full example: `examples/snippets/servers/streamable_starlette_mount.py`
For low-level server implementations with Streamable HTTP, see:

- Stateful server: `examples/servers/simple-streamablehttp/`
- Stateless server: `examples/servers/simple-streamablehttp-stateless/`
The streamable HTTP transport supports:
- Stateful and stateless operation modes
- Resumability with event stores
- JSON or SSE response formats
- Better scalability for multi-node deployments
By default, SSE servers are mounted at `/sse` and Streamable HTTP servers are mounted at `/mcp`. You can customize these paths using the methods described below.
For more information on mounting applications in Starlette, see the Starlette documentation.
You can mount the StreamableHTTP server to an existing ASGI server using the `streamable_http_app` method. This allows you to integrate the StreamableHTTP server with other ASGI applications.

```python
"""Example showing how to mount StreamableHTTP servers in Starlette applications.

Run from the repository root:
    uvicorn examples.snippets.servers.streamable_http_mounting:app --reload
"""

from starlette.applications import Starlette
from starlette.routing import Host, Mount

from mcp.server.fastmcp import FastMCP

# Basic example - mounting at root
mcp = FastMCP("My App")


@mcp.tool()
def hello() -> str:
    """A simple hello tool"""
    return "Hello from MCP!"


# Mount the StreamableHTTP server to the existing ASGI server
app = Starlette(
    routes=[
        Mount("/", app=mcp.streamable_http_app()),
    ]
)

# or dynamically mount as host
app.router.routes.append(Host("mcp.acme.corp", app=mcp.streamable_http_app()))

# Advanced example - multiple servers with path configuration
# Create multiple MCP servers
api_mcp = FastMCP("API Server")
chat_mcp = FastMCP("Chat Server")


@api_mcp.tool()
def api_status() -> str:
    """Get API status"""
    return "API is running"


@chat_mcp.tool()
def send_message(message: str) -> str:
    """Send a chat message"""
    return f"Message sent: {message}"


# Default behavior: endpoints will be at /api/mcp and /chat/mcp
default_app = Starlette(
    routes=[
        Mount("/api", app=api_mcp.streamable_http_app()),
        Mount("/chat", app=chat_mcp.streamable_http_app()),
    ]
)

# To mount at the root of each path (e.g., /api instead of /api/mcp):
# Configure streamable_http_path before mounting
api_mcp.settings.streamable_http_path = "/"
chat_mcp.settings.streamable_http_path = "/"

configured_app = Starlette(
    routes=[
        Mount("/api", app=api_mcp.streamable_http_app()),
        Mount("/chat", app=chat_mcp.streamable_http_app()),
    ]
)

# Or configure during initialization
mcp_at_root = FastMCP("My Server", streamable_http_path="/")
```

Full example: `examples/snippets/servers/streamable_http_mounting.py`
Note: SSE transport is being superseded by Streamable HTTP transport.
You can mount the SSE server to an existing ASGI server using the `sse_app` method. This allows you to integrate the SSE server with other ASGI applications.
```python
from starlette.applications import Starlette
from starlette.routing import Mount, Host

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("My App")

# Mount the SSE server to the existing ASGI server
app = Starlette(
    routes=[
        Mount('/', app=mcp.sse_app()),
    ]
)

# or dynamically mount as host
app.router.routes.append(Host('mcp.acme.corp', app=mcp.sse_app()))
```
When mounting multiple MCP servers under different paths, you can configure the mount path in several ways:
```python
from starlette.applications import Starlette
from starlette.routing import Mount

from mcp.server.fastmcp import FastMCP

# Create multiple MCP servers
github_mcp = FastMCP("GitHub API")
browser_mcp = FastMCP("Browser")
curl_mcp = FastMCP("Curl")
search_mcp = FastMCP("Search")

# Method 1: Configure mount paths via settings (recommended for persistent configuration)
github_mcp.settings.mount_path = "/github"
browser_mcp.settings.mount_path = "/browser"

# Method 2: Pass mount path directly to sse_app (preferred for ad-hoc mounting)
# This approach doesn't modify the server's settings permanently

# Create Starlette app with multiple mounted servers
app = Starlette(
    routes=[
        # Using settings-based configuration
        Mount("/github", app=github_mcp.sse_app()),
        Mount("/browser", app=browser_mcp.sse_app()),
        # Using direct mount path parameter
        Mount("/curl", app=curl_mcp.sse_app("/curl")),
        Mount("/search", app=search_mcp.sse_app("/search")),
    ]
)

# Method 3: For direct execution, you can also pass the mount path to run()
if __name__ == "__main__":
    search_mcp.run(transport="sse", mount_path="/search")
```
For more information on mounting applications in Starlette, see the Starlette documentation.
For more control, you can use the low-level server implementation directly. This gives you full access to the protocol and allows you to customize every aspect of your server, including lifecycle management through the lifespan API:

```python
"""Run from the repository root:
    uv run examples/snippets/servers/lowlevel/lifespan.py
"""

from collections.abc import AsyncIterator
from contextlib import asynccontextmanager

import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions


# Mock database class for example
class Database:
    """Mock database class for example."""

    @classmethod
    async def connect(cls) -> "Database":
        """Connect to database."""
        print("Database connected")
        return cls()

    async def disconnect(self) -> None:
        """Disconnect from database."""
        print("Database disconnected")

    async def query(self, query_str: str) -> list[dict[str, str]]:
        """Execute a query."""
        # Simulate database query
        return [{"id": "1", "name": "Example", "query": query_str}]


@asynccontextmanager
async def server_lifespan(_server: Server) -> AsyncIterator[dict]:
    """Manage server startup and shutdown lifecycle."""
    # Initialize resources on startup
    db = await Database.connect()
    try:
        yield {"db": db}
    finally:
        # Clean up on shutdown
        await db.disconnect()


# Pass lifespan to server
server = Server("example-server", lifespan=server_lifespan)


@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """List available tools."""
    return [
        types.Tool(
            name="query_db",
            description="Query the database",
            inputSchema={
                "type": "object",
                "properties": {"query": {"type": "string", "description": "SQL query to execute"}},
                "required": ["query"],
            },
        )
    ]


@server.call_tool()
async def query_db(name: str, arguments: dict) -> list[types.TextContent]:
    """Handle database query tool call."""
    if name != "query_db":
        raise ValueError(f"Unknown tool: {name}")

    # Access lifespan context
    ctx = server.request_context
    db = ctx.lifespan_context["db"]

    # Execute query
    results = await db.query(arguments["query"])

    return [types.TextContent(type="text", text=f"Query results: {results}")]


async def run():
    """Run the server with lifespan management."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example-server",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    import asyncio

    asyncio.run(run())
```

Full example: `examples/snippets/servers/lowlevel/lifespan.py`
The lifespan API provides:
- A way to initialize resources when the server starts and clean them up when it stops
- Access to initialized resources through the request context in handlers
- Type-safe context passing between lifespan and request handlers

```python
"""Run from the repository root:
    uv run examples/snippets/servers/lowlevel/basic.py
"""

import asyncio

import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions

# Create a server instance
server = Server("example-server")


@server.list_prompts()
async def handle_list_prompts() -> list[types.Prompt]:
    """List available prompts."""
    return [
        types.Prompt(
            name="example-prompt",
            description="An example prompt template",
            arguments=[types.PromptArgument(name="arg1", description="Example argument", required=True)],
        )
    ]


@server.get_prompt()
async def handle_get_prompt(name: str, arguments: dict[str, str] | None) -> types.GetPromptResult:
    """Get a specific prompt by name."""
    if name != "example-prompt":
        raise ValueError(f"Unknown prompt: {name}")

    arg1_value = (arguments or {}).get("arg1", "default")

    return types.GetPromptResult(
        description="Example prompt",
        messages=[
            types.PromptMessage(
                role="user",
                content=types.TextContent(type="text", text=f"Example prompt text with argument: {arg1_value}"),
            )
        ],
    )


async def run():
    """Run the basic low-level server."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    asyncio.run(run())
```

Full example: `examples/snippets/servers/lowlevel/basic.py`
Caution: The `uv run mcp run` and `uv run mcp dev` tools don't support the low-level server.
The low-level server supports structured output for tools, allowing you to return both human-readable content and machine-readable structured data. Tools can define an `outputSchema` to validate their structured output:

```python
"""Run from the repository root:
    uv run examples/snippets/servers/lowlevel/structured_output.py
"""

import asyncio
from typing import Any

import mcp.server.stdio
import mcp.types as types
from mcp.server.lowlevel import NotificationOptions, Server
from mcp.server.models import InitializationOptions

server = Server("example-server")


@server.list_tools()
async def list_tools() -> list[types.Tool]:
    """List available tools with structured output schemas."""
    return [
        types.Tool(
            name="get_weather",
            description="Get current weather for a city",
            inputSchema={
                "type": "object",
                "properties": {"city": {"type": "string", "description": "City name"}},
                "required": ["city"],
            },
            outputSchema={
                "type": "object",
                "properties": {
                    "temperature": {"type": "number", "description": "Temperature in Celsius"},
                    "condition": {"type": "string", "description": "Weather condition"},
                    "humidity": {"type": "number", "description": "Humidity percentage"},
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["temperature", "condition", "humidity", "city"],
            },
        )
    ]


@server.call_tool()
async def call_tool(name: str, arguments: dict[str, Any]) -> dict[str, Any]:
    """Handle tool calls with structured output."""
    if name == "get_weather":
        city = arguments["city"]

        # Simulated weather data - in production, call a weather API
        weather_data = {
            "temperature": 22.5,
            "condition": "partly cloudy",
            "humidity": 65,
            "city": city,  # Include the requested city
        }

        # low-level server will validate structured output against the tool's
        # output schema, and additionally serialize it into a TextContent block
        # for backwards compatibility with pre-2025-06-18 clients.
        return weather_data
    else:
        raise ValueError(f"Unknown tool: {name}")


async def run():
    """Run the structured output server."""
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="structured-output-example",
                server_version="0.1.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    asyncio.run(run())
```

Full example: `examples/snippets/servers/lowlevel/structured_output.py`
Tools can return data in three ways:
- Content only: Return a list of content blocks (default behavior before spec revision 2025-06-18)
- Structured data only: Return a dictionary that will be serialized to JSON (Introduced in spec revision 2025-06-18)
- Both: Return a tuple of (content, structured_data); this is the preferred option for backwards compatibility (see the sketch after this list)
When an `outputSchema` is defined, the server automatically validates the structured output against the schema. This ensures type safety and helps catch errors early.
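As an illustration of the tuple form, here is a minimal sketch of a handler that returns both unstructured content and structured data. It assumes the handler may return a `(content, structured_data)` tuple as described above; the `get_weather` tool and its fields are hypothetical:

```python
import mcp.types as types
from mcp.server.lowlevel import Server

server = Server("example-server")


@server.call_tool()
async def call_tool(name: str, arguments: dict) -> tuple[list[types.TextContent], dict]:
    """Sketch: return human-readable content plus machine-readable structured data."""
    if name != "get_weather":
        raise ValueError(f"Unknown tool: {name}")

    weather = {"temperature": 22.5, "condition": "partly cloudy"}
    summary = types.TextContent(type="text", text=f"It is {weather['condition']} at {weather['temperature']} C")
    # (content, structured_data): older clients read the text block, newer ones the dict
    return [summary], weather
```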
The SDK provides a high-level client interface for connecting to MCP servers using various transports:

```python
"""cd to the `examples/snippets/clients` directory and run:
    uv run client
"""

import asyncio
import os

from pydantic import AnyUrl

from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client
from mcp.shared.context import RequestContext

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="uv",  # Using uv to run the server
    args=["run", "server", "fastmcp_quickstart", "stdio"],  # We're already in snippets dir
    env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)


# Optional: create a sampling callback
async def handle_sampling_message(
    context: RequestContext, params: types.CreateMessageRequestParams
) -> types.CreateMessageResult:
    print(f"Sampling request: {params.messages}")
    return types.CreateMessageResult(
        role="assistant",
        content=types.TextContent(
            type="text",
            text="Hello, world! from model",
        ),
        model="gpt-3.5-turbo",
        stopReason="endTurn",
    )


async def run():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write, sampling_callback=handle_sampling_message) as session:
            # Initialize the connection
            await session.initialize()

            # List available prompts
            prompts = await session.list_prompts()
            print(f"Available prompts: {[p.name for p in prompts.prompts]}")

            # Get a prompt (greet_user prompt from fastmcp_quickstart)
            if prompts.prompts:
                prompt = await session.get_prompt("greet_user", arguments={"name": "Alice", "style": "friendly"})
                print(f"Prompt result: {prompt.messages[0].content}")

            # List available resources
            resources = await session.list_resources()
            print(f"Available resources: {[r.uri for r in resources.resources]}")

            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[t.name for t in tools.tools]}")

            # Read a resource (greeting resource from fastmcp_quickstart)
            resource_content = await session.read_resource(AnyUrl("greeting://World"))
            content_block = resource_content.contents[0]
            if isinstance(content_block, types.TextContent):
                print(f"Resource content: {content_block.text}")

            # Call a tool (add tool from fastmcp_quickstart)
            result = await session.call_tool("add", arguments={"a": 5, "b": 3})
            result_unstructured = result.content[0]
            if isinstance(result_unstructured, types.TextContent):
                print(f"Tool result: {result_unstructured.text}")
            result_structured = result.structuredContent
            print(f"Structured tool result: {result_structured}")


def main():
    """Entry point for the client script."""
    asyncio.run(run())


if __name__ == "__main__":
    main()
```

Full example: `examples/snippets/clients/stdio_client.py`
Clients can also connect using Streamable HTTP transport:

```python
"""Run from the repository root:
    uv run examples/snippets/clients/streamable_basic.py
"""

import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client


async def main():
    # Connect to a streamable HTTP server
    async with streamablehttp_client("http://localhost:8000/mcp") as (
        read_stream,
        write_stream,
        _,
    ):
        # Create a session using the client streams
        async with ClientSession(read_stream, write_stream) as session:
            # Initialize the connection
            await session.initialize()
            # List available tools
            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")


if __name__ == "__main__":
    asyncio.run(main())
```

Full example: `examples/snippets/clients/streamable_basic.py`
When building MCP clients, the SDK provides utilities to help display human-readable names for tools, resources, and prompts:

```python
"""cd to the `examples/snippets` directory and run:
    uv run display-utilities-client
"""

import asyncio
import os

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.shared.metadata_utils import get_display_name

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="uv",  # Using uv to run the server
    args=["run", "server", "fastmcp_quickstart", "stdio"],
    env={"UV_INDEX": os.environ.get("UV_INDEX", "")},
)


async def display_tools(session: ClientSession):
    """Display available tools with human-readable names"""
    tools_response = await session.list_tools()

    for tool in tools_response.tools:
        # get_display_name() returns the title if available, otherwise the name
        display_name = get_display_name(tool)
        print(f"Tool: {display_name}")
        if tool.description:
            print(f"   {tool.description}")


async def display_resources(session: ClientSession):
    """Display available resources with human-readable names"""
    resources_response = await session.list_resources()

    for resource in resources_response.resources:
        display_name = get_display_name(resource)
        print(f"Resource: {display_name} ({resource.uri})")

    templates_response = await session.list_resource_templates()
    for template in templates_response.resourceTemplates:
        display_name = get_display_name(template)
        print(f"Resource Template: {display_name}")


async def run():
    """Run the display utilities example."""
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            print("=== Available Tools ===")
            await display_tools(session)

            print("\n=== Available Resources ===")
            await display_resources(session)


def main():
    """Entry point for the display utilities client."""
    asyncio.run(run())


if __name__ == "__main__":
    main()
```

Full example: `examples/snippets/clients/display_utilities.py`
The `get_display_name()` function implements the proper precedence rules for displaying names:

- For tools: `title` > `annotations.title` > `name`
- For other objects: `title` > `name`
This ensures your client UI shows the most user-friendly names that servers provide.
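For reference, a rough manual equivalent of `get_display_name()` for a tool, following the precedence above (a sketch only; in practice just call the helper):

```python
from mcp import types


def tool_display_name(tool: types.Tool) -> str:
    """Sketch: approximate the precedence rules above (title > annotations.title > name)."""
    if tool.title:
        return tool.title
    if tool.annotations and tool.annotations.title:
        return tool.annotations.title
    return tool.name
```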
The SDK includes authorization support for connecting to protected MCP servers:

```python
"""Before running, specify running MCP RS server URL.
To spin up RS server locally, see
    examples/servers/simple-auth/README.md

cd to the `examples/snippets` directory and run:
    uv run oauth-client
"""

import asyncio
from urllib.parse import parse_qs, urlparse

from pydantic import AnyUrl

from mcp import ClientSession
from mcp.client.auth import OAuthClientProvider, TokenStorage
from mcp.client.streamable_http import streamablehttp_client
from mcp.shared.auth import OAuthClientInformationFull, OAuthClientMetadata, OAuthToken


class InMemoryTokenStorage(TokenStorage):
    """Demo In-memory token storage implementation."""

    def __init__(self):
        self.tokens: OAuthToken | None = None
        self.client_info: OAuthClientInformationFull | None = None

    async def get_tokens(self) -> OAuthToken | None:
        """Get stored tokens."""
        return self.tokens

    async def set_tokens(self, tokens: OAuthToken) -> None:
        """Store tokens."""
        self.tokens = tokens

    async def get_client_info(self) -> OAuthClientInformationFull | None:
        """Get stored client information."""
        return self.client_info

    async def set_client_info(self, client_info: OAuthClientInformationFull) -> None:
        """Store client information."""
        self.client_info = client_info


async def handle_redirect(auth_url: str) -> None:
    print(f"Visit: {auth_url}")


async def handle_callback() -> tuple[str, str | None]:
    callback_url = input("Paste callback URL: ")
    params = parse_qs(urlparse(callback_url).query)
    return params["code"][0], params.get("state", [None])[0]


async def main():
    """Run the OAuth client example."""
    oauth_auth = OAuthClientProvider(
        server_url="http://localhost:8001",
        client_metadata=OAuthClientMetadata(
            client_name="Example MCP Client",
            redirect_uris=[AnyUrl("http://localhost:3000/callback")],
            grant_types=["authorization_code", "refresh_token"],
            response_types=["code"],
            scope="user",
        ),
        storage=InMemoryTokenStorage(),
        redirect_handler=handle_redirect,
        callback_handler=handle_callback,
    )

    async with streamablehttp_client("http://localhost:8001/mcp", auth=oauth_auth) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()

            tools = await session.list_tools()
            print(f"Available tools: {[tool.name for tool in tools.tools]}")

            resources = await session.list_resources()
            print(f"Available resources: {[r.uri for r in resources.resources]}")


def run():
    asyncio.run(main())


if __name__ == "__main__":
    run()
```

Full example: `examples/snippets/clients/oauth_client.py`
For a complete working example, see `examples/clients/simple-auth-client/`.
When calling tools through MCP, the `CallToolResult` object contains the tool's response in a structured format. Understanding how to parse this result is essential for properly handling tool outputs.

```python
"""examples/snippets/clients/parsing_tool_results.py"""

import asyncio

from mcp import ClientSession, StdioServerParameters, types
from mcp.client.stdio import stdio_client


async def parse_tool_results():
    """Demonstrates how to parse different types of content in CallToolResult."""
    server_params = StdioServerParameters(command="python", args=["path/to/mcp_server.py"])

    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Example 1: Parsing text content
            result = await session.call_tool("get_data", {"format": "text"})
            for content in result.content:
                if isinstance(content, types.TextContent):
                    print(f"Text: {content.text}")

            # Example 2: Parsing structured content from JSON tools
            result = await session.call_tool("get_user", {"id": "123"})
            if hasattr(result, "structuredContent") and result.structuredContent:
                # Access structured data directly
                user_data = result.structuredContent
                print(f"User: {user_data.get('name')}, Age: {user_data.get('age')}")

            # Example 3: Parsing embedded resources
            result = await session.call_tool("read_config", {})
            for content in result.content:
                if isinstance(content, types.EmbeddedResource):
                    resource = content.resource
                    if isinstance(resource, types.TextResourceContents):
                        print(f"Config from {resource.uri}: {resource.text}")
                    elif isinstance(resource, types.BlobResourceContents):
                        print(f"Binary data from {resource.uri}")

            # Example 4: Parsing image content
            result = await session.call_tool("generate_chart", {"data": [1, 2, 3]})
            for content in result.content:
                if isinstance(content, types.ImageContent):
                    print(f"Image ({content.mimeType}): {len(content.data)} bytes")

            # Example 5: Handling errors
            result = await session.call_tool("failing_tool", {})
            if result.isError:
                print("Tool execution failed!")
                for content in result.content:
                    if isinstance(content, types.TextContent):
                        print(f"Error: {content.text}")


async def main():
    await parse_tool_results()


if __name__ == "__main__":
    asyncio.run(main())
```
The MCP protocol defines three core primitives that servers can implement:
| Primitive | Control | Description | Example Use |
|---|---|---|---|
| Prompts | User-controlled | Interactive templates invoked by user choice | Slash commands, menu options |
| Resources | Application-controlled | Contextual data managed by the client application | File contents, API responses |
| Tools | Model-controlled | Functions exposed to the LLM to take actions | API calls, data updates |
MCP servers declare capabilities during initialization:
| Capability | Feature Flags | Description |
|---|---|---|
| `prompts` | `listChanged` | Prompt template management |
| `resources` | `subscribe`, `listChanged` | Resource exposure and updates |
| `tools` | `listChanged` | Tool discovery and execution |
| `logging` | - | Server logging configuration |
| `completions` | - | Argument completion suggestions |
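As a sketch of how a low-level server turns these flags on when building its advertised capabilities (the `NotificationOptions` keyword names below are assumptions; verify them against your installed SDK version):

```python
from mcp.server.lowlevel import NotificationOptions, Server

server = Server("capability-demo")

# Assumed NotificationOptions signature; enables the listChanged flags from the table above.
options = NotificationOptions(prompts_changed=True, resources_changed=True, tools_changed=True)

# get_capabilities() builds the capabilities object sent during initialization.
capabilities = server.get_capabilities(notification_options=options, experimental_capabilities={})
print(capabilities)
```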
- Model Context Protocol documentation
- Model Context Protocol specification
- Officially supported servers
We are passionate about supporting contributors of all levels of experience and would love to see you get involved in the project. See the contributing guide to get started.
This project is licensed under the MIT License - see the LICENSE file for details.