Building MCP Servers with Google Gemini

Model Context Protocol (MCP) is an open standard developed by Anthropic that enables AI models to seamlessly access external tools and resources. It creates a standardized way for AI models to interact with tools, access the internet, run code, and more, without needing custom integrations for each tool or model.

In this tutorial, we'll build a complete MCP server that integrates with Brave Search, and then connect it to Google's Gemini 2.0 model to demonstrate how MCP creates a flexible architecture for AI-powered applications.

Resources: Find the complete code for this tutorial in the MCP Gemini Demo Repository and explore the official MCP documentation.

What is Model Context Protocol?

MCP provides a standardized interface between AI models and external tools. Benefits include:

  1. Interoperability: Any MCP-compatible model can use any MCP-compatible tool
  2. Modularity: Add or update tools without changing model integrations
  3. Standardization: Consistent interface reduces integration complexity
  4. Separation of Concerns: Clean division between model capabilities and tool functionality

MCP works through a client-server architecture: an MCP client (typically embedded in an AI application) connects to one or more MCP servers, each of which exposes a set of tools the model can discover and call.

The key insight: Any function that can be coded can be exposed as an MCP tool. This opens up endless possibilities, from API integrations and database access to custom calculations and even control of physical devices.

Getting Started with MCP

To start building with MCP, you'll need the MCP SDK and Bun (for fast TypeScript execution):

mkdir mcp-gemini
cd mcp-gemini
bun init -y
bun add @modelcontextprotocol/sdk@^1.7.0 @google/generative-ai

Note: Check out the MCP SDK Repository for the latest version and features.

Building an MCP Server

Our MCP server will expose two tools:

  1. Web Search: For general internet searches via Brave Search
  2. Local Search: For finding businesses and locations via Brave Search

But remember, you could expose virtually any function: PDF processing, database queries, email sending, image generation, and more.

Defining Tools: The Building Blocks

The core of MCP is defining tools. Each tool represents a callable function with a defined input schema:

// Web Search Tool Definition (simplified)
export const WEB_SEARCH_TOOL: Tool = {
  name: "brave_web_search",
  description: "Performs a web search using the Brave Search API...",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Search query",
      },
      count: {
        type: "number",
        description: "Number of results (1-20, default 10)",
        default: 10,
      },
      // Other parameters...
    },
    required: ["query"],
  },
};

The key components of a tool definition are:

  1. Name: A unique identifier for the tool
  2. Description: Helps models understand when and how to use the tool
  3. Input Schema: JSON Schema defining parameters the tool accepts

This declarative approach means models can discover what tools are available and how to use them correctly.
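
The local search tool follows the same pattern. Here is a minimal sketch of what its definition might look like (the description text and parameter defaults are illustrative assumptions; see the repository for the real version):

// Local Search Tool Definition (sketch; field values are assumptions)
export const LOCAL_SEARCH_TOOL: Tool = {
  name: "brave_local_search",
  description: "Searches for local businesses and places via the Brave Search API...",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Local search query (e.g. 'pizza near Central Park')",
      },
      count: {
        type: "number",
        description: "Number of results (1-20, default 5)",
        default: 5,
      },
    },
    required: ["query"],
  },
};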

Tool Implementation: Any Function Can Be a Tool

Once you've defined a tool, you need to implement the actual functionality. This is simply a function that receives the tool's arguments and returns a result:

// Web search handler (simplified)
export async function webSearchHandler(args: unknown) {
  // Validate arguments
  if (!isValidArgs(args)) {
    throw new Error("Invalid arguments");
  }
  const { query, count } = args;

  // Call your API, function, or any code you want
  const results = await performWebSearch(query, count);

  // Return formatted results in MCP response format
  return {
    content: [{ type: "text", text: results }],
    isError: false,
  };
}

The power of MCP lies in this flexibility. Your tool implementation can:

  1. Call external APIs (as we do with Brave Search)
  2. Query databases or file systems
  3. Run local computations or scripts
  4. Control other systems or devices

As long as you can code it in TypeScript/JavaScript, you can expose it as an MCP tool.
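
For instance, here is a sketch of what the performWebSearch helper used above might look like. The endpoint and X-Subscription-Token header follow Brave's documented web search API; the response formatting is a simplifying assumption:

// Sketch of a search helper against the Brave Search REST API
async function performWebSearch(query: string, count: number = 10): Promise<string> {
  const url = new URL("https://api.search.brave.com/res/v1/web/search");
  url.searchParams.set("q", query);
  url.searchParams.set("count", String(count));

  const response = await fetch(url, {
    headers: {
      Accept: "application/json",
      "X-Subscription-Token": process.env.BRAVE_API_KEY!,
    },
  });
  if (!response.ok) {
    throw new Error(`Brave API error: ${response.status}`);
  }

  const data = await response.json();
  // Flatten results into a plain-text summary for the model
  return (data.web?.results ?? [])
    .map((r: any) => `${r.title}\n${r.url}\n${r.description}`)
    .join("\n\n");
}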

The Server: Wiring Everything Together

The MCP server connects your tool definitions and implementations together:

// Create MCP server (simplified)
export function createServer() {
  const server = new Server(
    {
      name: SERVER_CONFIG.name,
      version: SERVER_CONFIG.version,
    },
    {
      capabilities: {
        tools: {
          tools: [WEB_SEARCH_TOOL, LOCAL_SEARCH_TOOL],
        },
      },
    }
  );

  // Register handlers
  server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [WEB_SEARCH_TOOL, LOCAL_SEARCH_TOOL],
  }));

  server.setRequestHandler(CallToolRequestSchema, async (request) => {
    const { name, arguments: args } = request.params;

    // Route to the appropriate handler
    switch (name) {
      case "brave_web_search":
        return await webSearchHandler(args);
      case "brave_local_search":
        return await localSearchHandler(args);
      default:
        // Handle unknown tools
        throw new Error(`Unknown tool: ${name}`);
    }
  });

  return server;
}

The server provides two key functions:

  1. Tool Discovery: Listing available tools with their schemas
  2. Tool Execution: Routing tool calls to the appropriate implementations

This separation of concerns makes your server maintainable and extensible. Adding new tools is as simple as defining them and adding a new case to the handler.
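
One piece not shown above is the entry point that actually starts the server. Since the client in the next section launches it with bun index.ts, a minimal index.ts might look like this (a sketch: the ./server import path is an assumption, and the transport import follows the SDK's package layout):

// index.ts: start the MCP server over stdio (sketch)
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { createServer } from "./server"; // assumed local module from the previous section

const server = createServer();

// Serve MCP requests over stdin/stdout so a client can spawn us as a subprocess
const transport = new StdioServerTransport();
await server.connect(transport);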

Using the MCP Server with a Basic Client

To demonstrate our server, let's create a basic client. This shows how tools are called from an external system:

// Create MCP client and connect (simplified)
const transport = new StdioClientTransport({
  command: "bun",
  args: ["index.ts"],
});

const client = new Client(
  { name: "brave-search-demo-client", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Connect to server
await client.connect(transport);

// List available tools
const { tools } = await client.listTools();

// Call a tool
const result = await client.callTool({
  name: "brave_web_search",
  arguments: {
    query: "latest AI research papers",
    count: 3,
  },
});

This basic flow shows the core interactions:

  1. Connection: Establish a connection between client and server
  2. Discovery: Learn what tools are available
  3. Invocation: Call tools with appropriate arguments
  4. Response: Receive and process tool results
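
Since tool results come back in the same content-array shape our handler returned, printing them is straightforward (a small sketch; the cast reflects the simplified typing used in this demo):

// Print the text content returned by the tool call
for (const item of result.content as Array<{ type: string; text?: string }>) {
  if (item.type === "text") {
    console.log(item.text);
  }
}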

Integrating with Google Gemini 2.0: AI-Powered Tools

The real power of MCP comes when we connect it to AI models. Let's integrate our server with Google's Gemini 2.0:

// Configure Gemini with function declarations (simplified)
const model = googleGenAi.getGenerativeModel({
  model: "gemini-2.0-pro-exp-02-05",
  tools: [
    {
      functionDeclarations: [
        {
          name: "brave_web_search",
          description: "Search the web using Brave Search API",
          parameters: {
            // Schema matching our MCP tool
          },
        },
        // Other tools...
      ],
    },
  ],
});

// Process user queries with Gemini and MCP tools
async function processQuery(userQuery: string) {
  // Generate a response with Gemini
  const result = await model.generateContent({
    contents: [{ role: "user", parts: [{ text: userQuery }] }],
  });

  // Check if Gemini wants to call a function
  if (hasFunctionCall(result)) {
    const functionCall = extractFunctionCall(result);

    // Call our MCP tool
    const searchResults = await client.callTool({
      name: functionCall.name,
      arguments: functionCall.args,
    });

    // Send function results back to Gemini for final response
    return await generateFinalResponse(userQuery, functionCall, searchResults);
  }

  return result.response.text();
}

This integration demonstrates the true potential of MCP:

  1. AI Judgment: The model decides when to use tools based on the user's query
  2. Tool Selection: The model chooses the appropriate tool for the task
  3. Parameter Formation: The model structures arguments correctly
  4. Result Integration: The model incorporates tool results into its response

The result is a seamless experience where the AI model acts as an intelligent router to the most appropriate functionality.
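
The hasFunctionCall, extractFunctionCall, and generateFinalResponse helpers above come from the demo code. Here is a rough sketch of how they could be implemented with the @google/generative-ai SDK, where function calls surface via response.functionCalls(); the exact role and functionResponse wiring is an assumption based on Gemini's function calling conventions:

import type { GenerateContentResult } from "@google/generative-ai";

function hasFunctionCall(result: GenerateContentResult): boolean {
  return (result.response.functionCalls() ?? []).length > 0;
}

function extractFunctionCall(result: GenerateContentResult) {
  // Take the first requested call; it carries { name, args }
  return result.response.functionCalls()![0];
}

async function generateFinalResponse(
  userQuery: string,
  functionCall: { name: string; args: object },
  searchResults: { content: unknown }
) {
  // Feed the tool output back as a functionResponse part so Gemini can
  // compose a grounded final answer (uses the `model` configured above)
  const followUp = await model.generateContent({
    contents: [
      { role: "user", parts: [{ text: userQuery }] },
      { role: "model", parts: [{ functionCall }] },
      {
        role: "function",
        parts: [
          {
            functionResponse: {
              name: functionCall.name,
              response: { content: searchResults.content },
            },
          },
        ],
      },
    ],
  });
  return followUp.response.text();
}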

Deep dive: Learn more about Google Gemini's function calling capabilities in the official documentation.

Extensibility: Beyond Search

While our example focused on search tools, remember that MCP can expose any functionality. You could add tools for:

  1. Document processing (PDFs, spreadsheets)
  2. Database queries and updates
  3. Sending email or notifications
  4. Image generation
  5. Controlling physical devices

Each tool follows the same pattern:

  1. Define the tool with a schema
  2. Implement the functionality
  3. Register it with your MCP server

This extensibility makes MCP a powerful architecture for building AI applications that can grow with your needs.
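
As a concrete illustration of the pattern, here is a hypothetical tool that needs no external API at all (the name and behavior are invented for this example):

// 1. Define the tool with a schema
export const CURRENT_TIME_TOOL: Tool = {
  name: "current_time",
  description: "Returns the current server time in ISO 8601 format",
  inputSchema: { type: "object", properties: {}, required: [] },
};

// 2. Implement the functionality
export async function currentTimeHandler(_args: unknown) {
  return {
    content: [{ type: "text", text: new Date().toISOString() }],
    isError: false,
  };
}

// 3. Register it with your MCP server: add CURRENT_TIME_TOOL to the tools
//    array and a "current_time" case to the CallToolRequestSchema handler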

Running the Examples

To run the examples, set up the required environment variables:

export BRAVE_API_KEY="your_brave_api_key"
export GOOGLE_API_KEY="your_google_api_key"

Then run the examples with Bun:

# Basic client
bun examples/basic-client.ts

# Gemini integration
bun examples/gemini-tool-function.ts

Resource: Find ready-to-use example code in the MCP Examples Repository.

Conclusion

In this tutorial, we've explored how to build a complete MCP server and integrate it with Google's Gemini 2.0 model. The key takeaways:

  1. MCP provides a standardized way for AI models to access tools
  2. Any function that can be coded can be exposed as an MCP tool
  3. The architecture separates tool implementation from tool usage
  4. This separation enables flexible, modular AI systems

This approach to AI architecture offers significant advantages: tools can be developed, tested, and updated independently of the models that use them, and any MCP-compatible model can pick them up without new integration work.

As the AI ecosystem continues to evolve, standards like MCP will become increasingly important for building interoperable, extensible systems that combine the best of human-coded functions with the power of large language models.

