
How to add ad-hoc tool calling capability to LLMs and Chat Models

caution

Some models have been fine-tuned for tool calling and provide a dedicated API for tool calling. Generally, such models are better at tool calling than non-fine-tuned models, and are recommended for use cases that require tool calling. Please see the how to use a chat model to call tools guide for more information.
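
For contrast, here is a minimal sketch of the native path that the caution above recommends, assuming a model with built-in tool-calling support (the Gemini model configured later in this guide is used purely as an example):

from langchain.chat_models import init_chat_model
from langchain_core.tools import tool


@tool
def add(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y


# bind_tools attaches the tool schemas so the model's dedicated
# tool-calling API is used instead of the prompting strategy below.
model_with_tools = init_chat_model(
    "gemini-2.0-flash", model_provider="google_genai"
).bind_tools([add])

print(model_with_tools.invoke("what's 3 plus 1132").tool_calls)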

Prerequisites

This guide assumes familiarity with the following concepts:

  - LangChain Tools
  - Function/tool calling
  - Chat models
  - LLMs

In this guide, we'll see how to add ad-hoc tool calling support to a chat model. This is an alternative method to invoke tools if you're using a model that does not natively support tool calling.

We'll do this by simply writing a prompt that will get the model to invoke the appropriate tools. Here's a diagram of the logic:

(Diagram: the tool calling chain.)

Setup

We'll need to install the following packages:

%pip install --upgrade --quiet langchain langchain-community

If you'd like to use LangSmith, uncomment the below:

import getpass
import os
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass()

You can select any of the given models for this how-to guide. Keep in mind that most of these models already support native tool calling, so using the prompting strategy shown here doesn't make sense for these models, and instead you should follow the how to use a chat model to call tools guide.

pip install -qU "langchain[google-genai]"
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain.chat_models import init_chat_model

model = init_chat_model("gemini-2.0-flash", model_provider="google_genai")

To illustrate the idea, we'll use phi3 via Ollama, which does NOT have native support for tool calling. If you'd like to use Ollama as well, follow these instructions.

from langchain_community.llms import Ollama

model = Ollama(model="phi3")
API Reference: Ollama

Create a tool

First, let's create add and multiply tools. For more information on creating custom tools, please see this guide.

from langchain_core.tools import tool


@tool
def multiply(x: float, y: float) -> float:
    """Multiply two numbers together."""
    return x * y


@tool
def add(x: int, y: int) -> int:
    "Add two numbers."
    return x + y


tools = [multiply, add]

# Let's inspect the tools
for t in tools:
    print("--")
    print(t.name)
    print(t.description)
    print(t.args)
API Reference: tool
--
multiply
Multiply two numbers together.
{'x': {'title': 'X', 'type': 'number'}, 'y': {'title': 'Y', 'type': 'number'}}
--
add
Add two numbers.
{'x': {'title': 'X', 'type': 'integer'}, 'y': {'title': 'Y', 'type': 'integer'}}
multiply.invoke({"x": 4, "y": 5})
20.0

Creating our prompt

We'll want to write a prompt that specifies the tools the model has access to, the arguments to those tools, and the desired output format of the model. In this case we'll instruct it to output a JSON blob of the form {"name": "...", "arguments": {...}}.

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import render_text_description

rendered_tools = render_text_description(tools)
print(rendered_tools)
multiply(x: float, y: float) -> float - Multiply two numbers together.
add(x: int, y: int) -> int - Add two numbers.
system_prompt = f"""\
You are an assistant that has access to the following set of tools.
Here are the names and descriptions for each tool:

{rendered_tools}

Given the user input, return the name and input of the tool to use.
Return your response as a JSON blob with 'name' and 'arguments' keys.

The `arguments` should be a dictionary, with keys corresponding
to the argument names and the values corresponding to the requested values.
"""

prompt = ChatPromptTemplate.from_messages(
    [("system", system_prompt), ("user", "{input}")]
)
chain = prompt | model
message = chain.invoke({"input": "what's 3 plus 1132"})

# Let's take a look at the output from the model
# if the model is an LLM (not a chat model), the output will be a string.
if isinstance(message, str):
    print(message)
else:  # Otherwise it's a chat model
    print(message.content)
{
    "name": "add",
    "arguments": {
        "x": 3,
        "y": 1132
    }
}

Adding an output parser

We'll use the JsonOutputParser for parsing our model's output to JSON.

from langchain_core.output_parsers import JsonOutputParser

chain = prompt | model | JsonOutputParser()
chain.invoke({"input": "what's thirteen times 4"})
API Reference: JsonOutputParser
{'name': 'multiply', 'arguments': {'x': 13.0, 'y': 4.0}}
important

🎉 Amazing! 🎉 We've now instructed our model on how to request that a tool be invoked.

Now, let's create some logic to actually run the tool!

Invoking the tool 🏃

Now that the model can request that a tool be invoked, we need to write a function that can actually invoke the tool.

The function will select the appropriate tool by name, and pass to it the arguments chosen by the model.

from typing import Any, Dict, Optional, TypedDict

from langchain_core.runnables import RunnableConfig


class ToolCallRequest(TypedDict):
    """A typed dict that shows the inputs into the invoke_tool function."""

    name: str
    arguments: Dict[str, Any]


def invoke_tool(
    tool_call_request: ToolCallRequest, config: Optional[RunnableConfig] = None
):
    """A function that we can use to perform a tool invocation.

    Args:
        tool_call_request: a dict that contains the keys name and arguments.
            The name must match the name of a tool that exists.
            The arguments are the arguments to that tool.
        config: This is configuration information that LangChain uses that contains
            things like callbacks, metadata, etc. See LCEL documentation about RunnableConfig.

    Returns:
        output from the requested tool
    """
    tool_name_to_tool = {tool.name: tool for tool in tools}
    name = tool_call_request["name"]
    requested_tool = tool_name_to_tool[name]
    return requested_tool.invoke(tool_call_request["arguments"], config=config)
API Reference: RunnableConfig

Let's test this out 🧪!

invoke_tool({"name": "multiply", "arguments": {"x": 3, "y": 5}})
15.0
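
Note that invoke_tool will raise a bare KeyError if the model requests a tool name that doesn't exist. A minimal defensive variant (the invoke_tool_safe name is our own, not part of the guide) might fail with a clearer message:

def invoke_tool_safe(
    tool_call_request: ToolCallRequest, config: Optional[RunnableConfig] = None
):
    """Sketch: like invoke_tool, but with an explicit error for unknown tool names."""
    tool_name_to_tool = {tool.name: tool for tool in tools}
    name = tool_call_request["name"]
    if name not in tool_name_to_tool:
        known = ", ".join(tool_name_to_tool)
        raise ValueError(f"Unknown tool {name!r}. Known tools: {known}")
    return tool_name_to_tool[name].invoke(tool_call_request["arguments"], config=config)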

Let's put it together

Let's put everything together into a chain that creates a calculator with addition and multiplication capabilities.

chain = prompt | model | JsonOutputParser() | invoke_tool
chain.invoke({"input": "what's thirteen times 4.14137281"})
53.83784653

Returning tool inputs

It can be helpful to return not only tool outputs but also tool inputs. We can easily do this with LCEL by RunnablePassthrough.assign-ing the tool output. This will take whatever the input is to the RunnablePassthrough component (assumed to be a dictionary) and add a key to it while still passing through everything that's currently in the input:

from langchain_core.runnables import RunnablePassthrough

chain = (
    prompt | model | JsonOutputParser() | RunnablePassthrough.assign(output=invoke_tool)
)
chain.invoke({"input": "what's thirteen times 4.14137281"})
API Reference: RunnablePassthrough
{'name': 'multiply',
'arguments': {'x': 13, 'y': 4.14137281},
'output': 53.83784653}

What's next?

This how-to guide shows the "happy path" when the model correctly outputs all the required tool information.

In reality, if you're using more complex tools, you will start encountering errors from the model, especially with models that have not been fine-tuned for tool calling and with less capable models.

You will need to be prepared to add strategies to improve the output from the model; e.g.,

  1. Provide few-shot examples.
  2. Add error handling (e.g., catch the exception and feed it back to the LLM to ask it to correct its previous output), as in the sketch below.
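
As an illustration of the second strategy, here is a minimal sketch (not from the original guide; the invoke_with_retry name and the corrective-message format are assumptions) that catches a parsing or tool-lookup failure and feeds the error back to the model for another attempt:

from langchain_core.exceptions import OutputParserException


def invoke_with_retry(user_input: str, max_retries: int = 2):
    """Sketch: re-ask the model, feeding errors back, until the tool call parses."""
    parser = JsonOutputParser()
    current_input = user_input
    for _ in range(max_retries + 1):
        message = (prompt | model).invoke({"input": current_input})
        text = message if isinstance(message, str) else message.content
        try:
            return invoke_tool(parser.parse(text))
        except (OutputParserException, KeyError) as e:
            # Feed the failure back so the model can correct its previous output.
            current_input = (
                f"{user_input}\n\nYour previous reply failed with: {e!r}\n"
                "Return ONLY a JSON blob with 'name' and 'arguments' keys."
            )
    raise ValueError("Model did not produce a usable tool call.")


invoke_with_retry("what's thirteen times 4")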
