
How to disable parallel tool calling

Provider-specific

This API is currently only supported by OpenAI and Anthropic.

OpenAI models perform tool calling in parallel by default. That means that if we ask a question like "What is the weather in Tokyo, New York, and Chicago?" and we have a tool for getting the weather, the model will call the tool three times in parallel. We can force it to call only a single tool at a time by using the parallel_tool_calls parameter.

First let's set up our tools and model:

from langchain_core.tools import tool


@tool
def add(a: int, b: int) -> int:
    """Adds a and b."""
    return a + b


@tool
def multiply(a: int, b: int) -> int:
    """Multiplies a and b."""
    return a * b


tools = [add, multiply]
API Reference: tool
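
If you want to verify the tools themselves first, functions decorated with @tool are runnables, so they can be invoked directly with a dict of arguments:

# Quick check of the tools on their own, before binding them to a model.
add.invoke({"a": 2, "b": 3})       # -> 5
multiply.invoke({"a": 4, "b": 5})  # -> 20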
import os
from getpass import getpass

from langchain.chat_models import init_chat_model

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()

llm = init_chat_model("openai:gpt-4.1-mini")
API Reference: init_chat_model

Now let's show a quick example of how disabling parallel tool calls works:

llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)
llm_with_tools.invoke("Please call the first tool two times").tool_calls
[{'name': 'add',
  'args': {'a': 2, 'b': 2},
  'id': 'call_Hh4JOTCDM85Sm9Pr84VKrWu5'}]
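
For comparison, if we bind the tools without setting parallel_tool_calls, the default (parallel) behavior applies, and the same prompt will typically produce two separate tool calls. A minimal sketch of that default case:

# Parallel tool calling left at its default (enabled).
llm_with_parallel = llm.bind_tools(tools)
llm_with_parallel.invoke("Please call the first tool two times").tool_calls
# Typically returns two 'add' tool calls, e.g.:
# [{'name': 'add', 'args': {...}, 'id': '...'},
#  {'name': 'add', 'args': {...}, 'id': '...'}]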

As we can see, even though we explicitly told the model to call the tool twice, disabling parallel tool calls constrained it to a single call.
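
Since the parameter is also supported for Anthropic models (see the note above), the same pattern should carry over. A minimal sketch, assuming ANTHROPIC_API_KEY is set and that the chosen Claude model name is available:

# Assumes ANTHROPIC_API_KEY is set in the environment and that the chosen
# Claude model name is available; parallel_tool_calls is passed through
# bind_tools in the same way as for OpenAI.
llm_anthropic = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_anthropic_with_tools = llm_anthropic.bind_tools(tools, parallel_tool_calls=False)
llm_anthropic_with_tools.invoke("Please call the first tool two times").tool_calls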
