
Migrating from MultiPromptChain

The MultiPromptChain routed an input query to one of multiple LLMChains -- that is, given an input query, it used an LLM to select from a list of prompts, formatted the query into the prompt, and generated a response.

MultiPromptChain does not support common chat model features, such as message roles and tool calling.

A LangGraph implementation confers a number of advantages for this problem:

  • Supports chat prompt templates, including messages with system and other roles;
  • Supports the use of tool calling for the routing step;
  • Supports streaming of both individual steps and output tokens.

Now let's look at them side-by-side. Note that for this guide we will use langchain-openai >= 0.1.20:

%pip install -qU langchain-core langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()

Legacy

from langchain.chains.router.multi_prompt import MultiPromptChain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini")

prompt_1_template = """
You are an expert on animals. Please answer the below query:

{input}
"""

prompt_2_template = """
You are an expert on vegetables. Please answer the below query:

{input}
"""

prompt_infos = [
    {
        "name": "animals",
        "description": "prompt for an animal expert",
        "prompt_template": prompt_1_template,
    },
    {
        "name": "vegetables",
        "description": "prompt for a vegetable expert",
        "prompt_template": prompt_2_template,
    },
]

chain = MultiPromptChain.from_prompts(llm, prompt_infos)
chain.invoke({"input": "What color are carrots?"})
{'input': 'What color are carrots?',
'text': 'Carrots are most commonly orange, but they can also be found in a variety of other colors including purple, yellow, white, and red. The orange variety is the most popular and widely recognized.'}

In the LangSmith trace we can see the two steps of this process, including the prompts for routing the query and the final selected prompt.

LangGraph

%pip install -qU langgraph
from typing import Literal

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig
from langchain_openai import ChatOpenAI
from langgraph.graph import END, START, StateGraph
from typing_extensions import TypedDict

llm = ChatOpenAI(model="gpt-4o-mini")

# Define the prompts we will route to
prompt_1 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on animals."),
        ("human", "{input}"),
    ]
)
prompt_2 = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert on vegetables."),
        ("human", "{input}"),
    ]
)

# Construct the chains we will route to. These format the input query
# into the respective prompt, run it through a chat model, and cast
# the result to a string.
chain_1 = prompt_1 | llm | StrOutputParser()
chain_2 = prompt_2 | llm | StrOutputParser()


# Next: define the chain that selects which branch to route to.
# Here we will take advantage of tool-calling features to force
# the output to select one of two desired branches.
route_system = "Route the user's query to either the animal or vegetable expert."
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{input}"),
    ]
)


# Define schema for output:
class RouteQuery(TypedDict):
    """Route query to destination expert."""

    destination: Literal["animal", "vegetable"]


route_chain = route_prompt | llm.with_structured_output(RouteQuery)
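# Note: with_structured_output binds the RouteQuery schema to the chat model
# as a tool and parses the resulting tool call, so route_chain returns a
# dict such as {"destination": "vegetable"}.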


# For LangGraph, we will define the state of the graph to hold the query,
# destination, and final answer.
class State(TypedDict):
    query: str
    destination: RouteQuery
    answer: str


# We define functions for each node, including routing the query:
async def route_query(state: State, config: RunnableConfig):
    destination = await route_chain.ainvoke(state["query"], config)
    return {"destination": destination}


# And one node for each prompt
async def prompt_1(state: State, config: RunnableConfig):
    return {"answer": await chain_1.ainvoke(state["query"], config)}


async def prompt_2(state: State, config: RunnableConfig):
    return {"answer": await chain_2.ainvoke(state["query"], config)}


# We then define logic that selects the prompt based on the classification.
# Note that route_chain returns a RouteQuery dict, so we read its
# "destination" key here.
def select_node(state: State) -> Literal["prompt_1", "prompt_2"]:
    if state["destination"]["destination"] == "animal":
        return "prompt_1"
    else:
        return "prompt_2"


# Finally, assemble the multi-prompt chain. This is a sequence of two steps:
# 1) Select "animal" or "vegetable" via the route_chain, and record the
#    destination alongside the input query.
# 2) Route the input query to chain_1 or chain_2, based on the
#    selection.
graph = StateGraph(State)
graph.add_node("route_query", route_query)
graph.add_node("prompt_1", prompt_1)
graph.add_node("prompt_2", prompt_2)

graph.add_edge(START, "route_query")
graph.add_conditional_edges("route_query", select_node)
graph.add_edge("prompt_1", END)
graph.add_edge("prompt_2", END)
app = graph.compile()
from IPython.display import Image

Image(app.get_graph().draw_mermaid_png())
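
Before invoking the full graph, it can help to sanity-check the routing step in isolation. This is a minimal sketch; the destination value depends on the model's classification, but the output shape is fixed by the RouteQuery schema:

# Exercise the router on its own; it returns a RouteQuery dict,
# e.g. {'destination': 'vegetable'}
route_chain.invoke({"input": "What color are carrots?"})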

We can invoke the chain as follows:

state = await app.ainvoke({"query": "what color are carrots"})
print(state["destination"])
print(state["answer"])
{'destination': 'vegetable'}
Carrots are most commonly orange, but they can also come in a variety of other colors, including purple, red, yellow, and white. The different colors often indicate varying flavors and nutritional profiles. For example, purple carrots contain anthocyanins, while orange carrots are rich in beta-carotene, which is converted to vitamin A in the body.

In the LangSmith trace we can see the tool call that routed the query and the prompt that was selected to generate the answer.
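
As noted in the advantages above, the compiled graph also supports streaming. A minimal sketch: stream_mode="updates" yields each node's state update as it completes, while other stream modes can surface individual output tokens.

async for chunk in app.astream({"query": "what color are carrots"}, stream_mode="updates"):
    print(chunk)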

Overview:

  • Under the hood, MultiPromptChain routed the query by instructing the LLM to generate JSON-formatted text, and parsed out the intended destination. It took a registry of string prompt templates as input.
  • The LangGraph implementation, built above via lower-level primitives, uses tool calling to route to arbitrary chains. In this example, the chains include chat prompt templates and chat models.

Next steps

See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.

Check out the LangGraph documentation for detail on building with LangGraph.

