
How to use output parsers to parse an LLM response into structured format

Language models output text. But there are times when you want to get more structured information than just text back. While some model providers support built-in ways to return structured output, not all do.

Output parsers are classes that help structure language model responses. There are two main methods an output parser must implement:

  • "Get format instructions": A method which returns a string containing instructions for how the output of a language model should be formatted.
  • "Parse": A method which takes in a string (assumed to be the response from a language model) and parses it into some structure.

And then one optional one:

  • "Parse with prompt": A method which takes in a string (assumed to be the response from a language model) and a prompt (assumed to be the prompt that generated such a response) and parses it into some structure. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so.
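As a rough illustration of the two required methods, here is a hypothetical comma-separated-list parser. It is written as a plain class (not subclassing LangChain's actual base parser) so it can run standalone; the class name and format string are invented for this sketch.

```python
# Hypothetical parser sketch: implements the two required methods
# ("get format instructions" and "parse") without any LangChain
# dependency, purely to show the shape of the contract.
class CommaSeparatedListParser:
    def get_format_instructions(self) -> str:
        # Instructions injected into the prompt so the model knows
        # what shape of output to produce.
        return "Respond with a comma-separated list, e.g. `foo, bar, baz`."

    def parse(self, text: str) -> list[str]:
        # Turn the raw model response into a structured Python list.
        return [item.strip() for item in text.strip().split(",")]
```

A real LangChain parser would subclass the library's base output parser class and be usable inside a chain; the contract, however, is the same pair of methods.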

Get started

Below we go over the main type of output parser, the PydanticOutputParser.

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from pydantic import BaseModel, Field, model_validator

model = OpenAI(model_name="gpt-3.5-turbo-instruct", temperature=0.0)


# Define your desired data structure.
class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

    # You can add custom validation logic easily with Pydantic.
    @model_validator(mode="before")
    @classmethod
    def question_ends_with_question_mark(cls, values: dict) -> dict:
        setup = values.get("setup")
        if setup and setup[-1] != "?":
            raise ValueError("Badly formed question!")
        return values


# Set up a parser + inject instructions into the prompt template.
parser = PydanticOutputParser(pydantic_object=Joke)

prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)

# And a query intended to prompt a language model to populate the data structure.
prompt_and_model = prompt | model
output = prompt_and_model.invoke({"query": "Tell me a joke."})
parser.invoke(output)
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')

LCEL

Output parsers implement the Runnable interface, the basic building block of the LangChain Expression Language (LCEL). This means they support invoke, ainvoke, stream, astream, batch, abatch, and astream_log calls.
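To build intuition for how the `|` composition used above works, here is a toy sketch (not LangChain's real implementation): each step exposes an invoke method, and the pipe operator returns a new step that feeds one step's output into the next. The `Step` class and its contents are invented for illustration.

```python
# Toy illustration of pipe-style chaining: `a | b` produces a new
# step whose invoke() runs a, then passes the result to b.
class Step:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        return Step(lambda value: other.invoke(self.invoke(value)))


upper = Step(str.upper)
exclaim = Step(lambda s: s + "!")
chain = upper | exclaim
```

LangChain's actual Runnable machinery adds batching, streaming, and async variants on top of this basic composition idea.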

Output parsers accept a string or BaseMessage as input and can return an arbitrary type.

parser.invoke(output)
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')

Instead of manually invoking the parser, we also could've just added it to our Runnable sequence:

chain = prompt | model | parser
chain.invoke({"query": "Tell me a joke."})
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')

While all parsers support the streaming interface, only certain parsers can stream through partially parsed objects, since this is highly dependent on the output type. Parsers which cannot construct partial objects will simply yield the fully parsed output.

The SimpleJsonOutputParser, for example, can stream through partial outputs:

from langchain.output_parsers.json import SimpleJsonOutputParser

json_prompt = PromptTemplate.from_template(
    "Return a JSON object with an `answer` key that answers the following question: {question}"
)
json_parser = SimpleJsonOutputParser()
json_chain = json_prompt | model | json_parser
list(json_chain.stream({"question": "Who invented the microscope?"}))
[{},
{'answer': ''},
{'answer': 'Ant'},
{'answer': 'Anton'},
{'answer': 'Antonie'},
{'answer': 'Antonie van'},
{'answer': 'Antonie van Lee'},
{'answer': 'Antonie van Leeu'},
{'answer': 'Antonie van Leeuwen'},
{'answer': 'Antonie van Leeuwenho'},
{'answer': 'Antonie van Leeuwenhoek'}]
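The key trick behind streaming partial objects like the above is parsing incomplete JSON on a best-effort basis. Here is a simplified sketch of that idea (not LangChain's actual implementation): scan the chunk received so far, close any unterminated string and unbalanced braces, and try a normal parse. The function name and repair strategy are assumptions for illustration.

```python
import json


def parse_partial_json(chunk: str):
    """Best-effort parse of a possibly truncated JSON object.

    Closes any unterminated string and unbalanced braces before
    parsing. Returns None if no valid object can be recovered.
    """
    in_string = False
    escaped = False
    depth = 0
    for ch in chunk:
        if in_string:
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        else:
            if ch == '"':
                in_string = True
            elif ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
    repaired = chunk + ('"' if in_string else "") + "}" * max(depth, 0)
    try:
        return json.loads(repaired)
    except json.JSONDecodeError:
        return None
```

Running this on the growing stream `'{"answer": "Ant'` recovers `{"answer": "Ant"}`, which is why each streamed dict above shows a progressively longer value.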

Similarly, for the PydanticOutputParser:

list(chain.stream({"query": "Tell me a joke."}))
[Joke(setup='Why did the tomato turn red?', punchline=''),
Joke(setup='Why did the tomato turn red?', punchline='Because'),
Joke(setup='Why did the tomato turn red?', punchline='Because it'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing'),
Joke(setup='Why did the tomato turn red?', punchline='Because it saw the salad dressing!')]
