
How to summarize text in a single LLM call

LLMs can summarize and otherwise distill desired information from text, including large volumes of text. In many cases, especially for models with larger context windows, this can be adequately achieved via a single LLM call.

LangChain implements a simple pre-built chain that "stuffs" a prompt with the desired context for summarization and other purposes. In this guide we demonstrate how to use the chain.
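Conceptually, "stuffing" just concatenates the documents' text and places it into a single prompt. A minimal sketch of the idea in plain Python (`stuff_prompt` is an illustrative helper, not a LangChain API; the `"\n\n"` separator is an assumption):

```python
def stuff_prompt(page_contents: list[str], template: str) -> str:
    """Join every document's text and substitute it into the prompt template."""
    context = "\n\n".join(page_contents)
    return template.format(context=context)


print(stuff_prompt(
    ["Apples are red", "Bananas are yellow"],
    "Summarize this content: {context}",
))
```

The pre-built chain handles this formatting (plus chat-message structure) for you; the helper above only shows why the approach is limited to inputs that fit in one context window.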

Load chat model

Let's first load a chat model:

pip install -qU "langchain[google-genai]"
import getpass
import os

if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter API key for Google Gemini: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gemini-2.0-flash", model_provider="google_genai")

Load documents

Next, we need some documents to summarize. Below, we generate some toy documents for illustrative purposes. See the document loader how-to guides and integration pages for additional sources of data. The summarization tutorial also includes an example summarizing a blog post.

from langchain_core.documents import Document

documents = [
    Document(page_content="Apples are red", metadata={"title": "apple_book"}),
    Document(page_content="Blueberries are blue", metadata={"title": "blueberry_book"}),
    Document(page_content="Bananas are yellow", metadata={"title": "banana_book"}),
]
API Reference: Document
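Because the single-call approach stuffs everything into one prompt, it only works when the combined documents fit in the model's context window. A rough pre-check can be done with the common heuristic of roughly four characters per token (both the heuristic and the window size below are assumptions for illustration, not exact values for any particular model):

```python
def fits_context(texts: list[str], context_window_tokens: int = 1_000_000) -> bool:
    """Rough check: estimate token count as total characters / 4."""
    total_chars = sum(len(t) for t in texts)
    estimated_tokens = total_chars / 4
    return estimated_tokens <= context_window_tokens


print(fits_context(["Apples are red", "Blueberries are blue", "Bananas are yellow"]))
# For an accurate count, use the provider's tokenizer or the model's
# get_num_tokens method instead of this character heuristic.
```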

Load chain

Below, we define a simple prompt and instantiate the chain with our chat model and documents:

from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Summarize this content: {context}")
chain = create_stuff_documents_chain(llm, prompt)

Invoke chain

Because the chain is a Runnable, it implements the usual methods for invocation:

result = chain.invoke({"context": documents})
result
'The content describes the colors of three fruits: apples are red, blueberries are blue, and bananas are yellow.'

Streaming

Note that the chain also supports streaming of individual output tokens:

for chunk in chain.stream({"context": documents}):
    print(chunk, end="|")
|The| content| describes| the| colors| of| three| fruits|:| apples| are| red|,| blueberries| are| blue|,| and| bananas| are| yellow|.||
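The streamed chunks are plain string fragments, so reassembling the full summary is simple concatenation. A sketch with a stand-in generator in place of `chain.stream` (the generator and its chunks are hypothetical, used only to show the accumulation pattern):

```python
def fake_stream():
    # Stand-in for chain.stream({"context": documents}); yields string chunks.
    yield from ["The content ", "describes ", "three fruits."]


summary = ""
for chunk in fake_stream():
    summary += chunk  # accumulate tokens as they arrive

print(summary)  # -> The content describes three fruits.
```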

Next steps

See the summarization how-to guides for additional summarization strategies, including those designed for larger volumes of text.

See also this tutorial for more detail on summarization.

