Migrating from LLMChain
LLMChain combined a prompt template, LLM, and output parser into a class.

Some advantages of switching to the LCEL implementation are:

- Clarity around contents and parameters. The legacy LLMChain contains a default output parser and other options.
- Easier streaming. LLMChain only supports streaming via callbacks (see the streaming sketch after this list).
- Easier access to raw message outputs if desired. LLMChain only exposes these via a parameter or via callback.
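To make the streaming point concrete, here is a minimal sketch contrasting the two approaches. It assumes the same prompt and ChatOpenAI model used throughout this page (plus the environment setup shown in the next cell), and uses StreamingStdOutCallbackHandler purely as an example handler:

from langchain.chains import LLMChain
from langchain_core.callbacks import StreamingStdOutCallbackHandler
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([("user", "Tell me a {adjective} joke")])

# Legacy: tokens only reach you through a callback handler attached to the model.
streaming_llm = ChatOpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
legacy_chain = LLMChain(llm=streaming_llm, prompt=prompt)
legacy_chain({"adjective": "funny"})  # the handler prints tokens as they arrive

# LCEL: every chain exposes .stream(), which yields chunks you can consume directly.
chain = prompt | ChatOpenAI() | StrOutputParser()
for chunk in chain.stream({"adjective": "funny"}):
    print(chunk, end="", flush=True)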
%pip install --upgrade --quiet langchain-openai
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()
Legacy
from langchain.chains import LLMChain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [("user", "Tell me a {adjective} joke")],
)

legacy_chain = LLMChain(llm=ChatOpenAI(), prompt=prompt)

legacy_result = legacy_chain({"adjective": "funny"})
legacy_result
{'adjective': 'funny',
'text': "Why couldn't the bicycle stand up by itself?\n\nBecause it was two tired!"}
Note that LLMChain by default returned a dict containing both the input and the output from StrOutputParser, so to extract the output, you need to access the "text" key.
legacy_result["text"]
"Why couldn't the bicycle stand up by itself?\n\nBecause it was two tired!"
LCEL
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages(
    [("user", "Tell me a {adjective} joke")],
)

chain = prompt | ChatOpenAI() | StrOutputParser()

chain.invoke({"adjective": "funny"})
'Why was the math book sad?\n\nBecause it had too many problems.'
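Because the output parser is just another step in the pipeline, raw message outputs are also easy to get at: leave StrOutputParser off and invoke only the prompt-and-model portion. A minimal sketch, reusing the prompt defined above:

raw_chain = prompt | ChatOpenAI()
msg = raw_chain.invoke({"adjective": "funny"})
# msg is an AIMessage rather than a plain string
msg.content            # the generated text
msg.response_metadata  # e.g. model name and token usage, when the provider returns them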
If you'd like to mimic the dict packaging of input and output in LLMChain, you can use a RunnablePassthrough.assign like:
from langchain_core.runnables import RunnablePassthrough

outer_chain = RunnablePassthrough().assign(text=chain)

outer_chain.invoke({"adjective": "funny"})
API Reference: RunnablePassthrough
{'adjective': 'funny',
'text': 'Why did the scarecrow win an award? Because he was outstanding in his field!'}
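Note that outer_chain is itself a runnable, so the standard LCEL methods still apply to the wrapped version. A quick sketch, assuming the outer_chain defined above:

# Batch several inputs in one call; each result is a dict with the input and a "text" key.
outer_chain.batch([{"adjective": "funny"}, {"adjective": "silly"}])

# In async code, ainvoke/astream are available as well:
# await outer_chain.ainvoke({"adjective": "funny"})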
Next steps
See this tutorial for more detail on building with prompt templates, LLMs, and output parsers.
Check out the LCEL conceptual docs for more background information.