Runnable interface

The Runnable interface is the foundation for working with LangChain components, and it's implemented across many of them, such as language models, output parsers, retrievers, compiled LangGraph graphs and more.

This guide covers the main concepts and methods of the Runnable interface, which allows developers to interact with various LangChain components in a consistent and predictable manner.


Overview of runnable interface

The Runnable interface defines a standard contract that allows a Runnable component to be:

  • Invoked: A single input is transformed into an output.
  • Batched: Multiple inputs are efficiently transformed into outputs.
  • Streamed: Outputs are streamed as they are produced.
  • Inspected: Schematic information about the Runnable's input, output, and configuration can be accessed.
  • Composed: Multiple Runnables can be composed to work together using the LangChain Expression Language (LCEL) to create complex pipelines.

Please review the LCEL Cheatsheet for some common patterns that involve the Runnable interface and LCEL expressions.

Optimized parallel execution (batch)

LangChain Runnables offer a built-in batch (and batch_as_completed) API that allows you to process multiple inputs in parallel.

Using these methods can significantly improve performance when processing multiple independent inputs, as the processing can be done in parallel instead of sequentially.

The two batching options are:

  • batch: Process multiple inputs in parallel, returning results in the same order as the inputs.
  • batch_as_completed: Process multiple inputs in parallel, returning results as they complete. Results may arrive out of order, but each includes the input index for matching.

The default implementations of batch and batch_as_completed use a thread pool executor to run the invoke method in parallel. This allows for efficient parallel execution without the need for users to manage threads, and speeds up code that is I/O-bound (e.g., making API requests, reading files, etc.). It will be less effective for CPU-bound operations, as the GIL (Global Interpreter Lock) in Python prevents true parallel execution.

Some Runnables may provide their own implementations of batch and batch_as_completed that are optimized for their specific use case (e.g., relying on a batch API provided by a model provider).

note

The async versions, abatch and abatch_as_completed, rely on asyncio's gather and as_completed functions to run the ainvoke method in parallel.

tip

When processing a large number of inputs using batch or batch_as_completed, you may want to control the maximum number of parallel calls. This can be done by setting the max_concurrency attribute in the RunnableConfig dictionary. See RunnableConfig for more information.

Chat Models also have a built-in rate limiter that can be used to control the rate at which requests are made.

Asynchronous support

Runnables expose an asynchronous API, allowing them to be called using the await syntax in Python. Asynchronous methods can be identified by the "a" prefix (e.g., ainvoke, abatch, astream, abatch_as_completed).

Please refer to the Async Programming with LangChain guide for more details.

Streaming APIs

Streaming is critical in making applications based on LLMs feel responsive to end-users.

Runnables expose the following three streaming APIs:

  1. sync stream and async astream: yield the output of a Runnable as it is generated.
  2. The async astream_events: a more advanced streaming API that allows streaming intermediate steps and final output.
  3. The legacy async astream_log: a legacy streaming API that streams intermediate steps and final output.

Please refer to the Streaming Conceptual Guide for more details on how to stream in LangChain.

Input and output types

Every Runnable is characterized by an input and output type. These input and output types can be any Python object, and are defined by the Runnable itself.

Runnable methods that result in the execution of the Runnable (e.g.,invoke,batch,stream,astream_events) work with these input and output types.

  • invoke: Accepts an input and returns an output.
  • batch: Accepts a list of inputs and returns a list of outputs.
  • stream: Accepts an input and returns a generator that yields outputs.

The input type and output type vary by component:

| Component | Input Type | Output Type |
| --- | --- | --- |
| Prompt | dictionary | PromptValue |
| ChatModel | a string, list of chat messages or a PromptValue | ChatMessage |
| LLM | a string, list of chat messages or a PromptValue | String |
| OutputParser | the output of an LLM or ChatModel | Depends on the parser |
| Retriever | a string | List of Documents |
| Tool | a string or dictionary, depending on the tool | Depends on the tool |

Please refer to the individual component documentation for more information on the input and output types and how to use them.

Inspecting schemas

note

This is an advanced feature that is unnecessary for most users. You should probably skip this section unless you have a specific need to inspect the schema of a Runnable.

In more advanced use cases, you may want to programmatically inspect the Runnable and determine what input and output types the Runnable expects and produces.

The Runnable interface provides methods to get the JSON Schema of the input and output types of a Runnable, as well as Pydantic schemas for the input and output types.

These APIs are mostly used internally for unit testing and by LangServe, which uses them for input validation and generation of OpenAPI documentation.

In addition to the input and output types, some Runnables have been set up with additional runtime configuration options. There are corresponding APIs to get the Pydantic schema and JSON Schema of the configuration options for the Runnable. Please see the Configurable Runnables section for more information.

| Method | Description |
| --- | --- |
| get_input_schema | Gives the Pydantic schema of the input schema for the Runnable. |
| get_output_schema | Gives the Pydantic schema of the output schema for the Runnable. |
| config_schema | Gives the Pydantic schema of the config schema for the Runnable. |
| get_input_jsonschema | Gives the JSON Schema of the input schema for the Runnable. |
| get_output_jsonschema | Gives the JSON Schema of the output schema for the Runnable. |
| get_config_jsonschema | Gives the JSON Schema of the config schema for the Runnable. |

with_types

LangChain will automatically try to infer the input and output types of a Runnable based on available information.

Currently, this inference does not work well for more complex Runnables that are built using LCEL composition, and the inferred input and/or output types may be incorrect. In these cases, we recommend that users override the inferred input and output types using the with_types method (API Reference).

RunnableConfig

Any of the methods that are used to execute the runnable (e.g., invoke, batch, stream, astream_events) accept a second argument called RunnableConfig (API Reference). This argument is a dictionary that contains configuration for the Runnable that will be used at run time during the execution of the runnable.

A RunnableConfig can have any of the following properties defined:

| Attribute | Description |
| --- | --- |
| run_name | Name used for the given Runnable (not inherited). |
| run_id | Unique identifier for this call. Sub-calls will get their own unique run IDs. |
| tags | Tags for this call and any sub-calls. |
| metadata | Metadata for this call and any sub-calls. |
| callbacks | Callbacks for this call and any sub-calls. |
| max_concurrency | Maximum number of parallel calls to make (e.g., used by batch). |
| recursion_limit | Maximum number of times a call can recurse (e.g., used by Runnables that return Runnables). |
| configurable | Runtime values for configurable attributes of the Runnable. |

Passing config to the invoke method is done like so:

some_runnable.invoke(
    some_input,
    config={
        'run_name': 'my_run',
        'tags': ['tag1', 'tag2'],
        'metadata': {'key': 'value'}
    }
)

Propagation of RunnableConfig

Many Runnables are composed of other Runnables, and it is important that the RunnableConfig is propagated to all sub-calls made by the Runnable. This allows providing runtime configuration values to the parent Runnable that are inherited by all sub-calls.

If this were not the case, it would be impossible to set and propagate callbacks or other configuration values like tags and metadata which are expected to be inherited by all sub-calls.

There are two main patterns by which newRunnables are created:

  1. Declaratively using LangChain Expression Language (LCEL):

     chain = prompt | chat_model | output_parser

  2. Using a custom Runnable (e.g., RunnableLambda) or the @tool decorator:

     def foo(input):
         # Note that .invoke() is used directly here
         return bar_runnable.invoke(input)

     foo_runnable = RunnableLambda(foo)

LangChain will try to propagate RunnableConfig automatically for both of these patterns.

For handling the second pattern, LangChain relies on Python's contextvars.

In Python 3.11 and above, this works out of the box, and you do not need to do anything special to propagate the RunnableConfig to the sub-calls.

In Python 3.9 and 3.10, if you are using async code, you need to manually pass the RunnableConfig through to the Runnable when invoking it.

This is due to a limitation in asyncio's tasks in Python 3.9 and 3.10, which did not accept a context argument.

Propagating the RunnableConfig manually is done like so:

async def foo(input, config):  # <-- Note the config argument
    return await bar_runnable.ainvoke(input, config=config)

foo_runnable = RunnableLambda(foo)

caution

When using Python 3.10 or lower and writing async code, RunnableConfig cannot be propagated automatically, and you will need to do it manually! This is a common pitfall when attempting to stream data using astream_events and astream_log, as these methods rely on proper propagation of callbacks defined inside of RunnableConfig.

Setting custom run name, tags, and metadata

The run_name, tags, and metadata attributes of the RunnableConfig dictionary can be used to set custom values for the run name, tags, and metadata for a given Runnable.

The run_name is a string that can be used to set a custom name for the run. This name will be used in logs and other places to identify the run. It is not inherited by sub-calls.

The tags and metadata attributes are lists and dictionaries, respectively, that can be used to set custom tags and metadata for the run. These values are inherited by sub-calls.

Using these attributes can be useful for tracking and debugging runs, as they will be surfaced in LangSmith as trace attributes that you can filter and search on.

The attributes will also be propagated to callbacks, and will appear in streaming APIs like astream_events as part of each event in the stream.

Setting run id

note

This is an advanced feature that is unnecessary for most users.

You may need to set a custom run_id for a given run, in case you want to reference it later or correlate it with other systems.

The run_id MUST be a valid UUID string and unique for each run. It is used to identify the parent run; sub-calls will get their own unique run IDs automatically.

To set a custom run_id, you can pass it as a key-value pair in the config dictionary when invoking the Runnable:

import uuid

run_id = uuid.uuid4()

some_runnable.invoke(
    some_input,
    config={
        'run_id': run_id
    }
)

# Do something with the run_id

Setting recursion limit

note

This is an advanced feature that is unnecessary for most users.

Some Runnables may return other Runnables, which can lead to infinite recursion if not handled properly. To prevent this, you can set a recursion_limit in the RunnableConfig dictionary. This will limit the number of times a Runnable can recurse.

Setting max concurrency

If using the batch or batch_as_completed methods, you can set the max_concurrency attribute in the RunnableConfig dictionary to control the maximum number of parallel calls to make. This can be useful when you want to limit the number of parallel calls to prevent overloading a server or API.

tip

If you're trying to rate limit the number of requests made by a Chat Model, you can use the built-in rate limiter instead of setting max_concurrency, which will be more effective.

See the How to handle rate limits guide for more information.

Setting configurable

The configurable field is used to pass runtime values for configurable attributes of the Runnable.

It is used frequently in LangGraph with LangGraph Persistence and memory.

It is used for a similar purpose in RunnableWithMessageHistory to specify either a session_id or a conversation_id to keep track of conversation history.

In addition, you can use it to specify any custom configuration options to pass to any Configurable Runnable that you create.

Setting callbacks

Use this option to configure callbacks for the runnable at runtime. The callbacks will be passed to all sub-calls made by the runnable.

some_runnable.invoke(
    some_input,
    {
        "callbacks": [
            SomeCallbackHandler(),
            AnotherCallbackHandler(),
        ]
    }
)

Please read the Callbacks Conceptual Guide for more information on how to use callbacks in LangChain.

important

If you're using Python 3.9 or 3.10 in an async environment, you must propagate the RunnableConfig manually to sub-calls in some cases. Please see the Propagating RunnableConfig section for more information.

Creating a runnable from a function

You may need to create a custom Runnable that runs arbitrary logic. This is especially useful if using LangChain Expression Language (LCEL) to compose multiple Runnables and you need to add custom processing logic in one of the steps.

There are two ways to create a custom Runnable from a function:

  • RunnableLambda: Use this for simple transformations where streaming is not required.
  • RunnableGenerator: Use this for more complex transformations when streaming is needed.

See the How to run custom functions guide for more information on how to use RunnableLambda and RunnableGenerator.

important

Users should not try to subclass Runnables to create a new custom Runnable. It is much more complex and error-prone than simply using RunnableLambda or RunnableGenerator.

Configurable runnables

note

This is an advanced feature that is unnecessary for most users.

It helps with configuration of large "chains" created using the LangChain Expression Language (LCEL) and is leveraged by LangServe for deployed Runnables.

Sometimes you may want to experiment with, or even expose to the end user, multiple different ways of doing things with your Runnable. This could involve adjusting parameters like the temperature in a chat model or even switching between different chat models.

To simplify this process, the Runnable interface provides two methods for creating configurable Runnables at runtime:

  • configurable_fields: This method allows you to configure specific attributes in a Runnable. For example, the temperature attribute of a chat model.
  • configurable_alternatives: This method enables you to specify alternative Runnables that can be swapped in at runtime. For example, you could specify a list of different chat models that can be used.

See the How to configure runtime chain internals guide for more information.

