
Why LangChain?

The goal of langchain (the Python package) and LangChain (the company) is to make it as easy as possible for developers to build applications that reason. While LangChain originally started as a single open source package, it has evolved into a company and a whole ecosystem. This page discusses the LangChain ecosystem as a whole. Most of the components within the LangChain ecosystem can be used on their own, so if you feel particularly drawn to certain components but not others, that is totally fine! Pick and choose whichever components you like best for your own use case.

Features

There are several primary needs that LangChain aims to address:

  1. Standardized component interfaces: The growing number of models and related components for AI applications has resulted in a wide variety of different APIs that developers need to learn and use. This diversity can make it challenging for developers to switch between providers or combine components when building applications. LangChain exposes a standard interface for key components, making it easy to switch between providers.

  2. Orchestration: As applications become more complex, combining multiple components and models, there's a growing need to efficiently connect these elements into control flows that can accomplish diverse tasks. Orchestration is crucial for building such applications.

  3. Observability and evaluation: As applications become more complex, it becomes increasingly difficult to understand what is happening within them. Furthermore, the pace of development can become rate-limited by the paradox of choice. For example, developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost. Observability and evaluations can help developers monitor their applications and rapidly answer these types of questions with confidence.

Standardized component interfaces

LangChain provides common interfaces for components that are central to many AI applications. As an example, all chat models implement the BaseChatModel interface. This provides a standard way to interact with chat models, supporting important but often provider-specific features like tool calling and structured outputs.
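The idea behind a standard interface can be sketched in plain Python: every provider-specific class implements the same method, so calling code does not change when the provider does. This is an illustration of the pattern only; the class and method names below are hypothetical stand-ins, not LangChain's actual implementation.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of a standard chat-model interface: each provider
# implements the same invoke() method, so callers are provider-agnostic.
class ChatModelInterface(ABC):
    @abstractmethod
    def invoke(self, prompt: str) -> str:
        ...

class ProviderAModel(ChatModelInterface):
    def invoke(self, prompt: str) -> str:
        return f"[provider A] reply to: {prompt}"

class ProviderBModel(ChatModelInterface):
    def invoke(self, prompt: str) -> str:
        return f"[provider B] reply to: {prompt}"

def answer(model: ChatModelInterface, question: str) -> str:
    # Works with any provider because all implement the same interface.
    return model.invoke(question)
```

Swapping `ProviderAModel` for `ProviderBModel` requires no change to `answer`, which is the practical benefit a standard interface buys you.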

Example: chat models

Many model providers support tool calling, a critical feature for many applications (e.g., agents), that allows a developer to request model responses that match a particular schema. The APIs for each provider differ. LangChain's chat model interface provides a common way to bind tools to a model in order to support tool calling:

# Tool creation
tools = [my_tool]
# Tool binding
model_with_tools = model.bind_tools(tools)
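Conceptually, "binding" attaches a schema describing each tool to the model so the model can request a matching call. The sketch below illustrates that idea with a hypothetical `ToyModel`; it is not LangChain's implementation of `bind_tools`.

```python
import inspect

def tool_schema(fn):
    # Derive a simple schema (name, description, parameter names) from a
    # Python function, roughly the information a model needs to call it.
    sig = inspect.signature(fn)
    return {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": list(sig.parameters),
    }

class ToyModel:
    # Hypothetical stand-in for a chat model; not a LangChain class.
    def __init__(self, tools=None):
        self.tools = tools or []

    def bind_tools(self, fns):
        # Return a copy of the model carrying the tool schemas.
        return ToyModel(tools=[tool_schema(f) for f in fns])

def my_tool(query: str) -> str:
    """Look up a value for the given query."""
    return f"result for {query}"

model_with_tools = ToyModel().bind_tools([my_tool])
```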

Similarly, getting models to produce structured outputs is an extremely common use case. Providers support different approaches for this, including JSON mode or tool calling, with different APIs. LangChain's chat model interface provides a common way to produce structured outputs using the with_structured_output() method:

# Define schema
schema = ...
# Bind schema to model
model_with_structure = model.with_structured_output(schema)
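What structured output achieves can be sketched without any model in the loop: raw model output is validated against a schema, so callers receive a typed object instead of free-form text. The `Person` schema, `parse_person` helper, and fake reply below are hypothetical illustrations, not LangChain internals.

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int

def parse_person(raw: dict) -> Person:
    # Validate required fields and coerce types before constructing,
    # so downstream code can rely on the shape of the result.
    return Person(name=str(raw["name"]), age=int(raw["age"]))

# A fake raw reply, e.g. what a provider's JSON mode might return.
fake_model_reply = {"name": "Ada", "age": "36"}
person = parse_person(fake_model_reply)
```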

Example: retrievers

In the context of RAG and LLM application components, LangChain's retriever interface provides a standard way to connect to many different types of data services or databases (e.g., vector stores or databases). The underlying implementation of the retriever depends on the type of data store or database you are connecting to, but all retrievers implement the runnable interface, meaning they can be invoked in a common manner.

documents = my_retriever.invoke("What is the meaning of life?")
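A minimal sketch of the retriever idea: any data store can sit behind the same `invoke(query)` call that returns documents. The keyword-overlap scoring below is a toy stand-in for a real vector store, and `InMemoryRetriever` is a hypothetical name, not a LangChain class.

```python
class InMemoryRetriever:
    def __init__(self, documents):
        self.documents = documents

    def invoke(self, query: str, k: int = 2):
        # Rank documents by how many lowercase query words they share;
        # a real retriever would use embeddings or a search index.
        words = set(query.lower().split())
        scored = sorted(
            self.documents,
            key=lambda d: len(words & set(d.lower().split())),
            reverse=True,
        )
        return scored[:k]

my_retriever = InMemoryRetriever([
    "the meaning of life is a classic philosophical question",
    "langchain provides a standard retriever interface",
])
documents = my_retriever.invoke("What is the meaning of life?")
```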

Orchestration

While standardization for individual components is useful, we've increasingly seen that developers want to combine components into more complex applications. This motivates the need for orchestration. There are several common characteristics of LLM applications that this orchestration layer should support:

  • Complex control flow: The application requires complex patterns such as cycles (e.g., a loop that reiterates until a condition is met).
  • Persistence: The application needs to maintain short-term and/or long-term memory.
  • Human-in-the-loop: The application needs human interaction, e.g., pausing, reviewing, editing, approving certain steps.

The recommended way to orchestrate components for complex applications is LangGraph. LangGraph is a library that gives developers a high degree of control by expressing the flow of the application as a set of nodes and edges. LangGraph comes with built-in support for persistence, human-in-the-loop, memory, and other features. It's particularly well suited for building agents or multi-agent applications. Importantly, individual LangChain components can be used as LangGraph nodes, but you can also use LangGraph without using LangChain components.
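The shape of node-and-edge control flow, including a cycle that loops until a condition is met, can be sketched in plain Python. This toy illustrates the idea only; it is not the LangGraph API, and the node names and state keys are invented for the example.

```python
# Each node is a function over shared state; edges pick the next node.
def draft(state):
    state["attempts"] += 1
    state["text"] = f"draft v{state['attempts']}"
    return state

def review(state):
    # Pretend review only passes on the third attempt.
    state["approved"] = state["attempts"] >= 3
    return state

nodes = {"draft": draft, "review": review}
edges = {
    "draft": lambda s: "review",
    "review": lambda s: "end" if s["approved"] else "draft",  # cycle
}

def run(start, state):
    node = start
    while node != "end":
        state = nodes[node](state)
        node = edges[node](state)
    return state

final = run("draft", {"attempts": 0})
```

The `review -> draft` edge is the kind of cycle that plain pipeline abstractions struggle to express, which is the motivation for a graph-based orchestration layer.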

Further reading

Have a look at our free course, Introduction to LangGraph, to learn more about how to use LangGraph to build complex applications.

Observability and evaluation

The pace of AI application development is often rate-limited by high-quality evaluations because there is a paradox of choice. Developers often wonder how to engineer their prompt or which LLM best balances accuracy, latency, and cost. High quality tracing and evaluations can help you rapidly answer these types of questions with confidence. LangSmith is our platform that supports observability and evaluation for AI applications. See our conceptual guides on evaluations and tracing for more details.
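The core of tracing can be sketched as a generic decorator that records each call's name, latency, and result so runs can be inspected later. This is a hedged, self-contained illustration of the observability idea, not the LangSmith SDK; `TRACE_LOG`, `traced`, and `generate` are all hypothetical names.

```python
import functools
import time

TRACE_LOG = []  # in-memory stand-in for a tracing backend

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        # Record what happened so the run can be inspected afterwards.
        TRACE_LOG.append({
            "name": fn.__name__,
            "latency_s": time.perf_counter() - start,
            "result": result,
        })
        return result
    return wrapper

@traced
def generate(prompt: str) -> str:
    # Stand-in for an LLM call.
    return f"answer to: {prompt}"

generate("What is LangChain?")
```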

Further reading

See our video playlist on LangSmith tracing and evaluations for more details.

Conclusion

LangChain offers standard interfaces for components that are central to many AI applications, which provides a few specific advantages:

  • Ease of swapping providers: It allows you to swap out different component providers without having to change the underlying code.
  • Advanced features: It provides common methods for more advanced features, such as streaming and tool calling.

LangGraph makes it possible to orchestrate complex applications (e.g., agents) and provides features such as persistence, human-in-the-loop, and memory.

LangSmith makes it possible to iterate with confidence on your applications by providing LLM-specific observability and a framework for testing and evaluating your application.
