LangChain + Next.js starter template

langchain-ai/langchain-nextjs-template


Open in GitHub Codespaces · Deploy with Vercel

This template scaffolds a LangChain.js + Next.js starter app. It showcases how to use and combine LangChain modules for several use cases: simple chat, returning structured output, multi-step agents, and retrieval (with or without an agent). Each is covered in its own section below.

Most of them use Vercel's AI SDK to stream tokens to the client and display the incoming messages.

The agents use LangGraph.js, LangChain's framework for building agentic workflows. They use preconfigured helper functions to minimize boilerplate, but you can replace them with custom graphs as desired.

(Video: agent-convo.mp4, a demo conversation with the agent example.)

It's free-tier friendly too! Check out the bundle size stats below.

You can check out a hosted version of this repo here: https://langchain-nextjs-template.vercel.app/

🚀 Getting Started

First, clone this repo and download it locally.

Next, you'll need to set up environment variables in your repo's .env.local file. Copy the .env.example file to .env.local. To start with the basic examples, you'll just need to add your OpenAI API key.

Because this app is made to run in serverless Edge functions, if you are using LangSmith tracing, make sure you've set the LANGCHAIN_CALLBACKS_BACKGROUND environment variable to false to ensure tracing finishes before the function returns.

Next, install the required packages using your preferred package manager (e.g. yarn).
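The setup steps above can be run as follows, assuming a Unix-style shell and yarn as your package manager:

```shell
# Copy the example environment file, then edit it to add your keys
# (OPENAI_API_KEY for the basic examples)
cp .env.example .env.local

# Install dependencies
yarn install
```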

Now you're ready to run the development server:

yarn dev

Open http://localhost:3000 with your browser to see the result! Ask the bot something and you'll see a streamed response:

(Screenshot: a streaming conversation between the user and the AI.)

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.

Backend logic lives in app/api/chat/route.ts. From here, you can change the prompt and model, or add other modules and logic.

🧱 Structured Output

The second example shows how to have a model return output according to a specific schema using OpenAI Functions. Click the Structured Output link in the navbar to try it out:

(Screenshot: a streaming conversation between the user and an AI agent.)

The chain in this example uses a popular library called Zod to construct a schema, then formats it in the way OpenAI expects. It then passes that schema as a function into OpenAI and passes a function_call parameter to force OpenAI to return arguments in the specified format.

For more details, check out this documentation page.

🦜 Agents

To try out the agent example, you'll need to give the agent access to the internet by populating the SERPAPI_API_KEY in .env.local. Head over to the SERP API website and get an API key if you don't already have one.

You can then click the Agent example and try asking it more complex questions:

(Screenshot: a streaming conversation between the user and an AI agent.)

This example uses a prebuilt LangGraph agent, but you can customize your own as well.

🐶 Retrieval

The retrieval examples both use Supabase as a vector store. However, you can swap in another supported vector store if preferred by changing the code under app/api/retrieval/ingest/route.ts, app/api/chat/retrieval/route.ts, and app/api/chat/retrieval_agents/route.ts.

For Supabase, follow these instructions to set up your database, then get your database URL and private key and paste them into .env.local.

You can then switch to the Retrieval and Retrieval Agent examples. The default document text is pulled from the LangChain.js retrieval use case docs, but you can change it to whatever text you'd like.

For a given text, you'll only need to press Upload once. Pressing it again will re-ingest the docs, resulting in duplicates. You can clear your Supabase vector store by navigating to the console and running DELETE FROM documents;.

After splitting, embedding, and uploading some text, you're ready to ask questions!

For more info on retrieval chains, see this page. The specific variant of the conversational retrieval chain used here is composed using LangChain Expression Language, which you can read more about here. This chain example will also return cited sources via a header in addition to the streaming response.

For more info on retrieval agents, see this page.

📦 Bundle size

The bundle size for LangChain itself is quite small. After compression and chunk splitting, for the RAG use case LangChain uses 37.32 KB of code space (as of @langchain/core 0.1.15), which is less than 4% of the total Vercel free tier edge function allotment of 1 MB:

This package has @next/bundle-analyzer set up by default; you can explore the bundle size interactively by running:

$ ANALYZE=true yarn build

📚 Learn More

The example chains in the app/api/chat/route.ts and app/api/chat/retrieval/route.ts files use LangChain Expression Language to compose different LangChain.js modules together. You can integrate other retrievers, agents, preconfigured chains, and more too, though keep in mind HttpResponseOutputParser is meant to be used directly with model output.

To learn more about what you can do with LangChain.js, check out the docs here:

▲ Deploy on Vercel

When ready, you can deploy your app on the Vercel Platform.

Check out the Next.js deployment documentation for more details.

Thank You!

Thanks for reading! If you have any questions or comments, reach out to us on Twitter @LangChainAI, or click here to join our Discord server.

