
GPT4All

This page covers how to use the GPT4All wrapper within LangChain. The tutorial is divided into two parts: installation and setup, followed by usage with an example.

Installation and Setup

  • Install the Python package with pip install gpt4all
  • Download a GPT4All model and place it in your desired directory

In this example, we are using mistral-7b-openorca.Q4_0.gguf:

mkdir models
wget https://gpt4all.io/models/gguf/mistral-7b-openorca.Q4_0.gguf -O models/mistral-7b-openorca.Q4_0.gguf

Usage

GPT4All

To use the GPT4All wrapper, you need to provide the path to the pre-trained model file and the model's configuration.

from langchain_community.llms import GPT4All

# Instantiate the model. Callbacks support token-wise streaming
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)

# Generate text
response = model.invoke("Once upon a time, ")
API Reference: GPT4All

You can also customize the generation parameters, such as n_predict, temp, top_p, top_k, and others.
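These parameters are passed through to the underlying gpt4all model. As a rough intuition for what temp, top_k, and top_p control, here is a toy, self-contained sampler over a {token: logit} dict; this is an illustration only, not GPT4All's actual sampling code:

```python
import math
import random

def sample_next(logits, temp=0.7, top_k=40, top_p=0.9, rng=random):
    """Toy temperature / top-k / top-p sampling over a {token: logit} dict."""
    # Temperature: values < 1 sharpen the distribution, values > 1 flatten it.
    scaled = {t: l / temp for t, l in logits.items()}
    # Top-k: keep only the k highest-scoring tokens.
    kept = sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Softmax over the kept tokens (subtract the max for numerical stability).
    m = max(l for _, l in kept)
    probs = [(t, math.exp(l - m)) for t, l in kept]
    z = sum(p for _, p in probs)
    probs = [(t, p / z) for t, p in probs]
    # Top-p (nucleus): keep the smallest prefix whose cumulative mass >= top_p.
    probs.sort(key=lambda kv: kv[1], reverse=True)
    cum, nucleus = 0.0, []
    for t, p in probs:
        nucleus.append((t, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the nucleus and sample.
    z = sum(p for _, p in nucleus)
    r, acc = rng.random() * z, 0.0
    for t, p in nucleus:
        acc += p
        if acc >= r:
            return t
    return nucleus[-1][0]
```

With a heavily peaked distribution, the nucleus collapses to the top token, so the sample is effectively greedy; raising temp or top_p widens the pool of candidate tokens.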

To stream the model's predictions, add in a CallbackManager.

from langchain_community.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# There are many CallbackHandlers supported, such as
# from langchain.callbacks.streamlit import StreamlitCallbackHandler

callbacks = [StreamingStdOutCallbackHandler()]
model = GPT4All(model="./models/mistral-7b-openorca.Q4_0.gguf", n_threads=8)

# Generate text. Tokens are streamed through the callback manager.
model.invoke("Once upon a time, ", callbacks=callbacks)

Model File

You can also download model files through the GPT4All desktop client, which is available from the GPT4All website.

For a more detailed walkthrough of this, see this notebook.

