ChatNVIDIA

This will help you get started with NVIDIA chat models. For detailed documentation of all ChatNVIDIA features and configurations, head to the API reference.

Overview

The langchain-nvidia-ai-endpoints package contains LangChain integrations for building applications with models on the NVIDIA NIM inference microservice. NIM supports models across domains like chat, embedding, and re-ranking from the community as well as from NVIDIA. These models are optimized by NVIDIA to deliver the best performance on NVIDIA-accelerated infrastructure and are deployed as a NIM, an easy-to-use, prebuilt container that can be deployed anywhere with a single command on NVIDIA-accelerated infrastructure.

NVIDIA-hosted deployments of NIMs are available to test on the NVIDIA API catalog. After testing, NIMs can be exported from NVIDIA's API catalog using the NVIDIA AI Enterprise license and run on-premises or in the cloud, giving enterprises ownership and full control of their IP and AI applications.

NIMs are packaged as container images on a per-model basis and are distributed as NGC container images through the NVIDIA NGC Catalog. At their core, NIMs provide easy, consistent, and familiar APIs for running inference on an AI model.

This example goes over how to use LangChain to interact with NVIDIA-supported models via the ChatNVIDIA class.

For more information on accessing the chat models through this API, check out the ChatNVIDIA documentation.

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatNVIDIA | langchain_nvidia_ai_endpoints | ✅ | beta | ❌ | PyPI - Downloads | PyPI - Version |

Model features

Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs

Setup

To get started:

  1. Create a free account with NVIDIA, which hosts NVIDIA AI Foundation models.

  2. Click on your model of choice.

  3. Under Input, select the Python tab, and click Get API Key. Then click Generate Key.

  4. Copy and save the generated key as NVIDIA_API_KEY. From there, you should have access to the endpoints.

Credentials

import getpass
import os

if not os.getenv("NVIDIA_API_KEY"):
    # Note: the API key should start with "nvapi-"
    os.environ["NVIDIA_API_KEY"] = getpass.getpass("Enter your NVIDIA API key: ")

To enable automated tracing of your model calls, set your LangSmith API key:

# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

The LangChain NVIDIA AI Endpoints integration lives in the langchain_nvidia_ai_endpoints package:

%pip install --upgrade --quiet langchain-nvidia-ai-endpoints

Instantiation

Now we can access models in the NVIDIA API Catalog:

## Core LC Chat Interface
from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="mistralai/mixtral-8x7b-instruct-v0.1")
API Reference: ChatNVIDIA

Invocation

result = llm.invoke("Write a ballad about LangChain.")
print(result.content)
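
The return value is a standard LangChain AIMessage, so you can also inspect the metadata the endpoint reports alongside the text. The exact fields available depend on your langchain-core version and the model, so treat the sketch below as illustrative:

# result is an AIMessage; besides .content it carries response metadata.
# Field availability varies by langchain-core version and model.
print(result.response_metadata)  # e.g. model name and finish reason
print(result.usage_metadata)  # e.g. input/output token counts, if reported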

Working with NVIDIA NIMs

When ready to deploy, you can self-host models with NVIDIA NIM—which is included with the NVIDIA AI Enterprise software license—and run them anywhere, giving you ownership of your customizations and full control of your intellectual property (IP) and AI applications.

Learn more about NIMs

from langchain_nvidia_ai_endpoints import ChatNVIDIA

# connect to a chat NIM running at localhost:8000, specifying a specific model
llm = ChatNVIDIA(base_url="http://localhost:8000/v1", model="meta/llama3-8b-instruct")
API Reference: ChatNVIDIA

Stream, Batch, and Async

These models natively support streaming, and as is the case with all LangChain LLMs they expose a batch method to handle concurrent requests, as well as async methods for invoke, stream, and batch. Below are a few examples.

print(llm.batch(["What's 2*3?", "What's 2*6?"]))
# Or via the async API
# await llm.abatch(["What's 2*3?", "What's 2*6?"])

for chunk in llm.stream("How far can a seagull fly in one day?"):
    # Show the token separations
    print(chunk.content, end="|")

async for chunk in llm.astream(
    "How long does it take for monarch butterflies to migrate?"
):
    print(chunk.content, end="|")
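
The async single-call variant works the same way. A minimal sketch, to be awaited inside an async context such as a notebook cell:

# Async single invocation (await inside an async context / notebook cell)
result = await llm.ainvoke("What is the capital of France?")
print(result.content)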

Supported models

Querying available_models will give you all of the models offered by your API credentials.

The playground_ prefix is optional.

ChatNVIDIA.get_available_models()
# llm.get_available_models()
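
If you only care about chat-capable entries, you can filter the returned list. This sketch assumes each returned model object exposes id and model_type attributes; attribute names may vary between package versions:

# Filter the catalog down to chat models (attribute names are assumptions)
chat_models = [
    model for model in ChatNVIDIA.get_available_models() if model.model_type == "chat"
]
print([model.id for model in chat_models][:10])  # peek at the first few ids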

Model types

All of the models above are supported and can be accessed via ChatNVIDIA.

Some model types support unique prompting techniques and chat messages. We will review a few important ones below.

To find out more about a specific model, please navigate to the API section of an AI Foundation model as linked here.

General Chat

Models such as meta/llama3-8b-instruct and mistralai/mixtral-8x22b-instruct-v0.1 are good all-around models that you can use with any LangChain chat messages. An example is below.

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_nvidia_ai_endpoints import ChatNVIDIA

prompt = ChatPromptTemplate.from_messages(
    [("system", "You are a helpful AI assistant named Fred."), ("user", "{input}")]
)
chain = prompt | ChatNVIDIA(model="meta/llama3-8b-instruct") | StrOutputParser()

for txt in chain.stream({"input": "What's your name?"}):
    print(txt, end="")

Code Generation

These models accept the same arguments and input structure as regular chat models, but they tend to perform better on code-generation and structured code tasks. An example of this is meta/codellama-70b.

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert coding AI. Respond only in valid python; no narration whatsoever.",
        ),
        ("user", "{input}"),
    ]
)
chain = prompt | ChatNVIDIA(model="meta/codellama-70b") | StrOutputParser()

for txt in chain.stream({"input": "How do I solve this fizz buzz problem?"}):
    print(txt, end="")

Multimodal

NVIDIA also supports multimodal inputs, meaning you can provide both images and text for the model to reason over. An example model supporting multimodal inputs is nvidia/neva-22b.

Below is an example use:

import IPython
import requests

image_url = "https://www.nvidia.com/content/dam/en-zz/Solutions/research/ai-playground/nvidia-picasso-3c33-p@2x.jpg"  ## Large Image
image_content = requests.get(image_url).content

IPython.display.Image(image_content)

from langchain_nvidia_ai_endpoints import ChatNVIDIA

llm = ChatNVIDIA(model="nvidia/neva-22b")
API Reference: ChatNVIDIA

Passing an image as a URL

from langchain_core.messages import HumanMessage

llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ]
        )
    ]
)
API Reference: HumanMessage

Passing an image as a base64 encoded string

At the moment, some extra processing happens client-side to support larger images like the one above. But for smaller images (and to better illustrate the process going on under the hood), we can directly pass in the image as shown below:

import IPython
import requests

image_url = "https://picsum.photos/seed/kitten/300/200"
image_content = requests.get(image_url).content

IPython.display.Image(image_content)

import base64

from langchain_core.messages import HumanMessage

## Works for simpler images. For larger images, see actual implementation
b64_string = base64.b64encode(image_content).decode("utf-8")

llm.invoke(
    [
        HumanMessage(
            content=[
                {"type": "text", "text": "Describe this image:"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{b64_string}"},
                },
            ]
        )
    ]
)
API Reference: HumanMessage

Directly within the string

The NVIDIA API uniquely accepts images as base64 images inlined within <img/> HTML tags. While this isn't interoperable with other LLMs, you can directly prompt the model accordingly.

base64_with_mime_type = f"data:image/png;base64,{b64_string}"
llm.invoke(f'What\'s in this image?\n<img src="{base64_with_mime_type}" />')

Example usage within a RunnableWithMessageHistory

Like any other integration, ChatNVIDIA works with chat utilities like RunnableWithMessageHistory, which is analogous to using ConversationChain. Below, we show the LangChain RunnableWithMessageHistory example applied to the mistralai/mixtral-8x22b-instruct-v0.1 model.

%pip install --upgrade --quiet langchain

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# store is a dictionary that maps session IDs to their corresponding chat histories.
store = {}  # memory is maintained outside the chain


# A function that returns the chat history for a given session ID.
def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


chat = ChatNVIDIA(
    model="mistralai/mixtral-8x22b-instruct-v0.1",
    temperature=0.1,
    max_tokens=100,
    top_p=1.0,
)

# Define a RunnableConfig object with a `configurable` key. session_id determines the thread.
config = {"configurable": {"session_id": "1"}}

conversation = RunnableWithMessageHistory(
    chat,
    get_session_history,
)

conversation.invoke(
    "Hi I'm Srijan Dubey.",  # input or query
    config=config,
)
conversation.invoke(
    "I'm doing well! Just having a conversation with an AI.",
    config=config,
)
conversation.invoke(
    "Tell me about yourself.",
    config=config,
)
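
To verify that the history is actually being accumulated, you can inspect the in-memory store directly; InMemoryChatMessageHistory keeps its turns in a messages list:

# Inspect the accumulated history for session "1"
for message in store["1"].messages:
    print(f"{message.type}: {message.content[:80]}")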

Tool calling

Starting in v0.2, ChatNVIDIA supports bind_tools.

ChatNVIDIA provides integration with the variety of models on build.nvidia.com as well as local NIMs. Not all of these models are trained for tool calling. Be sure to select a model that supports tool calling for your experimentation and applications.

You can get a list of models that are known to support tool calling with:

tool_models = [
    model for model in ChatNVIDIA.get_available_models() if model.supports_tools
]
tool_models

With a tool-capable model:

from langchain_core.tools import tool
from pydantic import Field


@tool
def get_current_weather(
    location: str = Field(..., description="The location to get the weather for."),
):
    """Get the current weather for a location."""
    ...


llm = ChatNVIDIA(model=tool_models[0].id).bind_tools(tools=[get_current_weather])
response = llm.invoke("What is the weather in Boston?")
response.tool_calls
API Reference: tool
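
response.tool_calls holds the structured calls the model wants to make. To complete the loop, you execute the tool yourself and pass the result back as a ToolMessage before invoking the model again. A minimal sketch, with a made-up weather string for illustration:

from langchain_core.messages import HumanMessage, ToolMessage

# Send the original question, the model's tool-call response, and our tool results back
messages = [HumanMessage("What is the weather in Boston?"), response]
for tool_call in response.tool_calls:
    # In a real app you would dispatch on tool_call["name"]; here we fake the result
    messages.append(
        ToolMessage(content="72F and sunny.", tool_call_id=tool_call["id"])
    )
final_response = llm.invoke(messages)
print(final_response.content)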

See How to use chat models to call tools for additional examples.

Chaining

We can chain our model with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate

API reference

For detailed documentation of all ChatNVIDIA features and configurations, head to the API reference: https://python.langchain.com/api_reference/nvidia_ai_endpoints/chat_models/langchain_nvidia_ai_endpoints.chat_models.ChatNVIDIA.html
