ChatDeepSeek
This will help you get started with DeepSeek's hosted chat models. For detailed documentation of all ChatDeepSeek features and configurations head to the API reference.
Overview
Integration details
Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
ChatDeepSeek | langchain-deepseek | ❌ | beta | ✅ |
Model features
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
DeepSeek-R1, specified via `model="deepseek-reasoner"`, does not support tool calling or structured output. Those features are supported by DeepSeek-V3 (specified via `model="deepseek-chat"`).
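For example, once the package is installed (see Setup below), tool calling with `deepseek-chat` follows the standard LangChain `bind_tools` pattern. A minimal sketch, where `get_weather` is a hypothetical tool defined purely for illustration:
from langchain_core.tools import tool
from langchain_deepseek import ChatDeepSeek

@tool
def get_weather(city: str) -> str:
    """Return a short weather report for a city."""
    return f"It is sunny in {city}."

llm = ChatDeepSeek(model="deepseek-chat")
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What is the weather in Paris?")
response.tool_calls  # e.g. [{"name": "get_weather", "args": {"city": "Paris"}, ...}]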
Setup
To access DeepSeek models you'll need to create a DeepSeek account, get an API key, and install the `langchain-deepseek` integration package.
Credentials
Head to DeepSeek's API Key page to sign up for DeepSeek and generate an API key. Once you've done this, set the `DEEPSEEK_API_KEY` environment variable:
import getpass
import os

if not os.getenv("DEEPSEEK_API_KEY"):
    os.environ["DEEPSEEK_API_KEY"] = getpass.getpass("Enter your DeepSeek API key: ")
To enable automated tracing of your model calls, set your LangSmith API key:
# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
Installation
The LangChain DeepSeek integration lives in the `langchain-deepseek` package:
%pip install -qU langchain-deepseek
Instantiation
Now we can instantiate our model object and generate chat completions:
from langchain_deepseek import ChatDeepSeek

llm = ChatDeepSeek(
    model="deepseek-chat",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
    # other params...
)
Invocation
messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg.content
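Because ChatDeepSeek supports token-level streaming, native async, and token usage reporting (see the feature table above), you can also consume the response incrementally or inspect usage. A brief sketch:
# Stream tokens as they are generated:
for chunk in llm.stream(messages):
    print(chunk.content, end="")

# In async code, use: await llm.ainvoke(messages)

# Token usage is reported on the full message:
ai_msg.usage_metadata  # e.g. {"input_tokens": ..., "output_tokens": ..., "total_tokens": ...}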
Chaining
We can chain our model with a prompt template like so:
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
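Since `deepseek-chat` supports structured output, you can also bind a schema with `with_structured_output`. A minimal sketch, where the `Translation` Pydantic model is illustrative:
from pydantic import BaseModel

class Translation(BaseModel):
    """The French translation of a sentence."""
    translated_text: str

structured_llm = llm.with_structured_output(Translation)
structured_llm.invoke("Translate to French: I love programming.")
# Returns a Translation instance, e.g. Translation(translated_text="J'adore programmer.")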
API reference
For detailed documentation of all ChatDeepSeek features and configurations, head to the API Reference.
Related
- Chat model conceptual guide
- Chat model how-to guides