
Chat Writer

This notebook provides a quick overview for getting started with Writer chat models.

Writer has several chat models. You can find information about their latest models, their costs, context windows, and supported input types in the Writer docs.

Overview

Integration details

| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| ChatWriter | langchain-writer | ❌ | ❌ | ❌ | PyPI - Downloads | PyPI - Version |

Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |

Credentials

Sign up for Writer AI Studio and follow this Quickstart to obtain an API key. Then, set the WRITER_API_KEY environment variable:

import getpass
import os

if not os.getenv("WRITER_API_KEY"):
    os.environ["WRITER_API_KEY"] = getpass.getpass("Enter your Writer API key: ")

If you want to get automated tracing of your model calls, you can also set your LangSmith API key by uncommenting below:

# os.environ["LANGSMITH_TRACING"] = "true"
# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")

Installation

ChatWriter is available from the langchain-writer package. Install it with:

%pip install -qU langchain-writer

Instantiation

Now we can instantiate our model object in order to generate chat completions:

from langchain_writer import ChatWriter

llm = ChatWriter(
    model="palmyra-x-004",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=2,
)

Usage

To use the model, you pass in a list of messages and call the invoke method:

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
ai_msg = llm.invoke(messages)
ai_msg

Then, you can access the content of the message:

print(ai_msg.content)

Streaming

You can also stream the response. First, create a stream:

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming. Sing a song about it"),
]
ai_stream = llm.stream(messages)
ai_stream

Then, iterate over the stream to get the chunks:

for chunk in ai_stream:
    print(chunk.content, end="")

Tool calling

Writer models like Palmyra X 004 support tool calling, which lets you describe tools and their arguments. The model will return a JSON object with a tool to invoke and the inputs to that tool.

Binding tools

With ChatWriter.bind_tools, you can easily pass in Pydantic classes, dictionary schemas, LangChain tools, or even functions as tools to the model. Under the hood, these are converted to tool schemas, which look like this:

{
    "name": "...",
    "description": "...",
    "parameters": {...}  # JSON Schema
}

These are passed in every model invocation.
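
As a quick sketch of the dictionary form (the tool name and parameters below are illustrative, not from the original page), a schema in this shape can be passed straight to bind_tools:

# A hypothetical dictionary-schema tool; bind_tools accepts this form directly.
get_time_schema = {
    "name": "get_current_time",
    "description": "Get the current time in a given IANA timezone",
    "parameters": {
        "type": "object",
        "properties": {
            "timezone": {"type": "string", "description": "e.g. Europe/Paris"}
        },
        "required": ["timezone"],
    },
}

llm.bind_tools([get_time_schema])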

For example, to use a tool that gets the weather in a given location, you can define a Pydantic class and pass it to ChatWriter.bind_tools:

from pydantic import BaseModel, Field


class GetWeather(BaseModel):
    """Get the current weather in a given location"""

    location: str = Field(..., description="The city and state, e.g. San Francisco, CA")


llm.bind_tools([GetWeather])

Then, you can invoke the model with the tool:

ai_msg = llm.invoke(
    "what is the weather like in New York City",
)
ai_msg

Finally, you can access the tool calls and proceed to execute your functions:

print(ai_msg.tool_calls)
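
From here, a typical pattern (a sketch, not part of the original notebook; get_weather is a hypothetical local implementation) is to execute each tool call and pass the result back as a ToolMessage so the model can produce a final answer:

from langchain_core.messages import HumanMessage, ToolMessage


# Hypothetical local implementation of the GetWeather tool.
def get_weather(location: str) -> str:
    return f"It is sunny and 22°C in {location}."


messages = [HumanMessage("what is the weather like in New York City"), ai_msg]
for tool_call in ai_msg.tool_calls:
    # Each tool call is a dict with "name", "args", and "id" keys.
    result = get_weather(**tool_call["args"])
    messages.append(ToolMessage(content=result, tool_call_id=tool_call["id"]))

# The tools are still stored on this instance (see the note below), so a plain
# invoke returns the final, tool-informed answer.
final_response = llm.invoke(messages)
print(final_response.content)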

A note on tool binding

The ChatWriter.bind_tools() method does not create a new instance with bound tools. Instead, it stores the received tools and tool_choice as attributes on the original instance and passes them as parameters to the Palmyra LLM call whenever ChatWriter is invoked. This approach allows the support of different tool types, e.g. function and graph. Graph is one of the remotely called Writer Palmyra tools. For further information, visit our docs.
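
As a sketch of the graph tool type (assuming the GraphTool class exposed by langchain_writer.tools, as described in the Writer docs; the graph ID is a placeholder):

from langchain_writer.tools import GraphTool

# Placeholder Knowledge Graph ID from Writer AI Studio.
graph_tool = GraphTool(graph_ids=["your-graph-id"])

# Graph tools are executed remotely by Palmyra, so no local handler is required.
llm.bind_tools([graph_tool])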

For more information about tool usage in LangChain, visit the LangChain tool calling documentation.

Batching

You can also batch requests and set the max_concurrency:

ai_batch = llm.batch(
    [
        "How to cook pancakes?",
        "How to compose a poem?",
        "How to run faster?",
    ],
    config={"max_concurrency": 3},
)
ai_batch

Then, iterate over the batch to get the results:

for batch in ai_batch:
    print(batch.content)
    print("-" * 100)

Asynchronous usage

All features above (invocation, streaming, batching, tool calling) also support asynchronous usage.
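
For example, every method used above has an async counterpart with the same semantics (shown here with notebook-style top-level await):

# Async invocation
ai_msg = await llm.ainvoke(messages)

# Async streaming
async for chunk in llm.astream(messages):
    print(chunk.content, end="")

# Async batching
ai_batch = await llm.abatch(
    ["How to cook pancakes?", "How to compose a poem?"],
    config={"max_concurrency": 2},
)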

Prompt templates

Prompt templates help to translate user input and parameters into instructions for a language model. You can use ChatWriter with a prompt template like so:

from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate(
    [
        (
            "system",
            "You are a helpful assistant that translates {input_language} to {output_language}.",
        ),
        ("human", "{input}"),
    ]
)

chain = prompt | llm
chain.invoke(
    {
        "input_language": "English",
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: ChatPromptTemplate

API reference

For detailed documentation of all ChatWriter features and configurations, head to the API reference.

Additional resources

You can find information about Writer's models (including costs, context windows, and supported input types) and tools in the Writer docs.
