
How to get log probabilities

Prerequisites

This guide assumes familiarity with the following concepts:

- Chat models

Certain chat models can be configured to return token-level log probabilities representing the likelihood of a given token. This guide walks through how to get this information in LangChain.

OpenAI

Install the LangChain x OpenAI package and set your API key

%pip install -qU langchain-openai
import getpass
import os

if"OPENAI_API_KEY"notin os.environ:
os.environ["OPENAI_API_KEY"]= getpass()

For the OpenAI API to return log probabilities we need to configure the logprobs=True param. Then, the logprobs are included on each output AIMessage as part of the response_metadata:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini").bind(logprobs=True)

msg = llm.invoke(("human", "how are you today"))

msg.response_metadata["logprobs"]["content"][:5]
API Reference: ChatOpenAI
[{'token': 'I', 'bytes': [73], 'logprob': -0.26341408, 'top_logprobs': []},
{'token': "'m",
'bytes': [39, 109],
'logprob': -0.48584133,
'top_logprobs': []},
{'token': ' just',
'bytes': [32, 106, 117, 115, 116],
'logprob': -0.23484154,
'top_logprobs': []},
{'token': ' a',
'bytes': [32, 97],
'logprob': -0.0018291725,
'top_logprobs': []},
{'token': ' computer',
'bytes': [32, 99, 111, 109, 112, 117, 116, 101, 114],
'logprob': -0.052299336,
'top_logprobs': []}]

Logprobs are included on streamed message chunks as well:

ct = 0
full = None
for chunk in llm.stream(("human", "how are you today")):
    if ct < 5:
        full = chunk if full is None else full + chunk
        if "logprobs" in full.response_metadata:
            print(full.response_metadata["logprobs"]["content"])
    else:
        break
    ct += 1
[]
[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}]
[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': "'m", 'bytes': [39, 109], 'logprob': -0.3238896, 'top_logprobs': []}]
[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': "'m", 'bytes': [39, 109], 'logprob': -0.3238896, 'top_logprobs': []}, {'token': ' just', 'bytes': [32, 106, 117, 115, 116], 'logprob': -0.23778509, 'top_logprobs': []}]
[{'token': 'I', 'bytes': [73], 'logprob': -0.26593843, 'top_logprobs': []}, {'token': "'m", 'bytes': [39, 109], 'logprob': -0.3238896, 'top_logprobs': []}, {'token': ' just', 'bytes': [32, 106, 117, 115, 116], 'logprob': -0.23778509, 'top_logprobs': []}, {'token': ' a', 'bytes': [32, 97], 'logprob': -0.0022134194, 'top_logprobs': []}]
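
Because each logprob is the natural log of a token's probability, summing them scores the whole completion. As a small illustration using only the standard library (reusing the msg response from above):

import math

logprobs = msg.response_metadata["logprobs"]["content"]

# The sum of per-token log probabilities is the log probability of the
# entire completion under the model.
total_logprob = sum(entry["logprob"] for entry in logprobs)
print(f"sequence probability: {math.exp(total_logprob):.3e}")

# Perplexity is the exponentiated average negative log probability.
print(f"perplexity: {math.exp(-total_logprob / len(logprobs)):.3f}")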

Next steps

You've now learned how to get logprobs from OpenAI models in LangChain.

Next, check out the other how-to guides on chat models in this section, like how to get a model to return structured output or how to track token usage.

