Upgrade from the Python SDK v2
The Python SDK v3 introduces significant improvements and changes compared to the legacy v2 SDK. It is not fully backward compatible. This guide will help you migrate based on your current integration.
You can find a snapshot of the v2 SDK documentation here.
Core Changes to SDK v2:
- OpenTelemetry Foundation: v3 is built on OpenTelemetry standards
- Trace Input/Output: Now derived from root observation by default
- Trace Attributes: `user_id`, `session_id`, etc. can be set via enclosing spans OR directly on integrations using metadata fields (OpenAI call, LangChain invocation)
- Context Management: Automatic OTEL context propagation
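The "enclosing span" model relies on context propagation: attributes set in an outer scope become visible to everything executed inside it. As a toy illustration of that idea (stdlib `contextvars` only, not the SDK's actual mechanism; all names below are hypothetical):

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Toy stand-in for propagated trace attributes (not the SDK's implementation).
_trace_attrs: ContextVar[dict] = ContextVar("trace_attrs", default={})

@contextmanager
def propagate(**attrs):
    # Merge new attributes into the current context; restore the old state on exit.
    token = _trace_attrs.set({**_trace_attrs.get(), **attrs})
    try:
        yield
    finally:
        _trace_attrs.reset(token)

def child_observation():
    # Any code running inside the enclosing scope sees the attributes.
    return _trace_attrs.get()

with propagate(user_id="user_123", session_id="session_456"):
    attrs = child_observation()

assert attrs == {"user_id": "user_123", "session_id": "session_456"}
assert child_observation() == {}  # attributes are gone once the block exits
```

This is the same shape the v3 SDK uses: wrap the integration call in a scope that carries the trace attributes, and child observations inherit them automatically.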
Migration Path by Integration Type
@observe Decorator Users
v2 Pattern:
    from langfuse.decorators import langfuse_context, observe

    @observe()
    def my_function():
        # This was the trace
        langfuse_context.update_current_trace(user_id="user_123")
        return "result"

v3 Migration:
    from langfuse import observe, get_client  # new import

    @observe()
    def my_function():
        # This is now the root span, not the trace
        langfuse = get_client()
        # Update trace attributes explicitly
        langfuse.update_current_trace(user_id="user_123")
        return "result"

OpenAI Integration
v2 Pattern:
    from langfuse.openai import openai

    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        # Trace attributes directly on the call
        user_id="user_123",
        session_id="session_456",
        tags=["chat"],
        metadata={"source": "app"},
    )

v3 Migration:
If you do not set additional trace attributes, no changes are needed.
If you set additional trace attributes, you have two options:
Option 1: Use metadata fields (simplest migration):
    from langfuse.openai import openai

    response = openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
        metadata={
            "langfuse_user_id": "user_123",
            "langfuse_session_id": "session_456",
            "langfuse_tags": ["chat"],
            "source": "app",  # Regular metadata still works
        },
    )

Option 2: Use enclosing span (for more control):
    from langfuse import get_client, propagate_attributes
    from langfuse.openai import openai

    langfuse = get_client()

    with langfuse.start_as_current_observation(as_type="span", name="chat-request") as span:
        with propagate_attributes(
            user_id="user_123",
            session_id="session_456",
            tags=["chat"],
        ):
            response = openai.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": "Hello"}],
                metadata={"source": "app"},
            )

        # Set trace input and output explicitly
        span.update_trace(
            input={"query": "Hello"},
            output={"response": response.choices[0].message.content},
        )

LangChain Integration
v2 Pattern:
    from langfuse.callback import CallbackHandler

    handler = CallbackHandler(
        user_id="user_123",
        session_id="session_456",
        tags=["langchain"],
    )

    response = chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})

v3 Migration:
You have two options for setting trace attributes:
Option 1: Use metadata fields in chain invocation (simplest migration):
    from langfuse.langchain import CallbackHandler

    handler = CallbackHandler()

    response = chain.invoke(
        {"input": "Hello"},
        config={
            "callbacks": [handler],
            "metadata": {
                "langfuse_user_id": "user_123",
                "langfuse_session_id": "session_456",
                "langfuse_tags": ["langchain"],
            },
        },
    )

Option 2: Use enclosing span (for more control):
    from langfuse import get_client, propagate_attributes
    from langfuse.langchain import CallbackHandler

    langfuse = get_client()

    with langfuse.start_as_current_observation(as_type="span", name="langchain-request") as span:
        with propagate_attributes(
            user_id="user_123",
            session_id="session_456",
            tags=["langchain"],
        ):
            handler = CallbackHandler()
            response = chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})

        # Set trace input and output explicitly
        span.update_trace(
            input={"query": "Hello"},
            output={"response": response},
        )

LlamaIndex Integration Users
v2 Pattern:
    from langfuse.llama_index import LlamaIndexCallbackHandler

    handler = LlamaIndexCallbackHandler()
    Settings.callback_manager = CallbackManager([handler])

    response = index.as_query_engine().query("Hello")

v3 Migration:
    from langfuse import get_client, propagate_attributes
    from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

    # Use third-party OTEL instrumentation
    LlamaIndexInstrumentor().instrument()

    langfuse = get_client()

    with langfuse.start_as_current_observation(as_type="span", name="llamaindex-query") as span:
        with propagate_attributes(user_id="user_123"):
            response = index.as_query_engine().query("Hello")

        span.update_trace(
            input={"query": "Hello"},
            output={"response": str(response)},
        )

Low-Level SDK Users
v2 Pattern:
    from langfuse import Langfuse

    langfuse = Langfuse()

    trace = langfuse.trace(
        name="my-trace",
        user_id="user_123",
        input={"query": "Hello"},
    )

    generation = trace.generation(
        name="llm-call",
        model="gpt-4o",
    )
    generation.end(output="Response")

v3 Migration:
In v3, all spans and generations must be ended by calling `.end()` on the returned object; context managers do this automatically when the `with` block exits.
    from langfuse import get_client, propagate_attributes

    langfuse = get_client()

    # Use context managers instead of manual objects
    with langfuse.start_as_current_observation(
        as_type="span",
        name="my-trace",
        input={"query": "Hello"},  # Becomes trace input automatically
    ) as root_span:
        # Propagate trace attributes to all child observations
        with propagate_attributes(user_id="user_123"):
            with langfuse.start_as_current_observation(
                as_type="generation",
                name="llm-call",
                model="gpt-4o",
            ) as generation:
                generation.update(output="Response")

        # If needed, override trace input and output
        root_span.update_trace(
            input={"query": "Hello"},
            output={"response": "Response"},
        )

Key Migration Checklist
Update Imports:
- Use `from langfuse import get_client` to access the global client instance configured via environment variables
- Use `from langfuse import Langfuse` to create a new client instance configured via constructor parameters
- Use `from langfuse import observe` to import the observe decorator
- Update integration imports: `from langfuse.langchain import CallbackHandler`
Trace Attributes Pattern:
- Option 1: Use metadata fields (`langfuse_user_id`, `langfuse_session_id`, `langfuse_tags`) directly in integration calls
- Option 2: Move `user_id`, `session_id`, and `tags` to `propagate_attributes()`
Trace Input/Output:
- Critical for LLM-as-a-judge: Explicitly set trace input/output
- Don't rely on automatic derivation from the root observation if you need specific values
Context Managers:
- Replace manual `langfuse.trace()` and `trace.span()` calls with context managers
- Use `with langfuse.start_as_current_observation()` instead
LlamaIndex Migration:
- Replace Langfuse callback with third-party OTEL instrumentation
- Install: `pip install openinference-instrumentation-llama-index`
ID Management:
- No Custom Observation IDs: v3 uses W3C Trace Context standard - you cannot set custom observation IDs
- Trace ID Format: Must be 32-character lowercase hexadecimal (16 bytes)
- External ID Correlation: Use `Langfuse.create_trace_id(seed=external_id)` to generate deterministic trace IDs from external systems
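Conceptually, deterministic seeding maps a stable external identifier to a stable 16-byte trace ID. A rough stdlib sketch of the idea (a hypothetical stand-in, not the SDK's actual hashing scheme):

```python
import hashlib

def deterministic_trace_id(seed: str) -> str:
    # Derive 16 bytes from the seed and render them as 32 lowercase
    # hex characters, matching the W3C Trace Context trace-id format.
    return hashlib.sha256(seed.encode("utf-8")).digest()[:16].hex()

# The same external ID always yields the same trace ID.
tid = deterministic_trace_id("req_12345")
assert tid == deterministic_trace_id("req_12345")
assert len(tid) == 32
```

Because the mapping is deterministic, you can later look up the Langfuse trace for any external request ID without storing the association anywhere.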
    from langfuse import Langfuse, observe

    # v3: Generate a deterministic trace ID from an external system
    external_request_id = "req_12345"
    trace_id = Langfuse.create_trace_id(seed=external_request_id)

    @observe(langfuse_trace_id=trace_id)
    def my_function():
        # This trace will have the deterministic ID
        pass

Initialization:
- Replace constructor parameters: `enabled` → `tracing_enabled`, `threads` → `media_upload_thread_count`
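If you construct the client from configuration assembled elsewhere, the renames can be handled with a small helper. This is a hypothetical convenience (`migrate_init_kwargs` is not part of the SDK), covering only the two renames listed above:

```python
# Map of renamed v2 constructor parameters to their v3 equivalents.
V2_TO_V3_PARAMS = {
    "enabled": "tracing_enabled",
    "threads": "media_upload_thread_count",
}

def migrate_init_kwargs(v2_kwargs: dict) -> dict:
    """Translate v2 Langfuse(...) keyword arguments to their v3 names."""
    return {V2_TO_V3_PARAMS.get(key, key): value for key, value in v2_kwargs.items()}

migrated = migrate_init_kwargs({"enabled": True, "threads": 4})
assert migrated == {"tracing_enabled": True, "media_upload_thread_count": 4}
```

Unrecognized keys pass through unchanged, so the helper is safe to apply to a full kwargs dict.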
Datasets
The `link` method on dataset item objects has been replaced by a context manager accessed via the `run` method on dataset items. This higher-level abstraction manages trace creation and the linking of the dataset item to the resulting trace.
See the datasets documentation for more details.
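The shape of that abstraction can be sketched with a stdlib toy (this models the described behavior only; `ToyDatasetItem` and its fields are illustrative, not the real SDK API):

```python
from contextlib import contextmanager

class ToyDatasetItem:
    """Toy model of a dataset item whose run() context manager
    creates a trace and links it to the item on exit."""

    def __init__(self, item_id):
        self.item_id = item_id
        self.linked_traces = []

    @contextmanager
    def run(self, run_name):
        trace = {"name": run_name, "observations": []}
        try:
            yield trace  # caller records work against the trace
        finally:
            # Linking happens automatically when the block exits.
            self.linked_traces.append(trace)

item = ToyDatasetItem("item-1")
with item.run("eval-run") as trace:
    trace["observations"].append("llm-call")

assert item.linked_traces[0]["name"] == "eval-run"
```

The point of the pattern is that trace creation and item-trace linking can no longer be forgotten or mismatched, because both are owned by the context manager.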
Detailed Change Summary
Core Change: OpenTelemetry Foundation
- Built on OpenTelemetry standards for better ecosystem compatibility
Trace Input/Output Behavior
- v2: Integrations could set trace input/output directly
- v3: Trace input/output derived from root observation by default
- Migration: Explicitly set via `span.update_trace(input=..., output=...)`
Trace Attributes Location
- v2: Could be set directly on integration calls
- v3: Must be set via enclosing spans or via integration metadata fields
- Migration: Wrap integration calls with `langfuse.start_as_current_observation()`
Creating Observations:
- v2: `langfuse.trace()`, `langfuse.span()`, `langfuse.generation()`
- v3: `langfuse.start_as_current_observation()`
- Migration: Use context managers; ensure `.end()` is called or use `with` statements
IDs and Context:
- v3: W3C Trace Context format, automatic context propagation
- Migration: Use `langfuse.get_current_trace_id()` instead of `get_trace_id()`
Event Size Limitations:
- v2: Events were limited to 1MB in size
- v3: No size limits enforced on the SDK-side for events
Future support for v2
We will continue to support the v2 SDK for the foreseeable future with critical bug fixes and security patches, but no new features will be added to it. You can find a snapshot of the v2 SDK documentation here.