
OpenAI

We support instrumenting both the standard OpenAI SDK package and the OpenAI "agents" framework.

OpenAI SDK

Logfire supports instrumenting calls to OpenAI with the `logfire.instrument_openai()` method, for example:

```python
import openai
import logfire

client = openai.Client()
logfire.configure()
logfire.instrument_openai()  # instrument all OpenAI clients globally
# or logfire.instrument_openai(client) to instrument a specific client instance

response = client.chat.completions.create(
    model='gpt-4',
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'Please write me a limerick about Python logging.'},
    ],
)
print(response.choices[0].message)
```

With that you get:

  • a span around the call to OpenAI which records duration and captures any exceptions that might occur
  • human-readable display of the conversation with the agent
  • details of the response, including the number of tokens used
[Screenshot: OpenAI span and conversation]
[Screenshot: span arguments, including response details]
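The "number of tokens used" comes from the `usage` object on the OpenAI response, which carries prompt, completion, and total token counts. A minimal sketch of that shape, using a stand-in dataclass rather than a live API call (no API key required):

```python
from dataclasses import dataclass


@dataclass
class Usage:
    # Stand-in mirroring the three token-count fields the OpenAI SDK
    # exposes on a chat completion's `response.usage`.
    prompt_tokens: int
    completion_tokens: int
    total_tokens: int


usage = Usage(prompt_tokens=24, completion_tokens=60, total_tokens=84)
assert usage.total_tokens == usage.prompt_tokens + usage.completion_tokens
print(f'{usage.total_tokens} tokens used')  # → 84 tokens used
```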

Methods covered

Logfire covers the OpenAI request methods (including `client.chat.completions.create` and `client.images.generate`, both shown on this page), with both `openai.Client` and `openai.AsyncClient`.

For example, here's instrumentation of an image generation call:

```python
import asyncio
import webbrowser

import openai
import logfire


async def main():
    client = openai.AsyncClient()
    logfire.configure()
    logfire.instrument_openai(client)

    response = await client.images.generate(
        prompt='Image of R2D2 running through a desert in the style of cyberpunk.',
        model='dall-e-3',
    )
    url = response.data[0].url
    webbrowser.open(url)


if __name__ == '__main__':
    asyncio.run(main())
```

Gives:

[Screenshot: OpenAI image generation span]

Streaming Responses

When instrumenting streaming responses, Logfire creates two spans — one around the initial request and one around the streamed response.

Here we also use Rich's `Live` and `Markdown` types to render the response in the terminal in real time. 💃

```python
import asyncio

import openai
import logfire
from rich.console import Console
from rich.live import Live
from rich.markdown import Markdown

client = openai.AsyncClient()
logfire.configure()
logfire.instrument_openai(client)


async def main():
    console = Console()
    with logfire.span('Asking OpenAI to write some code'):
        response = await client.chat.completions.create(
            model='gpt-4',
            messages=[
                {'role': 'system', 'content': 'Reply in markdown.'},
                {'role': 'user', 'content': 'Write Python to show a tree of files 🤞.'},
            ],
            stream=True,
        )
        content = ''
        with Live('', refresh_per_second=15, console=console) as live:
            async for chunk in response:
                if chunk.choices[0].delta.content is not None:
                    content += chunk.choices[0].delta.content
                    live.update(Markdown(content))


if __name__ == '__main__':
    asyncio.run(main())
```

Shows up like this in Logfire:

[Screenshot: OpenAI streaming response]
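The accumulation loop above follows a simple pattern, independent of Logfire: each streamed chunk carries an optional text delta, and the client concatenates the non-empty deltas as they arrive. A minimal sketch against a fake stream (no API key required; `fake_stream` and the dataclasses are stand-ins for the SDK's chunk types):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, List, Optional


@dataclass
class Delta:
    content: Optional[str]


@dataclass
class Choice:
    delta: Delta


@dataclass
class Chunk:
    choices: List[Choice]


async def fake_stream() -> AsyncIterator[Chunk]:
    # Mimics the shape of chunks yielded by a streamed chat completion:
    # most chunks carry a text fragment, the final chunk carries None.
    for piece in ['Hello', ', ', 'world!', None]:
        yield Chunk(choices=[Choice(delta=Delta(content=piece))])


async def collect() -> str:
    content = ''
    async for chunk in fake_stream():
        if chunk.choices[0].delta.content is not None:
            content += chunk.choices[0].delta.content
    return content


print(asyncio.run(collect()))  # → Hello, world!
```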

OpenAI Agents

We also support instrumenting the OpenAI "agents" framework.

```python
import logfire
from agents import Agent, Runner

logfire.configure()
logfire.instrument_openai_agents()

agent = Agent(name="Assistant", instructions="You are a helpful assistant")

result = Runner.run_sync(agent, "Write a haiku about recursion in programming.")
print(result.final_output)
```

For more information, see the `instrument_openai_agents()` API reference.

Which shows up like this in Logfire:

[Screenshot: OpenAI Agents trace]

In this example we add a function tool to the agent:

```python
import asyncio

from typing_extensions import TypedDict

import logfire
from httpx import AsyncClient
from agents import RunContextWrapper, Agent, function_tool, Runner

logfire.configure()
logfire.instrument_openai_agents()


class Location(TypedDict):
    lat: float
    long: float


@function_tool
async def fetch_weather(ctx: RunContextWrapper[AsyncClient], location: Location) -> str:
    """Fetch the weather for a given location.

    Args:
        ctx: Run context object.
        location: The location to fetch the weather for.
    """
    r = await ctx.context.get('https://httpbin.org/get', params=location)
    return 'sunny' if r.status_code == 200 else 'rainy'


agent = Agent(name='weather agent', tools=[fetch_weather])


async def main():
    async with AsyncClient() as client:
        logfire.instrument_httpx(client)
        result = await Runner.run(agent, 'Get the weather at lat=51 lng=0.2', context=client)
        print(result.final_output)


if __name__ == '__main__':
    asyncio.run(main())
```

We see spans from within the function call nested within the agent spans:

[Screenshot: function-call spans nested within the agent spans]
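The `context=client` argument illustrates a dependency-injection pattern: the runner wraps the caller-supplied context object and hands the wrapper to each tool, so tools can share resources such as an HTTP client without globals. A pure-Python sketch of the idea (the `ContextWrapper` and `run_tool` names here are illustrative stand-ins, not the agents API):

```python
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ContextWrapper:
    # Stand-in for RunContextWrapper: simply carries the shared context object.
    context: Any


def run_tool(tool: Callable, context: Any, **kwargs) -> str:
    # The runner wraps the caller-supplied context and passes it as the
    # tool's first argument, alongside the model-supplied arguments.
    return tool(ContextWrapper(context), **kwargs)


def fetch_weather(ctx: ContextWrapper, location: dict) -> str:
    # The tool reaches shared resources through ctx.context.
    return f"checking weather at {location['lat']},{location['long']} via {ctx.context}"


print(run_tool(fetch_weather, 'shared-http-client', location={'lat': 51, 'long': 0.2}))
# → checking weather at 51,0.2 via shared-http-client
```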
