
Quickstart

Realtime agents enable voice conversations with your AI agents via OpenAI's Realtime API. This guide walks you through creating your first realtime voice agent.

Beta feature

Realtime agents are in beta. Expect breaking changes as we improve the implementation.

Prerequisites

  • Python 3.9 or later
  • An OpenAI API key
  • Basic familiarity with the OpenAI Agents SDK

Installation

Install the OpenAI Agents SDK if you haven't already:

```bash
pip install openai-agents
```

Create your first realtime agent

1. Import the required components

```python
import asyncio

from agents.realtime import RealtimeAgent, RealtimeRunner
```

2. Create a realtime agent

```python
agent = RealtimeAgent(
    name="Assistant",
    instructions="You are a helpful voice assistant. Keep your responses conversational and friendly.",
)
```

3. Set up the runner

```python
runner = RealtimeRunner(
    starting_agent=agent,
    config={
        "model_settings": {
            "model_name": "gpt-realtime",
            "voice": "ash",
            "modalities": ["audio"],
            "input_audio_format": "pcm16",
            "output_audio_format": "pcm16",
            "input_audio_transcription": {
                "model": "gpt-4o-mini-transcribe",
            },
            "turn_detection": {
                "type": "semantic_vad",
                "interrupt_response": True,
            },
        }
    },
)
```

4. Start a session

```python
# Start the session
session = await runner.run()

async with session:
    print("Session started! The agent will stream audio responses in real-time.")

    # Process events
    async for event in session:
        try:
            if event.type == "agent_start":
                print(f"Agent started: {event.agent.name}")
            elif event.type == "agent_end":
                print(f"Agent ended: {event.agent.name}")
            elif event.type == "handoff":
                print(f"Handoff from {event.from_agent.name} to {event.to_agent.name}")
            elif event.type == "tool_start":
                print(f"Tool started: {event.tool.name}")
            elif event.type == "tool_end":
                print(f"Tool ended: {event.tool.name}; output: {event.output}")
            elif event.type == "audio_end":
                print("Audio ended")
            elif event.type == "audio":
                # Enqueue audio for callback-based playback with metadata.
                # Non-blocking put; queue is unbounded, so drops won't occur.
                pass
            elif event.type == "audio_interrupted":
                print("Audio interrupted")
                # Begin graceful fade + flush in the audio callback and rebuild the jitter buffer.
            elif event.type == "error":
                print(f"Error: {event.error}")
            elif event.type == "history_updated":
                pass  # Skip these frequent events
            elif event.type == "history_added":
                pass  # Skip these frequent events
            elif event.type == "raw_model_event":
                print(f"Raw model event: {_truncate_str(str(event.data), 200)}")
            else:
                print(f"Unknown event type: {event.type}")
        except Exception as e:
            print(f"Error processing event: {_truncate_str(str(e), 200)}")


def _truncate_str(s: str, max_length: int) -> str:
    if len(s) > max_length:
        return s[:max_length] + "..."
    return s
```

Complete example

Here is a complete, runnable example:

```python
import asyncio

from agents.realtime import RealtimeAgent, RealtimeRunner


async def main():
    # Create the agent
    agent = RealtimeAgent(
        name="Assistant",
        instructions="You are a helpful voice assistant. Keep responses brief and conversational.",
    )

    # Set up the runner with configuration
    runner = RealtimeRunner(
        starting_agent=agent,
        config={
            "model_settings": {
                "model_name": "gpt-realtime",
                "voice": "ash",
                "modalities": ["audio"],
                "input_audio_format": "pcm16",
                "output_audio_format": "pcm16",
                "input_audio_transcription": {
                    "model": "gpt-4o-mini-transcribe",
                },
                "turn_detection": {
                    "type": "semantic_vad",
                    "interrupt_response": True,
                },
            }
        },
    )

    # Start the session
    session = await runner.run()

    async with session:
        print("Session started! The agent will stream audio responses in real-time.")

        # Process events
        async for event in session:
            try:
                if event.type == "agent_start":
                    print(f"Agent started: {event.agent.name}")
                elif event.type == "agent_end":
                    print(f"Agent ended: {event.agent.name}")
                elif event.type == "handoff":
                    print(f"Handoff from {event.from_agent.name} to {event.to_agent.name}")
                elif event.type == "tool_start":
                    print(f"Tool started: {event.tool.name}")
                elif event.type == "tool_end":
                    print(f"Tool ended: {event.tool.name}; output: {event.output}")
                elif event.type == "audio_end":
                    print("Audio ended")
                elif event.type == "audio":
                    # Enqueue audio for callback-based playback with metadata.
                    # Non-blocking put; queue is unbounded, so drops won't occur.
                    pass
                elif event.type == "audio_interrupted":
                    print("Audio interrupted")
                    # Begin graceful fade + flush in the audio callback and rebuild the jitter buffer.
                elif event.type == "error":
                    print(f"Error: {event.error}")
                elif event.type == "history_updated":
                    pass  # Skip these frequent events
                elif event.type == "history_added":
                    pass  # Skip these frequent events
                elif event.type == "raw_model_event":
                    print(f"Raw model event: {_truncate_str(str(event.data), 200)}")
                else:
                    print(f"Unknown event type: {event.type}")
            except Exception as e:
                print(f"Error processing event: {_truncate_str(str(e), 200)}")


def _truncate_str(s: str, max_length: int) -> str:
    if len(s) > max_length:
        return s[:max_length] + "..."
    return s


if __name__ == "__main__":
    # Run the session
    asyncio.run(main())
```
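The "audio" branch in the example only hints at how playback buffering might work. As a stdlib-only sketch of a non-blocking enqueue (here with a bounded queue that drops the oldest chunk when full, a memory-bounded variant of the unbounded queue the comment describes; the helper name and any playback library are assumptions, not part of the SDK):

```python
import queue

def enqueue_audio(q: "queue.Queue[bytes]", chunk: bytes) -> None:
    """Non-blocking enqueue: if the bounded queue is full, drop the oldest chunk.

    A real audio callback (e.g. a sounddevice or pyaudio output stream) would
    pop chunks from this queue for playback.
    """
    try:
        q.put_nowait(chunk)
    except queue.Full:
        try:
            q.get_nowait()  # discard the oldest buffered chunk to make room
        except queue.Empty:
            pass
        q.put_nowait(chunk)

# Usage: a small queue that keeps only the two most recent chunks
audio_q: "queue.Queue[bytes]" = queue.Queue(maxsize=2)
for data in (b"one", b"two", b"three"):
    enqueue_audio(audio_q, data)
```

Dropping the oldest chunk keeps latency bounded during interruptions, at the cost of a small audible gap; an unbounded queue trades memory for gapless playback.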

Configuration options

Model settings

  • model_name: choose from the available realtime models (e.g., gpt-realtime)
  • voice: select a voice (alloy, echo, fable, onyx, nova, shimmer)
  • modalities: enable text or audio (["text"] or ["audio"])
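As a sketch, these options slot into the runner's config dict like so (the values below are illustrative; the quickstart example above uses the voice "ash"):

```python
# Illustrative model settings; swap in any of the options listed above.
model_settings = {
    "model_name": "gpt-realtime",  # an available realtime model
    "voice": "nova",               # one of the listed voices
    "modalities": ["audio"],       # or ["text"] for text-only
}
config = {"model_settings": model_settings}
```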

Audio settings

  • input_audio_format: format of the incoming audio (pcm16, g711_ulaw, or g711_alaw)
  • output_audio_format: format of the outgoing audio
  • input_audio_transcription: transcription configuration
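pcm16 denotes raw 16-bit little-endian PCM samples. As a small stdlib-only sketch (the helper name is ours, not part of the SDK), decoding a pcm16 chunk into signed integer samples looks like:

```python
import array
import sys

def pcm16_to_samples(chunk: bytes) -> list[int]:
    """Decode a pcm16 chunk (16-bit little-endian PCM) into signed ints."""
    samples = array.array("h")      # "h" = signed 16-bit integers
    samples.frombytes(chunk)
    if sys.byteorder == "big":      # array.array uses native byte order
        samples.byteswap()
    return list(samples)

# b"\x01\x00" encodes sample value 1; b"\xff\xff" encodes -1 (two's complement)
decoded = pcm16_to_samples(b"\x01\x00\xff\xff")
```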

Turn detection

  • type: detection method (server_vad or semantic_vad)
  • threshold: voice activity threshold (0.0-1.0)
  • silence_duration_ms: how long a silence marks the end of a turn
  • prefix_padding_ms: audio padding kept from before speech starts
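The quickstart above uses semantic_vad; as a sketch, a server_vad configuration using the numeric knobs listed here might look like this (the specific values are illustrative assumptions, not recommendations):

```python
# Illustrative server_vad turn-detection settings.
turn_detection = {
    "type": "server_vad",
    "threshold": 0.5,            # voice activity threshold (0.0-1.0)
    "silence_duration_ms": 500,  # silence that ends a turn
    "prefix_padding_ms": 300,    # audio kept from just before speech starts
}
config = {"model_settings": {"turn_detection": turn_detection}}
```

A higher threshold and longer silence_duration_ms make detection more conservative (fewer false turn ends in noisy rooms) at the cost of added response latency.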

Next steps

Authentication

Make sure your OpenAI API key is set in your environment:

```bash
export OPENAI_API_KEY="your-api-key-here"
```

Or pass it directly when creating the session:

```python
session = await runner.run(model_config={"api_key": "your-api-key"})
```
