Model

TTSVoice module-attribute

TTSVoice = Literal[
    "alloy", "ash", "coral", "echo", "fable", "onyx", "nova", "sage", "shimmer"
]

Exportable type for the TTSModelSettings voice enum
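
For illustration, a minimal hedged sketch of using the alias as a type hint when picking a voice; the import path follows the documented module src/agents/voice/model.py:

# Hedged sketch: annotate a chosen voice with the TTSVoice alias.
from agents.voice.model import TTSModelSettings, TTSVoice

chosen_voice: TTSVoice = "nova"  # any value outside the literal set fails type checking
settings = TTSModelSettings(voice=chosen_voice)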

TTSModelSettings dataclass

Settings for a TTS model.

Source code in src/agents/voice/model.py

@dataclass
class TTSModelSettings:
    """Settings for a TTS model."""

    voice: TTSVoice | None = None
    """
    The voice to use for the TTS model. If not provided, the default voice for the respective model
    will be used.
    """

    buffer_size: int = 120
    """The minimal size of the chunks of audio data that are being streamed out."""

    dtype: npt.DTypeLike = np.int16
    """The data type for the audio data to be returned in."""

    transform_data: (
        Callable[[npt.NDArray[np.int16 | np.float32]], npt.NDArray[np.int16 | np.float32]] | None
    ) = None
    """
    A function to transform the data from the TTS model. This is useful if you want the resulting
    audio stream to have the data in a specific shape already.
    """

    instructions: str = (
        "You will receive partial sentences. Do not complete the sentence just read out the text."
    )
    """
    The instructions to use for the TTS model. This is useful if you want to control the tone of the
    audio output.
    """

    text_splitter: Callable[[str], tuple[str, str]] = get_sentence_based_splitter()
    """
    A function to split the text into chunks. This is useful if you want to split the text into
    chunks before sending it to the TTS model rather than waiting for the whole text to be
    processed.
    """

    speed: float | None = None
    """The speed with which the TTS model will read the text. Between 0.25 and 4.0."""

voice class-attribute instance-attribute

voice: TTSVoice | None = None

The voice to use for the TTS model. If not provided, the default voice for the respective model will be used.

buffer_size class-attribute instance-attribute

buffer_size: int = 120

The minimal size of the chunks of audio data that are being streamed out.

dtype class-attribute instance-attribute

dtype: DTypeLike = int16

The data type for the audio data to be returned in.

transform_data class-attribute instance-attribute

transform_data: Callable[[NDArray[int16 | float32]], NDArray[int16 | float32]] | None = None

A function to transform the data from the TTS model. This is useful if you want the resulting audio stream to have the data in a specific shape already.
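
As an illustration, a hedged sketch of a transform that converts int16 PCM chunks to float32 samples in [-1.0, 1.0]; the function name and scaling choice are assumptions for the example, not part of the SDK:

import numpy as np
import numpy.typing as npt

from agents.voice.model import TTSModelSettings


def to_float32(chunk: npt.NDArray[np.int16 | np.float32]) -> npt.NDArray[np.float32]:
    # Scale 16-bit PCM to float32 in [-1.0, 1.0]; pass float chunks through.
    if chunk.dtype == np.int16:
        return chunk.astype(np.float32) / 32768.0
    return chunk.astype(np.float32)


settings = TTSModelSettings(dtype=np.float32, transform_data=to_float32)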

instructions class-attribute instance-attribute

instructions: str = "You will receive partial sentences. Do not complete the sentence just read out the text."

The instructions to use for the TTS model. This is useful if you want to control the tone of the audio output.

text_splitter class-attribute instance-attribute

text_splitter: Callable[[str], tuple[str, str]] = get_sentence_based_splitter()

A function to split the text into chunks. This is useful if you want to split the text into chunks before sending it to the TTS model rather than waiting for the whole text to be processed.
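
For example, a hedged sketch of a custom splitter that flushes text at newlines; it assumes the returned tuple is (text to send to the TTS model now, remaining text to keep buffering), mirroring the sentence-based default:

from agents.voice.model import TTSModelSettings


def split_on_newlines(buffered_text: str) -> tuple[str, str]:
    # Assumption: return (ready chunk, remainder). Nothing is emitted until a newline arrives.
    if "\n" not in buffered_text:
        return "", buffered_text
    ready, remainder = buffered_text.rsplit("\n", 1)
    return ready, remainder


settings = TTSModelSettings(text_splitter=split_on_newlines)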

speed class-attribute instance-attribute

speed: float | None = None

The speed with which the TTS model will read the text. Between 0.25 and 4.0.
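
Putting the settings together, a hedged sketch of a typical configuration; the concrete values are illustrative only:

import numpy as np

from agents.voice.model import TTSModelSettings

settings = TTSModelSettings(
    voice="sage",            # one of the TTSVoice literals
    buffer_size=240,         # stream slightly larger audio chunks
    dtype=np.int16,          # 16-bit PCM output
    instructions="Read the text calmly and clearly.",
    speed=1.1,               # must stay within 0.25-4.0
)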

TTSModel

Bases: ABC

A text-to-speech model that can convert text into audio output.

Source code in src/agents/voice/model.py

class TTSModel(abc.ABC):
    """A text-to-speech model that can convert text into audio output."""

    @property
    @abc.abstractmethod
    def model_name(self) -> str:
        """The name of the TTS model."""
        pass

    @abc.abstractmethod
    def run(self, text: str, settings: TTSModelSettings) -> AsyncIterator[bytes]:
        """Given a text string, produces a stream of audio bytes, in PCM format.

        Args:
            text: The text to convert to audio.

        Returns:
            An async iterator of audio bytes, in PCM format.
        """
        pass

model_name abstractmethod property

model_name: str

The name of the TTS model.

run abstractmethod

run(text: str, settings: TTSModelSettings) -> AsyncIterator[bytes]

Given a text string, produces a stream of audio bytes, in PCM format.

Parameters:

    Name    Type    Description                     Default
    text    str     The text to convert to audio.   required

Returns:

    Type                    Description
    AsyncIterator[bytes]    An async iterator of audio bytes, in PCM format.

Source code in src/agents/voice/model.py

@abc.abstractmethod
def run(self, text: str, settings: TTSModelSettings) -> AsyncIterator[bytes]:
    """Given a text string, produces a stream of audio bytes, in PCM format.

    Args:
        text: The text to convert to audio.

    Returns:
        An async iterator of audio bytes, in PCM format.
    """
    pass
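
To make the contract concrete, a hedged sketch of a toy TTSModel that emits short bursts of silence; the class is invented for illustration, and an async generator is used to satisfy the AsyncIterator[bytes] return type:

from collections.abc import AsyncIterator

import numpy as np

from agents.voice.model import TTSModel, TTSModelSettings


class SilentTTSModel(TTSModel):
    """Illustrative model that 'speaks' every text as a few chunks of silence."""

    @property
    def model_name(self) -> str:
        return "silent-tts"

    async def run(self, text: str, settings: TTSModelSettings) -> AsyncIterator[bytes]:
        # Emit three silent PCM chunks sized by the configured buffer_size.
        for _ in range(3):
            yield np.zeros(settings.buffer_size, dtype=np.int16).tobytes()

A caller consumes the stream with "async for chunk in model.run(text, settings)", writing each PCM chunk to an audio sink.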

StreamedTranscriptionSession

Bases: ABC

A streamed transcription of audio input.

Source code in src/agents/voice/model.py

class StreamedTranscriptionSession(abc.ABC):
    """A streamed transcription of audio input."""

    @abc.abstractmethod
    def transcribe_turns(self) -> AsyncIterator[str]:
        """Yields a stream of text transcriptions. Each transcription is a turn in the conversation.

        This method is expected to return only after `close()` is called.
        """
        pass

    @abc.abstractmethod
    async def close(self) -> None:
        """Closes the session."""
        pass

transcribe_turns abstractmethod

transcribe_turns() -> AsyncIterator[str]

Yields a stream of text transcriptions. Each transcription is a turn in the conversation.

This method is expected to return only after close() is called.

Source code in src/agents/voice/model.py

@abc.abstractmethod
def transcribe_turns(self) -> AsyncIterator[str]:
    """Yields a stream of text transcriptions. Each transcription is a turn in the conversation.

    This method is expected to return only after `close()` is called.
    """
    pass

close abstractmethod async

close() -> None

Closes the session.

Source code in src/agents/voice/model.py

@abc.abstractmethod
async def close(self) -> None:
    """Closes the session."""
    pass
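
For illustration, a hedged sketch of consuming a session (for example one returned by STTModel.create_session, documented below); how the session is created is assumed, and only the interface above is used:

from agents.voice.model import StreamedTranscriptionSession


async def print_turns(session: StreamedTranscriptionSession) -> None:
    # transcribe_turns() keeps yielding until close() is called, so it is
    # typically consumed in a dedicated task.
    try:
        async for turn_text in session.transcribe_turns():
            print(f"user said: {turn_text}")
    finally:
        await session.close()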

STTModelSettings dataclass

Settings for a speech-to-text model.

Source code in src/agents/voice/model.py

@dataclass
class STTModelSettings:
    """Settings for a speech-to-text model."""

    prompt: str | None = None
    """Instructions for the model to follow."""

    language: str | None = None
    """The language of the audio input."""

    temperature: float | None = None
    """The temperature of the model."""

    turn_detection: dict[str, Any] | None = None
    """The turn detection settings for the model when using streamed audio input."""

prompt class-attribute instance-attribute

prompt: str | None = None

Instructions for the model to follow.

language class-attribute instance-attribute

language: str | None = None

The language of the audio input.

temperature class-attribute instance-attribute

temperature: float | None = None

The temperature of the model.

turn_detection class-attribute instance-attribute

turn_detection: dict[str, Any] | None = None

The turn detection settings for the model when using streamed audio input.
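
A hedged sketch of customizing the transcription settings; the values are illustrative, and turn_detection is left at its default here because its schema depends on the underlying model:

from agents.voice.model import STTModelSettings

stt_settings = STTModelSettings(
    prompt="The audio is a support call about the Agents SDK.",
    language="en",
    temperature=0.0,
)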

STTModel

Bases: ABC

A speech-to-text model that can convert audio input into text.

Source code in src/agents/voice/model.py

class STTModel(abc.ABC):
    """A speech-to-text model that can convert audio input into text."""

    @property
    @abc.abstractmethod
    def model_name(self) -> str:
        """The name of the STT model."""
        pass

    @abc.abstractmethod
    async def transcribe(
        self,
        input: AudioInput,
        settings: STTModelSettings,
        trace_include_sensitive_data: bool,
        trace_include_sensitive_audio_data: bool,
    ) -> str:
        """Given an audio input, produces a text transcription.

        Args:
            input: The audio input to transcribe.
            settings: The settings to use for the transcription.
            trace_include_sensitive_data: Whether to include sensitive data in traces.
            trace_include_sensitive_audio_data: Whether to include sensitive audio data in traces.

        Returns:
            The text transcription of the audio input.
        """
        pass

    @abc.abstractmethod
    async def create_session(
        self,
        input: StreamedAudioInput,
        settings: STTModelSettings,
        trace_include_sensitive_data: bool,
        trace_include_sensitive_audio_data: bool,
    ) -> StreamedTranscriptionSession:
        """Creates a new transcription session, which you can push audio to, and receive a stream
        of text transcriptions.

        Args:
            input: The audio input to transcribe.
            settings: The settings to use for the transcription.
            trace_include_sensitive_data: Whether to include sensitive data in traces.
            trace_include_sensitive_audio_data: Whether to include sensitive audio data in traces.

        Returns:
            A new transcription session.
        """
        pass

model_name abstractmethod property

model_name: str

The name of the STT model.

transcribe abstractmethod async

transcribe(
    input: AudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> str

Given an audio input, produces a text transcription.

Parameters:

    Name                                  Type                Description                                          Default
    input                                 AudioInput          The audio input to transcribe.                       required
    settings                              STTModelSettings    The settings to use for the transcription.           required
    trace_include_sensitive_data          bool                Whether to include sensitive data in traces.         required
    trace_include_sensitive_audio_data    bool                Whether to include sensitive audio data in traces.   required

Returns:

    Type    Description
    str     The text transcription of the audio input.

Source code in src/agents/voice/model.py

@abc.abstractmethod
async def transcribe(
    self,
    input: AudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> str:
    """Given an audio input, produces a text transcription.

    Args:
        input: The audio input to transcribe.
        settings: The settings to use for the transcription.
        trace_include_sensitive_data: Whether to include sensitive data in traces.
        trace_include_sensitive_audio_data: Whether to include sensitive audio data in traces.

    Returns:
        The text transcription of the audio input.
    """
    pass
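
As a usage sketch, a hedged example of transcribing a buffer of PCM samples; it assumes AudioInput (the SDK's voice input helper) wraps a numpy sample buffer via a buffer argument, which should be checked against your SDK version:

import numpy as np

from agents.voice import AudioInput  # assumed re-export of the voice input helper
from agents.voice.model import STTModel, STTModelSettings


async def transcribe_buffer(model: STTModel, samples: np.ndarray) -> str:
    audio = AudioInput(buffer=samples)  # assumption: AudioInput accepts a raw PCM buffer
    return await model.transcribe(
        audio,
        STTModelSettings(language="en"),
        trace_include_sensitive_data=False,
        trace_include_sensitive_audio_data=False,
    )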

create_session abstractmethod async

create_session(
    input: StreamedAudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> StreamedTranscriptionSession

Creates a new transcription session, which you can push audio to, and receive a stream of text transcriptions.

Parameters:

    Name                                  Type                  Description                                          Default
    input                                 StreamedAudioInput    The audio input to transcribe.                       required
    settings                              STTModelSettings      The settings to use for the transcription.           required
    trace_include_sensitive_data          bool                  Whether to include sensitive data in traces.         required
    trace_include_sensitive_audio_data    bool                  Whether to include sensitive audio data in traces.   required

Returns:

    Type                            Description
    StreamedTranscriptionSession    A new transcription session.

Source code in src/agents/voice/model.py

@abc.abstractmethod
async def create_session(
    self,
    input: StreamedAudioInput,
    settings: STTModelSettings,
    trace_include_sensitive_data: bool,
    trace_include_sensitive_audio_data: bool,
) -> StreamedTranscriptionSession:
    """Creates a new transcription session, which you can push audio to, and receive a stream
    of text transcriptions.

    Args:
        input: The audio input to transcribe.
        settings: The settings to use for the transcription.
        trace_include_sensitive_data: Whether to include sensitive data in traces.
        trace_include_sensitive_audio_data: Whether to include sensitive audio data in traces.

    Returns:
        A new transcription session.
    """
    pass
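
To sketch the whole interface, a toy STTModel that returns canned transcripts; it performs no real speech recognition and exists only to show how the abstract methods fit together. The input parameters are left unannotated to avoid assuming import paths for AudioInput and StreamedAudioInput:

import asyncio
from collections.abc import AsyncIterator

from agents.voice.model import (
    STTModel,
    STTModelSettings,
    StreamedTranscriptionSession,
)


class CannedSession(StreamedTranscriptionSession):
    def __init__(self) -> None:
        self._closed = False

    async def transcribe_turns(self) -> AsyncIterator[str]:
        # Yield one fake turn, then keep the stream open until close() is called.
        yield "hello world"
        while not self._closed:
            await asyncio.sleep(0.1)

    async def close(self) -> None:
        self._closed = True


class CannedSTTModel(STTModel):
    @property
    def model_name(self) -> str:
        return "canned-stt"

    async def transcribe(
        self, input, settings: STTModelSettings,
        trace_include_sensitive_data: bool, trace_include_sensitive_audio_data: bool,
    ) -> str:
        return "hello world"

    async def create_session(
        self, input, settings: STTModelSettings,
        trace_include_sensitive_data: bool, trace_include_sensitive_audio_data: bool,
    ) -> StreamedTranscriptionSession:
        return CannedSession()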

VoiceModelProvider

Bases: ABC

The base interface for a voice model provider.

A model provider is responsible for creating speech-to-text and text-to-speech models, given a name.

Source code in src/agents/voice/model.py

class VoiceModelProvider(abc.ABC):
    """The base interface for a voice model provider.

    A model provider is responsible for creating speech-to-text and text-to-speech models, given a
    name.
    """

    @abc.abstractmethod
    def get_stt_model(self, model_name: str | None) -> STTModel:
        """Get a speech-to-text model by name.

        Args:
            model_name: The name of the model to get.

        Returns:
            The speech-to-text model.
        """
        pass

    @abc.abstractmethod
    def get_tts_model(self, model_name: str | None) -> TTSModel:
        """Get a text-to-speech model by name."""

get_stt_model abstractmethod

get_stt_model(model_name: str | None) -> STTModel

Get a speech-to-text model by name.

Parameters:

    Name          Type          Description                     Default
    model_name    str | None    The name of the model to get.   required

Returns:

    Type        Description
    STTModel    The speech-to-text model.

Source code in src/agents/voice/model.py

@abc.abstractmethod
def get_stt_model(self, model_name: str | None) -> STTModel:
    """Get a speech-to-text model by name.

    Args:
        model_name: The name of the model to get.

    Returns:
        The speech-to-text model.
    """
    pass

get_tts_model abstractmethod

get_tts_model(model_name: str | None) -> TTSModel

Get a text-to-speech model by name.

Source code in src/agents/voice/model.py

@abc.abstractmethod
def get_tts_model(self, model_name: str | None) -> TTSModel:
    """Get a text-to-speech model by name."""
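
Finally, a hedged sketch of a provider that wires the toy models from the earlier examples together; SilentTTSModel and CannedSTTModel come from those sketches, not from the SDK:

from agents.voice.model import STTModel, TTSModel, VoiceModelProvider


class ToyVoiceModelProvider(VoiceModelProvider):
    """Illustrative provider that ignores the requested name and returns toy models."""

    def get_stt_model(self, model_name: str | None) -> STTModel:
        return CannedSTTModel()    # defined in the STTModel example above

    def get_tts_model(self, model_name: str | None) -> TTSModel:
        return SilentTTSModel()    # defined in the TTSModel example above

In practice a provider like this is handed to the voice pipeline configuration so that model names can be resolved to model instances.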
