LiveSessionFutures

@PublicPreviewAPI
abstract class LiveSessionFutures


Wrapper class providing Java-compatible methods for LiveSession.

See also
LiveSession

Summary

Public companion functions

LiveSessionFutures
from(session: LiveSession)

Public functions

abstract ListenableFuture<Unit>
close()

Closes the client session.

abstract Publisher<LiveServerMessage>
receive()

Receives responses from the model for both streaming and standard requests.

abstract ListenableFuture<Unit>
send(content: Content)

Sends data to the model.

abstract ListenableFuture<Unit>
send(text: String)

Sends text to the model.

abstract ListenableFuture<Unit>
sendAudioRealtime(audio: InlineData)

Sends an audio input stream to the model, using the realtime API.

abstract ListenableFuture<Unit>
sendFunctionResponse(functionList: List<FunctionResponsePart>)

Sends function calling responses to the model.

abstract ListenableFuture<Unit>
sendMediaStream(mediaChunks: List<MediaData>)

This function is deprecated. Use `sendAudioRealtime`, `sendVideoRealtime`, or `sendTextRealtime` instead.

abstract ListenableFuture<Unit>
sendTextRealtime(text: String)

Sends a text input to the model, using the realtime API.

abstract ListenableFuture<Unit>
sendVideoRealtime(video: InlineData)

Sends a video input stream to the model, using the realtime API.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation()

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(enableInterruptions: Boolean)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    liveAudioConversationConfig: LiveAudioConversationConfig
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
)

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

abstract ListenableFuture<Unit>
@RequiresPermission(value = "android.permission.RECORD_AUDIO")
stopAudioConversation()

Stops the audio conversation with the Gemini Server.

abstract Unit
stopReceiving()

Stops receiving from the model.

Public companion functions

from

fun from(session: LiveSession): LiveSessionFutures
Returns
LiveSessionFutures

a LiveSessionFutures created around the provided LiveSession
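A minimal sketch of wrapping a session from Java. The `session` value is assumed to have been obtained elsewhere (for example, from a live generative model connection), and the package names follow the firebase-ai SDK layout:

```java
import com.google.firebase.ai.java.LiveSessionFutures;
import com.google.firebase.ai.type.LiveSession;

public class LiveSessionHolder {
    private final LiveSessionFutures sessionFutures;

    public LiveSessionHolder(LiveSession session) {
        // Wrap the Kotlin-first LiveSession in its Java-compatible counterpart.
        this.sessionFutures = LiveSessionFutures.from(session);
    }

    public LiveSessionFutures get() {
        return sessionFutures;
    }
}
```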

Public functions

close

abstract fun close(): ListenableFuture<Unit>

Closes the client session.

Once a LiveSession is closed, it cannot be reopened; you'll need to start a new LiveSession.

receive

abstract fun receive(): Publisher<LiveServerMessage>

Receives responses from the model for both streaming and standard requests.

Call close to stop receiving responses from the model.

Returns
Publisher<LiveServerMessage>

A Publisher which will emit LiveServerMessage from the model.

Throws
com.google.firebase.ai.type.SessionAlreadyReceivingException

when the session is already receiving.
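Because receive returns a Reactive Streams Publisher, a plain Subscriber can consume it from Java. A minimal sketch, assuming `sessionFutures` already exists in the surrounding app code:

```java
import org.reactivestreams.Publisher;
import org.reactivestreams.Subscriber;
import org.reactivestreams.Subscription;

import com.google.firebase.ai.type.LiveServerMessage;

Publisher<LiveServerMessage> messages = sessionFutures.receive();
messages.subscribe(new Subscriber<LiveServerMessage>() {
    @Override public void onSubscribe(Subscription s) {
        s.request(Long.MAX_VALUE); // unbounded demand, for this sketch only
    }
    @Override public void onNext(LiveServerMessage message) {
        // Handle each server message (model content, tool calls, etc.) here.
    }
    @Override public void onError(Throwable t) {
        // Surface the failure, e.g. SessionAlreadyReceivingException.
    }
    @Override public void onComplete() {
        // The session stopped emitting messages.
    }
});
```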

send

abstract fun send(content: Content): ListenableFuture<Unit>

Sends data to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
content: Content

Client Content to be sent to the model.

send

abstract fun send(text: String): ListenableFuture<Unit>

Sends text to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
text: String

Text to be sent to the model.
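Since send returns a ListenableFuture, Java callers typically attach a callback rather than block. A sketch using Guava's Futures.addCallback, assuming `sessionFutures` and an `executor` exist in the surrounding app code:

```java
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import java.util.concurrent.Executor;

ListenableFuture<kotlin.Unit> sendFuture =
        sessionFutures.send("Tell me a short story.");

Futures.addCallback(sendFuture, new FutureCallback<kotlin.Unit>() {
    @Override public void onSuccess(kotlin.Unit result) {
        // The text was handed off to the live session.
    }
    @Override public void onFailure(Throwable t) {
        // Sending failed, e.g. because the session was already closed.
    }
}, executor);
```

Note that a Kotlin `Unit` return type surfaces in Java as `kotlin.Unit`.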

sendAudioRealtime

abstract fun sendAudioRealtime(audio: InlineData): ListenableFuture<Unit>

Sends an audio input stream to the model, using the realtime API.

Parameters
audio: InlineData

The audio data to send.

sendFunctionResponse

abstract fun sendFunctionResponse(functionList: List<FunctionResponsePart>): ListenableFuture<Unit>

Sends function calling responses to the model.

Parameters
functionList: List<FunctionResponsePart>

The list of FunctionResponsePart instances indicating the function response from the client.

sendMediaStream

abstract fun sendMediaStream(mediaChunks: List<MediaData>): ListenableFuture<Unit>
This function is deprecated.
Use `sendAudioRealtime`, `sendVideoRealtime`, or `sendTextRealtime` instead

Streams client data to the model.

Calling this after startAudioConversation will play the response audio immediately.

Parameters
mediaChunks: List<MediaData>

The list of MediaData instances representing the media data to be sent.

sendTextRealtime

abstract fun sendTextRealtime(text: String): ListenableFuture<Unit>

Sends a text input to the model, using the realtime API.

For details about the realtime input usage, see the BidiGenerateContentRealtimeInput documentation (Gemini Developer API or Vertex AI Gemini API).

Parameters
text: String

The text data to send.

sendVideoRealtime

abstract fun sendVideoRealtime(video: InlineData): ListenableFuture<Unit>

Sends a video input stream to the model, using the realtime API.

Parameters
video: InlineData

The video data to send. The MIME type may be either a video or an image type.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(enableInterruptions: Boolean): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model receives a function call.
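From Java, the handler parameter (a Kotlin function type) can be satisfied with a lambda. A sketch, where `executeTool` is a hypothetical app-level helper that runs the requested tool and builds the matching FunctionResponsePart:

```java
import com.google.firebase.ai.type.FunctionCallPart;
import com.google.firebase.ai.type.FunctionResponsePart;

sessionFutures.startAudioConversation(functionCall -> {
    // Dispatch to app code that executes the tool named in `functionCall`
    // and returns a FunctionResponsePart for the model to consume.
    FunctionResponsePart response = executeTool(functionCall);
    return response;
});
```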

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    liveAudioConversationConfig: LiveAudioConversationConfig
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

Parameters
liveAudioConversationConfig: LiveAudioConversationConfig

A LiveAudioConversationConfig provided by the user to control the various aspects of the conversation.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation.

Parameters
transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever the model receives a transcript. The first Transcription object is the input transcription, and the second is the output transcription.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model receives a function call.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever the model receives a transcript. The first Transcription object is the input transcription, and the second is the output transcription.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

startAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun startAudioConversation(
    functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?,
    transcriptHandler: ((Transcription?, Transcription?) -> Unit)?,
    enableInterruptions: Boolean
): ListenableFuture<Unit>

Starts an audio conversation with the model, which can only be stopped using stopAudioConversation or close.

Parameters
functionCallHandler: ((FunctionCallPart) -> FunctionResponsePart)?

A callback function that is invoked whenever the model receives a function call.

transcriptHandler: ((Transcription?, Transcription?) -> Unit)?

A callback function that is invoked whenever the model receives a transcript. The first Transcription object is the input transcription, and the second is the output transcription.

enableInterruptions: Boolean

If enabled, allows the user to speak over or interrupt the model's ongoing reply.

WARNING: The user interruption feature relies on device-specific support, and may not be consistently available.

stopAudioConversation

@RequiresPermission(value = "android.permission.RECORD_AUDIO")
abstract fun stopAudioConversation(): ListenableFuture<Unit>

Stops the audio conversation with the Gemini Server.

This only needs to be called after a previous call to startAudioConversation.

If there is no audio conversation currently active, this function does nothing.

stopReceiving

abstract fun stopReceiving(): Unit

Stops receiving from the model.

If this function is called during an ongoing audio conversation, the model's response will not be received, and no audio will be played; the live session object will no longer receive data from the server.

To resume receiving data, you must either handle it directly using receive, or indirectly by using startAudioConversation.

See also
close

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-11 UTC.