Enforce 25KB limit for infinite transcription #13301


Open

suvigyajain0101 wants to merge 2 commits into GoogleCloudPlatform:main from suvigyajain0101:patch-1

Conversation

@suvigyajain0101 commented Apr 14, 2025 (edited)

Description

The current implementation breaks when a new stream is created, even under the 5-minute limit. This is due to missing logic to handle the 25KB stream size limit [1].

Updated the generator function to yield data as soon as the API limit is reached.

[1] - #12053

Fixes #12053
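
For context on why the limit is so easy to hit when a new stream is created, here is a rough back-of-the-envelope check. The 16 kHz / 16-bit mono figures are assumptions based on the microphone samples' usual configuration, not values taken from this diff:

```python
# Back-of-the-envelope arithmetic: how quickly buffered audio passes ~25 KB.
# Assumes 16 kHz, 16-bit mono LINEAR16 audio (the microphone samples' usual
# defaults); adjust if the sample is configured differently.
SAMPLE_RATE = 16000              # samples per second
BYTES_PER_SAMPLE = 2             # 16-bit linear PCM
BYTES_PER_SECOND = SAMPLE_RATE * BYTES_PER_SAMPLE  # 32,000 bytes/s

LIMIT_BYTES = 25 * 1024          # ~25 KB per streaming request
print(LIMIT_BYTES / BYTES_PER_SECOND)  # ~0.8 -> under a second of audio hits the limit
```

Less than a second of buffered audio already exceeds the per-request cap, which is why the generator has to split whatever it has accumulated before sending.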

Note: Before submitting a pull request, please open an issue for discussion if you are not associated with Google.

Checklist

@suvigyajain0101 requested review from a team as code owners April 14, 2025 12:32
@product-auto-label bot added the samples (Issues that are directly related to samples.) and api: speech (Issues related to the Speech-to-Text API.) labels Apr 14, 2025
@gemini-code-assist bot (Contributor) left a comment

Hello @suvigyajain0101, I'm Gemini Code Assist [1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request addresses an issue where the infinite transcription implementation was breaking due to missing logic for the 25KB stream size limit imposed by the API. The changes modify the generator function in speech/microphone/transcribe_streaming_infinite_v2.py to yield data in chunks that respect this limit, ensuring that the API's constraints are met and the transcription process doesn't break.

Highlights

  • Bug Fix: Fixes an issue where the infinite transcription would break due to exceeding the 25KB stream size limit.
  • API Limit Enforcement: Implements logic to split audio data into chunks smaller than 25KB before sending to the API.
  • Code Modification: Modifies the generator function to handle the 25KB limit by yielding smaller chunks of data.

Changelog

  • speech/microphone/transcribe_streaming_infinite_v2.py
    • Added MAX_STREAMING_CHUNK constant to define the 25KB API limit at line 43.
    • Modified the generator function to split audio data into chunks smaller than MAX_STREAMING_CHUNK at line 218.
    • Added logic to handle cases where a single chunk exceeds the MAX_STREAMING_CHUNK limit by further splitting it into smaller chunks at line 222 (a minimal sketch of this capping idea follows below).
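
Concretely, the change described above amounts to re-batching buffered audio so that no single yielded payload exceeds MAX_STREAMING_CHUNK. A minimal, self-contained sketch of that capping idea follows; the constant name is taken from the changelog, while the 25 * 1024 value, the capped_payloads name, and the audio_chunks shape are illustrative assumptions (the PR's actual diff is quoted in the review further down):

```python
from typing import Iterable, Iterator, List

MAX_STREAMING_CHUNK = 25 * 1024  # assumed value for the ~25 KB per-request cap


def capped_payloads(audio_chunks: Iterable[bytes]) -> Iterator[bytes]:
    """Re-batch raw audio chunks so that no yielded payload exceeds the cap."""
    buffer: List[bytes] = []
    buffered = 0
    for chunk in audio_chunks:
        # Flush the current batch before it would overflow the limit.
        if buffer and buffered + len(chunk) > MAX_STREAMING_CHUNK:
            yield b"".join(buffer)
            buffer, buffered = [], 0
        buffer.append(chunk)
        buffered += len(chunk)
    if buffer:
        yield b"".join(buffer)
```

The sketch assumes each individual chunk is already under the limit, which holds for typical 100 ms microphone chunks; a single oversized chunk would still need to be sliced further, which is what the last changelog item refers to.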
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


A stream of speech, so grand,
But size limits, close at hand.
Split the data small,
Heed the API's call,
And transcription will stand.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses an important issue where the streaming transcription breaks due to exceeding the API's 25KB limit. The solution implemented to split the data into smaller chunks is a good approach. However, there are a few areas that could be improved for clarity and efficiency.

Summary of Findings

  • Clarity of Chunking Logic: The chunking logic within the generator function could benefit from more descriptive variable names and comments to enhance readability.
  • Potential Inefficiency: The current implementation iterates through the data list twice: once to calculate combined_size and again to split the data into chunks. This could be optimized by combining these operations.

Merge Readiness

The pull request addresses a critical bug and implements a reasonable solution. However, addressing the comments regarding code clarity and potential efficiency improvements would significantly enhance the quality of the code. I recommend addressing these before merging. I am unable to directly approve this pull request, and others should review and approve this code before merging.

Comment on lines +219 to +234
combined_size = sum(len(chunk) for chunk in data)
if combined_size <= MAX_STREAMING_CHUNK:
    yield b"".join(data)
else:
    run_chunks = []
    run_size = 0
    for chunk in data:
        if len(chunk) + run_size > MAX_STREAMING_CHUNK:
            yield b"".join(run_chunks)
            run_chunks = [chunk]
            run_size = len(chunk)
        else:
            run_chunks.append(chunk)
            run_size += len(chunk)
    if run_chunks:
        yield b"".join(run_chunks)
@gemini-code-assist bot (Contributor) left a review comment (severity: high)

The code iterates through the data list twice: first to calculate combined_size and then to create the chunks. This could be combined into a single loop for better efficiency. Consider calculating the combined_size while creating the chunks, and only yielding the combined data if it's within the limit. If it exceeds the limit, yield the accumulated chunks and start a new chunk.

current_chunk = []
current_chunk_size = 0
for chunk in data:
    if current_chunk_size + len(chunk) <= MAX_STREAMING_CHUNK:
        current_chunk.append(chunk)
        current_chunk_size += len(chunk)
    else:
        if current_chunk:
            yield b''.join(current_chunk)
        current_chunk = [chunk]
        current_chunk_size = len(chunk)
if current_chunk:
    yield b''.join(current_chunk)

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
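
Whichever variant lands, the invariants are easy to self-check: no payload may exceed the cap, and concatenating the payloads must reproduce the buffered audio byte for byte. Below is a hedged, self-contained test sketch; rebatch is a stand-in mirroring the single-pass suggestion above rather than the PR's actual function, and the 3200-byte chunk size is an assumption (roughly 100 ms of 16 kHz, 16-bit audio):

```python
import os

MAX_STREAMING_CHUNK = 25 * 1024  # assumed ~25 KB per-request cap


def rebatch(chunks):
    """Single-pass re-batching, mirroring the suggested change above."""
    buf, size = [], 0
    for chunk in chunks:
        # Flush the running batch before it would exceed the cap.
        if buf and size + len(chunk) > MAX_STREAMING_CHUNK:
            yield b"".join(buf)
            buf, size = [], 0
        buf.append(chunk)
        size += len(chunk)
    if buf:
        yield b"".join(buf)


def test_rebatch_respects_limit_and_is_lossless():
    # Twenty ~3.2 KB chunks, i.e. roughly two seconds of 16 kHz 16-bit audio.
    chunks = [os.urandom(3200) for _ in range(20)]
    payloads = list(rebatch(chunks))
    assert all(len(p) <= MAX_STREAMING_CHUNK for p in payloads)
    assert b"".join(payloads) == b"".join(chunks)


if __name__ == "__main__":
    test_rebatch_respects_limit_and_is_lossless()
    print("ok")
```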
@suvigyajain0101 (Author) commented:

Hi @kharvey-google, please let me know if you need additional information from my side. Thanks!

@glasnt added the kokoro:force-run (Add this label to force Kokoro to re-run the tests.) label May 30, 2025
@kokoro-team removed the kokoro:force-run (Add this label to force Kokoro to re-run the tests.) label May 30, 2025
Reviewers

@gemini-code-assist[bot] requested changes

At least 1 approving review is required to merge this pull request.

Assignees

@kharvey-google

Labels
api: speech (Issues related to the Speech-to-Text API.), samples (Issues that are directly related to samples.)
Projects
None yet
Milestone
No milestone
Development

Successfully merging this pull request may close these issues.

Infinite Streaming Not Working with Google Speech-to-Text API v2
5 participants
@suvigyajain0101, @glasnt, @kokoro-team, @kharvey-google, @iennae
