Enhance Gemini usage tracking to collect comprehensive token data #1752
Conversation
amiyapatanaik commented May 19, 2025
Thanks @kiqaps, much appreciated.
@kiqaps Thanks for picking this up! Just one suggestion.
@kiqaps Would you mind checking the …
yeah, sure, no problem
OK, it's done... but I went with a more intrusive implementation to keep a single "token parser" for both providers (via genai or HTTP). Will you maintain both, or will HTTP be deprecated? If you prefer, I can keep the parsing in both providers.
The idea is to move forward with the genai-based provider. So I think it's better to have duplicated logic and not have a common file. Would you mind changing it? Sorry if this was not clear before.
@Kludex done :)
Thanks!
Merged 09606c0 into pydantic:main
When using Gemini, I noticed that some tokens (such as reasoning tokens and usage by modality) were not being collected. I made an adjustment so that all of these are included within the Usage `details`. Since `details` is a dict from `str` to `int`, I couldn't simply throw them in there, so I created the dict in what I thought was the most intuitive way, but I'm not sure it's the best approach.

I also had to add line breaks in this CLI test to make it pass on my PC (not sure why), but after pushing I saw that it broke the tests, so I removed them again :D
Now, every token documented at https://ai.google.dev/api/generate-content#UsageMetadata is collected.
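A minimal sketch of the flattening this describes, assuming the `UsageMetadata` field names from the linked API reference (`thoughtsTokenCount`, `promptTokensDetails`, etc.); the key-naming scheme (e.g. `text_prompt_tokens`) and the `flatten_usage_details` helper are illustrative, not the PR's exact implementation:

```python
def flatten_usage_details(metadata: dict) -> dict[str, int]:
    """Flatten Gemini UsageMetadata into a str -> int details dict."""
    details: dict[str, int] = {}

    # Scalar counters map straight across when present.
    for field, key in [
        ('cachedContentTokenCount', 'cached_content_tokens'),
        ('thoughtsTokenCount', 'thoughts_tokens'),
        ('toolUsePromptTokenCount', 'tool_use_prompt_tokens'),
    ]:
        if (count := metadata.get(field)) is not None:
            details[key] = count

    # Per-modality breakdowns are lists of {'modality': ..., 'tokenCount': ...},
    # so each entry gets its own '<modality>_<kind>' key to stay int-valued.
    for field, suffix in [
        ('promptTokensDetails', 'prompt_tokens'),
        ('candidatesTokensDetails', 'candidates_tokens'),
    ]:
        for entry in metadata.get(field) or []:
            details[f"{entry['modality'].lower()}_{suffix}"] = entry['tokenCount']

    return details


# Example: reasoning tokens and a per-modality prompt count both land in details.
print(flatten_usage_details({
    'thoughtsTokenCount': 5,
    'promptTokensDetails': [{'modality': 'TEXT', 'tokenCount': 10}],
}))
# {'thoughts_tokens': 5, 'text_prompt_tokens': 10}
```

Flattening per-modality entries into prefixed keys is one way to fit nested metadata into a flat `dict[str, int]` without losing the breakdown.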
Also, when streaming, I'm retrieving usage data only from the last chunk, which should fix #1736.
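A rough sketch of that last-chunk behavior, with plain dicts standing in for the real streamed response objects (the accumulate-vs-overwrite rationale is an assumption based on Gemini reporting running totals on streamed chunks):

```python
def usage_from_stream(chunks: list[dict]) -> dict | None:
    """Return usage metadata from the last streamed chunk that carries it."""
    last_usage: dict | None = None
    for chunk in chunks:
        # Overwrite rather than sum: if streamed usage numbers are cumulative,
        # accumulating across chunks would overcount, while the final chunk
        # already carries the complete totals.
        if (metadata := chunk.get('usageMetadata')) is not None:
            last_usage = metadata
    return last_usage
```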