feat: Add token tracking support to openai_embed function#2181

Merged
danielaskdd merged 2 commits into HKUDS:main from yrangana:feat/openai-embedding-token-tracking on Oct 9, 2025

Conversation

@yrangana
Contributor

Description

This PR adds optional token tracking support to the openai_embed() function, so that embedding token usage can be monitored alongside the existing LLM token tracking.

Related Issues

N/A - Enhancement to improve observability of API token usage for embeddings.

Changes Made

  • Add an optional token_tracker parameter to the openai_embed() function in lightrag/llm/openai.py
  • Track prompt_tokens and total_tokens for embedding API calls
  • Update the function docstring to document the new token_tracker parameter
  • Enable monitoring of embedding token usage alongside LLM calls
  • Maintain backward compatibility with existing code (parameter is optional)
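
The optional-parameter pattern described above can be sketched as follows. Note that SimpleTokenTracker and fake_embed are illustrative stand-ins, not LightRAG's actual TokenTracker or openai_embed implementations; the point is only that a missing tracker leaves existing call sites unchanged.

```python
# Hypothetical sketch of the optional token-tracking pattern; names below
# are illustrative, not LightRAG's real API.

class SimpleTokenTracker:
    """Accumulates prompt and total token counts across API calls."""

    def __init__(self):
        self.prompt_tokens = 0
        self.total_tokens = 0

    def add_usage(self, usage: dict) -> None:
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.total_tokens += usage.get("total_tokens", 0)


def fake_embed(texts, token_tracker=None):
    # Pretend the API reported usage for this call (3 tokens per text).
    usage = {"prompt_tokens": len(texts) * 3, "total_tokens": len(texts) * 3}
    if token_tracker is not None:
        token_tracker.add_usage(usage)
    return [[0.0] * 4 for _ in texts]  # dummy embedding vectors


tracker = SimpleTokenTracker()
fake_embed(["a", "b"], token_tracker=tracker)
fake_embed(["c"])  # no tracker passed: still works, nothing recorded
print(tracker.prompt_tokens, tracker.total_tokens)  # 6 6
```

Because the parameter defaults to None, callers that never heard of token tracking are unaffected, which is what makes the change backward compatible.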

Checklist

  • Changes tested locally
  • Code reviewed
  • Documentation updated

Additional Notes

Implementation Details:

  • Follows the same pattern used in openai_complete_if_cache() for consistency
  • Token tracking only occurs when token_tracker is provided and the response contains usage data
  • No breaking changes - existing code continues to work without modifications
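
The conditional guard described above can be sketched like this. The _Response and record names are hypothetical scaffolding for illustration; they mirror the PR's description (track only when a tracker is passed and the response carries usage data), not LightRAG's exact code.

```python
# Illustrative guard for "track only when token_tracker is provided AND the
# response contains usage data". All names here are hypothetical.

class _Usage:
    prompt_tokens = 5
    total_tokens = 5

class _Response:
    usage = _Usage()

class _ResponseNoUsage:
    usage = None

recorded = []  # stand-in tracker: a plain list of usage dicts

def record(token_tracker, response):
    usage = getattr(response, "usage", None)
    if token_tracker is not None and usage is not None:
        token_tracker.append(
            {"prompt_tokens": usage.prompt_tokens,
             "total_tokens": usage.total_tokens}
        )

record(recorded, _Response())        # tracked
record(None, _Response())            # no tracker: silently skipped
record(recorded, _ResponseNoUsage()) # no usage data: silently skipped
print(len(recorded))  # 1
```

Skipping silently in both failure modes is what keeps the feature non-breaking: no exception is raised when usage data is absent.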

Usage Example:

```python
from lightrag.utils import TokenTracker
from lightrag.llm.openai import openai_embed

token_tracker = TokenTracker()
embeddings = await openai_embed(
    texts=["example text"],
    token_tracker=token_tracker,
)
print(token_tracker)  # Shows token usage statistics
```

  • Add optional token_tracker parameter to openai_embed()
  • Track prompt_tokens and total_tokens for embedding API calls
  • Enables monitoring of embedding token usage alongside LLM calls
  • Maintains backward compatibility with existing code
@danielaskdd
Collaborator

@codex review

@chatgpt-codex-connector

Codex Review: Didn't find any major issues. Hooray!


danielaskdd merged commit 0f15fdc into HKUDS:main on Oct 9, 2025 (1 check passed)
@danielaskdd
Collaborator

Thanks for sharing.
