Increase output token limit for EquivalenceEvaluator (#6835)
Conversation
EquivalenceEvaluator was specifying MaxOutputTokens = 1, since its prompt instructs the LLM to produce a response (score) that is a single digit (between 1 and 5).

It turns out that while this works for most models (including the OpenAI models that were used to test the prompt), some models require more than one token for this. For example, Claude appears to require two tokens for this (see dotnet#6814).

This PR bumps MaxOutputTokens to 5 to address the above issue.

Fixes dotnet#6814
Pull Request Overview
Increases the maximum output token limit for EquivalenceEvaluator from 1 to 5 tokens to accommodate different LLM tokenization behaviors while maintaining the single-digit score output requirement.
- Bumps MaxOutputTokens from 1 to 5 in ChatOptions configuration
- Adds explanatory comment linking to the GitHub issue for context
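The change described above amounts to a small configuration tweak. A minimal sketch, assuming the evaluator builds its request options via `ChatOptions` from Microsoft.Extensions.AI (the surrounding setup here is illustrative, not the actual EquivalenceEvaluator source):

```csharp
using Microsoft.Extensions.AI;

// Illustrative sketch only: the real evaluator configures more than this.
var chatOptions = new ChatOptions
{
    // The prompt asks for a single digit (1-5), but some models emit
    // that digit across more than one output token (e.g., Claude; see
    // dotnet#6814), so a budget of exactly 1 can truncate the score.
    // A small buffer of 5 tokens leaves room without allowing the
    // model to ramble past the requested single-digit response.
    MaxOutputTokens = 5,
};
```

Keeping the limit small (rather than removing it) preserves the original intent: the model is still effectively constrained to a terse, single-digit answer, while the buffer absorbs tokenizer differences across providers.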
6fb8ab7 into dotnet:main