Increase output token limit for EquivalenceEvaluator #6835


Merged

Conversation

@shyamnamboodiripad (Contributor) commented Sep 22, 2025 (edited by the dotnet-policy-service bot)

EquivalenceEvaluator was specifying MaxOutputTokens = 1, since its prompt instructs the LLM to produce a response (score) that is a single digit between 1 and 5.

It turns out that while this works for most models (including the OpenAI models that were used to test the prompt), some models require more than one token to produce the digit. For example, Claude appears to require two tokens for this (see #6814).

This PR bumps MaxOutputTokens to 5 to address the above issue.

Fixes #6814
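For context, the change amounts to raising the token budget on the chat options the evaluator passes to the model. The sketch below is illustrative only (it is not the exact library source); it assumes the real `ChatOptions` type from Microsoft.Extensions.AI and shows why a budget of 1 is fragile across tokenizers:

```csharp
// Illustrative sketch, not the actual EquivalenceEvaluator source.
// ChatOptions is the real Microsoft.Extensions.AI options type; the
// surrounding setup is assumed for illustration.
using Microsoft.Extensions.AI;

var options = new ChatOptions
{
    // A limit of 1 assumes a single digit is always a single token.
    // That holds for many tokenizers (e.g. the OpenAI models the prompt
    // was tested against) but not all: per #6814, Claude needed two
    // tokens, so the response was truncated to nothing usable.
    // 5 leaves headroom while still keeping the response terse.
    MaxOutputTokens = 5,
};
```

The budget only caps the response length; the prompt itself still constrains the model to emit a single-digit score, so the extra headroom does not change the expected output format.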


@shyamnamboodiripad requested a review from a team as a code owner September 22, 2025 21:15
Copilot AI review requested due to automatic review settings September 22, 2025 21:15
Copilot AI (Contributor) left a comment


Pull Request Overview

Increases the maximum output token limit for EquivalenceEvaluator from 1 to 5 tokens to accommodate different LLM tokenization behaviors while maintaining the single-digit score output requirement.

  • Bumps MaxOutputTokens from 1 to 5 in ChatOptions configuration
  • Adds explanatory comment linking to the GitHub issue for context


@shyamnamboodiripad merged commit 6fb8ab7 into dotnet:main Sep 23, 2025
7 checks passed
@shyamnamboodiripad deleted the tokenlimit branch September 23, 2025 17:43
This was referenced Oct 14, 2025
This was referenced Oct 22, 2025
@github-actions bot locked and limited conversation to collaborators Oct 24, 2025

Reviewers

Copilot: left review comments

@peterwald: approved these changes

Labels

area-ai-eval: Microsoft.Extensions.AI.Evaluation and related

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

[AI Evaluation] EquivalenceEvaluator is not producing an answer

2 participants

@shyamnamboodiripad, @peterwald
