Correctly passing API key to moderation #36
Conversation
Pull Request Overview
This PR refactors the moderation guardrail's client selection logic to simplify how it chooses between the context client and the fallback OpenAI client. The previous implementation attempted to detect whether the context client pointed to the official OpenAI API by inspecting base URLs; the new approach attempts to use any context client first and falls back to the default OpenAI client if a NotFoundError is raised (indicating the moderation endpoint doesn't exist on third-party providers).
Key changes:
- Simplified client selection logic using try-catch instead of URL inspection
- Added NotFoundError exception handling for graceful fallback to the OpenAI client
- Added test coverage for both context client usage and third-party provider fallback scenarios
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/guardrails/checks/text/moderation.py | Refactored moderation function to use try-catch pattern for client selection instead of URL-based detection; added NotFoundError import and exception handling for fallback logic |
| tests/unit/checks/test_moderation.py | Added two new tests to verify context client usage and fallback behavior when moderation endpoint is unavailable |
| tests/conftest.py | Added stub NotFoundError exception to support testing the new fallback logic |
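For illustration only (not the repository's actual test code), a fallback test in this spirit can stub NotFoundError and assert that the second client is consulted when the first one lacks a moderations endpoint; all names below are hypothetical:

```python
import asyncio


class NotFoundError(Exception):
    """Stub standing in for openai.NotFoundError, in the spirit of tests/conftest.py."""


class FakeModerations:
    def __init__(self, fail: bool) -> None:
        self.fail = fail
        self.calls = 0

    async def create(self, **kwargs):
        self.calls += 1
        if self.fail:
            raise NotFoundError("no /moderations endpoint on this provider")
        return {"results": [{"flagged": False}]}


class FakeClient:
    def __init__(self, fail: bool) -> None:
        self.moderations = FakeModerations(fail)


async def moderate_with_fallback(context_client, fallback_client, data):
    # Mirrors the PR's selection logic: try the context client first,
    # fall back to the default OpenAI client on NotFoundError.
    try:
        return await context_client.moderations.create(
            model="omni-moderation-latest", input=data
        )
    except NotFoundError:
        return await fallback_client.moderations.create(
            model="omni-moderation-latest", input=data
        )


def test_falls_back_when_moderation_endpoint_missing():
    third_party = FakeClient(fail=True)       # e.g. a non-OpenAI provider
    openai_fallback = FakeClient(fail=False)  # default OpenAI client
    result = asyncio.run(moderate_with_fallback(third_party, openai_fallback, "hello"))
    assert result["results"][0]["flagged"] is False
    assert third_party.moderations.calls == 1
    assert openai_fallback.moderations.calls == 1
```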
Previous implementation (removed), which inspected base URLs to decide whether the context client could be reused:

```python
# Body of the removed _maybe_reuse_openai_client_from_ctx helper
try:
    candidate = getattr(context, "guardrail_llm", None)
    if not isinstance(candidate, AsyncOpenAI):
        return None
    # Attempt to discover the effective base URL in a best-effort way
    base_url = getattr(candidate, "base_url", None)
    if base_url is None:
        inner = getattr(candidate, "_client", None)
        base_url = getattr(inner, "base_url", None) or getattr(inner, "_base_url", None)
    # Reuse only when clearly the official OpenAI endpoint
    if base_url is None:
        return candidate
    if isinstance(base_url, str) and "api.openai.com" in base_url:
        return candidate
    return None
except Exception:
    return None

# Removed call site
client = _maybe_reuse_openai_client_from_ctx(ctx) or _get_moderation_client()
resp = await client.moderations.create(
    model="omni-moderation-latest",
    input=data,
)
```

New implementation (added), which tries the context client and falls back on NotFoundError:

```python
# Try the context client first, fall back if moderation endpoint doesn't exist
if client is not None:
    try:
        resp = await _call_moderation_api(client, data)
    except NotFoundError as e:
        # Moderation endpoint doesn't exist on this provider (e.g., third-party)
        # Fall back to the OpenAI client
        logger.debug(
            "Moderation endpoint not available on context client, falling back to OpenAI: %s",
            e,
        )
        client = _get_moderation_client()
        resp = await _call_moderation_api(client, data)
else:
    # No context client, use fallback
    client = _get_moderation_client()
    resp = await _call_moderation_api(client, data)
```
Copilot AI commented Oct 30, 2025
Code duplication: Lines 187-188 and 191-192 duplicate the same logic (getting the fallback client and calling the moderation API). Consider refactoring to avoid this duplication. For example, you could set client = _get_moderation_client() once and call resp = await _call_moderation_api(client, data) after the if-else block.
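One way that suggestion could look (a sketch only, reusing the names from the diff above; the exact surrounding function is assumed):

```python
resp = None
if client is not None:
    try:
        resp = await _call_moderation_api(client, data)
    except NotFoundError as e:
        # Moderation endpoint missing on this provider; fall through to the fallback client.
        logger.debug(
            "Moderation endpoint not available on context client, falling back to OpenAI: %s",
            e,
        )
if resp is None:
    # Single fallback path shared by "no context client" and "endpoint missing".
    client = _get_moderation_client()
    resp = await _call_moderation_api(client, data)
```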
gabor-openai left a comment
TY!
Merged commit 1bfd82b into main.
Fixes this reported bug: api_key passed in via client initialization is not properly passed to the moderation check.

The fix:
- Use the guardrail_llm client; if the moderation endpoint does not exist with that client, fall back to creating a compatible OpenAI client via the env variable.
- An OpenAI API key is required for the Moderation check.
- A future PR will allow configuring an api_key specific to the moderation check via the config file (only needed for users who are using non-openai models as their base client).
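For users on non-OpenAI base clients, this means the OpenAI key must still be available in the environment. A minimal sketch of that fallback path (the explicit AsyncOpenAI() construction here is illustrative, not the library's exact internals):

```python
from openai import AsyncOpenAI

# A default AsyncOpenAI() reads OPENAI_API_KEY from the environment, which is
# why that variable is required for the Moderation check when the base client
# targets a third-party provider.
fallback_client = AsyncOpenAI()  # raises at construction if OPENAI_API_KEY is unset


async def run_moderation(text: str) -> bool:
    resp = await fallback_client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return resp.results[0].flagged
```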