Set safety_identifier to openai-guardrails-python #37
Conversation
Pull Request Overview
This PR replaces the header-based safety identifier approach with a parameter-based approach for tracking guardrails library usage with the OpenAI API. The change removes the `_openai_utils` module that injected headers and instead conditionally adds a `safety_identifier` parameter to API calls when using the official OpenAI API (excluding Azure and alternative providers).
Key changes:
- Removed `_openai_utils.py` module containing header-based safety identifier logic
- Added `_supports_safety_identifier()` helper function in three locations to detect compatible clients (see the sketch below)
- Updated all OpenAI client instantiations to remove `prepare_openai_kwargs()` wrapper calls
- Added conditional `safety_identifier` parameter to LLM API calls based on client compatibility
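As a rough sketch of how such a detection helper might work (illustrative only, assuming the helper excludes Azure clients and non-default base URLs; the PR's actual implementation may differ):

```python
from openai import AsyncAzureOpenAI, AsyncOpenAI


def _supports_safety_identifier(client: AsyncOpenAI) -> bool:
    """Return True only for clients talking to the official OpenAI API.

    Hypothetical sketch: Azure clients and alternative providers
    (detected here via a non-default base_url) do not accept the
    safety_identifier parameter, so it must not be sent to them.
    """
    if isinstance(client, AsyncAzureOpenAI):
        return False
    return "api.openai.com" in str(client.base_url)
```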
Reviewed Changes
Copilot reviewed 13 out of 13 changed files in this pull request and generated 5 comments.
Summary per file:
| File | Description |
|---|---|
| src/guardrails/_openai_utils.py | Deleted header-based safety identifier utilities module |
| src/guardrails/checks/text/llm_base.py | Added safety identifier support detection and parameter injection for LLM checks |
| src/guardrails/resources/chat/chat.py | Added safety identifier support detection and parameter injection for chat completions |
| src/guardrails/resources/responses/responses.py | Added safety identifier support detection and parameter injection for responses API |
| src/guardrails/client.py | Removed `prepare_openai_kwargs()` calls from client initialization |
| src/guardrails/agents.py | Removed `prepare_openai_kwargs()` calls from default context creation |
| src/guardrails/runtime.py | Removed `prepare_openai_kwargs()` calls from runtime default context |
| src/guardrails/evals/guardrail_evals.py | Removed `prepare_openai_kwargs()` calls from eval client creation |
| src/guardrails/checks/text/moderation.py | Removed `prepare_openai_kwargs()` calls from moderation client |
| src/guardrails/utils/create_vector_store.py | Removed `prepare_openai_kwargs()` calls from vector store client |
| tests/unit/test_safety_identifier.py | Added comprehensive tests for safety identifier support detection |
| tests/unit/test_openai_utils.py | Deleted tests for removed header-based utilities |
| tests/unit/test_agents.py | Removed assertions checking for safety identifier headers |
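To illustrate the call-site pattern the table describes, a hedged sketch (the model, messages, and identifier value are assumptions, not the PR's exact code):

```python
# Only attach safety_identifier when the client targets the official
# OpenAI API; Azure and third-party providers would reject the
# unknown parameter.
extra_kwargs: dict[str, str] = {}
if _supports_safety_identifier(client):
    extra_kwargs["safety_identifier"] = "openai-guardrails-python"  # assumed value

response = await client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello"}],
    **extra_kwargs,
)
```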
```python
mock_client.base_url = "https://example.openai.azure.com/v1"
mock_client.__class__.__name__ = "AsyncAzureOpenAI"
# Azure detection happens via isinstance check, but we can test with class name
```
Copilot AI commented on Oct 30, 2025
This comment is misleading. The test creates a real Azure client and tests it with `isinstance()`, which is correct for `llm_base.py`. However, the `chat.py` and `responses.py` implementations use class-name string matching instead, not `isinstance` checks.
Suggested change:

```diff
- # Azure detection happens via isinstance check, but we can test with class name
+ # In llm_base.py, Azure detection happens via isinstance check (this test is correct).
+ # Note: chat.py and responses.py use class name string matching instead.
```
Pull Request Overview
Copilot reviewed 14 out of 14 changed files in this pull request and generated 1 comment.
```diff
 __all__ = [
     "LLMConfig",
-    "LLMOutput",
     "LLMErrorOutput",
-    "create_llm_check_fn",
+    "LLMOutput",
     "create_error_result",
+    "create_llm_check_fn",
 ]
```
Copilot AI commented on Oct 30, 2025
[nitpick] The `__all__` list was reordered alphabetically, but this appears to be an incidental change unrelated to the PR's main purpose of refactoring the safety identifier. The previous ordering grouped related items together (`LLMConfig`, `LLMOutput`, `LLMErrorOutput`, then functions). Consider reverting this change to keep the PR focused on its core objective.
gabor-openai left a comment
LGTM ty!
gabor-openai left a comment
LGTM TY!
Merged commit 06fa983 into main
Fixing the `safety_identifier` implementation: the `safety_identifier` parameter.

Note: `safety_identifier` is only supported by the normal OpenAI `responses` and `chat.completions` endpoints. It is not implemented for Azure or third-party providers.
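For reference, a minimal usage sketch on a supported endpoint (the identifier string is illustrative):

```python
from openai import OpenAI

client = OpenAI()

# safety_identifier is accepted by the standard OpenAI Responses and
# Chat Completions endpoints; Azure and third-party providers do not
# support it.
response = client.responses.create(
    model="gpt-4.1-mini",
    input="Hello",
    safety_identifier="openai-guardrails-python",  # illustrative value
)
print(response.output_text)
```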