Handle sync guardrail calls to avoid awaitable error #21
Conversation
Pull Request Overview
This PR fixes a bug where using synchronous OpenAI clients caused `TypeError: object ChatCompletion can't be used in 'await' expression` errors in LLM-based guardrails. The root cause was that the guardrail LLM was initialized as the same class as the actual client, but guardrails always execute asynchronously. The fix wraps synchronous methods in a background executor so that all calls can be awaited uniformly.
Key changes:
- Added `_invoke_openai_callable` helper to handle both sync and async OpenAI SDK methods
- Updated all LLM-based checks to use the new helper for calling OpenAI APIs
- Added unit tests to verify sync client support
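The dispatch logic described above can be sketched roughly as follows. This is an illustrative approximation, not the PR's actual code; the name `invoke_openai_callable` and its signature here are assumptions modeled on the helper the PR describes:

```python
import asyncio
import inspect
from functools import partial
from typing import Any, Callable


async def invoke_openai_callable(fn: Callable[..., Any], **kwargs: Any) -> Any:
    """Await an OpenAI SDK method whether it is sync or async.

    Async methods are awaited directly; sync methods are dispatched to
    the default thread-pool executor, so the event loop is not blocked
    and nothing tries to `await` a plain ChatCompletion object.
    """
    if inspect.iscoroutinefunction(fn):
        return await fn(**kwargs)
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(fn, **kwargs))
```

With this shape, a caller such as `run_llm` can invoke `await invoke_openai_callable(client.chat.completions.create, ...)` without knowing whether `client` is `OpenAI` or `AsyncOpenAI`.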
Reviewed Changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/guardrails/checks/text/llm_base.py | Implements `_invoke_openai_callable` and `_request_chat_completion` helpers; updates `run_llm` to support sync/async clients |
| src/guardrails/checks/text/prompt_injection_detection.py | Updates `_call_prompt_injection_detection_llm` to use `_invoke_openai_callable` |
| src/guardrails/checks/text/hallucination_detection.py | Updates `hallucination_detection` to use `_invoke_openai_callable` |
| tests/unit/checks/test_llm_base.py | Adds test for sync client support in `run_llm` |
| tests/unit/checks/test_prompt_injection_detection.py | Adds test for sync response handling in prompt injection detection |
```python
else:
    try:
        from openai import AsyncAzureOpenAI, AzureOpenAI  # type: ignore
    except Exception:  # pragma: no cover - optional dependency
```
Copilot AI (Oct 16, 2025)
The bare `except Exception` is too broad. Consider catching `ImportError` specifically, since this is handling an optional dependency import.
```diff
-        except Exception:  # pragma: no cover - optional dependency
+        except ImportError:  # pragma: no cover - optional dependency
```
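The narrow catch keeps unrelated failures visible while still tolerating a missing package. A minimal sketch of the optional-import pattern; the `None` fallback here is an assumption for illustration, not necessarily the repo's actual fallback:

```python
try:
    # Azure clients are an optional extra; only their absence should be tolerated.
    from openai import AsyncAzureOpenAI, AzureOpenAI  # type: ignore
except ImportError:  # pragma: no cover - optional dependency
    # Assumed fallback: disable Azure support cleanly instead of crashing.
    AsyncAzureOpenAI = AzureOpenAI = None
```

Catching `ImportError` alone means a genuine bug inside the `openai` package (say, a `SyntaxError` or `AttributeError` at import time) still surfaces instead of being silently swallowed.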
```python
def create(self, **kwargs: Any) -> Any:
    _ = kwargs
```
Copilot AI (Oct 16, 2025)
The unused variable assignment `_ = kwargs` is unnecessary. If the intention is to indicate that `kwargs` is intentionally unused, this can be omitted, as Python allows unused parameters.
This is just in a test, I think it's fine
```python
class _SyncResponses:
    def parse(self, **kwargs: Any) -> Any:
        _ = kwargs
```
Copilot AI (Oct 16, 2025)
The unused variable assignment `_ = kwargs` is unnecessary. If the intention is to indicate that `kwargs` is intentionally unused, this can be omitted, as Python allows unused parameters.

```diff
-        _ = kwargs
```
This is just in a test, I think it's fine
gabor-openai left a comment
Thank you!
Merged commit 515bd41 into `main`.
Fixes reported bug where using a sync client caused a `TypeError: object ChatCompletion can't be used in 'await' expression` error when LLM-based guardrails are executed.

Changes:
- `_invoke_openai_callable` added to properly handle sync and async traffic from all API endpoints we support