Adding guardrails to large language models.
- [Feb 12, 2025] We just launched Guardrails Index -- the first-of-its-kind benchmark comparing the performance and latency of 24 guardrails across the 6 most common categories! Check out the index at index.guardrailsai.com
Guardrails is a Python framework that helps build reliable AI applications by performing two key functions:
- Guardrails runs Input/Output Guards in your application that detect, quantify and mitigate the presence of specific types of risks. To look at the full suite of risks, check out Guardrails Hub.
- Guardrails helps you generate structured data from LLMs.
Guardrails Hub is a collection of pre-built measures of specific types of risks (called 'validators'). Multiple validators can be combined together into Input and Output Guards that intercept the inputs and outputs of LLMs. Visit Guardrails Hub to see the full list of validators and their documentation.
```bash
pip install guardrails-ai
```
Download and configure the Guardrails Hub CLI.
```bash
pip install guardrails-ai
guardrails configure
```
Install a guardrail from Guardrails Hub.
```bash
guardrails hub install hub://guardrails/regex_match
```
Create a Guard from the installed guardrail.
```python
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

guard = Guard().use(
    RegexMatch, regex="\(?\d{3}\)?-? *\d{3}-? *-?\d{4}", on_fail=OnFailAction.EXCEPTION
)

guard.validate("123-456-7890")  # Guardrail passes

try:
    guard.validate("1234-789-0000")  # Guardrail fails
except Exception as e:
    print(e)
```
Output:
```
Validation failed for field with errors: Result must match \(?\d{3}\)?-? *\d{3}-? *-?\d{4}
```
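`OnFailAction` controls what happens when a validator fails. Instead of raising, you can keep the output and inspect the result afterwards. A minimal sketch, assuming the `OnFailAction.NOOP` action and the `validation_passed` attribute on the returned outcome (verify both against the docs):

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import RegexMatch

# NOOP is assumed to record the failure instead of raising an exception.
guard = Guard().use(
    RegexMatch,
    regex=r"\(?\d{3}\)?-? *\d{3}-? *-?\d{4}",
    on_fail=OnFailAction.NOOP,
)

# Inspect the outcome rather than catching an exception.
outcome = guard.validate("1234-789-0000")
print(outcome.validation_passed)  # assumed attribute reporting whether all validators passed
```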
Run multiple guardrails within a Guard. First, install the necessary guardrails from Guardrails Hub.
```bash
guardrails hub install hub://guardrails/competitor_check
guardrails hub install hub://guardrails/toxic_language
```
Then, create a Guard from the installed guardrails.
```python
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

guard = Guard().use_many(
    CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION),
    ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION),
)

guard.validate(
    """An apple a day keeps a doctor away.
    This is good advice for keeping your health."""
)  # Both the guardrails pass

try:
    guard.validate(
        """Shut the hell up! Apple just released a new iPhone."""
    )  # Both the guardrails fail
except Exception as e:
    print(e)
```
Output:
```
Validation failed for field with errors: Found the following competitors: [['Apple']]. Please avoid naming those competitors next time, The following sentences in your response were found to be toxic:

- Shut the hell up!
```
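The same guard can also be built by chaining `.use()` calls instead of `use_many`; a small sketch, assuming `.use()` accepts validator instances and returns the guard so calls can be chained:

```python
from guardrails import Guard, OnFailAction
from guardrails.hub import CompetitorCheck, ToxicLanguage

# Equivalent guard built incrementally; each .use() is assumed to return the guard.
guard = (
    Guard()
    .use(CompetitorCheck(["Apple", "Microsoft", "Google"], on_fail=OnFailAction.EXCEPTION))
    .use(ToxicLanguage(threshold=0.5, validation_method="sentence", on_fail=OnFailAction.EXCEPTION))
)
```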
Let's go through an example where we ask an LLM to generate fake pet names. To do this, we'll create a Pydantic `BaseModel` that represents the structure of the output we want.
```python
from pydantic import BaseModel, Field

class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="a unique pet name")
```
Now, create a Guard from the `Pet` class. The Guard can be used to call the LLM so that the output is formatted to the `Pet` class. Under the hood, this is done by one of two methods:
- Function calling: For LLMs that support function calling, we generate structured data using the function call syntax.
- Prompt optimization: For LLMs that don't support function calling, we add the schema of the expected output to the prompt so that the LLM can generate structured data.
```python
from guardrails import Guard
import openai

prompt = """
    What kind of pet should I get and what should I name it?

    ${gr.complete_json_suffix_v2}
"""

guard = Guard.for_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    engine="gpt-3.5-turbo-instruct",
)

print(validated_output)
```
This prints:
{ "pet_type": "dog", "name": "Buddy}
Guardrails can be set up as a standalone service served by Flask with `guardrails start`, allowing you to interact with it via a REST API. This approach simplifies development and deployment of Guardrails-powered applications.
- Install: `pip install "guardrails-ai"`
- Configure: `guardrails configure`
- Create a config: `guardrails create --validators=hub://guardrails/two_words --guard-name=two-word-guard`
- Start the dev server: `guardrails start --config=./config.py`
- Interact with the dev server via the snippets below
```python
# with the guardrails client
import guardrails as gr

gr.settings.use_server = True
guard = gr.Guard(name='two-word-guard')
guard.validate('this is more than two words')

# or with the openai sdk
import os
import openai

openai.base_url = "http://localhost:8000/guards/two-word-guard/openai/v1/"
os.environ["OPENAI_API_KEY"] = "youropenaikey"

messages = [
    {
        "role": "user",
        "content": "tell me about an apple with 3 words exactly",
    },
]

completion = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,
)
```
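The proxied call returns an ordinary chat completion, so the guarded response can be read the usual way (assuming the standard OpenAI Python SDK response shape):

```python
# Read the guarded response like any other chat completion.
print(completion.choices[0].message.content)
```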
For production deployments, we recommend using Docker with Gunicorn as the WSGI server for improved performance and scalability.
You can reach out to us on Discord or Twitter.
Yes, Guardrails can be used with proprietary and open-source LLMs. Check out this guide on how to use Guardrails with any LLM.
Yes, you can create your own validators and contribute them to Guardrails Hub. Check out this guide on how to create your own validators.
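For reference, a custom validator typically subclasses `Validator`, registers itself under a name, and returns a pass/fail result. The rough sketch below follows the documented pattern, but treat the import path and the `register_validator` / `PassResult` / `FailResult` names as assumptions to check against the current custom-validator guide:

```python
from typing import Any, Dict

# Import path and result types are assumptions based on the documented pattern.
from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)


@register_validator(name="starts-with-capital", data_type="string")
class StartsWithCapital(Validator):
    """Checks that a string value starts with a capital letter."""

    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if value and value[0].isupper():
            return PassResult()
        return FailResult(
            error_message="Value must start with a capital letter.",
            fix_value=value.capitalize() if value else value,
        )
```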
Guardrails can be used with Python and JavaScript. Check out the docs on how to use Guardrails from JavaScript. We are working on adding support for other languages. If you would like to contribute to Guardrails, please reach out to us on Discord or Twitter.
We welcome contributions to Guardrails!
Get started by checking out GitHub issues and the Contributing Guide. Feel free to open an issue, or reach out if you would like to add to the project!