guardrails
Here are 131 public repositories matching this topic...
The AI framework that adds the engineering to prompt engineering (Python/TS/Ruby/Java/C#/Rust/Go compatible)
- Updated Nov 29, 2025 - Rust
An open-source framework for detecting, redacting, masking, and anonymizing sensitive data (PII) across text, images, and structured data. Supports NLP, pattern matching, and customizable pipelines.
- Updated Nov 27, 2025 - Python
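The PII-redaction approach described above (pattern matching over text) can be sketched in a few lines. This is a hypothetical minimal example using only stdlib regexes; the patterns and entity labels are illustrative and are not any particular framework's actual recognizers.

```python
import re

# Illustrative entity patterns; real frameworks ship far more robust,
# locale-aware recognizers and NLP-based detectors.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a <LABEL> placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# -> Contact <EMAIL> or <PHONE>.
```

A production pipeline would typically layer NLP-based entity recognition on top of patterns like these and make the redaction strategy (mask, hash, synthesize) configurable per entity type.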
NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
- Updated Nov 28, 2025 - Python
Building blocks for rapid development of GenAI applications
- Updated Nov 29, 2025 - Python
Fastest LLM gateway (50x faster than LiteLLM) with adaptive load balancer, cluster mode, guardrails, 1000+ models support & <100 µs overhead at 5k RPS.
- Updated Nov 29, 2025 - Go
⚕️GenAI powered multi-agentic medical diagnostics and healthcare research assistance chatbot. 🏥 Designed for healthcare professionals, researchers and patients.
- Updated May 3, 2025 - Python
A curated list of blogs, videos, tutorials, code, tools, scripts, and anything useful to help you learn Azure Policy - by @jesseloudon
- Updated Nov 20, 2025
PAIG (Pronounced similar to paige or payj) is an open-source project designed to protect Generative AI (GenAI) applications by ensuring security, safety, and observability.
- Updated Aug 5, 2025 - CSS
Real-time guardrail that shows token spend & kills runaway LLM/agent loops.
- Updated Jul 31, 2025 - JavaScript
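The token-spend guardrail described above can be sketched as a simple budget counter that aborts a loop once cumulative spend crosses a cap. This is a hypothetical minimal sketch; the class and exception names are illustrative, not any project's actual API.

```python
class BudgetExceeded(RuntimeError):
    """Raised when cumulative token spend crosses the configured cap."""

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, tokens: int) -> None:
        """Record spend from one LLM call; raise once over budget."""
        self.used += tokens
        if self.used > self.max_tokens:
            raise BudgetExceeded(
                f"spent {self.used} of {self.max_tokens} tokens"
            )

budget = TokenBudget(max_tokens=1000)
steps = 0
try:
    while True:  # stand-in for an agent loop that never converges
        budget.charge(300)  # e.g. token count reported per LLM response
        steps += 1
except BudgetExceeded:
    pass
print(steps)  # the runaway loop was stopped after 3 completed steps
```

A real-time guardrail would additionally surface the running spend as it accumulates and could enforce per-step as well as total caps.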
Open-source MCP gateway and control plane for teams to govern which tools agents can use, what they can do, and how it’s audited—across agentic IDEs like Cursor, or other agents and AI tools.
- Updated Oct 21, 2025 - TypeScript
Developer-First Open-Source AI Security Platform - Comprehensive Security Protection for AI Applications
- Updated Nov 29, 2025 - Python
ChatGPT API Usage using LangChain, LlamaIndex, Guardrails, AutoGPT and more
- Updated Aug 16, 2024 - Jupyter Notebook
Framework for LLM evaluation, guardrails and security
- Updated Sep 9, 2024 - Python
LangEvals aggregates various language model evaluators into a single platform, providing a standard interface for a multitude of scores and LLM guardrails, for you to protect and benchmark your LLM models and pipelines.
- Updated Oct 29, 2025 - Jupyter Notebook
Make AI work for everyone - monitoring and governance for your AI/ML
- Updated Nov 28, 2025 - Python
LLM proxy to observe and debug what your AI agents are doing.
- Updated Nov 6, 2025 - Python
First-of-its-kind AI benchmark for evaluating the protection capabilities of large language model (LLM) guard systems (guardrails and safeguards)
- Updated Nov 3, 2025 - Python
Open-source toolkit for responsible AI: CLI + SDK to scan code, collect evidence, and generate model cards, risk files, evals, and RAG indexes.
- Updated Nov 1, 2025 - JavaScript
A curated list of materials on AI guardrails
- Updated Jun 3, 2025 - Python
Trustworthy question-answering AI plugin for chatbots in the social sector with advanced content performance analysis.
- Updated Nov 26, 2025 - Python