llm-red-teaming
Here are 6 public repositories matching this topic...
DeepTeam is a framework to red team LLMs and LLM systems.
- Updated Dec 15, 2025 - Python
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
- Updated Jun 29, 2025 - Python
A comprehensive guide to adversarial testing and security evaluation of AI systems, helping organizations identify vulnerabilities before attackers exploit them.
- Updated Dec 15, 2025
Semantic Stealth Attacks & Symbolic Prompt Red Teaming on GPT and other LLMs.
- Updated May 16, 2025
RAG Poisoning Lab — Educational AI Security Exercise
- Updated Dec 7, 2025 - Python
🛠️ Explore large language models through hands-on projects and tutorials to enhance your understanding and practical skills in natural language processing.
- Updated Dec 18, 2025 - Jupyter Notebook