ai-security-testing
Here are 7 public repositories matching this topic...
Language:All
PromptMe is an educational project that showcases security vulnerabilities in large language models (LLMs) and their web integrations. It includes 10 hands-on challenges inspired by the OWASP LLM Top 10, demonstrating how these vulnerabilities can be discovered and exploited in real-world scenarios.
- Updated
Jun 29, 2025 - Python
Comprehensive LLM testing suite for safety, performance, bias, and compliance, equipped with methodologies and tools to enhance the reliability and ethical integrity of models like OpenAI's GPT series for real-world applications.
- Updated
Apr 15, 2024
AI Coding Hackathon Project - Experimenting with AI-assisted development workflows
- Updated
Oct 18, 2025 - Rust
Secure your code in seconds. VibeSafe is an AI-native DevSecOps CLI tool that detects vulnerabilities, secrets, insecure configs, and hallucinated dependencies before they ship.
- Updated
May 17, 2025 - TypeScript
🛠️ Explore large language models through hands-on projects and tutorials to enhance your understanding and practical skills in natural language processing.
- Updated
Nov 29, 2025 - Jupyter Notebook
A Solution to The Gandalf AI from Lakera.https://gandalf.lakera.ai/ The Gandalf LLM README documents the inputs used to reveal secret passwords through various levels of the Gandalf AI by Lakera, with each input tested multiple times for consistency.
- Updated
May 18, 2024
Research and defense implementation for prompt injection vulnerabilities in LLM applications
- Updated
Oct 17, 2025 - Python
Improve this page
Add a description, image, and links to theai-security-testing topic page so that developers can more easily learn about it.
Add this topic to your repo
To associate your repository with theai-security-testing topic, visit your repo's landing page and select "manage topics."