prompt-injection-tool
Here are 10 public repositories matching this topic...
This repository provides a benchmark for prompt injection attacks and defenses in LLMs
- Updated
Oct 29, 2025 - Python
Manual Prompt Injection / Red Teaming Tool
- Updated
Oct 5, 2024 - Python
Latest AI Jailbreak Payloads & Exploit Techniques for GPT, QWEN, and all LLM Models
- Updated
Sep 4, 2025
LLM Security Project with Llama Guard
- Updated
Feb 18, 2024 - Python
PITT is an open‑source, OWASP‑aligned LLM security scanner that detects prompt injection, data leakage, plugin abuse, and other AI‑specific vulnerabilities. Supports 90+ attack techniques, multiple LLM providers, YAML‑based rules, and generates detailed HTML/JSON reports for developers and security teams.
- Updated
Jul 30, 2025 - Python
FRACTURED-SORRY-Bench: This repository contains the code and data for the creating an Automated Multi-shot Jailbreak framework, as described in our paper.
- Updated
Nov 7, 2024 - Python
Client SDK to send LLM interactions to Vibranium Dome
- Updated
Mar 31, 2024 - Python
Prompt Engineering Tool for AI Models with cli prompt or api usage
- Updated
Sep 10, 2023 - Python
Improve this page
Add a description, image, and links to theprompt-injection-tool topic page so that developers can more easily learn about it.
Add this topic to your repo
To associate your repository with theprompt-injection-tool topic, visit your repo's landing page and select "manage topics."