ai-security
Here are 363 public repositories matching this topic...
This repository is maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), AI security, vulnerability research, exploit development, reverse engineering, and more. 🔥 Also check: https://hackertraining.org
- Updated Dec 11, 2025 - Jupyter Notebook
🐢 Open-Source Evaluation & Testing library for LLM Agents
- Updated Nov 18, 2025 - Python
An enterprise-grade AI coding assistant, designed for R&D collaboration and R&D management scenarios.
- Updated Dec 8, 2025 - TypeScript
ToolHive makes deploying MCP servers easy, secure, and fun
- Updated Dec 17, 2025 - Go
A curated list of useful resources that cover Offensive AI.
- Updated Dec 13, 2025 - HTML
A curated list of AI-powered coding tools
- Updated Nov 18, 2025
A list of backdoor learning resources
- Updated Jul 31, 2024
A security scanner for custom LLM applications
- Updated Dec 1, 2025 - Python
A security scanner for your LLM agentic workflows
- Updated Nov 27, 2025 - Python
Reconmap is a collaboration-first security operations platform for infosec teams and MSSPs, enabling end‑to‑end engagement management, from reconnaissance through execution and reporting. With built-in command automation, output parsing, and AI‑assisted summaries, it delivers faster, more structured, and high‑quality security assessments.
- Updated Dec 9, 2025 - CSS
MCP for Security: A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Integrate security testing and penetration testing into AI workflows.
- Updated Dec 2, 2025 - TypeScript
A deliberately vulnerable banking application designed for practicing security testing of web apps, APIs, and AI-integrated apps, as well as secure code reviews. It features common vulnerabilities found in real-world applications, making it an ideal platform for security professionals, developers, and enthusiasts to learn pentesting and secure coding practices.
- Updated Nov 23, 2025 - Python
Project CodeGuard is an AI model-agnostic security framework and ruleset that embeds secure-by-default practices into AI coding workflows (generation and review). It ships core security rules, translators for popular coding agents, and validators to test rule compliance.
- Updated Dec 11, 2025 - Python
RuLES: a benchmark for evaluating rule-following in language models
- Updated Feb 24, 2025 - Python
Toolkits for creating a human-in-the-loop approval layer to monitor and guide AI agent workflows in real time.
- Updated Nov 28, 2024 - Svelte
Framework for testing vulnerabilities of large language models (LLM).
- Updated Sep 24, 2025 - Python
An AI-powered subdomain enumeration tool with local LLM analysis via Ollama - 100% private, with zero API costs
- Updated Nov 21, 2025 - Go
A curated list of academic events on AI Security & Privacy
- Updated Aug 22, 2024