adversarial-machine-learning
Here are 507 public repositories matching this topic...
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Updated Jul 11, 2025 - Python
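For orientation, here is a minimal sketch of ART's evasion workflow: wrap a trained classifier in an estimator, then hand it to an attack class. The toy model, random batch, and hyperparameters below are placeholders, not part of ART.

```python
# A minimal sketch, assuming a trained PyTorch classifier; the toy model and
# random batch below are placeholders, not part of ART.
import numpy as np
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

# Placeholder model: any nn.Module with matching input/output shapes works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Stand-in test batch; replace with real data in practice.
x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Craft FGSM evasion examples and compare clean vs. adversarial accuracy.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean accuracy: {clean_acc:.2f}  adversarial accuracy: {adv_acc:.2f}")
```

The same `generate` pattern applies to ART's other evasion attacks, such as ProjectedGradientDescent.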
Fawkes, a privacy-preserving tool against facial recognition systems. More info at https://sandlab.cs.uchicago.edu/fawkes
- Updated Aug 2, 2023 - Python
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. https://textattack.readthedocs.io/en/master/
- Updated Jul 10, 2025 - Python
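A minimal sketch of TextAttack's recipe-driven workflow, assuming a Hugging Face sequence-classification checkpoint; the `textattack/bert-base-uncased-imdb` name and the example counts are assumptions used for illustration.

```python
# A minimal sketch of running a TextAttack attack recipe against a Hugging Face
# text classifier; the checkpoint name is an assumption used for illustration.
import transformers

from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

checkpoint = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(checkpoint)
tokenizer = transformers.AutoTokenizer.from_pretrained(checkpoint)
model_wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the TextFooler recipe and run it on a handful of IMDB test examples.
attack = TextFoolerJin2019.build(model_wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
attacker = Attacker(attack, dataset, AttackArgs(num_examples=10))
attacker.attack_dataset()
```

Other recipes from `textattack.attack_recipes` can be swapped in the same way.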
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hacking, AI Prompt Engineering, Adversarial Machine Learning.
- Updated Jun 9, 2025
The Security Toolkit for LLM Interactions
- Updated Jul 8, 2025 - Python
A Toolbox for Adversarial Robustness Research
- Updated Sep 14, 2023 - Jupyter Notebook
A curated list of useful resources that cover Offensive AI.
- Updated Jun 15, 2025 - HTML
A curated list of adversarial attacks and defenses papers on graph-structured data.
- Updated Dec 15, 2023
RobustBench: a standardized adversarial robustness benchmark [NeurIPS 2021 Benchmarks and Datasets Track]
- Updated Mar 31, 2025 - Python
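A minimal sketch of loading a leaderboard model with RobustBench's utilities; the `Carmon2019Unlabeled` entry is used only as an example model name, and clean accuracy on a 50-example batch is reported as a sanity check.

```python
# A minimal sketch based on RobustBench's loading utilities; the model_name is
# one example entry from the public CIFAR-10 L-inf leaderboard.
import torch

from robustbench.data import load_cifar10
from robustbench.utils import load_model

# Load a small clean evaluation batch and a leaderboard model.
x_test, y_test = load_cifar10(n_examples=50)
model = load_model(model_name="Carmon2019Unlabeled", dataset="cifar10", threat_model="Linf")

with torch.no_grad():
    preds = model(x_test).argmax(dim=1)
print(f"clean accuracy on 50 examples: {(preds == y_test).float().mean().item():.2f}")
```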
T2F: text-to-face generation using deep learning
- Updated May 14, 2022 - Python
Unofficial PyTorch implementation of the paper "Progressive Growing of GANs for Improved Quality, Stability, and Variation"
- Updated Oct 3, 2023 - Python
Papers and resources related to the security and privacy of LLMs 🤖
- Updated Jun 8, 2025 - Python
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
- Updated Oct 15, 2023 - Python
GraphGallery is a gallery for benchmarking Graph Neural Networks
- Updated Aug 14, 2023 - Python
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
- Updated Jan 31, 2024 - Python
Security and Privacy Risk Simulator for Machine Learning (arXiv:2312.17667)
- Updated Jun 15, 2025 - C++
Provable adversarial robustness at ImageNet scale
- Updated May 20, 2019 - Python
TransferAttack is a PyTorch framework for boosting adversarial transferability in image classification.
- Updated Jun 21, 2025 - Python
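For context, the sketch below illustrates the transferability setting itself rather than TransferAttack's own API: an adversarial example crafted on a white-box surrogate is evaluated against a separate, unseen target model. The two torchvision models and the random batch are placeholders.

```python
# A conceptual sketch of transfer-based attacks (not TransferAttack's API):
# craft FGSM examples on a surrogate model, then measure how often a different
# target model is fooled. Models and the random batch are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, vgg16

surrogate = resnet18(weights="IMAGENET1K_V1").eval()
target = vgg16(weights="IMAGENET1K_V1").eval()

x = torch.rand(4, 3, 224, 224)    # stand-in image batch in [0, 1]
y = torch.randint(0, 1000, (4,))  # stand-in labels
eps = 8 / 255

# Single-step FGSM on the surrogate.
x_req = x.clone().requires_grad_(True)
loss = F.cross_entropy(surrogate(x_req), y)
loss.backward()
x_adv = (x + eps * x_req.grad.sign()).clamp(0, 1)

# Transferability: misclassification rate of the target on examples it never saw.
with torch.no_grad():
    fooled = (target(x_adv).argmax(dim=1) != y).float().mean()
print(f"target misclassification rate: {fooled.item():.2f}")
```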
A curated list of trustworthy deep learning papers, updated daily.
- Updated Jul 8, 2025
Backdoors framework for deep learning and federated learning: a lightweight tool for conducting research on backdoors.
- Updated Feb 5, 2023 - Python
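For context, here is a minimal BadNets-style poisoning sketch in plain PyTorch (not this framework's API): a fraction of training images receive a fixed trigger patch and are relabeled to an attacker-chosen target class. The `poison_batch` helper, tensor shapes, and poison rate are illustrative assumptions.

```python
# A conceptual sketch of BadNets-style data poisoning; the helper name, shapes,
# and poison rate are illustrative assumptions, not any framework's API.
import torch

def poison_batch(images, labels, target_class=0, poison_rate=0.1, patch_value=1.0):
    """Return a copy of (images, labels) where a random subset carries the trigger."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_rate * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    # Trigger: a 3x3 bright patch in the bottom-right corner of every channel.
    images[idx, :, -3:, -3:] = patch_value
    # Relabel poisoned samples to the attacker-chosen target class.
    labels[idx] = target_class
    return images, labels

# Example on a stand-in CIFAR-10-shaped batch.
x = torch.rand(32, 3, 32, 32)
y = torch.randint(0, 10, (32,))
x_poisoned, y_poisoned = poison_batch(x, y)
```

A model trained on such data behaves normally on clean inputs but predicts the target class whenever the trigger is present.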