# Trusted-AI/adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).
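To make the Evasion threat mentioned above concrete, here is a minimal sketch of a Fast Gradient Sign Method (FGSM) style attack against a hand-rolled logistic-regression classifier. This example is written in plain NumPy for illustration only and does not use the ART API; the classifier weights and epsilon value are arbitrary choices for the demonstration.

```python
import numpy as np

# A fixed linear classifier: predict class 1 if w.x + b > 0.
w = np.array([2.0, -1.0])
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return (x @ w + b > 0).astype(int)

def fgsm(x, y, eps):
    """FGSM evasion step: perturb x by eps in the direction of the
    sign of the loss gradient w.r.t. the input.

    For binary cross-entropy with a sigmoid output, the gradient of
    the loss w.r.t. x is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(x @ w + b)
    grad = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

x = np.array([[1.0, 0.5]])   # w.x + b = 1.5 > 0, so classified as 1
y = np.array([1.0])          # true label
x_adv = fgsm(x, y, eps=0.6)  # small perturbation flips the prediction

print(predict(x))     # original input: class 1
print(predict(x_adv)) # adversarial input: class 0
```

ART packages attacks like this one (and many others), together with defences and evaluation metrics, behind a framework-agnostic estimator interface, so the same attack code can target TensorFlow, PyTorch, or scikit-learn models.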


Get Started | Documentation | Contributing
---|---|---
- Installation<br>- Examples<br>- Notebooks | - Attacks<br>- Defences<br>- Estimators<br>- Metrics<br>- Technical Documentation | - Slack, Invitation<br>- Contributing<br>- Roadmap<br>- Citing
The library is under continuous development. Feedback, bug reports and contributions are very welcome!
This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).