Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

For the Chinese README, please click here.


Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART is hosted by the Linux Foundation AI & Data Foundation (LF AI & Data). ART provides tools that enable developers and researchers to defend and evaluate Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.), and machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.).
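As a quick illustration of the typical workflow, the sketch below wraps a trained scikit-learn classifier in an ART estimator and crafts adversarial examples with the Fast Gradient Method evasion attack. The Iris dataset, the SVC model, and the eps value are illustrative assumptions for this sketch, not prescriptions from the README.

```python
# Minimal evasion-attack sketch: wrap a trained scikit-learn model in an ART
# estimator, generate adversarial examples with FGSM, and compare accuracies.
# Dataset, model, and eps are illustrative choices, not ART requirements.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Train an ordinary scikit-learn model.
x, y = load_iris(return_X_y=True)
model = SVC(C=1.0, kernel="linear")
model.fit(x, y)

# Wrap it so ART attacks and defences can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(np.min(x), np.max(x)))

# Evasion: Fast Gradient Method with perturbation budget eps.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_adv = attack.generate(x=x)

# Compare clean vs. adversarial accuracy on the same samples.
clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")
```

The same pattern applies across frameworks: wrap the native model in the matching ART estimator (for example PyTorchClassifier or TensorFlowV2Classifier) and pass that estimator to any compatible attack or defence.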

Adversarial Threats



ART for Red and Blue Teams (selection)


Learn more

Get Started
- Installation
- Examples
- Notebooks

Documentation
- Attacks
- Defences
- Estimators
- Metrics
- Technical Documentation

Contributing
- Slack, Invitation
- Contributing
- Roadmap
- Citing

The library is under continuous development. Feedback, bug reports and contributions are very welcome!

Acknowledgment

This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).
