A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.

thu-ml/ares

🌐 Overview

🔍 ARES 2.0 (Adversarial Robustness Evaluation for Safety) is a Python library for adversarial machine learning research. It benchmarks the adversarial robustness of image classification and object detection models, and provides mechanisms for defending against adversarial attacks through robust training.

🌟 Features

  • Developed on PyTorch.
  • Supports various attacks on classification models.
  • Employs adversarial attacks on object detection models.
  • Provides robust training for enhanced robustness, along with various trained checkpoints.
  • Enables distributed training and testing.
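To illustrate the kind of attack such a library evaluates, here is a minimal, self-contained sketch of PGD (projected gradient descent) under an L-infinity budget, written in plain NumPy. This is a conceptual example only; it does not use the ARES API, and the function names and toy loss are illustrative assumptions.

```python
import numpy as np

def pgd_linf(x0, grad_fn, epsilon, alpha, steps):
    """L-infinity PGD sketch: repeatedly take a signed gradient step,
    then project back into the epsilon-ball around the original input
    and clip to the valid pixel range [0, 1]."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))          # signed ascent step
        x = np.clip(x, x0 - epsilon, x0 + epsilon)   # epsilon-ball projection
        x = np.clip(x, 0.0, 1.0)                     # keep pixels valid
    return x

# Toy linear loss: its gradient is constant, so PGD saturates at the budget.
w = np.array([1.0, -1.0])
x0 = np.array([0.5, 0.5])
x_adv = pgd_linf(x0, lambda x: w, epsilon=0.1, alpha=0.04, steps=5)
print(x_adv)  # [0.6 0.4]
```

A real evaluation would replace the toy gradient with the loss gradient of the model under attack and run the loop over a test set.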

💾 Installation

  1. Optional: Initialize a dedicated environment for ARES 2.0.

    conda create -n ares python==3.10.9
    conda activate ares
  2. Clone and set up ARES 2.0 via the following commands:

    git clone https://github.com/thu-ml/ares2.0
    cd ares2.0
    pip install -r requirements.txt
    mim install mmengine==0.8.4
    mim install mmcv==2.0.0
    mim install mmdet==3.1.0
    pip install -v -e .

🚀 Getting Started

  • For robustness evaluation of image classification models against adversarial attacks, please refer to classification.
  • For robustness evaluation of object detection models, please refer to detection.
  • For methodologies on robust training, please refer to robust-training.
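Robustness evaluation ultimately reports how much accuracy survives an attack. A common metric is robust accuracy: the fraction of samples classified correctly both before and after perturbation. The helper below is a generic sketch of that definition, not an ARES function.

```python
import numpy as np

def robust_accuracy(clean_preds, adv_preds, labels):
    """Fraction of samples the model gets right on both the clean
    input and its adversarially perturbed counterpart."""
    clean_preds = np.asarray(clean_preds)
    adv_preds = np.asarray(adv_preds)
    labels = np.asarray(labels)
    correct = (clean_preds == labels) & (adv_preds == labels)
    return correct.mean()

# 4 samples: two survive the attack, one flips, one was already wrong.
print(robust_accuracy([0, 1, 2, 3], [0, 1, 9, 9], [0, 1, 2, 0]))  # 0.5
```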

📘 Documentation

📚 Access detailed tutorials and API docs on strategies to attack classification models, object detection models, and robust training here.

📝 Citation

If you find ARES 2.0 useful in your research, please cite our paper on adversarial robustness, which covers all models, attacks, and defenses included in ARES 2.0:

@inproceedings{dong2020benchmarking,
  title={Benchmarking Adversarial Robustness on Image Classification},
  author={Dong, Yinpeng and Fu, Qi-An and Yang, Xiao and Pang, Tianyu and Su, Hang and Xiao, Zihao and Zhu, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={321--331},
  year={2020}
}
