Once-for-All-Adversarial-Training

Category: Paper
Language: Python
Uploaded: 2021-12-30 by sh-1993

Description: [NeurIPS 2020] "Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free" by Haotao Wang*, Tianlong Chen*, Shupeng Gui, Ting-Kuei Hu, Ji Liu, and Zhangyang Wang

File list (size in bytes, date):
Framework.PNG (72052, 2021-01-18)
LICENSE (1061, 2021-01-18)
OAT.py (14146, 2021-01-18)
PGDAT.py (8728, 2021-01-18)
attacks/ (0, 2021-01-18)
attacks/pgd.py (5600, 2021-01-18)
dataloaders/ (0, 2021-01-18)
dataloaders/cifar10.py (1274, 2021-01-18)
dataloaders/stl10.py (1051, 2021-01-18)
dataloaders/svhn.py (1167, 2021-01-18)
models/ (0, 2021-01-18)
models/DualBN.py (3528, 2021-01-18)
models/FiLM.py (2327, 2021-01-18)
models/__init__.py (0, 2021-01-18)
models/cifar10/ (0, 2021-01-18)
models/cifar10/resnet.py (2884, 2021-01-18)
models/cifar10/resnet_OAT.py (5400, 2021-01-18)
models/cifar10/resnet_slimmable.py (7370, 2021-01-18)
models/cifar10/resnet_slimmable_OAT.py (13104, 2021-01-18)
models/slimmable_ops.py (4564, 2021-01-18)
models/stl10/ (0, 2021-01-18)
models/stl10/wide_resnet.py (3772, 2021-01-18)
models/stl10/wide_resnet_OAT.py (5125, 2021-01-18)
models/svhn/ (0, 2021-01-18)
models/svhn/wide_resnet.py (3642, 2021-01-18)
models/svhn/wide_resnet_OAT.py (5096, 2021-01-18)
models/wrap_cat_model.py (370, 2021-01-18)
utils/ (0, 2021-01-18)
utils/__init__.py (0, 2021-01-18)
utils/context.py (1710, 2021-01-18)
utils/sample_lambda.py (3396, 2021-01-18)
utils/utils.py (5187, 2021-01-18)

# Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free

[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)

Haotao Wang\*, Tianlong Chen\*, Shupeng Gui, Ting-Kuei Hu, Ji Liu, Zhangyang Wang

In NeurIPS 2020

## Overview

We present a novel once-for-all adversarial training (OAT) framework that addresses a new and important goal: an in-situ "free" trade-off between robustness and accuracy at testing time. In particular, we demonstrate the importance of separating standard and adversarial feature statistics when packing their learning into one model. We also extend OAT to OATS, which enables a joint in-situ trade-off among robustness, accuracy, and computational budget. Experimental results show that OAT/OATS achieve similar or even superior performance compared to traditional, dedicatedly trained robust models, while costing only one model and no re-training. In other words, they are **free but no worse**.

## Framework


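The "separating standard and adversarial feature statistics" idea above is realized in this repo by a dual batch-normalization layer (see `models/DualBN.py`). The following is a minimal NumPy sketch of the inference-time behavior, not the repo's PyTorch implementation: it keeps one set of running statistics per input type and normalizes with whichever set matches the branch.

```python
import numpy as np

class DualBN:
    """Batch norm with two sets of statistics: one for clean inputs,
    one for adversarial inputs. Inference-only illustrative sketch;
    the repo's models/DualBN.py is the actual PyTorch module."""

    def __init__(self, num_features, eps=1e-5):
        self.eps = eps
        # Separate running statistics for the two input distributions.
        self.stats = {
            "clean": {"mean": np.zeros(num_features), "var": np.ones(num_features)},
            "adv":   {"mean": np.zeros(num_features), "var": np.ones(num_features)},
        }
        # Affine parameters shared across both branches in this sketch.
        self.gamma = np.ones(num_features)
        self.beta = np.zeros(num_features)

    def __call__(self, x, branch):
        # Normalize with the statistics matching the input type
        # ("clean" or "adv"), then apply the shared affine transform.
        s = self.stats[branch]
        x_hat = (x - s["mean"]) / np.sqrt(s["var"] + self.eps)
        return self.gamma * x_hat + self.beta
```

Because the two branches never mix statistics, clean and adversarial features can follow different distributions without corrupting each other's normalization.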
## Training

### Our once-for-all adversarial training method (OAT):

```
python OAT.py --ds <dataset> -b <batch_size> -e <epochs> --lr <learning_rate> --use2BN
```

### Traditional dedicated adversarial training baseline method (PGDAT):

```
python PGDAT.py --ds <dataset> -b <batch_size> -e <epochs> --lr <learning_rate>
```

## Citation

If you use this code for your research, please cite our paper.

```
@inproceedings{wang2020onceforall,
  title={Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free},
  author={Wang, Haotao and Chen, Tianlong and Gui, Shupeng and Hu, Ting-Kuei and Liu, Ji and Wang, Zhangyang},
  booktitle={NeurIPS},
  year={2020}
}
```
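What distinguishes OAT training from the dedicated PGDAT baseline is that each batch samples a trade-off weight λ and minimizes a λ-weighted sum of the clean and adversarial losses, so one model covers the whole robustness-accuracy spectrum. A hedged sketch of that per-batch objective (the candidate λ set below is illustrative; the repo's actual sampling logic lives in `utils/sample_lambda.py`):

```python
import random

# Candidate trade-off values; illustrative only -- the set actually used
# during training is defined in utils/sample_lambda.py.
LAMBDAS = [0.0, 0.2, 0.5, 0.8, 1.0]

def oat_batch_loss(clean_loss, adv_loss, lam):
    """Lambda-weighted objective: lam = 0 recovers standard training,
    lam = 1 recovers pure adversarial training."""
    return (1.0 - lam) * clean_loss + lam * adv_loss

def sample_lambda(rng=random):
    # One lambda is drawn per batch, so the model is exposed to the
    # full range of robustness-accuracy trade-offs during training.
    return rng.choice(LAMBDAS)
```

At test time, the same λ is fed to the conditioned model (via the FiLM layers in `models/FiLM.py`) to pick an operating point without any re-training.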
