pcl-adversarial-defense: Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV 2019)

  • Uploader: v8_887386
  • File size: 43.3 MB
  • File format: zip
  • Uploaded: 2022-05-20 11:52
pcl-adversarial-defense-master.zip
  • pcl-adversarial-defense-master/
    • Models_PCL/
      • CIFAR10_PCL.pth.tar (15.4 MB)
    • Models_Softmax/
      • CIFAR10_Softmax.pth.tar (15.4 MB)
    • robustness.py (3.6 KB)
    • Block_Diag.png (102.5 KB)
    • softmax_training.py (6.7 KB)
    • Mapping_Function.png (121.1 KB)
    • README.md (2.7 KB)
    • contrastive_proximity.py (1.5 KB)
    • pcl_training_adversarial_pgd.py (14 KB)
    • pcl_training_adversarial_fgsm.py (13 KB)
    • robust_ml.py (1.5 KB)
    • pcl_training.py (11.8 KB)
    • robust_model.pth.tar (15.4 MB)
    • resnet_model.py (4.7 KB)
    • proximity.py (1.3 KB)
    • utils.py (1.9 KB)
Description
# Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks (ICCV'19)

![Figure 1](Mapping_Function.png)

This repository is a PyTorch implementation of the ICCV'19 paper [Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks](https://arxiv.org/abs/1904.00887). To counter adversarial attacks, we propose the Prototype Conformity Loss, which disentangles the intermediate features of a deep network class-wise. As the figure illustrates, the main reason such adversarial samples exist is the close proximity of the learnt features in the latent feature space.

We provide scripts for reproducing the results from our paper.

## Clone the repository

Clone this repository into any place you want.

```bash
git clone https://github.com/aamir-mustafa/pcl-adversarial-defense
cd pcl-adversarial-defense
```

## Softmax (Cross-Entropy) Training

To expedite the formation of clusters for our proposed loss, we first train the model using the cross-entropy loss.

``softmax_training.py`` -- initial softmax training.

* The trained checkpoints will be saved in the ``Models_Softmax`` folder.

## Prototype Conformity Loss

The deep features for the prototype conformity loss are extracted from different intermediate layers using auxiliary branches, which map the features to a lower-dimensional output, as shown in the following figure. (A minimal sketch of such a loss is included after the citation below.)

![](Block_Diag.png)

``pcl_training.py`` -- joint supervision with cross-entropy and our loss.

* The trained checkpoints will be saved in the ``Models_PCL`` folder.

## Adversarial Training

``pcl_training_adversarial_fgsm.py`` -- adversarial training using the FGSM attack.

``pcl_training_adversarial_pgd.py`` -- adversarial training using the PGD attack.

(Illustrative FGSM and PGD sketches also follow the citation below.)

## Testing the Model's Robustness against White-Box Attacks

``robustness.py`` -- evaluate a trained model's robustness against various types of attacks.

## Comparison of the Softmax-Trained Model and Our Model

Retained classification accuracy (%) of the models under various types of adversarial attacks:

| Training Scheme | No Attack | FGSM | BIM | MIM | PGD |
| :------- | :---------- | :----- | :------ | :------ | :------ |
| Softmax | 92.15 | 21.48 | 0.01 | 0.02 | 0.00 |
| Ours | 89.55 | 55.76 | 39.75 | 36.44 | 31.10 |

## Citation

```
@InProceedings{Mustafa_2019_ICCV,
  author = {Mustafa, Aamir and Khan, Salman and Hayat, Munawar and Goecke, Roland and Shen, Jianbing and Shao, Ling},
  title = {Adversarial Defense by Restricting the Hidden Space of Deep Neural Networks},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month = {October},
  year = {2019}
}
```
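For intuition, here is a minimal PyTorch sketch of a prototype-conformity-style loss: each sample's features are pulled toward a learnable prototype of its own class and pushed away from the prototypes of every other class. The class name ``PCLSketch`` and the ``feat_dim`` and ``margin`` values are illustrative assumptions; this is a sketch of the idea, not the repository's exact ``proximity.py``/``contrastive_proximity.py`` code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCLSketch(nn.Module):
    """Sketch of a prototype-conformity-style loss (names and hyper-parameters
    are assumptions): pull features toward their own class prototype and push
    them away from the prototypes of all other classes."""

    def __init__(self, num_classes: int = 10, feat_dim: int = 1024, margin: float = 100.0):
        super().__init__()
        # One learnable prototype (class center) per class.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin
        self.num_classes = num_classes

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from every sample to every prototype: (B, C).
        dists = torch.cdist(features, self.centers).pow(2)
        one_hot = F.one_hot(labels, self.num_classes).bool()

        d_own = dists[one_hot]                              # (B,) distance to own prototype
        d_other = dists[~one_hot].view(labels.size(0), -1)  # (B, C-1) distances to the rest

        pull = d_own.mean()                          # proximity term
        push = F.relu(self.margin - d_other).mean()  # contrastive term (hinge)
        return pull + push

# Joint supervision, roughly in the spirit of pcl_training.py:
#   loss = F.cross_entropy(logits, labels) + pcl_sketch(features, labels)
```

The repository attaches such terms to several intermediate layers through auxiliary branches; the sketch covers a single feature layer, and in a typical center-loss-style setup the prototypes receive their own optimizer.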
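Likewise, a hedged sketch of the two attacks used for adversarial training; ``eps``, ``alpha``, and the step count are common CIFAR-10 defaults, not values taken from ``pcl_training_adversarial_fgsm.py`` or ``pcl_training_adversarial_pgd.py``:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step FGSM: one signed-gradient step away from the true label."""
    x = x.clone().detach().requires_grad_(True)
    grad = torch.autograd.grad(F.cross_entropy(model(x), y), x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """PGD: iterated FGSM with projection back onto the eps-ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep a valid image range
    return x_adv.detach()
```

During adversarial training, each clean batch is replaced (or augmented) with ``fgsm(...)`` or ``pgd(...)`` outputs before the joint loss above is computed.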