AI MATLAB MNIST code: Poisoning-Attacks-with-Back-gradient-Optimization

Poisoning-Attacks-with-Back-gradient-Optimization-master.zip
  • Poisoning-Attacks-with-Back-gradient-Optimization-master/
  • MNIST_splits/
  • createSplits.m (436B)
  • mnist_1_7.mat (624.9KB)
  • LRcost.m (75B)
  • testAttackAdalineMNIST.m (3.6KB)
  • sigmoid.m (51B)
  • README.md (2.5KB)
  • getDerivativesMLP2.m (964B)
  • testAttackMLPmnist.m (3.4KB)
  • reverseLR2.m (1.7KB)
  • reverseAdaline.m (1.2KB)
  • testAttackLRmnist.m (3.4KB)
  • testMLP.m (354B)
  • reverseMLP2.m (1.3KB)
  • LICENSE (1.1KB)
  • trainLR2.m (255B)
  • getDerivativesMLP.m (935B)
  • trainMLP.m (4.5KB)
  • trainAdaline.m (300B)
Description
# Poisoning Attacks with Back-gradient Optimization

Matlab code with an example of the poisoning attack described in the paper [**"Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization."**](https://dl.acm.org/citation.cfm?id=3140451) The code includes attacks against Adaline, logistic regression, and a small multilayer perceptron on the MNIST dataset (using digits 1 and 7).

## Use

To generate the random training/validation splits, first run the script *createSplits.m* in the "MNIST_splits" folder. Then, the scripts that run the attacks against Adaline, logistic regression, and the MLP are *testAttackAdalineMNIST.m*, *testAttackLRmnist.m*, and *testAttackMLPmnist.m*, respectively. A minimal end-to-end usage sketch is given after this README.

## Citation

Please cite this paper if you use the code in this repository as part of a published research project.

```
@inproceedings{munoz2017towards,
  title={{Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization}},
  author={Mu{\~n}oz-Gonz{\'a}lez, Luis and Biggio, Battista and Demontis, Ambra and Paudice, Andrea and Wongrassamee, Vasin and Lupu, Emil C and Roli, Fabio},
  booktitle={Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security},
  pages={27--38},
  year={2017}
}
```

## Related papers

You may also be interested in some of our related papers on data poisoning:

- [**"Poisoning Attacks with Generative Adversarial Nets."**](https://arxiv.org/pdf/1906.07773.pdf) L. Muñoz-González, B. Pfitzner, M. Russo, J. Carnerero-Cano, E.C. Lupu. arXiv preprint arXiv:1906.07773, 2019 (*code available soon*).
- [**"Label Sanitization against Label Flipping Poisoning Attacks."**](http://www.research.ibm.com/labs/ireland/nemesis2018/pdf/paper1.pdf) A. Paudice, L. Muñoz-González, E.C. Lupu. Nemesis Workshop on Adversarial Machine Learning, Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 5-15, 2018.
- [**"Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection."**](https://arxiv.org/pdf/1802.03041.pdf) A. Paudice, L. Muñoz-González, A. Gyorgy, E.C. Lupu. arXiv preprint arXiv:1802.03041, 2018.

## About the authors

This research work has been a collaboration between the [Resilient Information Systems Security (RISS) group](http://rissgroup.org/) at [Imperial College London](https://www.imperial.ac.uk/) and the [Pattern Recognition and Applications (PRA) Lab](https://pralab.diee.unica.it/en) at the [University of Cagliari](https://www.unica.it/unica/en/homepage.page).
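The steps in the "Use" section can be driven from a single MATLAB session. The snippet below is a minimal sketch, assuming the extracted repository root (Poisoning-Attacks-with-Back-gradient-Optimization-master) is the current MATLAB folder; the script names are taken from the file listing above, and nothing is assumed about their internal interfaces beyond running them in this order.

```
% Minimal usage sketch; assumes the extracted repository root is the
% current MATLAB folder and the scripts are run in the documented order.

% 1) Generate the random training/validation splits for MNIST digits 1 and 7.
cd('MNIST_splits');
run('createSplits.m');
cd('..');

% 2) Run the poisoning-attack demos, one script per target learner.
run('testAttackAdalineMNIST.m');   % attack against Adaline
run('testAttackLRmnist.m');        % attack against logistic regression
run('testAttackMLPmnist.m');       % attack against the small MLP
```

Each attack script is self-contained and only depends on the splits produced in step 1, so they can also be run individually.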