Deep-Compression-AlexNet-master

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Python
File size: 7317KB
Downloads: 3
Upload date: 2018-09-26 18:43:27
Uploader: 123xy
Description: AlexNet was designed by the winners of the 2012 ImageNet competition, Geoffrey Hinton and his student Alex Krizhevsky. In the years after that, ever deeper neural networks were proposed, such as VGG and GoogLeNet. The officially released model reaches 57.1% top-1 accuracy and 80.2% top-5 accuracy, which is remarkable compared with traditional machine learning classification algorithms.

File list (size in bytes, date):
AlexNet_compressed.net (9037369, 2018-05-08)
LICENSE (1262, 2018-05-08)
bvlc_alexnet_deploy.prototxt (3583, 2018-05-08)
decode.py (3519, 2018-05-08)

# Deep Compression on AlexNet

This is a demo of [Deep Compression](http://arxiv.org/pdf/1510.00149v5.pdf) compressing AlexNet from 233MB to 8.9MB without loss of accuracy. It differs from the paper only in that Huffman coding is not applied. Deep Compression's video from the [ICLR'16 best paper award presentation](https://youtu.be/kQAhW9gh6aU) is available.

# Related Papers

[Learning both Weights and Connections for Efficient Neural Network (NIPS'15)](http://arxiv.org/pdf/1506.02626v3.pdf)

[Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding (ICLR'16, best paper award)](http://arxiv.org/pdf/1510.00149v5.pdf)

[EIE: Efficient Inference Engine on Compressed Deep Neural Network (ISCA'16)](http://arxiv.org/pdf/1602.01528v1.pdf)

If you find Deep Compression useful in your research, please consider citing the papers:

```
@inproceedings{han2015learning,
  title={Learning both Weights and Connections for Efficient Neural Network},
  author={Han, Song and Pool, Jeff and Tran, John and Dally, William},
  booktitle={Advances in Neural Information Processing Systems (NIPS)},
  pages={1135--1143},
  year={2015}
}

@article{han2015deep_compression,
  title={Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding},
  author={Han, Song and Mao, Huizi and Dally, William J},
  journal={International Conference on Learning Representations (ICLR)},
  year={2016}
}
```

**A hardware accelerator working directly on the deep compressed model:**

```
@article{han2016eie,
  title={EIE: Efficient Inference Engine on Compressed Deep Neural Network},
  author={Han, Song and Liu, Xingyu and Mao, Huizi and Pu, Jing and Pedram, Ardavan and Horowitz, Mark A and Dally, William J},
  journal={International Conference on Computer Architecture (ISCA)},
  year={2016}
}
```

# Usage

```
export CAFFE_ROOT=$your caffe root$
python decode.py bvlc_alexnet_deploy.prototxt AlexNet_compressed.net $CAFFE_ROOT/alexnet.caffemodel
cd $CAFFE_ROOT
./build/tools/caffe test --model=models/bvlc_alexnet/train_val.prototxt --weights=alexnet.caffemodel --iterations=1000 --gpu 0
```

# Test Result

```
I1022 20:18:58.336736 13182 caffe.cpp:1***] accuracy_top1 = 0.57074
I1022 20:18:58.336745 13182 caffe.cpp:1***] accuracy_top5 = 0.80254
```
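The compression combines the two stages this demo applies: pruning (only a sparse subset of weights is kept) and trained quantization (each surviving weight becomes an index into a small per-layer codebook of shared values). As a rough illustration of the decode step, here is a minimal Python/NumPy sketch; the function and variable names are hypothetical, and the actual layout of `AlexNet_compressed.net` as parsed by `decode.py` differs in detail:

```python
import numpy as np

# Illustrative reconstruction of one weight-shared layer. Deep Compression
# stores, per layer, a small codebook of shared float values plus one small
# integer index per surviving weight; everything pruned away is zero.

def decode_layer(codebook, indices, positions, shape):
    """Rebuild a dense weight tensor from its compressed pieces.

    codebook  -- 1-D float array of cluster centroids (e.g. 256 entries
                 for an 8-bit conv layer, 32 for a 5-bit fc layer)
    indices   -- per-nonzero codebook index (small unsigned ints)
    positions -- flat positions of the weights that survived pruning
    shape     -- shape of the original dense tensor
    """
    dense = np.zeros(int(np.prod(shape)), dtype=np.float32)
    dense[positions] = codebook[indices]   # look up the shared values
    return dense.reshape(shape)

# Toy example: a 4x4 layer pruned to 5 weights, quantized to 4 centroids.
codebook = np.array([-0.9, -0.1, 0.2, 0.8], dtype=np.float32)
indices = np.array([3, 0, 2, 2, 1], dtype=np.uint8)
positions = np.array([0, 3, 7, 9, 14])
print(decode_layer(codebook, indices, positions, (4, 4)))
```

With, say, 256 shared values, each stored conv weight costs 8 bits instead of 32; together with pruning most weights outright, this accounts for the bulk of the 233MB to 8.9MB reduction (roughly 26x without Huffman coding).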

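The pruned positions themselves are also stored compactly: rather than absolute offsets, the paper encodes the gap between consecutive nonzero weights in a fixed number of bits, inserting a filler zero whenever a gap overflows the field. A small self-contained sketch of that idea (the 3-bit bound and the names are illustrative, not the format `decode.py` reads):

```python
import numpy as np

SPAN = 8  # largest gap one entry can encode (3 bits -> gaps 1..8); illustrative

def encode(positions, values):
    """Encode sorted (position, value) pairs as bounded gaps plus fillers."""
    entries, prev = [], -1
    for p, v in zip(positions, values):
        while p - prev > SPAN:           # gap too large for the bit field:
            entries.append((SPAN, 0.0))  # pad with a zero-valued filler entry
            prev += SPAN
        entries.append((p - prev, v))
        prev = p
    return entries

def decode(entries, length):
    """Walk the gap entries to rebuild the dense vector."""
    dense, pos = np.zeros(length, dtype=np.float32), -1
    for gap, v in entries:
        pos += gap
        dense[pos] = v
    return dense

vals = np.array([0.5, -0.7], dtype=np.float32)
pos = np.array([2, 20])
enc = encode(pos, vals)   # the gap of 18 forces two filler zeros
assert np.allclose(decode(enc, 24)[pos], vals)
print(enc)                # [(3, 0.5), (8, 0.0), (8, 0.0), (2, -0.7)]
```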