FeatherNets_backup

Category: Graphics and image processing
Development tool: Python
File size: 2377KB
Downloads: 2
Upload date: 2020-10-08 14:44:45
Uploader: Shenmz
Description: This is a lightweight neural network for target detection. A usage tutorial is included in the project; after decompressing you will find the corresponding PDF documentation (in Chinese).

File list:
FeatherNets_backup (0, 2020-10-08)
FeatherNets_backup\.ipynb_checkpoints (0, 2020-05-27)
FeatherNets_backup\.ipynb_checkpoints\Feather_pytorch_2_onnx-checkpoint.py (1030, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\gen_final_submission-checkpoint.py (3426, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\losses-checkpoint.py (6804, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\main-checkpoint.py (16893, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\model_onnx2IR-checkpoint.py (142, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\p_test-checkpoint.py (5044, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\read_data-checkpoint.py (4055, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\roc-checkpoint.py (1049, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\test-checkpoint.sh (633, 2020-05-19)
FeatherNets_backup\.ipynb_checkpoints\train-checkpoint.sh (136, 2020-05-19)
FeatherNets_backup\autoaugment.py (11205, 2020-05-19)
FeatherNets_backup\cfgs (0, 2020-05-27)
FeatherNets_backup\cfgs\.ipynb_checkpoints (0, 2020-05-27)
FeatherNets_backup\cfgs\.ipynb_checkpoints\FeatherNetA-32-checkpoint.yaml (223, 2020-05-19)
FeatherNets_backup\cfgs\.ipynb_checkpoints\FeatherNetB-32-checkpoint.yaml (219, 2020-05-19)
FeatherNets_backup\cfgs\.ipynb_checkpoints\fishnet150-32-checkpoint.yaml (230, 2020-05-19)
FeatherNets_backup\cfgs\FeatherNetA-32.yaml (223, 2020-05-19)
FeatherNets_backup\cfgs\FeatherNetB-32-ir.yaml (226, 2020-05-19)
FeatherNets_backup\cfgs\FeatherNetB-32.yaml (266, 2020-05-19)
FeatherNets_backup\cfgs\fishnet150-32.yaml (230, 2020-05-19)
FeatherNets_backup\cfgs\MobileLiteNet54-32.yaml (231, 2020-05-19)
FeatherNets_backup\cfgs\MobileLiteNet54-se-64.yaml (237, 2020-05-19)
FeatherNets_backup\cfgs\mobilenetv2.yaml (222, 2020-05-19)
FeatherNets_backup\cfgs\shufflenetv2_1.yaml (231, 2020-05-19)
FeatherNets_backup\checkpoints (0, 2020-05-27)
FeatherNets_backup\checkpoints\pre-trainedModels (0, 2020-05-27)
FeatherNets_backup\convert.sh (345, 2020-05-19)
FeatherNets_backup\crop_head.py (8666, 2020-05-19)
FeatherNets_backup\cutout.py (1172, 2020-05-19)
FeatherNets_backup\data (0, 2020-05-27)
FeatherNets_backup\data\fileList.ipynb (2409, 2020-05-19)
FeatherNets_backup\data\fileList.py (2815, 2020-05-19)
FeatherNets_backup\data\ir_final_train.txt (3349596, 2020-05-19)
FeatherNets_backup\data\ir_final_train_tmp.txt (2622798, 2020-05-19)
... ...

## FeatherNets for [Face Anti-spoofing Attack Detection Challenge@CVPR2019](https://competitions.codalab.org/competitions/20853#results)[1]

The details are in our paper: [FeatherNets: Convolutional Neural Networks as Light as Feather for Face Anti-spoofing](https://arxiv.org/pdf/1904.09290)

# FeatherNetB inference time **1.87ms** on CPU (i7, OpenVINO)

# Params only 0.35M!! FLOPs 80M!!

In the first phase, we used only depth data for training, and after ensembling the ACER dropped to 0.0. In the test phase, however, using depth data alone the best ACER was 0.0016, which is not fully satisfactory. If the security requirements are not very high, single-modality data alone is a good choice. To achieve better results, we use IR data to jointly predict the final result.

# Results on the validation set

|model name|ACER|TPR@FPR=10E-2|TPR@FPR=10E-3|FP|FN|epoch|params|FLOPs|
| ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |
|FishNet150|0.00144|0.999668|0.9***330|19|0|27|24.96M|***52.72M|
|FishNet150|0.00181|1.0|0.9996|24|0|52|24.96M|***52.72M|
|FishNet150|0.00496|0.9***6***|0.990***8|48|8|16|24.96M|***52.72M|
|MobileNet v2|0.00228|0.9996|0.9993|28|1|5|2.23M|306.17M|
|MobileNet v2|0.00387|0.999433|0.997662|49|1|6|2.23M|306.17M|
|MobileNet v2|0.00402|0.9996|0.992623|51|1|7|2.23M|306.17M|
|MobileLiteNet54|0.00242|1.0|0.9***46|32|0|41|0.57M|270.91M|
|MobileLiteNet54-se|0.00242|1.0|0.996994|32|0|69|0.57M|270.91M|
|FeatherNetA|0.00261|1.00|0.961590|19|7|51|0.35M|79.99M|
|FeatherNetB|0.00168|1.0|0.997662|20|1|48|0.35M|83.05M|
|**Ensembled all**|0.0000|1.0|1.0|0|0|-|-|-|

# Our pretrained models (model checkpoints)

Link: https://pan.baidu.com/s/1vlKePiWYFYNxefD9Ld16cQ Key: xzv8

decryption key: OTC-MMFD-1184***96

[Google Drive](https://drive.google.com/open?id=1F_du_iarTepKKYgXpk_cJNGRb34rlJ5c)

## Recent Update

- **2019.4.4**: updated data/fileList.py
- **2019.3.10**: code uploaded for the organizers to reproduce
- **2019.4.23**: added our FeatherNets paper
- **2019.8.4**: released our model checkpoints
- **2019.09.25**: early multimodal method

# Prerequisites

## Install requirements

```
conda env create -n env_name -f env.yml
```

## Data

### [CASIA-SURF Dataset](https://arxiv.org/abs/1812.00408)[2]

How to download the CASIA-SURF dataset?

1. Download and read the Contest Rules, and sign the agreement: [link](http://www.google.com/url?q=http%3A%2F%2Fwww.cbsr.ia.ac.cn%2Fusers%2Fjwan%2Fdatabase%2FCASIA-SURF_agreement.pdf&sa=D&sntz=1&usg=AFQjCNHFuTTHdLXoJbtuuxf4nvgT8A4Nzw)
2. Send your signed agreement to: Jun Wan, jun.wan@ia.ac.cn

### Our private dataset (available soon)

### Data index tree

```
├── data
│   ├── our_realsense
│   ├── Training
│   ├── Val
│   ├── Testing
```

Download and unzip our private dataset into the ./data directory, then run data/fileList.py to prepare the file list.

### Data Augmentation

| Method | Settings |
| ----- | -------- |
| Random Flip | True |
| Random Crop | 8% ~ 100% |
| Aspect Ratio | 3/4 ~ 4/3 |
| Random PCA Lighting | 0.1 |

# Train the model

### Download pretrained models (trained on ImageNet2012)

Download the [fishnet150](https://pan.baidu.com/s/1uOEFsBHIdqpDLrbfCZJGUg) pretrained model from the [FishNet150 repo](https://github.com/kevin-ssy/FishNet) (model trained without tricks).

Download the [mobilenetv2](https://drive.google.com/open?id=1jlto6HRVD3ipNkAl1lNhDbkBp7HylaqR) pretrained model from the [MobileNet V2 repo](https://github.com/tonylins/pytorch-mobilenet-v2), or download it from here, link: https://pan.baidu.com/s/11Hz50zlMyp3gtR9Bhws-Dg password: gi46

**Move them to ./checkpoints/pre-trainedModels/**

### 1. Train FishNet150

> nohup python main.py --config="cfgs/fishnet150-32.yaml" --b 32 --lr 0.01 --every-decay 30 --fl-gamma 2 >> fishnet150-train.log &

### 2. Train MobileNet V2

> nohup python main.py --config="cfgs/mobilenetv2.yaml" --b 32 --lr 0.01 --every-decay 40 --fl-gamma 2 >> mobilenetv2-bs32-train.log &

#### 3. Train MobileLiteNet54

```
python main.py --config="cfgs/MobileLiteNet54-32.yaml" --every-decay 60 -b 32 --lr 0.01 --fl-gamma 3 >> FNet54-bs32-train.log
```

#### 4. Train MobileLiteNet54-SE

```
python main.py --config="cfgs/MobileLiteNet54-se-***.yaml" --b *** --lr 0.01 --every-decay 60 --fl-gamma 3 >> FNet54-se-bs***-train.log
```

#### 5. Train FeatherNetA

```
python main.py --config="cfgs/FeatherNetA-32.yaml" --b 32 --lr 0.01 --every-decay 60 --fl-gamma 3 >> MobileLiteNetA-bs32-train.log
```

#### 6. Train FeatherNetB

```
python main.py --config="cfgs/FeatherNetB-32.yaml" --b 32 --lr 0.01 --every-decay 60 --fl-gamma 3 >> MobileLiteNetB-bs32-train.log
```

## How to create a submission file

Example:

> python main.py --config="cfgs/mobilenetv2.yaml" --resume ./checkpoints/mobilenetv2_bs32/_4_best.pth.tar --val True --val-save True

# Ensemble

### For validation

```
run EnsembledCode_val.ipynb
```

### For test

```
run EnsembledCode_test.ipynb
```

**Notice**: choose a few models with large differences in their prediction results.

# Serialized copy of the trained model

You can download the artifacts folder used to generate our final submissions: available soon.

> [1] ChaLearn Face Anti-spoofing Attack Detection Challenge@CVPR2019, [link](https://competitions.codalab.org/competitions/20853?secret_key=ff0e7c30-e244-4681-88e4-9eb5b41dd7f7)

> [2] Shifeng Zhang, Xiaobo Wang, Ajian Liu, Chenxu Zhao, Jun Wan, Sergio Escalera, Hailin Shi, Zezheng Wang, Stan Z. Li, "CASIA-SURF: A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing", arXiv, 2018, [PDF](https://arxiv.org/abs/1812.00408)

# Multimodal Methods

In the early days of the competition, I considered some other multimodal methods; you can view the network structures in multimodal_fusion_method.md. I was not able to pursue them further because of limited computing resources.
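The ACER figures in the results table are the average of APCER (attack presentations accepted as bona fide) and BPCER (bona fide presentations rejected as attacks). A minimal sketch of the metric from raw counts follows; the function name and the sample counts in the example are illustrative, not taken from the repository or the challenge validation set:

```python
def acer(fp, fn, n_attack, n_bonafide):
    """Average Classification Error Rate for presentation attack detection.

    fp -- attack samples accepted as bona fide (false positives)
    fn -- bona fide samples rejected as attacks (false negatives)
    n_attack / n_bonafide -- total attack and bona fide sample counts
    """
    apcer = fp / n_attack      # Attack Presentation Classification Error Rate
    bpcer = fn / n_bonafide    # Bona fide Presentation Classification Error Rate
    return (apcer + bpcer) / 2

# Illustrative counts only -- not the actual validation-set sizes
example = acer(fp=20, fn=1, n_attack=10000, n_bonafide=2000)
```

Note that a model can reach a low ACER with FP and FN balanced very differently, which is why the table reports both counts alongside the rate.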
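EnsembledCode_val.ipynb is not reproduced here; as a minimal sketch, assuming the ensemble simply averages the per-sample bona fide scores produced by each selected model (the function name and decision threshold are illustrative assumptions):

```python
def ensemble_scores(per_model_scores):
    """Fuse predictions by unweighted score averaging (pure-Python sketch).

    per_model_scores -- one list of per-sample scores per model,
    higher score meaning "bona fide".
    """
    n_models = len(per_model_scores)
    return [sum(s) / n_models for s in zip(*per_model_scores)]

# Toy example: two models scoring three samples
fused = ensemble_scores([[1.0, 0.0, 0.5],
                         [0.5, 0.5, 0.5]])   # fused == [0.75, 0.25, 0.5]
labels = [1 if s > 0.5 else 0 for s in fused]  # hypothetical 0.5 threshold
```

Averaging only helps when the member models make different mistakes, which matches the notice above about choosing models with large differences in their predictions.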
