zebrafish-learning: Analysis of Feature Learning in CNNs on the Example of Zebrafish Videos

  • Author: L5_596763
  • File size: 531KB
  • File format: zip
  • Upload date: 2022-05-14 10:45
zebrafish-learning-master.zip

* zebrafish-learning-master
  * images
    * pipeline.png (31.2KB)
    * aug.png (38KB)
  * cnn
    * storage_utils.py (3KB)
    * model_architectures.py (5KB)
    * check_data.py (2.7KB)
    * main.py (6.2KB)
    * arg_extractor.py (4.7KB)
    * data_providers.py (1.5KB)
    * experiment_builder.py (25.3KB)
  * scripts
    * heatmap_selection_s.py (3KB)
    * heatmap.py (5.1KB)
    * augment.py (11.8KB)
    * preprocess.py (12.6KB)
    * analyze.py (11.5KB)
    * heatmap_averages.py (2.6KB)
    * shuffle.py (504B)
    * get_average_heatmaps.ipynb (132.9KB)
    * create_pretrained_file.py (5.3KB)
    * preprocess_get_figure.ipynb (170.1KB)
    * average_heatmap.ipynb (108.6KB)
    * heatmap_single.py (4.2KB)
    * heatmap_selection_t.py (4KB)
    * evaluate_probs.ipynb (150.5KB)
    * cut_agarose.py (750B)
  * svm
    * peakdetector.py (1.6KB)
    * tailfit.ipynb (46.9KB)
    * framemetrics.py (1.2KB)
    * tailfit.py (15KB)
    * output.txt (1.9KB)
    * subsample_tails.ipynb (5.8KB)
    * main.py (1.9KB)
    * tailmetrics.py (3.3KB)
    * check_tails.ipynb (8.4KB)
    * svm.py (4.5KB)
  * README.md (3.9KB)
  * requirements.txt (146B)
# Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification

This source code accompanies the paper ["Analysis of Video Feature Learning in Two-Stream CNNs on the Example of Zebrafish Swim Bout Classification" (Breier and Onken, 2020)](https://openreview.net/forum?id=rJgQkT4twH). The work demonstrates the utility of a recent AI explainability technique by visualizing the learned features of a CNN trained on binary classification of zebrafish movements. Besides this README, the Appendix of the paper gives important further explanations.

The files in the folder "cnn" were used for training our CNNs:

* The main function for training is in file `main.py`.
* `experiment_builder.py` performs the actual forward and backward passes.
* `data_providers.py` implements PyTorch's Dataset module.
* Our two-stream architecture is implemented in `model_architectures.py`.
* Arguments are read by `arg_extractor.py`.

The files in the folder "svm" are derived from the [study by Semmelhack et al. (2014)](https://elifesciences.org/articles/04878) ([source code here](https://bitbucket.org/mpinbaierlab/semmelhack-et-al.-2014/)):

* The main function for training our SVM is in file `main.py`.
* It depends on `svm.py`, `tailmetrics.py`, `framemetrics.py`, and `peakdetector.py`.
* We put the final output from the console into `output.txt`.
* The points on the tail were fitted with `tailfit.py`.

The folder "scripts" contains all other important Python scripts, including analyses with iNNvestigate, heatmap generation, and the creation of other figures:

* `preprocess.py` was used to compute the npz file of 1,214 centered, cropped, and normalized videos from the original raw avi files.
* `shuffle.py` randomly shuffles the npz file.
* `augment.py` produces HDF5 files in a highly parallelized way, applying flipping, cropping, and subsampling.
* `create_pretrained_file.py` extracts PyTorch weights from the MATLAB file [imagenet-vgg-m-2048.mat](http://www.vlfeat.org/matconvnet/models/imagenet-vgg-m-2048.mat).
* `analyze.py` loads the trained PyTorch weights and analyzes all samples with Deep Taylor Decomposition, or another technique available in the iNNvestigate library.
* `heatmap.py` and the other heatmap scripts overlay spatial and temporal inputs with relevance heatmaps, either for single samples or averaged.
* `cut_agarose.py` was used to remove experimental artifacts from the videos.
* `evaluate_probs.ipynb`, `get_temporal_stats.ipynb`, `get_training_curve.ipynb`, and `preprocess_get_figure.ipynb` are the Jupyter notebooks we used to create some of the figures of the relevance analysis and of the preprocessing.

The [best CNN weights we obtained can be found here](https://drive.google.com/open?id=1EdmGl5p7T9nhcH0IibpNvM6YMO5MHjIK) (after removing artifacts in the data), so you can try running analyses without training the CNN again.

All scripts use Python 3. They require the following modules with versions: NumPy 1.16.4, Matplotlib 3.1.1, h5py 2.9.0, tqdm 4.32.2, OpenCV 4.1.0.25, scikit-learn 0.21.2, PyTorch 1.1.0, TensorFlow 1.14.0, Keras 2.2.4, and iNNvestigate 1.0.8.

File paths inside the scripts have to be adapted to the local setup; we marked them as TODOs. In many scripts we used a debug flag which can be switched on to receive more informative output.
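The augmentation operations named above (flipping, cropping, and subsampling) can be sketched in a few lines of NumPy. This is an illustrative toy version only: the function name, the `(frames, height, width)` array layout, and the parameter values are assumptions for the sketch, not the actual settings or code in `augment.py`.

```python
import numpy as np

def augment_video(video, crop=8, step=2, flip=True):
    """Toy sketch of the three augmentation ops: subsample, crop, flip.

    video: array of shape (frames, height, width).
    All parameters here are illustrative, not the paper's settings.
    """
    out = video[::step]                    # temporal subsampling: keep every step-th frame
    out = out[:, crop:-crop, crop:-crop]   # central spatial crop by `crop` pixels per side
    if flip:
        out = out[:, :, ::-1]              # horizontal flip along the width axis
    return out

video = np.zeros((20, 64, 64), dtype=np.float32)
aug = augment_video(video)
print(aug.shape)  # (10, 48, 48)
```

In the real pipeline these operations are applied in a parallelized way and the results are written to HDF5 files; the sketch only shows the array manipulations themselves.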
---

Some further explanations of the setup:

![Image of the project pipeline](images/pipeline.png?raw=true "Project pipeline")

Project pipeline: the pipeline starts with videos in avi format and ends with CNN and SVM accuracy scores, as well as one heatmap per (augmented) sample.

![Image of the augmentation procedure](images/aug.png?raw=true "Augmentation procedure")

Augmentation procedure: the augmentation procedure includes flow computation/subsampling, flipping, and cropping.

The files in the cnn folder are originally based on code from the course MLP 2018/2019 at The University of Edinburgh.
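Overlaying a relevance heatmap on an input frame, as in the pipeline's final step, amounts to normalizing the relevance scores and alpha-blending them over the image. The following NumPy sketch shows the idea under stated assumptions (grayscale frame in [0, 1], min-max normalization, a fixed blend weight); it is a generic illustration, not the code from `heatmap.py`, and all names are hypothetical.

```python
import numpy as np

def overlay_heatmap(frame, relevance, alpha=0.5):
    """Blend a per-pixel relevance map onto a grayscale frame.

    frame:     (H, W) array with values in [0, 1]
    relevance: (H, W) array of raw relevance scores (any range)
    The min-max scaling and blend weight are illustrative choices.
    """
    r = relevance - relevance.min()           # shift scores so the minimum is 0
    r = r / (r.max() + 1e-8)                  # normalize scores into [0, 1]
    return (1.0 - alpha) * frame + alpha * r  # alpha-blend the map over the frame

frame = np.full((4, 4), 0.5)
relevance = np.arange(16, dtype=float).reshape(4, 4)
blended = overlay_heatmap(frame, relevance)
print(blended.shape)  # (4, 4)
```

Tools like Matplotlib would then map the blended values through a colormap to produce the colored overlays shown in the figures.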