siamese-fc-master
Description: a MATLAB implementation of Siamese (twin) networks for deep learning; it can perform classification well.
→ → → **NEWS!** We have ported an improved version of this code to **TensorFlow**. [Here is the repository](https://github.com/torrvision/siamfc-tf)
## Fully-Convolutional Siamese Networks for Object Tracking
- - - -
Project page:
The code in this repository enables you to reproduce the experiments of our paper.
It can be used in two ways: **(1) tracking only** and **(2) training and tracking**.
- - - -
![pipeline image][logo]
[logo]: http://www.robots.ox.ac.uk/~luca/stuff/siamesefc_conv-explicit_small.jpg "Pipeline image"
- - - -
If you find our work and/or curated dataset useful, please cite:
```
@inproceedings{bertinetto2016fully,
title={Fully-Convolutional Siamese Networks for Object Tracking},
author={Bertinetto, Luca and Valmadre, Jack and Henriques, Jo{\~a}o F and Vedaldi, Andrea and Torr, Philip H S},
booktitle={ECCV 2016 Workshops},
pages={850--865},
year={2016}
}
```
- - - -
[ **Tracking only** ] If you don't care much about training, simply plug one of our pretrained networks into our basic tracker and see it in action.
1. Prerequisites: GPU, CUDA drivers, [cuDNN](https://developer.nvidia.com/cudnn), Matlab (we used 2015b), [MatConvNet](http://www.vlfeat.org/matconvnet/install/) (we used `v1.0-beta20`).
2. Clone the repository.
3. Download one of the pretrained networks from
4. Go to `siam-fc/tracking/` and remove the trailing `.example` from `env_paths_tracking.m.example`, `startup.m.example` and `run_tracking.m.example`, editing the files as appropriate.
5. Be sure to have at least one video sequence in the appropriate format. You can find an example here in the repository (`siam-fc/demo-sequences/vot15_bag`).
6. `siam-fc/tracking/run_tracking.m` is the entry point for running the tracker. Have fun!
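Steps 4–6 above can be sketched as follows. This is only a sketch: the list of template files comes from the README, but the exact settings you need to edit inside them (paths to MatConvNet, the pretrained network, and your sequences) depend on your machine.

```matlab
% Sketch of the tracking-only setup, run from the repository root.
% Rename the three template files, dropping the trailing ".example"
% (copyfile keeps the originals around for reference):
cd tracking;
templates = {'env_paths_tracking.m', 'startup.m', 'run_tracking.m'};
for i = 1:numel(templates)
    copyfile([templates{i} '.example'], templates{i});
end
% After editing the copied files as appropriate, launch the tracker:
run_tracking;
```

The same rename-and-edit pattern applies to the training templates in `siam-fc/training/`.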
[ **Training and tracking** ] Well, if you prefer to train your own network, the process is slightly more involved (but also more fun).
1. Prerequisites: GPU, CUDA drivers, [cuDNN](https://developer.nvidia.com/cudnn), Matlab (we used 2015b), [MatConvNet](http://www.vlfeat.org/matconvnet/install/) (we used `v1.0-beta20`).
2. Clone the repository.
3. Follow these [step-by-step instructions](https://github.com/bertinetto/siamese-fc/tree/master/ILSVRC15-curation), which will help you generate a curated dataset compatible with the rest of the code.
4. If you did not generate your own, download the [imdb_video.mat](http://bit.ly/imdb_video) (6.7GB) with all the metadata and the [dataset stats](http://bit.ly/imdb_video_stats).
5. Go to `siam-fc/training/` and remove the trailing `.example` from `env_paths.m.example`, `startup.m.example` and `run_experiment.m.example`, editing the files as appropriate.
6. `siam-fc/training/run_experiment.m` is the entry point to start training. Default hyper-params are at the start of `experiment.m` and can be overridden by custom values specified in `run_experiment.m`.
7. By default, training plots are saved in `siam-fc/training/data/`. When you are happy, grab a network snapshot (`net-epoch-X.mat`) and save it somewhere convenient to use it for tracking.
8. Go to step 4 of **Tracking only** and enjoy the results of the labour of your own GPUs!
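As a sketch of step 6, a hyper-parameter override in `run_experiment.m` might look like the following. The `opts` field names below are illustrative placeholders, not the real ones: check the defaults at the top of `experiment.m` for the actual fields, and note that the call signature of `experiment` may differ.

```matlab
% Hypothetical excerpt from run_experiment.m: override a few of the
% defaults declared at the top of experiment.m, then start training.
opts = struct();
opts.gpus = 1;         % which GPU(s) to train on (illustrative field name)
opts.numEpochs = 50;   % illustrative -- see experiment.m for real defaults
opts.batchSize = 8;
% Point the training code at the curated dataset metadata; replace the
% path with wherever you saved imdb_video.mat:
experiment('path/to/imdb_video.mat', opts);
```

Training plots then land in `siam-fc/training/data/` by default (step 7), where you can pick a `net-epoch-X.mat` snapshot for tracking.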