Siam

Category: Linux/Unix Programming
Development tool: Python
File size: 3098KB
Downloads: 4
Upload date: 2019-04-20 13:56:46
Uploader: neil868
Description: Siamese network for video object tracking, written in Python using the PyTorch framework.
(Siamese-net for object tracking)
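
The description above says this upload is a PyTorch re-implementation of a Siamese tracker. As a rough orientation only (a minimal sketch of the SiamFC idea, not the code in the archive; all layer sizes, crop sizes and the class name are illustrative assumptions), a Siamese tracker embeds an exemplar crop and a larger search crop with one shared network and cross-correlates the two embeddings to obtain a response map whose peak indicates the target location:

```python
# Minimal sketch of a SiamFC-style Siamese tracker in PyTorch.
# Layer sizes, crop sizes and the class name are illustrative assumptions,
# not taken from the uploaded code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiamSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # One embedding network, shared (weight-tied) between both inputs.
        self.embed = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=2), nn.BatchNorm2d(96), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5), nn.BatchNorm2d(256), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(256, 256, kernel_size=3),
        )

    def forward(self, exemplar, search):
        # exemplar: (B, 3, 127, 127) target crop; search: (B, 3, 255, 255) region around it.
        z = self.embed(exemplar)
        x = self.embed(search)
        # Cross-correlate each search embedding with its own exemplar embedding by
        # using the exemplar features as per-sample, per-channel convolution kernels.
        b, c, hz, wz = z.shape
        score = F.conv2d(x.reshape(1, b * c, x.shape[-2], x.shape[-1]),
                         z.reshape(b * c, 1, hz, wz), groups=b * c)
        score = score.reshape(b, c, score.shape[-2], score.shape[-1]).sum(dim=1, keepdim=True)
        return score  # (B, 1, H, W): the peak marks the most likely target position

# Example: a single exemplar/search pair produces one response map.
model = SiamSketch().eval()
with torch.no_grad():
    response = model(torch.rand(1, 3, 127, 127), torch.rand(1, 3, 255, 255))
print(response.shape)
```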

File list:
Siam\.idea\deployment.xml (541, 2018-12-18)
Siam\.idea\encodings.xml (138, 2018-12-17)
Siam\.idea\misc.xml (383, 2018-12-17)
Siam\.idea\modules.xml (267, 2018-12-17)
Siam\.idea\remote-mappings.xml (272, 2018-12-17)
Siam\.idea\Siam.iml (408, 2018-12-17)
Siam\.idea\workspace.xml (6740, 2018-12-18)
Siam\demo-sequences\vot15_bag\groundtruth.txt (12328, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000001.jpg (11358, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000002.jpg (12797, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000003.jpg (13360, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000004.jpg (13853, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000005.jpg (14149, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000006.jpg (14040, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000007.jpg (14540, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000008.jpg (14972, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000009.jpg (14456, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000010.jpg (14730, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000011.jpg (14662, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000012.jpg (14638, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000013.jpg (14731, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000014.jpg (14673, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000015.jpg (14872, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000016.jpg (14905, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000017.jpg (16182, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000018.jpg (16337, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000019.jpg (16318, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000020.jpg (16216, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000021.jpg (16183, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000022.jpg (16107, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000023.jpg (16011, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000024.jpg (15845, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000025.jpg (15972, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000026.jpg (16006, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000027.jpg (15963, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000028.jpg (15837, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000029.jpg (15850, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000030.jpg (15611, 2018-12-14)
Siam\demo-sequences\vot15_bag\imgs\00000031.jpg (15865, 2018-12-14)
... ...

**IMPORTANT**. At CVPR'17 we presented CFNet, which uses a slightly modified version of SiamFC (which I have been calling v2 or baseline-conv5) to compare against that paper's Correlation Filter Network. The difference is simply that it has only 32 output channels instead of 256 and activations with a higher spatial resolution. Results are slightly better, speed is slightly worse. For this reason, if you are starting fresh it makes much more sense to use the more recent code from the CFNet repository, which is also a bit cleaner, I think. However, if you have already started with this repo, no worries: things are only marginally different, so there is not much point in switching.

## Fully-Convolutional Siamese Networks for Object Tracking

- - - -

Project page:

The code in this repository enables you to reproduce the experiments of our paper. It can be used in two ways: **(1) tracking only** and **(2) training and tracking**.

- - - -

![pipeline image][logo]

[logo]: http://www.robots.ox.ac.uk/~luca/stuff/siamesefc_conv-explicit_small.jpg "Pipeline image"

- - - -

If you find our work and/or curated dataset useful, please cite:

```
@inproceedings{bertinetto2016fully,
  title={Fully-Convolutional Siamese Networks for Object Tracking},
  author={Bertinetto, Luca and Valmadre, Jack and Henriques, Jo{\~a}o F and Vedaldi, Andrea and Torr, Philip H S},
  booktitle={ECCV 2016 Workshops},
  pages={850--865},
  year={2016}
}
```

- - - -

[ **Tracking only** ]

If you don't care much about training, simply plug one of our pretrained networks into our basic tracker and see it in action.

1. Prerequisites: GPU, CUDA drivers, [cuDNN](https://developer.nvidia.com/cudnn), Matlab (we used 2015b), [MatConvNet](http://www.vlfeat.org/matconvnet/install/) (we used `v1.0-beta20`).
2. Clone the repository.
3. Download one of the pretrained networks from
4. Go to `siam-fc/tracking/` and remove the trailing `.example` from `env_paths_tracking.m.example`, `startup.m.example` and `run_tracking.m.example`, editing the files as appropriate.
5. Be sure to have at least one video sequence in the appropriate format. You can find an example here in the repository (`siam-fc/demo-sequences/vot15_bag`); a short Python loading sketch follows this list.
6. `siam-fc/tracking/run_tracking.m` is the entry point to execute the tracker, have fun!
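
Step 5 only asks for a sequence "in the appropriate format"; the file list above shows what the bundled `vot15_bag` demo looks like: an `imgs/` folder of numbered JPEGs plus a `groundtruth.txt` with one annotation per frame. As a hedged Python sketch of reading such a sequence (the helper name and the assumption of comma-separated 4- or 8-value annotation lines are mine, not part of this repository):

```python
# Hypothetical helper for reading a demo sequence laid out like vot15_bag:
# an imgs/ folder of numbered JPEGs and a groundtruth.txt with one line per frame.
import glob
import os

def load_sequence(seq_dir):
    """Return sorted frame paths and per-frame annotations as tuples of floats."""
    frames = sorted(glob.glob(os.path.join(seq_dir, "imgs", "*.jpg")))
    with open(os.path.join(seq_dir, "groundtruth.txt")) as f:
        # VOT-style lines are comma-separated: 4 values (x, y, w, h) or
        # 8 values (polygon corner coordinates), depending on the year.
        annotations = [tuple(float(v) for v in line.split(","))
                       for line in f if line.strip()]
    return frames, annotations

frames, annotations = load_sequence("Siam/demo-sequences/vot15_bag")
print(len(frames), "frames; first annotation:", annotations[0])
```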
[ **Training and tracking** ]

Well, if you prefer to train your own network, the process is slightly more involved (but also more fun).

1. Prerequisites: GPU, CUDA drivers, [cuDNN](https://developer.nvidia.com/cudnn), Matlab (we used 2015b), [MatConvNet](http://www.vlfeat.org/matconvnet/install/) (we used `v1.0-beta20`).
2. Clone the repository.
3. Follow these [step-by-step instructions](https://github.com/bertinetto/siamese-fc/tree/master/ILSVRC15-curation), which will help you generate a curated dataset compatible with the rest of the code.
4. If you did not generate your own, download the [imdb_video.mat](http://bit.ly/imdb_video) (6.7GB) with all the metadata and the [dataset stats](http://bit.ly/imdb_video_stats).
5. Go to `siam-fc/training/` and remove the trailing `.example` from `env_paths.m.example`, `startup.m.example` and `run_experiment.m.example`, editing the files as appropriate.
6. `siam-fc/training/run_experiment.m` is the entry point to start training. Default hyper-parameters are at the start of `experiment.m` and can be overwritten by custom ones specified in `run_experiment.m` (a rough Python sketch of one training step follows this list).
7. By default, training plots are saved in `siam-fc/training/data/`. When you are happy, grab a network snapshot (`net-epoch-X.mat`) and save it somewhere convenient to use it for tracking.
8. Go to point `4.` of Tracking only and enjoy the result of the labour of your own GPUs!
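
The training entry points above are Matlab, but since this upload advertises a PyTorch port, the sketch below gives a loose idea of what one training step on a batch of exemplar/search pairs could look like, using a balanced logistic loss over the score map as in the SiamFC paper. The helper names, the positive-label radius and the reuse of the model sketched earlier on this page are assumptions, not the uploaded code:

```python
# Sketch of one SiamFC-style training step in PyTorch, assuming the model
# sketched earlier on this page. Radius and weighting are illustrative.
import torch
import torch.nn.functional as F

def score_map_labels(size, radius=2):
    """+1 within `radius` cells of the map centre, -1 elsewhere."""
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing="ij")
    dist = torch.sqrt((ys - size // 2) ** 2 + (xs - size // 2) ** 2)
    return torch.where(dist <= radius, torch.ones_like(dist), -torch.ones_like(dist))

def training_step(model, optimizer, exemplars, searches):
    scores = model(exemplars, searches)                       # (B, 1, S, S) logits
    labels = score_map_labels(scores.shape[-1]).to(scores.device).expand_as(scores)
    # Balance the few positive cells against the many negatives.
    pos, neg = labels > 0, labels < 0
    weights = torch.zeros_like(scores)
    weights[pos] = 0.5 / pos.sum()
    weights[neg] = 0.5 / neg.sum()
    loss = F.binary_cross_entropy_with_logits(scores, pos.float(),
                                              weight=weights, reduction="sum")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice this would be wrapped in an epoch loop over curated exemplar/search crops, analogous to what `run_experiment.m` drives on the Matlab side.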
