ExtremeNet

Category: Pattern Recognition (Vision/Speech, etc.)
Development tool: Python
File size: 6622KB
Downloads: 0
Upload date: 2019-04-19 02:32:30
Uploader: sh-1993
Description: Bottom-up object detection by grouping extreme and center points
(Bottom-up Object Detection by Grouping Extreme and Center Points)

File list:
LICENSE (1522, 2019-04-19)
conda_packagelist.txt (6711, 2019-04-19)
config.py (4715, 2019-04-19)
config (0, 2019-04-19)
config\CornerNet-multi_scale.json (1082, 2019-04-19)
config\CornerNet.json (1005, 2019-04-19)
config\ExtremeNet-multi_scale.json (1124, 2019-04-19)
config\ExtremeNet.json (1134, 2019-04-19)
db (0, 2019-04-19)
db\__init__.py (0, 2019-04-19)
db\base.py (2136, 2019-04-19)
db\coco.py (6734, 2019-04-19)
db\coco_extreme.py (8108, 2019-04-19)
db\datasets.py (144, 2019-04-19)
db\detection.py (1955, 2019-04-19)
demo.py (11474, 2019-04-19)
dextr (0, 2019-04-19)
dextr.py (4438, 2019-04-19)
eval_dextr_mask.py (2291, 2019-04-19)
external (0, 2019-04-19)
external\Makefile (56, 2019-04-19)
external\__init__.py (0, 2019-04-19)
external\nms.pyx (14181, 2019-04-19)
external\setup.py (368, 2019-04-19)
images (0, 2019-04-19)
images\16004479832_a748d55f21_k.jpg (136674, 2019-04-19)
images\17790319373_bd19b24cfc_k.jpg (149246, 2019-04-19)
images\18124840932_e42b3e377c_k.jpg (162421, 2019-04-19)
images\19064748793_bb942deea1_k.jpg (140388, 2019-04-19)
images\24274813513_0cfd2ce6d0_k.jpg (113241, 2019-04-19)
images\33823288584_1d21cf0a26_k.jpg (249406, 2019-04-19)
images\33887522274_eebd074106_k.jpg (122273, 2019-04-19)
images\34501842524_3c858b3080_k.jpg (234375, 2019-04-19)
images\NOTICE (917, 2019-04-19)
models (0, 2019-04-19)
... ...

# ExtremeNet: Training and Evaluation Code

Code for **bottom-up** object detection by grouping extreme and center points:

![](https://github.com/xingyizhou/ExtremeNet/blob/master/readme/teaser.png)

> [**Bottom-up Object Detection by Grouping Extreme and Center Points**](https://arxiv.org/abs/1901.08043),
> Xingyi Zhou, Jiacheng Zhuo, Philipp Krähenbühl,
> *CVPR 2019 (arXiv 1901.08043)*

This project is developed upon the [CornerNet code](https://github.com/princeton-vl/CornerNet) and contains code from [Deep Extreme Cut (DEXTR)](https://github.com/scaelles/DEXTR-PyTorch). Thanks to the original authors!

Contact: [zhouxy2017@gmail.com](mailto:zhouxy2017@gmail.com). Any questions or discussions are welcome!

## Abstract

With the advent of deep learning, object detection drifted from a bottom-up to a top-down recognition problem. State-of-the-art algorithms enumerate a near-exhaustive list of object locations and classify each into: object or not. In this paper, we show that bottom-up approaches still perform competitively. We detect four extreme points (top-most, left-most, bottom-most, right-most) and one center point of objects using a standard keypoint estimation network. We group the five keypoints into a bounding box if they are geometrically aligned. Object detection is then a purely appearance-based keypoint estimation problem, without region classification or implicit feature learning. The proposed method performs on par with the state-of-the-art region-based detection methods, with a bounding box AP of 43.2% on COCO test-dev. In addition, our estimated extreme points directly span a coarse octagonal mask, with a COCO Mask AP of 18.9%, much better than the Mask AP of vanilla bounding boxes. Extreme point guided segmentation further improves this to 34.6% Mask AP.
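To make the grouping step concrete, here is a minimal sketch of the center-grouping idea: a combination of four extreme points is kept as a detection only if its geometric center lands on a strong response in the center-point heatmap. All names and the threshold are illustrative; this is not the repository's actual code.

~~~
def center_group(tops, lefts, bottoms, rights, ct_heat, ct_thresh=0.1):
    """Sketch of center grouping. Each keypoint list holds (x, y, score)
    tuples already extracted as peaks of one extreme-point heatmap;
    ct_heat is the HxW center-point heatmap (indexable as [row, col])."""
    detections = []
    for tx, ty, ts in tops:
        for lx, ly, ls in lefts:
            for bx, by, bs in bottoms:
                for rx, ry, rs in rights:
                    # Basic geometric consistency: top above bottom,
                    # left to the left of right. (The paper applies a
                    # few more ordering checks, omitted here.)
                    if ty > by or lx > rx:
                        continue
                    # Geometric center of the candidate box.
                    cx, cy = int((lx + rx) / 2), int((ty + by) / 2)
                    # Keep only combinations whose center the center
                    # heatmap agrees with.
                    if ct_heat[cy, cx] > ct_thresh:
                        score = (ts + ls + bs + rs + ct_heat[cy, cx]) / 5
                        detections.append((lx, ty, rx, by, score))
    return detections
~~~

Since each heatmap contributes only its top few peaks, the quartic enumeration stays cheap in practice.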
## Installation

The code was tested with [Anaconda](https://www.anaconda.com/download) Python 3.6 and [PyTorch](http://pytorch.org/) v0.4.1. After installing Anaconda:

1. Clone this repo:

~~~
ExtremeNet_ROOT=/path/to/clone/ExtremeNet
git clone --recursive https://github.com/xingyizhou/ExtremeNet $ExtremeNet_ROOT
~~~

2. Create an Anaconda environment using the provided package list from [CornerNet](https://github.com/princeton-vl/CornerNet):

~~~
conda create --name CornerNet --file conda_packagelist.txt
source activate CornerNet
~~~

3. Compile NMS (originally from [Faster R-CNN](https://github.com/rbgirshick/py-faster-rcnn/blob/master/lib/nms/cpu_nms.pyx) and [Soft-NMS](https://github.com/bharatsingh430/soft-nms/blob/master/lib/nms/cpu_nms.pyx)):

~~~
cd $ExtremeNet_ROOT/external
make
~~~

## Demo

- Download our [pre-trained model](https://drive.google.com/file/d/1re-A74WRvuhE528X6sWsg1eEbMG8dmE4/view?usp=sharing) and put it in `cache/`.
- Optionally, if you want to test instance segmentation with [Deep Extreme Cut](https://github.com/scaelles/DEXTR-PyTorch), download their [PASCAL + SBD pretrained model](https://data.vision.ee.ethz.ch/kmaninis/share/DEXTR/Downloads/models/dextr_pascal-sbd.pth) and put it in `cache/`.
- Run the demo:

~~~
python demo.py [--demo /path/to/image/or/folder] [--show_mask]
~~~

Contents in `[]` are optional. By default, it runs on the sample images provided in `$ExtremeNet_ROOT/images/` (from [Detectron](https://github.com/facebookresearch/Detectron/tree/master/demo)). We show the predicted extreme point heatmaps (the four heatmaps combined and overlaid on the input image), the predicted center point heatmap, and the detection and octagon mask results. If set up correctly, the output will look like:

If `--show_mask` is turned on, the detections are further pipelined through [DEXTR](https://github.com/scaelles/DEXTR-PyTorch) for instance segmentation. The output will look like:
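The octagon masks in the demo are spanned directly by the four extreme points; this is the coarse mask that DEXTR then refines. Following the construction described in the paper, each extreme point is extended along its bounding-box edge into a segment of one quarter of the edge length (clipped at the box corners), and the eight segment endpoints are connected. A minimal sketch, with an illustrative function name:

~~~
import numpy as np

def octagon_from_extreme_points(top, left, bottom, right):
    """Sketch: build the coarse octagon mask from four (x, y) extreme
    points. Returns the octagon as an (8, 2) array of vertices listed
    clockwise in image coordinates (y axis pointing down)."""
    x_min, x_max = left[0], right[0]
    y_min, y_max = top[1], bottom[1]
    w, h = x_max - x_min, y_max - y_min

    def seg(c, lo, hi, length):
        # Segment of `length` centered at c, clipped to [lo, hi].
        return max(lo, c - length / 2), min(hi, c + length / 2)

    tx0, tx1 = seg(top[0],    x_min, x_max, w / 4)  # along top edge
    bx0, bx1 = seg(bottom[0], x_min, x_max, w / 4)  # along bottom edge
    ly0, ly1 = seg(left[1],   y_min, y_max, h / 4)  # along left edge
    ry0, ry1 = seg(right[1],  y_min, y_max, h / 4)  # along right edge

    # Walk the box clockwise, listing both endpoints of each segment.
    return np.array([
        (tx0, y_min), (tx1, y_min),   # top segment
        (x_max, ry0), (x_max, ry1),   # right segment
        (bx1, y_max), (bx0, y_max),   # bottom segment
        (x_min, ly1), (x_min, ly0),   # left segment
    ])
~~~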

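The data preparation section below converts COCO segmentation masks into the extreme point annotations used for training. Conceptually, extracting extreme points from a segmentation polygon reduces to four argmin/argmax operations over its vertices, as in this minimal sketch (the function name is illustrative; the repository's `tools/gen_coco_extreme_points.py` is the authoritative implementation):

~~~
import numpy as np

def extreme_points_from_polygon(polygon):
    """Sketch: extract (top, left, bottom, right) extreme points from a
    COCO segmentation polygon given as a flat [x1, y1, x2, y2, ...] list.

    In image coordinates the y axis points down, so the top-most point
    has the smallest y and the bottom-most the largest."""
    pts = np.array(polygon, dtype=np.float32).reshape(-1, 2)
    top    = pts[pts[:, 1].argmin()]  # smallest y
    bottom = pts[pts[:, 1].argmax()]  # largest y
    left   = pts[pts[:, 0].argmin()]  # smallest x
    right  = pts[pts[:, 0].argmax()]  # largest x
    return top, left, bottom, right
~~~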
## Data preparation

If you want to reproduce the results in the paper for benchmark evaluation and training, you will need to set up the dataset.

### Installing MS COCO APIs

~~~
cd $ExtremeNet_ROOT/data
git clone https://github.com/cocodataset/cocoapi.git coco
cd $ExtremeNet_ROOT/data/coco/PythonAPI
make
python setup.py install --user
~~~

### Downloading MS COCO Data

- Download the images (2017 Train, 2017 Val, 2017 Test) from the [coco website](http://cocodataset.org/#download).
- Download the annotation files (2017 train/val and test image info) from the [coco website](http://cocodataset.org/#download).
- Place the data (or create symlinks) so that the data folder looks like:

~~~
${ExtremeNet_ROOT}
|-- data
`-- |-- coco
    `-- |-- annotations
        |   |-- instances_train2017.json
        |   |-- instances_val2017.json
        |   |-- image_info_test-dev2017.json
        `-- images
            |-- train2017
            |-- val2017
            |-- test2017
~~~

### Generate extreme point annotations from segmentation

~~~
cd $ExtremeNet_ROOT/tools/
python gen_coco_extreme_points.py
~~~

It generates `instances_extreme_train2017.json` and `instances_extreme_val2017.json` in `data/coco/annotations/`.

## Benchmark Evaluation

After downloading our pre-trained model and the dataset:

- Run the following command to evaluate object detection:

~~~
python test.py ExtremeNet [--suffix multi_scale]
~~~

The results on the COCO validation set should be [`40.3` box AP](https://drive.google.com/open?id=1oP3RJSayEt_O9R3LQnbSv2ZaD7E38_gd) without `--suffix multi_scale` and [`43.3` box AP](https://drive.google.com/open?id=1VpnP8RTAMb8_QVAWvMwJeQB2ODP53S3e) with `--suffix multi_scale`.

- After obtaining the detection results, run the following command for instance segmentation (the evaluation will be slow):

~~~
python eval_dextr_mask.py results/ExtremeNet/250000/validation/multi_scale/results.json
~~~

The results on the COCO validation set should be [`34.6` mask AP](https://drive.google.com/open?id=14wzNND6JhPUGQU_He2CimXu-RT28F6LN).

- You can test with other hyper-parameters by creating a new config file (`ExtremeNet-<suffix>.json`) in `config/`.

## Training

You will need 5x 12GB GPUs to reproduce our training. Our model is fine-tuned on the 10-GPU pre-trained [CornerNet model](https://drive.google.com/file/d/1UHjVzSG27Ms0VfSFeGYJ2h2AYZ6d4Le_/view?usp=sharing). After downloading the CornerNet model and putting it in `cache/`, run

~~~
python train.py ExtremeNet
~~~

You can resume a half-trained model with

~~~
python train.py ExtremeNet --iter xxxx
~~~

### Notes:

- Training takes about 10 days on our Titan V GPUs. Training for 150000 iterations (about 6 days) gives results about 0.5 AP lower.
- Training from scratch for the same number of iterations (250000) results in about 2 AP lower than fine-tuning from CornerNet, but can reach higher performance (43.9 AP on COCO val with multi-scale testing) if trained for [500000 iterations](https://drive.google.com/file/d/1omiOUjWCrFbTJREypuZaODu0bOlF_7Fg/view?usp=sharing).
- Changing the focal loss [implementation](https://github.com/xingyizhou/ExtremeNet/blob/master/models/py_utils/kp_utils.py#L428) to [this one](https://github.com/xingyizhou/ExtremeNet/blob/master/models/py_utils/kp_utils.py#L405) can accelerate training, but costs more GPU memory; a generic sketch of the loss both variants compute follows below.
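For context, the keypoint heatmaps are trained with a CornerNet-style focal loss; the two implementations referenced in the note differ only in how they vectorize it, trading speed for GPU memory. A simplified standalone sketch (with the CornerNet defaults α=2, β=4), not the repository's exact code:

~~~
import torch

def cornernet_focal_loss(pred, gt, alpha=2, beta=4):
    """Sketch of the CornerNet-style keypoint focal loss.

    pred: predicted heatmap, values in (0, 1).
    gt:   ground-truth heatmap with Gaussian-smoothed peaks; exactly 1
          at keypoint locations, values in [0, 1) elsewhere."""
    pos_mask = gt.eq(1).float()
    neg_mask = 1 - pos_mask

    # Positive locations: standard focal term.
    pos_loss = torch.log(pred) * torch.pow(1 - pred, alpha) * pos_mask
    # Negative locations: down-weighted near the Gaussian peaks via
    # the (1 - gt)^beta factor.
    neg_loss = (torch.log(1 - pred) * torch.pow(pred, alpha)
                * torch.pow(1 - gt, beta) * neg_mask)

    num_pos = pos_mask.sum()
    return -(pos_loss.sum() + neg_loss.sum()) / num_pos.clamp(min=1)
~~~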
## Citation

If you find this model useful for your research, please use the following BibTeX entry.

~~~
@inproceedings{zhou2019bottomup,
  title={Bottom-up Object Detection by Grouping Extreme and Center Points},
  author={Zhou, Xingyi and Zhuo, Jiacheng and Kr{\"a}henb{\"u}hl, Philipp},
  booktitle={CVPR},
  year={2019}
}
~~~

Please also consider citing the CornerNet paper (from which this code is heavily borrowed) and the Deep Extreme Cut paper (if you use the instance segmentation part).

~~~
@inproceedings{law2018cornernet,
  title={CornerNet: Detecting Objects as Paired Keypoints},
  author={Law, Hei and Deng, Jia},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  pages={734--750},
  year={2018}
}

@inproceedings{Man+18,
  title={Deep Extreme Cut: From Extreme Points to Object Segmentation},
  author={K.K. Maninis and S. Caelles and J. Pont-Tuset and L. {Van Gool}},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
~~~
