ADVENT-master

Category: Graphics and image processing
Development tool: Python
File size: 324KB
Downloads: 2
Upload date: 2020-05-27 16:51:58
Uploader: 冰冰噢噢噢噢
Description: Semantic segmentation based on entropy minimization, unsupervised domain adaptation

File list:
Dockerfile (584, 2019-09-19)
LICENSE (10780, 2019-09-19)
advent (0, 2019-09-19)
advent\__init__.py (0, 2019-09-19)
advent\dataset (0, 2019-09-19)
advent\dataset\__init__.py (0, 2019-09-19)
advent\dataset\base_dataset.py (1635, 2019-09-19)
advent\dataset\cityscapes.py (1649, 2019-09-19)
advent\dataset\cityscapes_list (0, 2019-09-19)
advent\dataset\cityscapes_list\info.json (1313, 2019-09-19)
advent\dataset\cityscapes_list\label.txt (25950, 2019-09-19)
advent\dataset\cityscapes_list\train.txt (139890, 2019-09-19)
advent\dataset\cityscapes_list\val.txt (23950, 2019-09-19)
advent\dataset\gta5.py (1251, 2019-09-19)
advent\dataset\gta5_list (0, 2019-09-19)
advent\dataset\gta5_list\all.txt (249660, 2019-09-19)
advent\domain_adaptation (0, 2019-09-19)
advent\domain_adaptation\__init__.py (0, 2019-09-19)
advent\domain_adaptation\config.py (4657, 2019-09-19)
advent\domain_adaptation\eval_UDA.py (5969, 2019-09-19)
advent\domain_adaptation\train_UDA.py (14235, 2019-09-19)
advent\model (0, 2019-09-19)
advent\model\__init__.py (0, 2019-09-19)
advent\model\deeplabv2.py (6508, 2019-09-19)
advent\model\discriminator.py (681, 2019-09-19)
advent\scripts (0, 2019-09-19)
advent\scripts\configs (0, 2019-09-19)
advent\scripts\configs\advent+minent_pretrained.yml (367, 2019-09-19)
advent\scripts\configs\advent.yml (259, 2019-09-19)
advent\scripts\configs\advent_pretrained.yml (177, 2019-09-19)
advent\scripts\configs\minent.yml (270, 2019-09-19)
advent\scripts\configs\minent_pretrained.yml (239, 2019-09-19)
advent\scripts\test.py (3336, 2019-09-19)
advent\scripts\train.py (6036, 2019-09-19)
advent\utils (0, 2019-09-19)
advent\utils\__init__.py (103, 2019-09-19)
advent\utils\func.py (1939, 2019-09-19)
... ...

# ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation

## Updates

- *09/2019*: Check out our new paper [DADA: Depth-aware Domain Adaptation in Semantic Segmentation](https://arxiv.org/abs/1904.01886) (accepted to ICCV 2019). With a depth-aware UDA framework, we leverage depth as privileged information at train time to boost target performance. [Pytorch](https://github.com/valeoai/DADA) code and pre-trained models are coming soon.

## Paper

![](./teaser.jpg)

[ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation](https://arxiv.org/abs/1811.12833)
[Tuan-Hung Vu](https://tuanhungvu.github.io/), [Himalaya Jain](https://himalayajain.github.io/), [Maxime Bucher](https://maximebucher.github.io/), [Matthieu Cord](http://webia.lip6.fr/~cord/), [Patrick Pérez](https://ptrckprz.github.io/)
valeo.ai, France
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (**Oral**)

If you find this code useful for your research, please cite our [paper](https://arxiv.org/abs/1811.12833):

```
@inproceedings{vu2018advent,
  title={ADVENT: Adversarial Entropy Minimization for Domain Adaptation in Semantic Segmentation},
  author={Vu, Tuan-Hung and Jain, Himalaya and Bucher, Maxime and Cord, Matthieu and P{\'e}rez, Patrick},
  booktitle={CVPR},
  year={2019}
}
```

## Abstract

Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real-world applications, there is indeed a large gap between data distributions in train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions.
To this end, we propose two novel, complementary methods using (i) an entropy loss and (ii) an adversarial loss respectively. We demonstrate state-of-the-art performance in semantic segmentation on two challenging *synthetic-2-real* set-ups and show that the approach can also be used for detection.

## Demo

[![](http://img.youtube.com/vi/Ihmz0yEqrq0/0.jpg)](http://www.youtube.com/watch?v=Ihmz0yEqrq0 "")

## Preparation

### Pre-requisites

* Python 3.7
* Pytorch >= 0.4.1
* CUDA 9.0 or higher

### Installation

0. Clone the repo:
```bash
$ git clone https://github.com/valeoai/ADVENT
$ cd ADVENT
```

1. Install OpenCV if you don't already have it:
```bash
$ conda install -c menpo opencv
```

2. Install this repository and the dependencies using pip:
```bash
$ pip install -e .
```
With this, you can edit the ADVENT code on the fly and import functions and classes of ADVENT in other projects as well.

3. Optional. To uninstall this package, run:
```bash
$ pip uninstall ADVENT
```

You can take a look at the [Dockerfile](./Dockerfile) if you are uncertain about the steps to install this project.

### Datasets

By default, the datasets are put in ```/data```. We use symlinks to hook the ADVENT codebase to the datasets. An alternative option is to explicitly specify the parameters ```DATA_DIRECTORY_SOURCE``` and ```DATA_DIRECTORY_TARGET``` in the YML configuration files.

* **GTA5**: Please follow the instructions [here](https://download.visinf.tu-darmstadt.de/data/from_games/) to download images and semantic segmentation annotations. The GTA5 dataset directory should have this basic structure:
```bash
/data/GTA5/           % GTA dataset root
/data/GTA5/images/    % GTA images
/data/GTA5/labels/    % Semantic segmentation labels
...
```

* **Cityscapes**: Please follow the instructions in [Cityscape](https://www.cityscapes-dataset.com/) to download the images and validation ground-truths.
The Cityscapes dataset directory should have this basic structure:
```bash
/data/Cityscapes/                 % Cityscapes dataset root
/data/Cityscapes/leftImg8bit      % Cityscapes images
/data/Cityscapes/leftImg8bit/val
/data/Cityscapes/gtFine           % Semantic segmentation labels
/data/Cityscapes/gtFine/val
...
```

### Pre-trained models

Pre-trained models can be downloaded [here](https://github.com/valeoai/ADVENT/releases) and put in ```/pretrained_models```.

## Running the code

For evaluation, execute:
```bash
$ cd /advent/scripts
$ python test.py --cfg ./configs/advent_pretrained.yml
$ python test.py --cfg ./configs/minent_pretrained.yml
$ python test.py --cfg ./configs/advent+minent_pretrained.yml
```

### Training

For the experiments done in the paper, we used pytorch 0.4.1 and CUDA 9.0. To ensure reproduction, the random seed has been fixed in the code. Still, you may need to train a few times to reach comparable performance.

By default, logs and snapshots are stored in ```/experiments``` with this structure:
```bash
/experiments/logs
/experiments/snapshots
```

To train AdvEnt:
```bash
$ cd /advent/scripts
$ python train.py --cfg ./configs/advent.yml
$ python train.py --cfg ./configs/advent.yml --tensorboard   % using tensorboard
```

To train MinEnt:
```bash
$ python train.py --cfg ./configs/minent.yml
$ python train.py --cfg ./configs/minent.yml --tensorboard   % using tensorboard
```

### Testing

To test AdvEnt:
```bash
$ cd /advent/scripts
$ python test.py --cfg ./configs/advent.yml
```

To test MinEnt:
```bash
$ python test.py --cfg ./configs/minent.yml
```

## Acknowledgements

This codebase is heavily borrowed from [AdaptSegNet](https://github.com/wasidennis/AdaptSegNet) and [Pytorch-Deeplab](https://github.com/speedinghzl/Pytorch-Deeplab).

## License

ADVENT is released under the [Apache 2.0 license](./LICENSE).
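### Note: the entropy objective in a nutshell

Both MinEnt and AdvEnt build on the per-pixel Shannon entropy of the network's softmax predictions. The following is a minimal NumPy sketch of the normalized entropy map and the entropy-minimization objective; it is an illustration only, not the repo's actual PyTorch implementation (function names `entropy_map` and `minent_loss` are made up here).

```python
import numpy as np

def entropy_map(softmax_probs, eps=1e-12):
    """Per-pixel Shannon entropy of softmax predictions.

    softmax_probs: array of shape (C, H, W) whose channels sum to 1 at
    each pixel. Returns an (H, W) map normalized to [0, 1] by log(C).
    """
    num_classes = softmax_probs.shape[0]
    # -sum_c p_c * log(p_c), computed per pixel over the class axis
    ent = -np.sum(softmax_probs * np.log(softmax_probs + eps), axis=0)
    return ent / np.log(num_classes)

def minent_loss(softmax_probs):
    """Entropy-minimization objective: mean normalized entropy over pixels."""
    return float(entropy_map(softmax_probs).mean())

# Confident (near one-hot) predictions yield a loss near 0;
# maximally uncertain (uniform) predictions yield a loss near 1.
uniform = np.full((19, 4, 4), 1.0 / 19)   # 19 classes, as in Cityscapes
print(minent_loss(uniform))
```

MinEnt minimizes this loss directly on unlabeled target images, while AdvEnt instead feeds the per-pixel entropy (weighted self-information) maps to a discriminator that is trained to tell source maps from target maps.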
