Uploaded by p3_514658 · 2.4MB · zip · VIP-only resource · uploaded 2022-06-15 17:32
Human-Segmentation-main.zip
  • Human-Segmentation-main/
  • train.py (3.4KB)
  • base/
  • base_trainer.py (7.7KB)
  • __init__.py (343B)
  • base_model.py (4.6KB)
  • base_data_loader.py (2KB)
  • base_inference.py (5KB)
  • requirements.txt (604B)
  • dataset/
  • train_mask.txt (803.6KB)
  • valid_mask.txt (89.3KB)
  • create_pairs.py (4KB)
  • inference_webcam.py (1.4KB)
  • README.md (4.7KB)
  • measure_model.py (2.7KB)
  • inference_video.py (4.7KB)
  • backgrounds/
  • 2.jpg (18.8KB)
  • 6.jpg (435.6KB)
  • 1.png (203.8KB)
  • 4.jpg (923.2KB)
  • 5.jpg (5.6KB)
  • 7.jpg (822.6KB)
  • 3.jpg (13.1KB)
Description
# Human-Segmentation-PyTorch

Human segmentation [models](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch#supported-networks), [training](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch#training)/[inference](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch#inference) code, and [trained weights](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch#benchmark), implemented in PyTorch.

## Supported networks

* [UNet](https://arxiv.org/abs/1505.04597): backbones [MobileNetV2](https://arxiv.org/abs/1801.04381) (all alphas and expansions), [ResNetV1](https://arxiv.org/abs/1512.03385) (all num_layers)
* [DeepLab3+](https://arxiv.org/abs/1802.02611): backbones [ResNetV1](https://arxiv.org/abs/1512.03385) (num_layers=18,34,50,101), [VGG16_bn](https://arxiv.org/abs/1409.1556)
* [BiSeNet](https://arxiv.org/abs/1808.00897): backbones [ResNetV1](https://arxiv.org/abs/1512.03385) (num_layers=18)
* [PSPNet](https://arxiv.org/abs/1612.01105): backbones [ResNetV1](https://arxiv.org/abs/1512.03385) (num_layers=18,34,50,101)
* [ICNet](https://arxiv.org/abs/1704.08545): backbones [ResNetV1](https://arxiv.org/abs/1512.03385) (num_layers=18,34,50,101)

To assess the architecture, memory, forward time (on either CPU or GPU), number of parameters, and number of FLOPs of a network, use this command:

```
python measure_model.py
```

## Dataset

**Portrait Segmentation (Human/Background)**

* [Automatic Portrait Segmentation for Image Stylization](http://xiaoyongshen.me/webpage_portrait/index.html): 1800 images
* [Supervisely Person](https://hackernoon.com/releasing-supervisely-person-dataset-for-teaching-machines-to-segment-humans-1f1fc1f28469): 5711 images

## Setup

* Python 3.6.x is used in this repository.
* Clone the repository:

```
git clone --recursive https://github.com/AntiAegis/Human-Segmentation-PyTorch.git
cd Human-Segmentation-PyTorch
git submodule sync
git submodule update --init --recursive
```

* To install the required packages, use pip:

```
workon humanseg
pip install -r requirements.txt
pip install -e models/pytorch-image-models
```

## Training

* To train a network from scratch, for example DeepLab3+, use this command:

```
python train.py --config config/config_DeepLab.json --device 0
```

where *config/config_DeepLab.json* is the configuration file, which contains the network, dataloader, optimizer, loss, metric, and visualization configurations.

* To resume training the network from a checkpoint, use this command:

```
python train.py --config config/config_DeepLab.json --device 0 --resume path_to_checkpoint/model_best.pth
```

* One can open TensorBoard to monitor training progress by enabling the visualization mode in the configuration file.

## Inference

There are two modes of inference: [video](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch/blob/master/inference_video.py) and [webcam](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch/blob/master/inference_webcam.py).

```
python inference_video.py --watch --use_cuda --checkpoint path_to_checkpoint/model_best.pth
python inference_webcam.py --use_cuda --checkpoint path_to_checkpoint/model_best.pth
```

## Benchmark

* Networks are trained on a dataset combined from the two datasets mentioned above. There are [6627 training](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch/blob/master/dataset/train_mask.txt) and [737 testing](https://github.com/AntiAegis/Semantic-Segmentation-PyTorch/blob/master/dataset/valid_mask.txt) images.
* The input size of the model is set to 320.
* The CPU and GPU times are the inference time averaged over 10 runs (with 10 additional warm-up runs before measuring) with batch size 1.
* The mIoU is measured on the testing subset (737 images) of the combined dataset.
* Hardware configuration for benchmarking:

```
CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz
GPU: GeForce GTX 1050 Mobile, CUDA 9.0
```

| Model | Parameters | FLOPs | CPU time | GPU time | mIoU |
|:-:|:-:|:-:|:-:|:-:|:-:|
| [UNet_MobileNetV2](https://drive.google.com/file/d/17GZLCi_FHhWo4E4wPobbLAQdBZrlqVnF/view?usp=sharing) (alpha=1.0, expansion=6) | 4.7M | 1.3G | 167ms | 17ms | 91.37% |
| [UNet_ResNet18](https://drive.google.com/file/d/14QxasSCcL_ij7NHR7Fshx5fi5Sc9MleD/view?usp=sharing) | 16.6M | 9.1G | 165ms | 21ms | 90.09% |
| [DeepLab3+_ResNet18](https://drive.google.com/file/d/1WME_m8CCDupM6tLX6yPt-iA6gpmwQ7Sc/view?usp=sharing) | 16.6M | 9.1G | 133ms | 28ms | 91.21% |
| [BiSeNet_ResNet18](https://drive.google.com/file/d/1Lm6O2-_lnQEjMM5lQRcIAbtA9YQUGQuy/view?usp=sharing) | 11.9M | 4.7G | 88ms | 10ms | 87.02% |
| PSPNet_ResNet18 | 12.6M | 20.7G | 235ms | 666ms | --- |
| [ICNet_ResNet18](https://drive.google.com/file/d/1Rg8KSU89oQoWW37gjipFSsg2w_X_lefQ/view?usp=sharing) | 11.6M | 2.0G | 48ms | 55ms | 86.27% |
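The mIoU in the table above is the per-class IoU averaged over the two classes (background and human). As a minimal, hedged sketch — this is not the repository's actual metric code, and the function name `binary_miou` is my own — binary-mask mIoU can be computed like this:

```python
def binary_miou(pred, gt):
    """Mean IoU over the two classes of a human/background task.

    pred, gt: 2-D lists of 0 (background) / 1 (human) labels of equal shape.
    """
    flat_p = [p for row in pred for p in row]
    flat_g = [g for row in gt for g in row]
    ious = []
    for cls in (0, 1):
        # Intersection: pixels where both masks agree on this class.
        inter = sum(1 for p, g in zip(flat_p, flat_g) if p == cls and g == cls)
        # Union: pixels where either mask contains this class.
        union = sum(1 for p, g in zip(flat_p, flat_g) if p == cls or g == cls)
        # A class absent from both masks is conventionally scored as perfect.
        ious.append(inter / union if union else 1.0)
    return sum(ious) / len(ious)


if __name__ == "__main__":
    pred = [[1, 1], [0, 0]]
    gt = [[1, 0], [0, 0]]
    # Class 1: inter=1, union=2 -> 0.5; class 0: inter=2, union=3 -> 2/3.
    print(binary_miou(pred, gt))  # 7/12 ≈ 0.5833
```

In the benchmark this average would then be taken over all 737 test images; the repository's own implementation may also accumulate intersections and unions dataset-wide before dividing, which weighs large and small masks differently.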