FAZ-Segmentation
Category: Graphics and Image Processing
Development tool: Jupyter Notebook
Upload date: 2022-11-22 08:14:21
Uploader: sh-1993
Description: Building a U-Net architecture for Foveal Avascular Zone extraction.
# FAZ-Segmentation
Combine Hessian Filter and UNet for Foveal Avascular Zone Extraction
![picture](https://github.com/vinhnguyen21/FAZ-Segmentation/blob/master/pictures/github_FAZ.png)
# Docker Installation for Flask App
1. Build and run the Docker container on port 2001:
```
$ ./docker-build.sh
```
If you get a permission error, make the script executable first:
```
$ chmod u+x ./docker-build.sh
```
2. The host `images` directory is already mounted to `/images` inside the container, so to test, prepare the image as follows:
```
├── images
|   ├── raw
|   |   ├── 1.tif
|   ├── predict
|   |   ├── 1.png
```
* Put the raw image in /images/raw
* In Postman:
  * URL: http://localhost:2001/faz/predict
  * Method: GET
  * Params:
    * Key: id
    * Value: name of the image, e.g. 1.png
* The model's prediction is written to /images/predict
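The request above can also be scripted outside Postman. The sketch below only builds the GET URL from the host, port, and `id` parameter described above; the actual call is left commented out because it needs the running container:

```python
from urllib.parse import urlencode

BASE_URL = "http://localhost:2001"

def build_predict_url(image_name: str) -> str:
    """Build the GET URL for the /faz/predict endpoint."""
    query = urlencode({"id": image_name})
    return f"{BASE_URL}/faz/predict?{query}"

# To actually send the request (container must be running):
# import urllib.request
# with urllib.request.urlopen(build_predict_url("1.png")) as resp:
#     print(resp.read())

print(build_predict_url("1.png"))
```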
![picture](https://github.com/vinhnguyen21/FAZ-Segmentation/blob/master/pictures/output.png)
# Training process
## Prepare dataset folder
```
├── train
|   ├── raw
|   |   ├── image1.tif
|   |   ├── ...
|   ├── mask
|   |   ├── image1.png
|   |   ├── ...
├── valid
|   ├── raw
|   |   ├── image1.png
|   |   ├── ...
|   ├── mask
|   |   ├── image1.png
|   |   ├── ...
├── test
|   ├── raw
|   |   ├── image1.tif
|   |   ├── ...
|   ├── mask
|   |   ├── image1.png
|   |   ├── ...
```
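A minimal sketch to create this layout with the standard library; the root directory name `dataset` is a placeholder for wherever your data lives:

```python
from pathlib import Path

# Hypothetical dataset root; point this at your own data location.
ROOT = Path("dataset")

# Each split gets a raw/ folder (input scans) and a mask/ folder (ground truth).
for split in ("train", "valid", "test"):
    for sub in ("raw", "mask"):
        (ROOT / split / sub).mkdir(parents=True, exist_ok=True)
```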
## Setup Environment
Run the following commands to create a virtual environment and install the dependencies:
```
$ conda create -n name_environment python=3.6
$ conda activate name_environment
$ pip install -r requirements-2.txt
```
To train the model, run:
```
$ python train.py
```
Training is configured by `train_config.json`, located in the `config` folder. Adjust the following parameters in this JSON file before training:
* net_type: name of the pretrained model you want to train. Supported models:
efficientnet_b0, efficientnet_b1, efficientnet_b2, efficientnet_b3, efficientnet_b4, efficientnet_b5, Se_resnext50, Se_resnext101, Se_resnet50, Se_resnet101, Se_resnet152, Resnet18, Resnet34, Resnet50, Resnet101
* pretrained: boolean, using pretrained weights from ImageNet
* weight_path: path to the weights of a previously trained model
* train_folder: path of the raw folder of the training dataset
example: /home/vinhng/OCTA/preprocess_OCTA/train/raw
* valid_folder: path of the raw folder of the validation dataset
example: /home/vinhng/OCTA/preprocess_OCTA/valid/raw
* test_folder: path of the raw folder of the test dataset
example: /home/vinhng/OCTA/preprocess_OCTA/test/raw
* classes: number of classes. Default = 1
* model_path: directory where trained models are saved
* size: size of the input image and mask
* thresh_hold: threshold for converting the grayscale mask to a binary mask
* epoch: number of training epochs
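Putting the parameters together, `config/train_config.json` might look like the sketch below. Only the key names come from the list above; every value is an illustrative placeholder, not the repository's actual defaults:

```python
import json

# Illustrative train_config.json contents; all values are placeholders.
config = {
    "net_type": "Se_resnext50",
    "pretrained": True,
    "weight_path": "",
    "train_folder": "/path/to/train/raw",
    "valid_folder": "/path/to/valid/raw",
    "test_folder": "/path/to/test/raw",
    "classes": 1,
    "model_path": "./models",
    "size": 256,
    "thresh_hold": 0.5,
    "epoch": 100,
}

with open("train_config.json", "w") as f:
    json.dump(config, f, indent=4)
```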
# Testing process
Download the model weight:

https://storage.googleapis.com/v-project/Se_resnext50-920eef84.pth

Then move the weight file into the `./models` folder.
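These two steps can be combined in a small helper. The repository ships a `download_model.py`, but its contents are not shown here, so this is only a sketch using the standard library; the download is skipped if the file already exists:

```python
import urllib.request
from pathlib import Path

WEIGHT_URL = "https://storage.googleapis.com/v-project/Se_resnext50-920eef84.pth"

def weight_destination(dest_dir: str = "./models") -> Path:
    """Return the local path the weight file will be saved to."""
    return Path(dest_dir) / WEIGHT_URL.rsplit("/", 1)[-1]

def download_weight(dest_dir: str = "./models") -> Path:
    """Download the pretrained weight into dest_dir, skipping if present."""
    target = weight_destination(dest_dir)
    target.parent.mkdir(parents=True, exist_ok=True)
    if not target.exists():
        urllib.request.urlretrieve(WEIGHT_URL, target)
    return target
```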
```
$ python test.py --path_images <raw_folder> --model_type <model_name> --weight <weight_path>
```
* path_images: directory of the raw folder in the test set (see the dataset preparation section above)
* model_type: name of the pretrained model you want to test. Default: Se_resnext50.
The list of supported models is given in the training section above.
* weight: path to the weight file.