c2GAN: Unsupervised content-preserving transformation for optical microscopy

  • h5_931783
    Author
  • 5.4MB
    File size
  • zip
    File format
  • 0
    Favorites
  • VIP-only
    Resource type
  • 0
    Downloads
  • 2022-05-21 07:35
    Upload date
c2GAN: unsupervised content-preserving transformation for optical microscopy. Overview: Our work is based on cycle-consistent generative adversarial networks (CycleGANs), which make unsupervised training of CNNs possible and are really illuminating. To correct mapping biases in the microscopy scenario and provide a robust unsupervised learning method for deep-learning-based computational microscopy, we propose the content-preserving CycleGAN (c2GAN). By imposing an additional saliency constraint, c2GAN can complete pixel-wise regression tasks, including image restoration (1-channel to 1-channel), whole-slide histopathological coloration (1-channel to 3-channel), and virtual fluorescent labeling (13-channel to 3-channel). Most importantly, c2GAN needs no pre-aligned training pairs, so the laborious work of image acquisition, labeling, and registration can be spared. We release our source code here and hope that our work is reproducible and offers new possibilities for unsupervised image-to-image transformation in the field of microscopy. For more information and technical support, please follow our updates. For more details, please refer to the companion paper where this method was first described. A readable c2GAN implementation aiming at unsupervised domain mapping in optical microscopy is provided in this repository. Next, we will guide you step by step through implementing our method.
c2GAN-master.zip
  • c2GAN-master
  • checkpoints
  • README.md
    15B
  • images
  • 8_GT.png
    5.1KB
  • 3_input.png
    11.3KB
  • 4_input.png
    9.1KB
  • 16_input.png
    10.1KB
  • 2_GT.png
    176.6KB
  • 14_GT.png
    12.3KB
  • 5_input.png
    12.7KB
  • 16_CCGAN.png
    19.8KB
  • 2_GT1.png
    68.2KB
  • 3_CCGAN.png
    9.1KB
  • logo2.jpg
    65.9KB
  • 5_CCGAN.png
    5.9KB
  • 5_GT.png
    2.5KB
  • liver_wholeslide_CCGAN.png
    42.9KB
  • bw_input.png
    88.9KB
  • 14_input.png
    20.4KB
  • 6_GT.png
    8.3KB
  • 4_CCGAN.png
    3.3KB
  • 4_GT.png
    3.4KB
  • 1_input.png
    394.2KB
  • 15_GT.png
    13KB
  • 14_CCGAN.png
    12.1KB
  • liver_input.png
    14.8KB
  • 2_CCGAN1.png
    102.4KB
  • 2_CCGAN.png
    168.3KB
  • 3_GT.png
    9.5KB
  • 15_input.png
    21.3KB
  • logo.jpg
    1004.5KB
  • 8_CCGAN.png
    5.1KB
  • 7_GT.png
    6KB
  • 2_input1.png
    94.7KB
  • 16_GT.png
    8.5KB
  • 7_input.png
    14.8KB
  • 1_GT.png
    440.3KB
  • 7_CCGAN.png
    6KB
  • 6_input.png
    42.6KB
  • 13_input.png
    83.1KB
  • logo.png
    33.5KB
  • bw_GT.png
    648.9KB
  • 2_input.png
    245.9KB
  • 15_CCGAN.png
    12.5KB
  • 8_input.png
    12.6KB
  • liver_GT.png
    17.8KB
  • liver_CCGAN.png
    17.7KB
  • liver_wholeslide_input.png
    35.2KB
  • 13_GT.png
    465KB
  • schematic.jpg
    50.6KB
  • liver_wholeslide_GT.png
    44.3KB
  • logo3.jpg
    62.6KB
  • 1_CCGAN.png
    440.8KB
  • 6_CCGAN.png
    8KB
  • bw_CCGAN.png
    644.4KB
  • 13_CCGAN.png
    410.9KB
  • cycleGAN_utils
  • inference.py
    1.7KB
  • export_graph.py
    2.4KB
  • ops.py
    7.4KB
  • utils.py
    1.4KB
  • reader.py
    3.5KB
  • model.py
    8.5KB
  • generator.py
    2.3KB
  • discriminator.py
    1.6KB
  • data
  • training_images
  • training_images.txt
    26B
  • fake_images
  • fake_images.txt
    46B
  • inferred_images
  • inferred_images.txt
    71B
  • README.md
    1.1KB
  • pre-processsing
  • make_train_13c.m
    1.8KB
  • folder_split_into_trainA_and_trainB.m
    1.1KB
  • make_train_1c.m
    1.4KB
  • README.md
    46B
  • stitch_results_rgb.m
    2.3KB
  • make_test_13c.m
    1.6KB
  • make_test_1c.m
    1.6KB
  • stitch_results_grey.m
    1.3KB
  • main.py
    9KB
  • LICENSE
    34.3KB
  • README.md
    10.5KB
  • preprocess.py
    4.9KB
Description
# **c<sup>2</sup>GAN**: Unsupervised content-preserving transformation for optical microscopy

[![Platform](https://img.shields.io/badge/Platform%20-Ubuntu%2016.04-FF6347)](https://ubuntu.com/) [![Python](https://img.shields.io/badge/Python-v3.6-blue)](https://www.python.org/) [![Framework](https://img.shields.io/badge/Framework-Tensorflow%201.14.0-orange)](https://www.tensorflow.org/) [![License](https://img.shields.io/badge/License-GPL--3.0-00CD66)](https://opensource.org/licenses/GPL-3.0) [![Maintenance](https://img.shields.io/badge/Maintenance-On-blueviolet)](https://github.com/Xinyang-Li/c2GAN/graphs/contributors) [![DOI](https://img.shields.io/badge/DOI-10.1101%2F848077-green)](https://www.biorxiv.org/content/10.1101/848077v2) [![Commits](https://img.shields.io/github/commit-activity/m/Xinyang-Li/c2GAN?color=informational)](https://github.com/Xinyang-Li/c2GAN/graphs/commit-activity) ![Size](https://img.shields.io/github/repo-size/Xinyang-Li/c2GAN?color=red) [![Issue](https://img.shields.io/github/issues/Xinyang-Li/c2GAN)](https://github.com/Xinyang-Li/c2GAN/issues) [![Stars](https://img.shields.io/github/stars/Xinyang-Li/c2GAN?style=social)](https://img.shields.io/github/stars/Xinyang-Li/c2GAN?style=social)

## Contents

- [Overview](#overview)
- [Repo Structure](#repo-structure)
- [System Environment](#system-environment)
- [Demo](#demo)
- [Results](#results)
- [License](./LICENSE)
- [Issues](https://github.com/Xinyang-Li/c2GAN/issues)
- [Citation](#citation)

# Overview

Our work is based on cycle-consistent generative adversarial networks (**CycleGANs**) [[paper]](http://openaccess.thecvf.com/content_iccv_2017/html/Zhu_Unpaired_Image-To-Image_Translation_ICCV_2017_paper.html), which make unsupervised training of CNNs possible and are really illuminating. To correct mapping biases in the microscopy scenario and provide a robust unsupervised learning method for deep-learning-based computational microscopy, we propose the content-preserving CycleGAN (**c<sup>2</sup>GAN**). By imposing an additional **saliency constraint**, c<sup>2</sup>GAN can complete pixel-wise regression tasks including image restoration (1-channel to 1-channel), whole-slide histopathological coloration (1-channel to 3-channel), and virtual fluorescent labeling (13-channel to 3-channel), *etc*.

Most importantly, c<sup>2</sup>GAN needs no pre-aligned training pairs, so the laborious work of image acquisition, labeling, and registration can be spared. We release our source code here and hope that our work is reproducible and offers new possibilities for unsupervised image-to-image transformation in the field of microscopy. For more information and technical support, please follow our updates. For more details, please refer to the companion paper where this method was first described. [[paper]](https://www.biorxiv.org/content/10.1101/848077v1.abstract)

A readable **Python** implementation of c<sup>2</sup>GAN aiming at unsupervised domain mapping in optical microscopy is provided in this repository. Next, we will guide you step by step through implementing our method.
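To make the role of the saliency constraint more concrete, below is a minimal, hedged Python/TensorFlow sketch of how such a term can sit on top of the standard CycleGAN generator objective. It is *not* the exact formulation used in `model.py`: the soft thresholding mask, the loss weights, and the function names are illustrative assumptions.

```python
# Illustrative sketch only: the mask construction, loss weights, and names below
# are assumptions, not the exact saliency constraint implemented in model.py.
import tensorflow as tf  # the repo targets tensorflow-gpu 1.14.0


def saliency_mask(img, threshold=0.5):
    """Soft mask of salient (bright) pixels for an image scaled to [0, 1].

    Channels are averaged first so domains with different channel counts
    (e.g. 13-channel inputs vs. 3-channel outputs) remain comparable.
    """
    intensity = tf.reduce_mean(img, axis=-1, keepdims=True)
    return tf.sigmoid(50.0 * (intensity - threshold))  # smooth surrogate for thresholding


def saliency_loss(real_x, fake_y, real_y, fake_x):
    """Penalize changes in salient content between each input and its translation."""
    loss_x2y = tf.reduce_mean(tf.abs(saliency_mask(real_x) - saliency_mask(fake_y)))
    loss_y2x = tf.reduce_mean(tf.abs(saliency_mask(real_y) - saliency_mask(fake_x)))
    return loss_x2y + loss_y2x


def generator_objective(adv_loss_G, adv_loss_F, cycle_loss,
                        real_x, fake_y, real_y, fake_x,
                        lambda_cycle=10.0, lambda_saliency=5.0):
    """CycleGAN generator losses plus the additional content-preserving term."""
    return (adv_loss_G + adv_loss_F
            + lambda_cycle * cycle_loss
            + lambda_saliency * saliency_loss(real_x, fake_y, real_y, fake_x))
```

The key idea this sketch tries to convey is that the extra term compares a saliency map of each input with that of its translated counterpart, discouraging the generators from relocating or erasing structures even though no paired ground truth is available.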
# Repo Structure

```
|---checkpoints
|---|---project_name+time  #created by code#
|---|---|---meta
|---|---|---index
|---|---|---ckpt
|---cycleGAN_utils
|---|---discriminator.py
|---|---export_graph.py
|---|---generator.py
|---|---inference.py
|---|---model.py
|---|---ops.py
|---|---reader.py
|---|---utils.py
|---data
|---|---README.md
|---|---training_data
|---|---|---isotropic  #project_name#
|---|---|---|---trainA
|---|---|---|---trainB
|---|---|---|---testA
|---|---|---|---testB
|---|---fake_image
|---|---|---|---project_name+time
|---|---|---|---fake_x
|---|---|---|---fake_y
|---|---inferred_image
|---|---|---|---project_name+time
|---|---|---|---inferred_x
|---|---|---|---inferred_y
|---images
|---|---some_images_for_README
|---pre-processing
|---|---some_matlab_code(*.m)
|---LICENSE
|---README.md
|---main.py
```

# System Environment

* Ubuntu 16.04
* Python 3.6
* **tensorflow-gpu 1.14.0**
* NVIDIA GPU + CUDA 10.0

## Building environment

We recommend configuring a new environment named *c2gan* on your machine to avoid version conflicts between packages. The typical install time on a desktop computer with CUDA support is about 10 minutes. We assume that the *corresponding NVIDIA GPU support and CUDA 10.0* have already been installed on your machine.

* Check your CUDA version

```
$ cat /usr/local/cuda/version.txt
```

* Build the anaconda environment

```
$ conda create -n c2gan python=3.6
```

* Activate the *c2gan* environment and install tensorflow

```
$ source activate c2gan
$ conda install tensorflow-gpu=1.14.0
```

* Test whether the installation is successful

```
$ python
>>> import tensorflow as tf
>>> tf.__version__
>>> hello = tf.constant("Hello World, TensorFlow!")
>>> sess = tf.Session()
>>> print(sess.run(hello))
```

* Install necessary packages

```
$ conda install -c anaconda scipy
```

# Demo

## Data processing

* You can download some **data** or **checkpoints** for the demo code from [here](https://drive.google.com/open?id=1QPlLcTHlU58xo116KB1bd680EoMof_Wn).
* Transform your images from '*.tif*' to '*.png*' to use the universal I/O APIs in tensorflow, and then divide the dataset into a training set and a test set. Usually we use 65%~80% of the dataset as training data and 20%~35% as test data. Put images of domain A in the 'trainA' folder, images of domain B in the 'trainB' folder, images of domain A for testing in the 'testA' folder, and images of domain B for result evaluation in the 'testB' folder.

## Training

Encode the training data into tfrecords for fast data loading (a hedged sketch of what this encoding step typically does is shown below).

```
$ python preprocess.py --project 1_Isotropic_Liver --type train
```

or

```
$ python preprocess.py --project 1_Isotropic_Liver
```
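For readers unfamiliar with this step, here is a minimal, hedged sketch of what serializing a folder of PNG images into a tfrecords file can look like. The directory paths, the feature key name, and the function names are hypothetical assumptions for illustration; see `preprocess.py` for the repository's actual logic.

```python
# Illustrative sketch of encoding PNG images into a TFRecord file.
# The paths and the feature key 'image/encoded_image' are assumptions,
# not necessarily those used by preprocess.py.
import os
import tensorflow as tf  # TF 1.14-style API, matching the repo's environment


def folder_to_tfrecords(image_dir, output_path):
    """Serialize every *.png in image_dir into a single TFRecord file."""
    writer = tf.io.TFRecordWriter(output_path)
    for name in sorted(os.listdir(image_dir)):
        if not name.endswith(".png"):
            continue
        with open(os.path.join(image_dir, name), "rb") as f:
            encoded = f.read()  # keep the raw PNG bytes; decode later in the input pipeline
        example = tf.train.Example(features=tf.train.Features(feature={
            "image/encoded_image": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[encoded])),
        }))
        writer.write(example.SerializeToString())
    writer.close()


if __name__ == "__main__":
    # Hypothetical paths following the repo's data layout.
    folder_to_tfrecords("data/training_data/1_Isotropic_Liver/trainA",
                        "data/tfrecords/trainA.tfrecords")
```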
Start training.

```
$ python main.py
```

You can modify the default arguments through the command line, for example:

```
$ python main.py --project 1_Isotropic_Liver --image_size 128 --channels 1 --GPU 0 --epoch 100000 --type train
```

Here is the list of arguments:

```
--type: 'train or test, default: train'
--project: 'the name of project, default: denoise'
--image_size: 'image size, default: 256'
--batch_size: 'batch size, default: 1'
--load_model: 'folder of saved model, default: None'
--GPU: 'GPU for running code, default: 0'
--channels: 'the channels of input image, default: 3'
--epoch: 'number of training epoch, default: 5'
```

If you interrupt the training process and want to restart training from that point, you can load the former checkpoints like this:

```
$ python main.py --project 1_Isotropic_Liver --image_size 128 --channels 1 --GPU 0 --epoch 100000 --type train --load_model 20190922-2222
```

Tensorboard can be used to monitor the training progress and intermediate results.

```
$ tensorboard --logdir checkpoints/<project_name+time>
```

## Test the model

Encode the test data into tfrecords:

```
$ python preprocess.py --project 1_Isotropic_Liver --type test
```

We use the same code but different arguments for training and testing. You just need to load the pre-trained model or checkpoints.

```
$ python main.py --epoch 500 --project 1_Isotropic_Liver --channels 1 --image_size 128 --GPU 0 --type test --load_model 20190926-1619
```

Interpretation of the arguments above:

```
--epoch: 'the number of images in the test dataset'
--load_model: 'the name of the checkpoint folder; you had better name it as "YYYYMMDD-HHMM"'
```

You can find the inferred images in the inference result folder (`data/inferred_image`). The typical training time on a medium-sized training set is about 10 hours. Inference is fast, taking less than 50 milliseconds per image.

# Results

Some of our results are exhibited below. For more results and further analyses, please refer to the companion paper where this method was first described. [[paper]](https://www.biorxiv.org/content/10.1101/848077v1.abstract)

### Unsupervised whole-slide histopathological coloration