bf_intorg_YOLOv8_dev

Category: Artificial Intelligence / Neural Networks / Deep Learning
Development tool: Jupyter Notebook
File size: 0KB
Downloads: 0
Upload date: 2024-01-19 13:33:15
Uploader: sh-1993
Description: bf intorg YOLOv8 dev

File list:
YOLOV8-data/results/
configs/
images/
1_czi_to_tiff_and_restructure.ipynb
2_binary_to_coco_V3.0.py
3_coco_to_yolov8_polygon_V3.0.py
4_display_yolo_annotations.ipynb
5_training_YOLO_V8_bf_organoids_spheroids.ipynb
yolov8n-seg.pt
yolov8n.pt

Training a YOLOv8 model for detection of intestinal organoids in brightfield images

[![License](https://img.shields.io/pypi/l/napari-accelerated-pixel-and-object-classification.svg?color=green)](https://github.com/adiezsanchez/intestinal_organoid_brightfield_analysis/blob/main/LICENSE) [![Development Status](https://img.shields.io/pypi/status/scikit-image.svg)](https://en.wikipedia.org/wiki/Software_release_life_cycle#Alpha)

![workflow](./images/workflow.png)

The goal of this repository is to obtain a custom YOLOv8 model that segments and classifies intestinal organoids and spheroids in brightfield images acquired with a widefield microscope. The resulting model will be used in the [intestinal_organoid_brightfield_analysis](https://github.com/adiezsanchez/intestinal_organoid_brightfield_analysis) repository.

As a starting point, the ground truth annotations for each raw image (.czi) are stored in a .tiff file, where each "channel" contains a binary mask defining the instances of one class. The training dataset can be downloaded [here](https://dropbox.com). Our dataset contains 3 classes of intestinal organoids: dead (or overgrown) organoids, differentiated (developed) organoids, and undifferentiated organoids (aka spheroids). The model will detect, segment, and classify each of those instances.
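To make the annotation layout concrete, here is a minimal sketch of how such a ground-truth .tiff could be inspected. It assumes `tifffile` and `scipy` are available (both are common in napari environments); the file path and class order are hypothetical, not taken from the repository.

```python
# Sketch: inspect one ground-truth .tiff where each "channel" is a
# binary mask for one class. The path and class order are assumptions.
import tifffile
from scipy import ndimage

CLASS_NAMES = ["dead", "differentiated", "undifferentiated"]  # assumed order

masks = tifffile.imread("YOLOV8-data/annotations/example_organoids.tiff")
# Expected shape: (n_classes, height, width), one binary mask per class.
print("mask stack shape:", masks.shape)

for name, channel in zip(CLASS_NAMES, masks):
    # Label connected components in the binary mask to count instances.
    labeled, n_instances = ndimage.label(channel > 0)
    print(f"{name}: {n_instances} instances")
```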

(Figure: examples of the three organoid classes)

In order to train YOLOv8, the initial binary masks defining the instances of each class must first be converted into COCO polygon .json files and then into YOLO-style polygon .txt files. Running the notebooks and .py files in sequential order (1 to 5) performs this conversion; a condensed sketch of the polygon step is shown below.
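The sketch below condenses what scripts 2 and 3 accomplish: extracting polygon outlines from a binary mask with OpenCV and writing them as YOLO segmentation labels (one normalized polygon per line). It is an illustration under assumed inputs, not the repository's exact code, and the helper name and output path are hypothetical.

```python
# Sketch of the mask -> polygon conversion: trace contours in a binary
# mask and emit YOLO-style segmentation label lines.
import cv2
import numpy as np

def mask_to_yolo_lines(mask: np.ndarray, class_id: int) -> list:
    """Convert one binary mask (H, W) into YOLO segmentation label lines."""
    h, w = mask.shape
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    lines = []
    for contour in contours:
        if len(contour) < 3:  # need at least 3 points to form a polygon
            continue
        # YOLO expects x normalized by image width and y by image height.
        poly = (contour.reshape(-1, 2) / [w, h]).flatten()
        coords = " ".join(f"{p:.6f}" for p in poly)
        lines.append(f"{class_id} {coords}")
    return lines

# Hypothetical usage: write the labels for class 0 of one image.
# with open("YOLOV8-data/labels/example.txt", "w") as f:
#     f.write("\n".join(mask_to_yolo_lines(mask, class_id=0)))
```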

Instructions

1. In order to run these Jupyter notebooks and .py scripts, you will need to familiarize yourself with the use of Python virtual environments using Mamba. See instructions [here](https://biapol.github.io/blog/mara_lampert/getting_started_with_mambaforge_and_python/readme.html).

2. Then you will need to create a couple of virtual environments: one to preprocess all the data (including napari and opencv), and another one to train the YOLOv8 model.

3. To run Jupyter notebooks and .py files 1 to 4 we will be using the napari-opencv environment, which you can create from the YAML file found under the configs folder:

   ```bash
   mamba env create -f napari-opencv.yml --name napari-opencv
   ```

4. Finally, we will be using a different environment to contain the ultralytics packages needed to train our own YOLOv8 segmentation model (notebook 5). This relies on PyTorch, and if you want to leverage CUDA GPU acceleration you'll need to perform a few checks:

   ```bash
   # Check your CUDA Toolkit version; in my case it is version 12.1, as shown in the output below
   nvcc --version
   ```

   ![cudav](./images/cuda_version.png)

5. If your CUDA version is 12.1 and your cuDNN version is 8.0, you can use the following command to create a working yolov8-GPU environment (a minimal training sketch for notebook 5 follows this list):

   ```bash
   mamba env create -f yolov8-GPU.yml --name yolov8-GPU
   ```

6. Otherwise you'll need to do a bit of Googling.
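As referenced in step 5, here is a minimal sketch of what the training step in notebook 5 involves: verifying GPU access from the yolov8-GPU environment and fine-tuning the pretrained `yolov8n-seg.pt` checkpoint listed in the file list. The dataset YAML path and hyperparameters are assumptions, not the repository's actual settings.

```python
# Sketch: check CUDA visibility, then fine-tune the pretrained
# segmentation checkpoint with ultralytics.
import torch
from ultralytics import YOLO

# Confirm the yolov8-GPU environment actually sees your CUDA device.
print("CUDA available:", torch.cuda.is_available())

model = YOLO("yolov8n-seg.pt")  # pretrained segmentation checkpoint
model.train(
    data="configs/organoids.yaml",  # hypothetical dataset YAML (paths + 3 class names)
    epochs=100,                     # assumed values, tune for your dataset
    imgsz=640,
    device=0,                       # first CUDA GPU; use device="cpu" without GPU
)
```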
