darknet

Category: Other
Development tool: C/C++
File size: 8118 KB
Downloads: 0
Upload date: 2020-12-01 09:38:51
Uploader: 我太帅了呵呵
Description: Darknet YOLO object detection, pure source code; the annotation tools and weight files are not included.

File list:
darknet\.circleci (0, 2020-11-24)
darknet\.circleci\config.yml (928, 2020-11-24)
darknet\.travis.yml (10798, 2020-11-24)
darknet\3rdparty (0, 2020-11-24)
darknet\3rdparty\pthreads (0, 2020-11-24)
darknet\3rdparty\pthreads\bin (0, 2020-11-24)
darknet\3rdparty\pthreads\bin\pthreadGC2.dll (185976, 2020-11-24)
darknet\3rdparty\pthreads\bin\pthreadVC2.dll (82944, 2020-11-24)
darknet\3rdparty\pthreads\include (0, 2020-11-24)
darknet\3rdparty\pthreads\include\pthread.h (43867, 2020-11-24)
darknet\3rdparty\pthreads\include\sched.h (5178, 2020-11-24)
darknet\3rdparty\pthreads\include\semaphore.h (4732, 2020-11-24)
darknet\3rdparty\pthreads\lib (0, 2020-11-24)
darknet\3rdparty\pthreads\lib\libpthreadGC2.a (93692, 2020-11-24)
darknet\3rdparty\pthreads\lib\pthreadVC2.lib (29738, 2020-11-24)
darknet\3rdparty\stb (0, 2020-11-24)
darknet\3rdparty\stb\include (0, 2020-11-24)
darknet\3rdparty\stb\include\stb_image.h (258048, 2020-11-24)
darknet\3rdparty\stb\include\stb_image_write.h (61464, 2020-11-24)
darknet\build.ps1 (8682, 2020-11-24)
darknet\build.sh (2101, 2020-11-24)
darknet\build (0, 2020-11-24)
darknet\build\darknet (0, 2020-11-24)
darknet\build\darknet\darknet.sln (1311, 2020-11-24)
darknet\build\darknet\darknet.vcxproj (17562, 2020-11-24)
darknet\build\darknet\darknet_no_gpu.sln (1309, 2020-11-24)
darknet\build\darknet\darknet_no_gpu.vcxproj (17709, 2020-11-24)
darknet\build\darknet\x64 (0, 2020-11-24)
darknet\build\darknet\x64\backup (0, 2020-11-24)
darknet\build\darknet\x64\backup\tmp.txt (0, 2020-11-24)
darknet\build\darknet\x64\calc_anchors.cmd (271, 2020-11-24)
darknet\build\darknet\x64\calc_mAP.cmd (262, 2020-11-24)
darknet\build\darknet\x64\calc_mAP_coco.cmd (449, 2020-11-24)
darknet\build\darknet\x64\calc_mAP_voc_py.cmd (591, 2020-11-24)
darknet\build\darknet\x64\cfg (0, 2020-11-24)
darknet\build\darknet\x64\cfg\alexnet.cfg (974, 2020-11-24)
darknet\build\darknet\x64\cfg\cd53paspp-gamma.cfg (13430, 2020-11-24)
darknet\build\darknet\x64\cfg\cifar.cfg (1376, 2020-11-24)
darknet\build\darknet\x64\cfg\cifar.test.cfg (1293, 2020-11-24)
... ...

# Yolo v4, v3 and v2 for Windows and Linux

## (neural networks for object detection)

Paper Yolo v4: https://arxiv.org/abs/2004.10934

More details: [medium link](https://medium.com/@alexeyab84/yolov4-the-most-accurate-real-time-neural-network-on-ms-coco-dataset-73adfd3602fe?source=friends_link&sk=6039748846bbcf1d960c3061542591d7)

Manual: https://github.com/AlexeyAB/darknet/wiki

Discussion:

- [Reddit](https://www.reddit.com/r/MachineLearning/comments/gydxzd/p_yolov4_the_most_accurate_realtime_neural/)
- [Google-groups](https://groups.google.com/forum/#!forum/darknet)
- [Discord](https://discord.gg/zSq8rtW)

About Darknet framework: http://pjreddie.com/darknet/

[![Darknet Continuous Integration](https://github.com/AlexeyAB/darknet/workflows/Darknet%20Continuous%20Integration/badge.svg)](https://github.com/AlexeyAB/darknet/actions?query=workflow%3A%22Darknet+Continuous+Integration%22)
[![CircleCI](https://circleci.com/gh/AlexeyAB/darknet.svg?style=svg)](https://circleci.com/gh/AlexeyAB/darknet)
[![TravisCI](https://travis-ci.org/AlexeyAB/darknet.svg?branch=master)](https://travis-ci.org/AlexeyAB/darknet)
[![Contributors](https://img.shields.io/github/contributors/AlexeyAB/Darknet.svg)](https://github.com/AlexeyAB/darknet/graphs/contributors)
[![License: Unlicense](https://img.shields.io/badge/license-Unlicense-blue.svg)](https://github.com/AlexeyAB/darknet/blob/master/LICENSE)
[![DOI](https://zenodo.org/badge/75388965.svg)](https://zenodo.org/badge/latestdoi/75388965)
[![arxiv.org](http://img.shields.io/badge/cs.CV-arXiv%3A2004.10934-B31B1B.svg)](https://arxiv.org/abs/2004.10934)
[![colab](https://user-images.githubusercontent.com/409***85/86174089-b2709f80-bb29-11ea-9faf-3d8dc668a1a5.png)](https://colab.research.google.com/drive/12QusaaRj_lUwCGDvQNfICpa7kA7_a2dE)
[![colab](https://user-images.githubusercontent.com/409***85/86174097-b56b9000-bb29-11ea-9240-c17f6bacfc34.png)](https://colab.research.google.com/drive/1_GdoqCJWXsChrOiY8sZMr_zbr_fH-0Fg)

* [YOLOv4 model zoo](https://github.com/AlexeyAB/darknet/wiki/YOLOv4-model-zoo)
* [Requirements (and how to install dependencies)](#requirements)
* [Pre-trained models](#pre-trained-models)
* [FAQ - frequently asked questions](https://github.com/AlexeyAB/darknet/wiki/FAQ---frequently-asked-questions)
* [Explanations in issues](https://github.com/AlexeyAB/darknet/issues?q=is%3Aopen+is%3Aissue+label%3AExplanations)
* [Yolo v4 in other frameworks (TensorRT, TensorFlow, PyTorch, OpenVINO, OpenCV-dnn, TVM, ...)](#yolo-v4-in-other-frameworks)
* [Datasets](#datasets)

0. [Improvements in this repository](#improvements-in-this-repository)
1. [How to use](#how-to-use-on-the-command-line)
2. How to compile on Linux
   * [Using cmake](#how-to-compile-on-linux-using-cmake)
   * [Using make](#how-to-compile-on-linux-using-make)
3. How to compile on Windows
   * [Using cmake](#how-to-compile-on-windows-using-cmake)
   * [Using vcpkg](#how-to-compile-on-windows-using-vcpkg)
   * [Legacy way](#how-to-compile-on-windows-legacy-way)
4. [Training and Evaluation of speed and accuracy on MS COCO](https://github.com/AlexeyAB/darknet/wiki#training-and-evaluation-of-speed-and-accuracy-on-ms-coco)
5. [How to train with multi-GPU](#how-to-train-with-multi-gpu)
6. [How to train (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)
7. [How to train tiny-yolo (to detect your custom objects)](#how-to-train-tiny-yolo-to-detect-your-custom-objects)
8. [When should I stop training](#when-should-i-stop-training)
9. [How to improve object detection](#how-to-improve-object-detection)
10. [How to mark bounded boxes of objects and create annotation files](#how-to-mark-bounded-boxes-of-objects-and-create-annotation-files)
11. [How to use Yolo as DLL and SO libraries](#how-to-use-yolo-as-dll-and-so-libraries)

![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png)

![modern_gpus](https://user-images.githubusercontent.com/409***85/82835867-f1c62380-9ecd-11ea-9134-15***ed2abc4b.png)

AP50:95 / AP50 vs. FPS (Tesla V100). Paper: https://arxiv.org/abs/2004.10934

tkDNN-TensorRT accelerates YOLOv4 by **~2x** for batch=1 and **3x-4x** for batch=4.

* tkDNN: https://github.com/ceccocats/tkDNN
* OpenCV: https://gist.github.com/YashasSamaga/48bdb167303e10f4d07b754888ddbdcf

#### GeForce RTX 2080 Ti

| Network Size | Darknet, FPS (avg) | tkDNN TensorRT FP32, FPS | tkDNN TensorRT FP16, FPS | OpenCV FP16, FPS | tkDNN TensorRT FP16 batch=4, FPS | OpenCV FP16 batch=4, FPS | tkDNN Speedup |
|:-----:|:--------:|--------:|--------:|--------:|--------:|--------:|------:|
| 320 | 100 | 116 | **202** | 183 | 423 | **430** | **4.3x** |
| 416 | 82 | 103 | **162** | 159 | 284 | **294** | **3.6x** |
| 512 | 69 | 91 | 134 | **138** | 206 | **216** | **3.1x** |
| 608 | 53 | 62 | 103 | **115** | 150 | **150** | **2.8x** |
| Tiny 416 | 443 | 609 | **790** | 773 | **1774** | 1353 | **3.5x** |
| Tiny 416 CPU Core i7 7700HQ | 3.4 | - | - | 42 | - | 39 | **12x** |

* Yolo v4 full comparison: [map_fps](https://user-images.githubusercontent.com/409***85/80283279-0e303e00-871f-11ea-814c-870967d77fd1.png)
* Yolo v4 tiny comparison: [tiny_fps](https://user-images.githubusercontent.com/409***85/85734112-6e366700-b705-11ea-95d1-fcba0de76d72.png)
* CSPNet: [paper](https://arxiv.org/abs/1911.11929) and [map_fps](https://user-images.githubusercontent.com/409***85/71702416-6***5dc00-2de0-11ea-8d65-de7d4b604021.png) comparison: https://github.com/WongKinYiu/CrossStagePartialNetworks
* Yolo v3 on MS COCO: [Speed / Accuracy (mAP@0.5) chart](https://user-images.githubusercontent.com/409***85/52151356-e5d4a380-2683-11e9-9d7d-ac7bc192c477.jpg)
* Yolo v3 on MS COCO (Yolo v3 vs RetinaNet) - Figure 3: https://arxiv.org/pdf/1804.02767v1.pdf
* Yolo v2 on Pascal VOC 2007: https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg
* Yolo v2 on Pascal VOC 2012 (comp4): https://hsto.org/files/3a6/fdf/b53/3a6fdfb533f34cee9b52bdd9bb0b19d9.jpg

#### Youtube video of results

[![Yolo v4](http://img.youtube.com/vi/1_SiUOYUoOI/0.jpg)](https://youtu.be/1_SiUOYUoOI "Yolo v4")

Others: https://www.youtube.com/user/pjreddie/videos

#### How to evaluate AP of YOLOv4 on the MS COCO evaluation server

1. Download and unzip the test-dev2017 dataset from the MS COCO server: http://images.cocodataset.org/zips/test2017.zip
2. Download the list of images for the Detection task and replace the paths with yours: https://raw.githubusercontent.com/AlexeyAB/darknet/master/scripts/testdev2017.txt
3. Download the `yolov4.weights` file (245 MB): [yolov4.weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights) (Google-drive mirror: [yolov4.weights](https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT))
4. The content of the file `cfg/coco.data` should be:

    ```ini
    classes= 80
    train = /trainvalno5k.txt
    valid = /testdev2017.txt
    names = data/coco.names
    backup = backup
    eval=coco
    ```

5. Create a `/results/` folder next to the `./darknet` executable
6. Run validation: `./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights`
7. Rename the file `/results/coco_results.json` to `detections_test-dev2017_yolov4_results.json` and compress it to `detections_test-dev2017_yolov4_results.zip`
8. Submit the file `detections_test-dev2017_yolov4_results.zip` to the MS COCO evaluation server for the `test-dev2019 (bbox)` task
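If you prefer to script steps 5-7, here is a minimal sketch, assuming Darknet is already compiled and `cfg/coco.data`, `cfg/yolov4.cfg` and `yolov4.weights` are in place:

```bash
# Steps 5-7 from above: create the results folder, run validation,
# then rename and zip the detections for upload to the COCO server.
mkdir -p results
./darknet detector valid cfg/coco.data cfg/yolov4.cfg yolov4.weights
mv results/coco_results.json detections_test-dev2017_yolov4_results.json
zip detections_test-dev2017_yolov4_results.zip detections_test-dev2017_yolov4_results.json
```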
#### How to evaluate FPS of YOLOv4 on GPU

1. Compile Darknet with `GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1` in the `Makefile`
2. Download the `yolov4.weights` file (245 MB): [yolov4.weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights) (Google-drive mirror: [yolov4.weights](https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT))
3. Get any .avi/.mp4 video file (preferably not larger than 1920x1080, to avoid CPU bottlenecks)
4. Run one of the two commands and look at the AVG FPS:
   * including video capturing + NMS + drawing of bboxes: `./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -dont_show -ext_output`
   * excluding video capturing + NMS + drawing of bboxes: `./darknet detector demo cfg/coco.data cfg/yolov4.cfg yolov4.weights test.mp4 -benchmark`

#### Pre-trained models

There are weights files for different cfg files (trained on the MS COCO dataset). FPS is given for RTX 2070 (R) and Tesla V100 (V):

* [yolov4.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4.cfg) - 245 MB: [yolov4.weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights) (Google-drive mirror: [yolov4.weights](https://drive.google.com/open?id=1cewMfusmPjYWbrnuJRuKhPMwRe_b9PaT)), paper [Yolo v4](https://arxiv.org/abs/2004.10934). Just change the `width=` and `height=` parameters in the `yolov4.cfg` file and use the same `yolov4.weights` file for all cases:
  * `width=608 height=608` in cfg: **65.7% mAP@0.5 (43.5% AP@0.5:0.95) - 34(R) FPS / 62(V) FPS** - 128.5 BFlops
  * `width=512 height=512` in cfg: *****.9% mAP@0.5 (43.0% AP@0.5:0.95) - 45(R) FPS / 83(V) FPS** - 91.1 BFlops
  * `width=416 height=416` in cfg: **62.8% mAP@0.5 (41.2% AP@0.5:0.95) - 55(R) FPS / 96(V) FPS** - 60.1 BFlops
  * `width=320 height=320` in cfg: **60% mAP@0.5 (38% AP@0.5:0.95) - 63(R) FPS / 123(V) FPS** - 35.5 BFlops
* [yolov4-tiny.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg) - **40.2% mAP@0.5 - 371(1080Ti) FPS / 330(RTX2070) FPS** - 6.9 BFlops - 23.1 MB: [yolov4-tiny.weights](https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights)
* [enet-coco.cfg (EfficientNetB0-Yolov3)](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/enet-coco.cfg) - **45.5% mAP@0.5 - 55(R) FPS** - 3.7 BFlops - 18.3 MB: [enetb0-coco_final.weights](https://drive.google.com/file/d/1FlHeQjWEQVJt0ay1PVsiuuMzmtNyv36m/view)
* [yolov3-openimages.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-openimages.cfg) - 247 MB - 18(R) FPS - OpenImages dataset: [yolov3-openimages.weights](https://pjreddie.com/media/files/yolov3-openimages.weights)
Yolo v3 models:

* [csresnext50-panet-spp-original-optimal.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/csresnext50-panet-spp-original-optimal.cfg) - **65.4% mAP@0.5 (43.2% AP@0.5:0.95) - 32(R) FPS** - 100.5 BFlops - 217 MB: [csresnext50-panet-spp-original-optimal_final.weights](https://drive.google.com/open?id=1_NnfVgj0EDtb_WLNoXV8Mo7WKgwdYZCc)
* [yolov3-spp.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-spp.cfg) - **60.6% mAP@0.5 - 38(R) FPS** - 141.5 BFlops - 240 MB: [yolov3-spp.weights](https://pjreddie.com/media/files/yolov3-spp.weights)
* [csresnext50-panet-spp.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/csresnext50-panet-spp.cfg) - **60.0% mAP@0.5 - 44 FPS** - 71.3 BFlops - 217 MB: [csresnext50-panet-spp_final.weights](https://drive.google.com/file/d/1aNXdM8qVy11nqTcd2oaVB3mf7ckr258-/view?usp=sharing)
* [yolov3.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3.cfg) - **55.3% mAP@0.5 - 66(R) FPS** - 65.9 BFlops - 236 MB: [yolov3.weights](https://pjreddie.com/media/files/yolov3.weights)
* [yolov3-tiny.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-tiny.cfg) - **33.1% mAP@0.5 - 345(R) FPS** - 5.6 BFlops - 33.7 MB: [yolov3-tiny.weights](https://pjreddie.com/media/files/yolov3-tiny.weights)
* [yolov3-tiny-prn.cfg](https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov3-tiny-prn.cfg) - **33.1% mAP@0.5 - 370(R) FPS** - 3.5 BFlops - 18.8 MB: [yolov3-tiny-prn.weights](https://drive.google.com/file/d/18yYZWyKbo4XSDVyztmsEcF9B_6bxrhUY/view?usp=sharing)
Yolo v2 models:

* `yolov2.cfg` (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
* `yolo-voc.cfg` (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
* `yolov2-tiny.cfg` (43 MB COCO Yolo v2) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov2-tiny.weights
* `yolov2-tiny-voc.cfg` (60 MB VOC Yolo v2) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
* `yolo9000.cfg` (186 MB Yolo9000 model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights

Put the weights file next to the compiled `darknet.exe`. You can get the cfg files from `darknet/cfg/`.
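To try one of the pre-trained models above on a single image, a minimal sketch (assuming a compiled `./darknet` binary and one of the sample images shipped in `data/`, e.g. `data/dog.jpg`):

```bash
# Download the YOLOv4 weights listed above and run a quick single-image test.
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/dog.jpg
```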
### Requirements

* Windows or Linux
* **CMake >= 3.12**: https://cmake.org/download/
* **CUDA >= 10.0**: https://developer.nvidia.com/cuda-toolkit-archive (on Linux do the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions))
* **OpenCV >= 2.4**: use your preferred package manager (brew, apt), build from source using [vcpkg](https://github.com/Microsoft/vcpkg), or download it from the [OpenCV official site](https://opencv.org/releases.html) (on Windows set the system variable `OpenCV_DIR` = `C:\opencv\build` - the directory that contains the `include` and `x***` folders: [image](https://user-images.githubusercontent.com/409***85/53249516-5130f480-36c9-11e9-8238-a6e82e48c6f2.png))
* **cuDNN >= 7.0**: https://developer.nvidia.com/rdp/cudnn-archive (on **Linux** copy `cudnn.h`, `libcudnn.so`, ... as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar ; on **Windows** copy `cudnn.h`, `cudnn***_7.dll`, `cudnn***_7.lib` as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows)
* **GPU with CC >= 3.0**: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
* on Linux **GCC or Clang**, on Windows **MSVC 2017/2019**: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community
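The full build instructions are in the "How to compile" sections linked in the table of contents above; as a rough sketch for Linux, once the requirements above are installed (the `GPU/CUDNN/CUDNN_HALF/OPENCV` flags are the same ones used in the FPS-evaluation section; drop the ones you do not need):

```bash
git clone https://github.com/AlexeyAB/darknet
cd darknet
# CMake route, using the bundled build script:
./build.sh
# ...or the classic Makefile route - set these flags in the Makefile,
# or pass them on the command line to override the defaults:
make GPU=1 CUDNN=1 CUDNN_HALF=1 OPENCV=1 -j"$(nproc)"
```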
#### Yolo v4 in other frameworks

* **TensorFlow:** YOLOv4 on TensorFlow 2.0 / TFLite / Android: https://github.com/hunglc007/tensorflow-yolov4-tflite. For YOLOv3 - convert `yolov3.weights`/`cfg` files to `yolov3.ckpt`/`pb/meta` using the [mystic123](https://github.com/mystic123/tensorflow-yolo-v3) project, and [TensorFlow-lite](https://www.tensorflow.org/lite/guide/get_started#2_convert_the_model_format)
* **OpenCV-dnn:** the fastest implementation of YOLOv4 for CPU (x86/ARM-Android). OpenCV can be compiled with the [OpenVINO-backend](https://github.com/opencv/opencv/wiki/Intel's-Deep-Learning-Inference-Engine-backend) for running on Myriad X / USB Neural Compute Stick / Arria FPGA; use `yolov4.weights`/`cfg` with the [C++ example](https://github.com/opencv/opencv/blob/8c25a8eb7b10fb50cda323ee6bec68aa1a9ce43c/samples/dnn/object_detection.cpp#L192-L221) or [Python example](https://github.com/opencv/opencv/blob/8c25a8eb7b10fb50cda323ee6bec68aa1a9ce43c/samples/dnn/object_detection.py#L129-L150)
* **Intel OpenVINO 2020 R4** (NPU Myriad X / USB Neural Compute Stick / Arria FPGA): read this [manual](https://github.com/TNTWEN/OpenVINO-YOLOV4) (old [manual](https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#converting-a-darknet-yolo-model))
* **Tencent/ncnn:** the fastest inference of YOLOv4 on mobile phone CPU: https://github.com/Tencent/ncnn
* **PyTorch > ONNX:**
  * [WongKinYiu/PyTorch_YOLOv4](https://github.com/WongKinYiu/PyTorch_YOLOv4)
  * [maudzung/3D-YOLOv4](https://github.com/maudzung/Complex-YOLOv4-Pytorch)
  * [Tianxiaomo/pytorch-YOLOv4](https://github.com/Tianxiaomo/pytorch-YOLOv4)
* **ONNX** on Jetson for YOLOv4: https://developer.nvidia.com/blog/announcing-onnx-runtime-for-jetson/
* **TensorRT:** YOLOv4 on TensorRT+tkDNN: https://github.com/ceccocats/tkDNN. For YOLOv3 (-70% faster inference): [Yolo is natively supported in DeepStream 4.0](https://news.developer.nvidia.com/deepstream-sdk-4-now-available/), read the [PDF](https://docs.nvidia.com/metropolis/deepstream/Custom_YOLO_Model_in_the_DeepStream_YOLO_App.pdf). [wang-xinyu/tensorrtx](https://github.com/wang-xinyu/tensorrtx) implements yolov3-spp, yolov4, etc.
* **Deepstream 5.0 / TensorRT for YOLOv4:** https://github.com/NVIDIA-AI-IOT/yolov4_deepstream
* **Amazon Neurochip / Amazon EC2 Inf1 instances:** 1.85x higher throughput and 37% lower cost per image for a TensorFlow-based YOLOv4 model, using Keras: [URL](https://aws.amazon.com/ru/blogs/machine-learning/improving-performance-for-deep-learning-based-object-detection-with-an-aws-neuron-compiled-yolov4-model-on-aws-inferentia/)
* **TVM** - compilation of deep learning models (Keras, MXNet, PyTorch, TensorFlow, CoreML, DarkNet) into minimum deployable modules on diverse hardware backends (CPUs, GPUs, FPGA, and specialized accelerators): https://tvm.ai/about
* **OpenDataCam** - detects, tracks and counts moving objects using YOLOv4: https://github.com/opendatacam/opendatacam#-hardware-pre-requisite
* **Netron** - visualizer for neural networks: https://github.com/lutzroeder/netron

#### Datasets

* MS COCO: use `./scripts/get_coco_dataset.sh` to get the labeled MS COCO detection dataset
* OpenImages: use `python ./scripts/get_openimages_dataset.py` for labeling the train detection dataset
* Pascal VOC: use `python ./scripts/voc_label.py` for labeling the Train/Test/Val detection datasets
* ILSVRC2012 (ImageNet classification): use `./scripts/get_imagenet_train.sh` (also `imagenet_label.sh` for labeling the validation set)
* German/Belgium/Russian/LISA/MASTIF traffic sign datasets for detection - use these parsers: https://github.com/angeligareta/Datasets2Darknet#detection-task
* List of other datasets: https://github.com/AlexeyAB/darknet/tree/master/scripts#datasets

### Improvements in this repository

* developed the state-of-the-art object detector YOLOv4
* added state-of-the-art models: CSP, PRN, EfficientNet
* added layers: [conv_lstm], [scale_channels] SE/ASFF/BiFPN, [local_avgpool], [sam], [Gaussian_yolo], [reorg3d] (fixed [reorg]), fixed [batchnorm]
* added the ability to train recurrent models (with layers conv-lstm `[conv_lstm]` / conv-rnn `[crnn]`) for accurate detection on video
* added data augmentation: `[net] mixup=1 cutmix=1 mosaic=1 blur=1`; added activations: SWISH, MISH, NORM_CHAN, NORM_CHAN_SOFTMAX
* added the ability to train with GPU-processing while using CPU-RAM, to increase the mini_batch_size and improve accuracy (instead of batch-norm sync)
* improved binary neural network performance **2x-4x** for detection on CPU and GPU if you trained your own weights using this XNOR-net model (bit-1 inference): https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg
* improved neural network performance **~7%** by fusing 2 layers into 1: Convolutional + Batch-norm
* improved performance: detection **2x** on GPU Volta/Turing (Tesla V100, GeForce RTX, ...) using Tensor Cores if `CUDNN_HALF` is defined in the `Makefile` or `darknet.sln`
* improved performance **~1.2x** on FullHD and **~2x** on 4K for detection on video (file/stream) using `darknet detector demo`
* improved data augmentation performance **3.5x** for training (using OpenCV SSE/AVX functions instead of hand-written functions) - removes the bottleneck for training on multi-GPU or GPU Volta
* improved performance of detection and training on Intel CPU with AVX (Yolo v3 **~85%**)
* optimized memory allocation during network resizing when `random=1`
* optimized GPU initialization for detection - we use batch=1 initially instead of re-initializing with batch=1
* added correct calculation of **mAP, F1, IoU, Preci ... ...
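The mAP/F1/IoU/precision calculation mentioned in the last item is typically invoked via the `detector map` mode of this fork (see also `calc_mAP.cmd` in the file list above); a minimal sketch, assuming a compiled binary and the COCO data/cfg/weights files from the sections above:

```bash
# Report mAP@0.5, precision, recall, F1-score and average IoU for a model
# against the validation set listed in the .data file.
./darknet detector map cfg/coco.data cfg/yolov4.cfg yolov4.weights
```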
