darknet

Category: Other
Development tool: C/C++
File size: 7818 KB
Downloads: 1
Upload date: 2020-01-13 17:24:28
Uploader: pcb9382
Description: A framework implementing YOLOv3 with the Darknet-53 backbone in C.

File list:
.circleci (0, 2019-09-28)
.circleci\config.yml (662, 2019-09-28)
.travis.yml (9845, 2019-09-28)
3rdparty (0, 2019-09-28)
3rdparty\pthreads (0, 2019-09-28)
3rdparty\pthreads\bin (0, 2019-09-28)
3rdparty\pthreads\bin\pthreadGC2.dll (185976, 2019-09-28)
3rdparty\pthreads\bin\pthreadVC2.dll (82944, 2019-09-28)
3rdparty\pthreads\include (0, 2019-09-28)
3rdparty\pthreads\include\pthread.h (42499, 2019-09-28)
3rdparty\pthreads\include\sched.h (4995, 2019-09-28)
3rdparty\pthreads\include\semaphore.h (4563, 2019-09-28)
3rdparty\pthreads\lib (0, 2019-09-28)
3rdparty\pthreads\lib\libpthreadGC2.a (93692, 2019-09-28)
3rdparty\pthreads\lib\pthreadVC2.lib (29738, 2019-09-28)
3rdparty\stb (0, 2019-09-28)
3rdparty\stb\include (0, 2019-09-28)
3rdparty\stb\include\stb_image.h (250861, 2019-09-28)
3rdparty\stb\include\stb_image_write.h (60006, 2019-09-28)
CMakeLists.txt (20019, 2019-09-28)
DarknetConfig.cmake.in (1363, 2019-09-28)
LICENSE (515, 2019-09-28)
Makefile (4992, 2019-09-28)
appveyor.yml (5851, 2019-09-28)
build.ps1 (8265, 2019-09-28)
build.sh (2044, 2019-09-28)
build (0, 2019-09-28)
build\darknet (0, 2019-09-28)
build\darknet\YoloWrapper.cs (3131, 2019-09-28)
build\darknet\darknet.sln (1283, 2019-09-28)
build\darknet\darknet.vcxproj (17136, 2019-09-28)
build\darknet\darknet_no_gpu.sln (1281, 2019-09-28)
build\darknet\darknet_no_gpu.vcxproj (17281, 2019-09-28)
build\darknet\x64 (0, 2019-09-28)
build\darknet\x64\backup (0, 2019-09-28)
... ...

# Yolo-v3 and Yolo-v2 for Windows and Linux

### (neural network for object detection) - Tensor Cores can be used on [Linux](https://github.com/AlexeyAB/darknet#how-to-compile-on-linux) and [Windows](https://github.com/AlexeyAB/darknet#how-to-compile-on-windows-using-vcpkg)

More details: http://pjreddie.com/darknet/yolo/

[![CircleCI](https://circleci.com/gh/AlexeyAB/darknet.svg?style=svg)](https://circleci.com/gh/AlexeyAB/darknet)
[![TravisCI](https://travis-ci.org/AlexeyAB/darknet.svg?branch=master)](https://travis-ci.org/AlexeyAB/darknet)
[![AppveyorCI](https://ci.appveyor.com/api/projects/status/594bwb5uoc1fxwiu/branch/master?svg=true)](https://ci.appveyor.com/project/AlexeyAB/darknet/branch/master)
[![Contributors](https://img.shields.io/github/contributors/AlexeyAB/Darknet.svg)](https://github.com/AlexeyAB/darknet/graphs/contributors)
[![License: Unlicense](https://img.shields.io/badge/license-Unlicense-blue.svg)](https://github.com/AlexeyAB/darknet/blob/master/LICENSE)

* [Requirements (and how to install dependencies)](#requirements)
* [Pre-trained models](#pre-trained-models)
* [Explanations in issues](https://github.com/AlexeyAB/darknet/issues?q=is%3Aopen+is%3Aissue+label%3AExplanations)
* [Yolo v3 in other frameworks (TensorRT, TensorFlow, PyTorch, OpenVINO, OpenCV-dnn, ...)](#yolo-v3-in-other-frameworks)
* [Datasets](#datasets)

0. [Improvements in this repository](#improvements-in-this-repository)
1. [How to use](#how-to-use-on-the-command-line)
2. [How to compile on Linux](#how-to-compile-on-linux)
3. How to compile on Windows
   * [Using vcpkg](#how-to-compile-on-windows-using-vcpkg)
   * [Legacy way](#how-to-compile-on-windows-legacy-way)
4. [How to train (Pascal VOC Data)](#how-to-train-pascal-voc-data)
5. [How to train with multi-GPU](#how-to-train-with-multi-gpu)
6. [How to train (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)
7. [How to train tiny-yolo (to detect your custom objects)](#how-to-train-tiny-yolo-to-detect-your-custom-objects)
8. [When should I stop training](#when-should-i-stop-training)
9. [How to calculate mAP on PascalVOC 2007](#how-to-calculate-map-on-pascalvoc-2007)
10. [How to improve object detection](#how-to-improve-object-detection)
11. [How to mark bounded boxes of objects and create annotation files](#how-to-mark-bounded-boxes-of-objects-and-create-annotation-files)
12. [How to use Yolo as DLL and SO libraries](#how-to-use-yolo-as-dll-and-so-libraries)
| ![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png) | ![map_time](https://user-images.githubusercontent.com/409***85/52151356-e5d4a380-2683-11e9-9d7d-ac7bc192c477.jpg) mAP@0.5 (AP50) https://pjreddie.com/media/files/papers/YOLOv3.pdf |
|---|---|

* YOLOv3-spp better than YOLOv3 - mAP = 60.6%, FPS = 20: https://pjreddie.com/darknet/yolo/
* Yolo v3 source chart for the RetinaNet on MS COCO got from Table 1 (e): https://arxiv.org/pdf/1708.02002.pdf
* Yolo v2 on Pascal VOC 2007: https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg
* Yolo v2 on Pascal VOC 2012 (comp4): https://hsto.org/files/3a6/fdf/b53/3a6fdfb533f34cee9b52bdd9bb0b19d9.jpg

### Requirements

* Windows or Linux
* **CMake >= 3.8** for modern CUDA support: https://cmake.org/download/
* **CUDA 10.0**: https://developer.nvidia.com/cuda-toolkit-archive (on Linux do the [Post-installation Actions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions))
* **OpenCV >= 2.4**: use your preferred package manager (brew, apt), build from source using [vcpkg](https://github.com/Microsoft/vcpkg), or download from the [OpenCV official site](https://opencv.org/releases.html) (on Windows set the system variable `OpenCV_DIR` = `C:\opencv\build` - the folder containing the `include` and `x***` directories: [image](https://user-images.githubusercontent.com/409***85/53249516-5130f480-36c9-11e9-8238-a6e82e48c6f2.png))
* **cuDNN >= 7.0 for CUDA 10.0**: https://developer.nvidia.com/rdp/cudnn-archive (on **Linux** copy `cudnn.h`, `libcudnn.so`, ... as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux-tar ; on **Windows** copy `cudnn.h`, `cudnn***_7.dll`, `cudnn***_7.lib` as described here: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installwindows)
* **GPU with CC >= 3.0**: https://en.wikipedia.org/wiki/CUDA#GPUs_supported
* on Linux **GCC or Clang**, on Windows **MSVC 2015/2017/2019**: https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community

Compiling on **Windows** by using `Cmake-GUI` as on this [**IMAGE**](https://user-images.githubusercontent.com/409***85/55107892-6becf380-50e3-11e9-9a0a-556a943c429a.png): Configure -> Optional platform for generator (Set: x***) -> Finish -> Generate -> Open Project -> x*** & Release -> Build -> Build solution

Compiling on **Linux** by using the command `make` (or alternatively: `cmake . && make`)

#### Pre-trained models

There are weights files for different cfg-files (smaller size -> faster speed & lower accuracy):

* `yolov3-openimages.cfg` (247 MB COCO **Yolo v3**) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-openimages.weights
* `yolov3-spp.cfg` (240 MB COCO **Yolo v3**) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-spp.weights
* `yolov3.cfg` (236 MB COCO **Yolo v3**) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov3.weights
* `yolov3-tiny.cfg` (34 MB COCO **Yolo v3 tiny**) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov3-tiny.weights
* `enet-coco.cfg` (EfficientNetB0-Yolo - 45.5% mAP@0.5 - 3.7 BFlops): [enetb0-coco_final.weights](https://drive.google.com/file/d/1FlHeQjWEQVJt0ay1PVsiuuMzmtNyv36m/view), and `yolov3-tiny-prn.cfg` (33.1% mAP@0.5 - 3.5 BFlops - [more](https://github.com/WongKinYiu/PartialResidualNetworks))
Yolo v2 models:

* `yolov2.cfg` (194 MB COCO Yolo v2) - requires 4 GB GPU-RAM: https://pjreddie.com/media/files/yolov2.weights
* `yolo-voc.cfg` (194 MB VOC Yolo v2) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
* `yolov2-tiny.cfg` (43 MB COCO Yolo v2) - requires 1 GB GPU-RAM: https://pjreddie.com/media/files/yolov2-tiny.weights
* `yolov2-tiny-voc.cfg` (60 MB VOC Yolo v2) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
* `yolo9000.cfg` (186 MB Yolo9000-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights
Put the weights file next to the compiled `darknet.exe`. You can get the cfg-files from the path `darknet/cfg/`.

#### Yolo v3 in other frameworks

* **TensorFlow:** convert `yolov3.weights`/`cfg` files to `yolov3.ckpt`/`pb/meta` by using the [mystic123](https://github.com/mystic123/tensorflow-yolo-v3) or [jinyu121](https://github.com/jinyu121/DW2TF) projects, and [TensorFlow-lite](https://www.tensorflow.org/lite/guide/get_started#2_convert_the_model_format)
* **Intel OpenVINO 2019 R1** (Myriad X / USB Neural Compute Stick / Arria FPGA): read this [manual](https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#converting-a-darknet-yolo-model)
* **OpenCV-dnn** is a very fast DNN implementation on CPU (x86/ARM-Android); use `yolov3.weights`/`cfg` with the [C++ example](https://github.com/opencv/opencv/blob/8c25a8eb7b10fb50cda323ee6bec68aa1a9ce43c/samples/dnn/object_detection.cpp#L192-L221) or [Python example](https://github.com/opencv/opencv/blob/8c25a8eb7b10fb50cda323ee6bec68aa1a9ce43c/samples/dnn/object_detection.py#L129-L150) (see also the sketch after this list)
* **PyTorch > ONNX > CoreML > iOS** - how to convert cfg/weights-files to a pt-file: [ultralytics/yolov3](https://github.com/ultralytics/yolov3#darknet-conversion) and [iOS App](https://itunes.apple.com/app/id1452689527)
* **TensorRT** for YOLOv3 (~70% faster inference): [Yolo is natively supported in DeepStream 4.0](https://news.developer.nvidia.com/deepstream-sdk-4-now-available/)
* **TVM** - compilation of deep learning models (Keras, MXNet, PyTorch, Tensorflow, CoreML, DarkNet) into minimum deployable modules on diverse hardware backends (CPUs, GPUs, FPGA, and specialized accelerators): https://tvm.ai/about
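For the OpenCV-dnn route above, a minimal C++ sketch might look like the following. It assumes OpenCV >= 3.4.2 built with the dnn module, and that `yolov3.cfg`, `yolov3.weights` and a test image `dog.jpg` sit next to the executable (all placeholder paths); it is an illustration, not this repository's own code.

```cpp
// Minimal sketch: load Darknet cfg/weights with OpenCV-dnn and print raw detections.
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main() {
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3.cfg", "yolov3.weights");
    cv::Mat img = cv::imread("dog.jpg");
    if (img.empty()) { std::cerr << "image not found\n"; return 1; }

    // Darknet expects RGB input scaled to [0,1] at the network resolution (416x416 here).
    cv::Mat blob = cv::dnn::blobFromImage(img, 1 / 255.0, cv::Size(416, 416),
                                          cv::Scalar(), /*swapRB=*/true, /*crop=*/false);
    net.setInput(blob);

    std::vector<cv::Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());

    // Each output row: [cx, cy, w, h, objectness, class scores...] in relative coordinates.
    for (const cv::Mat& out : outs)
        for (int r = 0; r < out.rows; ++r) {
            const float* row = out.ptr<float>(r);
            cv::Mat scores = out.row(r).colRange(5, out.cols);
            cv::Point classId;
            double conf;
            cv::minMaxLoc(scores, nullptr, &conf, nullptr, &classId);
            if (conf > 0.5)
                std::cout << "class " << classId.x << " conf " << conf
                          << " box " << row[0] << " " << row[1] << " "
                          << row[2] << " " << row[3] << "\n";
        }
    return 0;
}
```

A production version would additionally apply non-maximum suppression and map class indices to names from `coco.names`, as the linked OpenCV samples do.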
#### Datasets

* MS COCO: use `./scripts/get_coco_dataset.sh` to get the labeled MS COCO detection dataset
* OpenImages: use `python ./scripts/get_openimages_dataset.py` for labeling the train detection dataset
* Pascal VOC: use `python ./scripts/voc_label.py` for labeling the Train/Test/Val detection datasets
* ILSVRC2012 (ImageNet classification): use `./scripts/get_imagenet_train.sh` (also `imagenet_label.sh` for labeling the valid set)
* German/Belgium/Russian/LISA/MASTIF Traffic Sign Datasets for Detection - use these parsers: https://github.com/angeligareta/Datasets2Darknet#detection-task
* List of other datasets: https://github.com/AlexeyAB/darknet/tree/master/scripts#datasets

##### Examples of results

[![Yolo v3](http://img.youtube.com/vi/VOC3huqHrss/0.jpg)](https://www.youtube.com/watch?v=MPU2HistivI "Yolo v3")

Others: https://www.youtube.com/user/pjreddie/videos

### Improvements in this repository

* added support for Windows
* improved binary neural network performance **2x-4x** for detection on CPU and GPU if you trained your own weights using this XNOR-net model (1-bit inference): https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_xnor.cfg
* improved neural network performance **~7%** by fusing 2 layers into 1: Convolutional + Batch-norm
* improved neural network performance: Detection **3x**, Training **2x** on GPU Volta (Tesla V100, Titan V, ...) using Tensor Cores if `CUDNN_HALF` is defined in the `Makefile` or `darknet.sln`
* improved performance **~1.2x** on FullHD and **~2x** on 4K for detection on video (file/stream) using `darknet detector demo ...`
* improved performance **3.5x** of data augmentation for training (using OpenCV SSE/AVX functions instead of hand-written functions) - removes a bottleneck for training on multi-GPU or GPU Volta
* improved performance of detection and training on Intel CPU with AVX (Yolo v3 **~85%**, Yolo v2 ~10%)
* fixed usage of the `[reorg]`-layer
* optimized memory allocation during network resizing when `random=1`
* optimized GPU initialization for detection - we use batch=1 initially instead of re-initializing with batch=1
* added correct calculation of **mAP, F1, IoU, Precision-Recall** using the command `darknet detector map ...`
* added drawing of a chart of average loss and accuracy (mAP) during training (`-map` flag)
* run `./darknet detector demo ... -json_port 8070 -mjpeg_port 8090` as a JSON and MJPEG server to get results online over the network from your software or a web browser
* added calculation of anchors for training
* added example of detection and tracking of objects: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp
* fixed code for using a web-cam with OpenCV > 3.x
* run-time tips and warnings if you use an incorrect cfg-file or dataset
* many other code fixes...

Also added a manual: [How to train Yolo v3/v2 (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)

Also, you might be interested in a simplified repository that implements INT8-quantization (+30% speedup, -1% mAP): https://github.com/AlexeyAB/yolo2_light

#### How to use on the command line

On Linux use `./darknet` instead of `darknet.exe`, e.g. `./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights`

On Linux the executable `./darknet` is in the root directory; on Windows it is in the directory `\build\darknet\x***`.

* Yolo v3 COCO - **image**: `darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.25`
* **Output coordinates** of objects: `darknet.exe detector test cfg/coco.data yolov3.cfg yolov3.weights -ext_output dog.jpg`
* Yolo v3 COCO - **video**: `darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -ext_output test.mp4`
* Yolo v3 COCO - **WebCam 0**: `darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights -c 0`
* Yolo v3 COCO for **net-videocam** - Smart WebCam: `darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights http://192.168.0.80:8080/video?dummy=param.mjpg`
* Yolo v3 - **save result to videofile res.avi**: `darknet.exe detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights test.mp4 -out_filename res.avi`
* Yolo v3 **Tiny** COCO - video: `darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights test.mp4`
* **JSON and MJPEG server** that allows multiple connections from your software or a web browser at `ip-address:8070` and 8090: `./darknet detector demo ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output`
* Yolo v3 Tiny **on GPU #1**: `darknet.exe detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -i 1 test.mp4`
* Alternative method, Yolo v3 COCO - image: `darknet.exe detect cfg/yolov3.cfg yolov3.weights -i 0 -thresh 0.25`
* Train on **Amazon EC2** and watch the mAP & loss chart using a URL like `http://ec2-35-160-228-91.us-west-2.compute.amazonaws.com:8090` in Chrome/Firefox (**Darknet should be compiled with OpenCV**): `./darknet detector train cfg/coco.data yolov3.cfg darknet53.conv.74 -dont_show -mjpeg_port 8090 -map`
* 186 MB Yolo9000 - image: `darknet.exe detector test cfg/combine9k.data cfg/yolo9000.cfg yolo9000.weights`
* Remember to put `data/9k.tree` and `data/coco9k.map` in the same folder as your app if you use the C++ API to build an app
* To process a list of images `data/train.txt` and save detection results to the file `result.json` use: `darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -ext_output -dont_show -out result.json < data/train.txt`
* To process a list of images `data/train.txt` and save detection results to `result.txt` use: `darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -dont_show -ext_output < data/train.txt > result.txt`
* Pseudo-labeling - to process a list of images `data/new_train.txt` and save detection results in Yolo training format as a label `.txt` for each image (in this way you can increase the amount of training data) use: `darknet.exe detector test cfg/coco.data cfg/yolov3.cfg yolov3.weights -thresh 0.25 -dont_show -save_labels < data/new_train.txt` (the label format is shown after this list)
* To calculate anchors: `darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416`
* To check accuracy mAP@IoU=50: `darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights`
* To check accuracy mAP@IoU=75: `darknet.exe detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights -iou_thresh 0.75`
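For reference with the pseudo-labeling command above: each label `.txt` that Darknet reads, and that `-save_labels` writes, contains one line per object in the form `<object-class> <x_center> <y_center> <width> <height>`, where the box coordinates are normalized by the image width and height (values in (0, 1]). A hypothetical `dog.txt` for an image containing one object of class 1 and one of class 0 might look like:

```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
```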
##### For using a network video-camera mjpeg-stream with any Android smartphone

1. Download mjpeg-stream software for the Android phone: IP Webcam or Smart WebCam
    * Smart WebCam - preferably: https://play.google.com/store/apps/details?id=com.acontech.android.SmartWebCam2
    * IP Webcam: https://play.google.com/store/apps/details?id=com.pas.webcam
2. Connect your Android phone to the computer by WiFi (through a WiFi-router) or USB
3. Start Smart WebCam on your phone
4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:
    * Yolo v3 COCO-model: `darknet.exe detector demo data/coco.data yolov3.cfg yolov3.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`

### How to compile on Linux

Just do `make` in the darknet directory. Before `make`, you can set such options in the `Makefile`: [link](https://github.com/AlexeyAB/darknet/blob/9c1b9a2cf6363546c152251be578a21f3c3caec6/Makefile#L1)

* `GPU=1` to build with CUDA to accelerate by using the GPU (CUDA should be in `/usr/local/cuda`)
* `CUDNN=1` to build with cuDNN v5-v7 to accelerate training by using the GPU (cuDNN should be in `/usr/local/cudnn`)
* `CUDNN_HALF=1` to build for Tensor Cores (on Titan V / Tesla V100 / DGX-2 and later) - speeds up Detection 3x, Training 2x
* `OPENCV=1` to build with OpenCV 4.x/3.x/2.4.x - allows detecting on video files and video streams from network cameras or web-cams
* `DEBUG=1` to build a debug version of Yolo
* `OPENMP=1` to build with OpenMP support to accelerate Yolo by using a multi-core CPU
* `LIBSO=1` to build the library `darknet.so` and the runnable binary `uselib` that uses this library. You can try to run it like this: `LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4`. To see how to use this SO-library from your own code, look at the C++ example https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp (a minimal sketch also follows this section), or run it in such a way: `LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov3.cfg yolov3.weights test.mp4`
* `ZED_CAMERA=1` to build the library with ZED-3D-camera support (the ZED SDK should be installed), then run `LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov3.cfg yolov3.weights zed_camera`

To run Darknet on Linux, use the examples from this article but with `./darknet` instead of `darknet.exe`, i.e. use this command: `./darknet detector test ./cfg/coco.data ./cfg/yolov3.cfg ./yolov3.weights`
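As a companion to the `LIBSO=1` option above, here is a minimal C++ sketch of driving the SO-library through the `Detector` class declared in `yolo_v2_class.hpp` (the class used by `yolo_console_dll.cpp`). The file names, threshold and compile command are assumptions, e.g. something like `g++ demo.cpp -I include darknet.so -o demo` from the repository root; treat this as an illustration rather than the repository's canonical usage.

```cpp
// Minimal sketch: run detection through darknet.so (built with LIBSO=1).
// Assumes yolo_v2_class.hpp from this repository is on the include path;
// cfg/weights/image paths are placeholders.
#include "yolo_v2_class.hpp"   // declares Detector and bbox_t
#include <iostream>

int main() {
    Detector detector("cfg/yolov3.cfg", "yolov3.weights");

    // detect() takes an image file name and returns one bbox_t per found object.
    std::vector<bbox_t> boxes = detector.detect("dog.jpg", /*thresh=*/0.25f);

    for (const bbox_t& b : boxes)
        std::cout << "class id: " << b.obj_id
                  << "  prob: " << b.prob
                  << "  box: x=" << b.x << " y=" << b.y
                  << " w=" << b.w << " h=" << b.h << "\n";
    return 0;
}
```

The class indices map to line numbers in `data/coco.names` for the COCO models, which is how `yolo_console_dll.cpp` turns `obj_id` into a label.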
### How to compile on Windows (using `vcpkg`)

If you have already installed Visual Studio 2015/2017/2019, CUDA > 10.0, cuDNN > 7.0 and OpenCV > 2.4, then compile Darknet by using `C:\Program Files\CMake\bin\cmake-gui.exe` as on this [**IMAGE**](https://user-images.githubusercontent.com/409***85/55107892-6becf380-50e3-11e9-9a0a-556a943c429a.png): Configure -> Optional platform for generator (Set: x***) -> Finish -> Generate -> Open Project -> x*** & Release -> Build -> Build solution

Otherwise, follow these steps:

1. Install or update Visual Studio to at least version 2017, making sure it is fully patched (re-run the installer if you are not sure it automatically updated to the latest version). If you need to install from scratch, download VS from here: [Visual Studio Community](http://visualstudio.com)
2. Install CUDA and cuDNN
3. Install `git` and `cmake`. Make sure they are on the Path at least for the current account
4. Install [vcpkg](https://github.com/Microsoft/vcpkg) and try to install a test library to make sure everything is working, for example `vcpkg install opengl`
5. Define an environment variable, `VCPKG_ROOT`, pointing to the install path of `vcpkg`
6. Define another environment variable, with name `VCPKG_DEFAULT_TRIPLET` and value `x***-windows`
7. Open Powershell and type these commands:

```PowerShell
PS \>             cd $env:VCPKG_ROOT
PS Code\vcpkg>    .\vcpkg install pthreads opencv[ffmpeg]  #replace with opencv[cuda,ffmpeg] in case you want to use cuda-accelerated openCV
```

8. Open Powershell, go to the `darknet` folder and build with the command `.\build.ps1`. If you want to use Visual Studio, you will find two custom solutions created for you by CMake after the build, one in `build_win_debug` and the other in `build_win_release`, containing all the appropriate config flags for your system.

### How to compile on Windows (legacy way)

1. If you have **CUDA 10.0, cuDNN 7.4 and OpenCV 3.x** (with paths: `C:\opencv_3.0\opencv\build\include` & `C:\opencv_3.0\opencv\build\x***\vc14\lib`), then open `build\darknet\darknet.sln`, set **x***** and **Release** (https://hsto.org/webt/uh/fk/-e/uhfk-eb0q-hwd9hsxhrikbokd6u.jpeg) and do: Build -> Build darknet. Also add a Windows system variable `CUDNN` with the path to cuDNN: https://user-images.githubusercontent.com/409***85/532497***-019ef880-36ca-11e9-8ffe-d9cf47e7e462.jpg

    1.1. Find the files `opencv_world320.dll` and `opencv_ffmpeg320_***.dll` (or `opencv_world340.dll` and `opencv_ffmpeg340_***.dll`) in `C:\opencv_3.0\opencv\build\x***\vc14\bin` and put them next to `darknet.exe`

    1.2. Check that ther ... ...
