Darknet Object Detection Framework and YOLO
(darknet and hank.ai logos)
Darknet is an open-source neural network framework primarily written in C and CUDA.
YOLO (You Only Look Once) is a real-time object detection system that operates within the Darknet framework.
Read how Hank.ai is helping the Darknet/YOLO community
Announcing Darknet V3 "Jazz"
See the Darknet/YOLO web site
Please read through the Darknet/YOLO FAQ
Join the Darknet/YOLO discord server
Papers
1. Paper YOLOv7
2. Paper Scaled-YOLOv4
3. Paper YOLOv4
4. Paper YOLOv3
General Information
The Darknet/YOLO framework continues to be both faster and more accurate than other frameworks and versions of YOLO.
This framework is completely free and open-source. You can integrate Darknet/YOLO into existing projects and products, including commercial ones, without a license or payment.
Darknet V3 ("Jazz"), released in October 2024, can accurately process the LEGO dataset videos at up to 1000 FPS when using an NVIDIA RTX 3090 GPU. This means each video frame is read, resized, and processed by Darknet/YOLO in 1 millisecond or less.
Join the Darknet/YOLO Discord server for help or discussions: https://discord.gg/zSq8rtW
The CPU version of Darknet/YOLO runs on various devices, including Raspberry Pi, cloud & colab servers, desktops, laptops, and high-end training rigs. The GPU version of Darknet/YOLO requires a CUDA-capable GPU from NVIDIA.
Darknet/YOLO is known to work on Linux, Windows, and Mac. See the building instructions below.
Darknet Version
The original Darknet tool written by Joseph Redmon in 2013-2017 did not have a version number. We consider this version 0.x.
The next popular Darknet repo maintained by Alexey Bochkovskiy between 2017-2021 also did not have a version number. We consider this version 1.x.
The Darknet repo sponsored by Hank.ai and maintained by Stéphane Charette starting in 2023 was the first one with a version command. From 2023 until late 2024, it returned version 2.x "OAK".
The goal was to break as little of the existing functionality as possible while getting familiar with the codebase.
Key changes in Darknet 2.x include:
Re-wrote the build steps for a unified CMake-based build on Windows and Linux.
Converted the codebase to use the C++ compiler.
Enhanced the chart.png visualization during training.
Bug fixes and performance optimizations, primarily focused on reducing training time.
The last branch of this codebase is version 2.1 in the v2 branch.
The next phase of development started in mid-2024 and was released in October 2024. The version command now returns 3.x "JAZZ".
If you need to run one of the commands that was removed in V3, you can always check out the previous v2 branch. Let us know if you encounter any missing commands.
Key changes in Darknet 3.x include:
Removal of many old and unmaintained commands.
Significant performance optimizations for both training and inference.
Modification of the legacy C API, requiring minor modifications for applications using the original Darknet API. See the updated API documentation here: https://darknetcv.ai/api/api.html
Introduction of a new Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
New applications and sample code in the src-examples directory: https://darknetcv.ai/api/files.html
MSCOCO Pre-trained Weights
For convenience, several popular versions of YOLO were pre-trained on the MSCOCO dataset. This dataset contains 80 classes, which are listed in the cfg/coco.names text file.
There are several other simpler datasets and pre-trained weights available for testing Darknet/YOLO, such as LEGO Gears and Rolodex. See the Darknet/YOLO FAQ for details.
The MSCOCO pre-trained weights can be downloaded from various locations, including this repository:
YOLOv2 (November 2016)
YOLOv2-tiny
YOLOv2-full
YOLOv3 (May 2018)
YOLOv3-tiny
YOLOv3-full
YOLOv4 (May 2020)
YOLOv4-tiny
YOLOv4-full
YOLOv7 (August 2022)
YOLOv7-tiny
YOLOv7-full
The MSCOCO pre-trained weights are provided for demonstration purposes only. The corresponding .cfg and .names files for MSCOCO are in the cfg directory. Example commands:
```bash
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
```
Remember that you are encouraged to train your own networks. MSCOCO is primarily used to confirm that everything is working correctly.
Building
The various build methods available in the past (pre-2023) have been merged into a single unified solution. Darknet requires C++17 or newer, OpenCV, and uses CMake to generate the necessary project files.
You do not need to know C++ to build, install, or run Darknet/YOLO, just like you don't need to be a mechanic to drive a car.
Beware if you are following old tutorials with more complicated build steps, or build steps that don't match what is in this readme. The new build steps as described below started in August 2023.
Software developers are encouraged to visit https://darknetcv.ai/ to get information on the internals of the Darknet/YOLO object detection framework.
Google Colab
The Google Colab instructions are the same as the Linux instructions. Several Jupyter notebooks are available demonstrating tasks like training a new network.
See the notebooks in the colab subdirectory, and/or follow the Linux instructions below.
Linux CMake Method
Darknet build tutorial for Linux
1. Install necessary packages:
```bash
sudo apt-get install build-essential git libopencv-dev cmake
```
2. Clone the Darknet repository:
```bash
mkdir ~/src
cd ~/src
git clone https://github.com/hank-ai/darknet
cd darknet
```
3. Create a build directory and run CMake:
```bash
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
```
4. Build Darknet:
```bash
make -j4
```
5. Optional: Install CUDA or CUDA+cuDNN
If you have a modern NVIDIA GPU, you can install either CUDA or CUDA+cuDNN. This will allow Darknet to use your GPU for faster image and video processing.
- Download and install CUDA from https://developer.nvidia.com/cuda-downloads.
- Download and install cuDNN from https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#cudnn-package-manager-installation-overview.
Important: If you install CUDA or CUDA+cuDNN after building Darknet, you must delete the CMakeCache.txt file in your build directory and re-run cmake to ensure that CMake can find the necessary files.
Note: Darknet can run without CUDA, but if you want to train a custom network, either CUDA or CUDA+cuDNN is required.
6. Package and install Darknet:
```bash
make package
sudo dpkg -i darknet-VERSION.deb
```
Important: If you are using an older version of CMake, you may need to upgrade it before running the cmake command. Upgrade CMake on Ubuntu using:
```bash
sudo apt-get purge cmake
sudo snap install cmake --classic
```
Advanced users:
- If you want to build an RPM installation file instead of a DEB file, edit the following two lines in CM_package.cmake before running make package:
```cmake
# SET (CPACK_GENERATOR "DEB")
SET (CPACK_GENERATOR "RPM")
```
- To install the installation package once it has finished building, use your distribution's package manager. For example, on Debian-based systems like Ubuntu:
```bash
sudo dpkg -i darknet-2.0.1-Linux.deb
```
- Installing the .deb package will copy the following files:
- /usr/bin/darknet: The Darknet executable.
- /usr/include/darknet.h: The Darknet API for C, C++, and Python developers.
- /usr/include/darknet_version.h: Contains version information for developers.
- /usr/lib/libdarknet.so: The library to link against for C, C++, and Python developers.
- /opt/darknet/cfg/...: Location of all .cfg templates.
- You are now done! Darknet has been built and installed into /usr/bin/. Run darknet version from the CLI to confirm the installation.
Windows CMake Method
These instructions assume a brand new installation of Windows 11 22H2.
1. Install required software:
```powershell
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
```
2. Modify Visual Studio installation:
- Open the "Windows Start" menu and run "Visual Studio Installer".
- Click on "Modify".
- Select "Desktop Development With C++".
- Click on "Modify" in the bottom-right corner and then "Yes".
3. Install Microsoft VCPKG:
- Open the "Windows Start" menu and select "Developer Command Prompt for VS 2022". Do not use PowerShell for these steps.
- Advanced users: Instead of running the Developer Command Prompt, you can use a normal command prompt or ssh into the device and manually run "C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\VsDevCmd.bat".
- Run the following commands:
```powershell
cd c:\
mkdir c:\src
cd c:\src
git clone https://github.com/microsoft/vcpkg
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
```
- Be patient during this last step as it can take a long time to run. It needs to download and build many things.
- Advanced users: Note that there are many other optional modules you may want to add when building OpenCV. Run .\vcpkg.exe search opencv to see the full list.
4. Optional: Install CUDA or CUDA+cuDNN
If you have a modern NVIDIA GPU, you can install either CUDA or CUDA+cuDNN. This will allow Darknet to use your GPU for faster image and video processing.
- Download and install CUDA from https://developer.nvidia.com/cuda-downloads.
- Download and install cuDNN from https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#download-windows.
Important: If you install CUDA or CUDA+cuDNN after building Darknet, you must delete the CMakeCache.txt file in your build directory and re-run cmake to ensure that CMake can find the necessary files.
Note: Darknet can run without CUDA, but if you want to train a custom network, either CUDA or CUDA+cuDNN is required.
5. Clone Darknet and build it:
```powershell
cd c:\src
git clone https://github.com/hank-ai/darknet.git
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
```
Important:
- CUDA Installation: CUDA must be installed after Visual Studio. If you upgrade Visual Studio, remember to re-install CUDA.
- Missing DLLs: If you encounter errors about missing CUDA or cuDNN DLLs (e.g., cublas64_12.dll), manually copy the CUDA .dll files into the same output directory as darknet.exe. For example:
```powershell
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\*.dll" src-cli\Release\
```
(This is an example; check the CUDA version you're running and adjust the path accordingly.)
- Re-run msbuild.exe: After copying the .dll files, re-run the last msbuild.exe command to generate the NSIS installation package:
```powershell
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
```
- Advanced users: Note that the output of the cmake command is a normal Visual Studio solution file (darknet.sln). If you regularly use the Visual Studio GUI instead of msbuild.exe, you can ignore the command-line steps and load the Darknet project in Visual Studio.
- You should now have a file you can run: C:\src\Darknet\build\src-cli\Release\darknet.exe. Run this to test: C:\src\Darknet\build\src-cli\Release\darknet.exe version.
6. Install Darknet:
- Run the NSIS installation wizard that was built in the last step. Look for the file darknet-VERSION.exe in the build directory. For example:
```
darknet-2.0.31-win64.exe
```
- The NSIS installation package will:
- Create a directory called Darknet, for example, C:\Program Files\Darknet.
- Install the CLI application (darknet.exe) and other sample apps.
- Install required third-party .dll files, such as those from OpenCV.
- Install necessary Darknet .dll, .lib, and .h files to use darknet.dll from another application.
- Install template .cfg files.
- You are now done! Once the installation wizard has finished, Darknet will have been installed into C:\Program Files\Darknet. Run this to test: "C:\Program Files\Darknet\bin\darknet.exe" version.
Using Darknet
CLI
The following is not a complete list of all commands supported by Darknet.
In addition to the Darknet CLI, also consider the DarkHelp project CLI, which offers an alternative CLI to Darknet/YOLO with advanced features not available directly in Darknet. You can use both the Darknet CLI and the DarkHelp CLI together.
For most of the commands below, you'll need the .weights file along with the corresponding .names and .cfg files. You can either train your own network (highly recommended!) or download a pre-trained network from the internet. Examples of pre-trained datasets include:
LEGO Gears (for finding objects in an image)
Rolodex (for finding text in an image)
MSCOCO (standard 80-class object detection)
Commands to run:
Get help:
```bash
darknet help
```
Check the version:
```bash
darknet version
```
Predict using an image:
V2:
```bash
darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
```
V3:
```bash
darknet_02_display_annotated_images cars.cfg image1.jpg
```
DarkHelp:
```bash
DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
```
Output coordinates:
V2:
```bash
darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
```
V3:
```bash
darknet_01_inference_images animals dog.jpg
```
DarkHelp:
```bash
DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
```
Working with videos:
V2:
```bash
darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
```
V3:
```bash
darknet_03_display_videos animals.cfg test.mp4
```
DarkHelp:
```bash
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
```
Reading from a webcam:
V2:
```bash
darknet detector demo animals.data animals.cfg animals_best.weights -c 0
```
V3:
```bash
darknet_08_display_webcam animals
```
Save results to a video:
V2:
```bash
darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
```
V3:
```bash
darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
```
DarkHelp:
```bash
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
```
JSON output:
V2:
```bash
darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
```
V3:
```bash
darknet_06_images_to_json animals image1.jpg
```
DarkHelp:
```bash
DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
```
Running on a specific GPU:
V2:
```bash
darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
```
Checking neural network accuracy:
```bash
darknet detector map driving.data driving.cfg driving_best.weights ...
```
Example output:
```
Id  Name           AvgPrecision      TP     FN     FP     TN  Accuracy  ErrorRate  Precision  Recall  Specificity  FalsePosRate
--  ----           ------------  ------  -----  -----  -----  --------  ---------  ---------  ------  -----------  ------------
 0  vehicle             91.2495   32648   3903   5826  65129    0.9095     0.0905     0.8486  0.8932       0.9179        0.0821
 1  motorcycle          80.4499    2936    513    569   5393    0.8850     0.1150     0.8377  0.8513       0.9046        0.0954
 2  bicycle             89.0912     570    124    104   3548    0.9475     0.0525     0.8457  0.8213       0.9715        0.0285
 3  person              76.7937    7072   1727   2574  27523    0.8894     0.1106     0.7332  0.8037       0.9145        0.0855
 4  many vehicles       64.3089    1068    509    733  11288    0.9087     0.0913     0.5930  0.6772       0.9390        0.0610
 5  green light         86.8118    1969    239    510   4116    0.8904     0.1096     0.7943  0.8918       0.8898        0.1102
 6  yellow light        82.0390     126     38     30   1239    0.9525     0.0475     0.8077  0.7683       0.9764        0.0236
 7  red light           94.1033    3449    217    451   4643    0.9237     0.0763     0.8844  0.9408       0.9115        0.0885
```
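For reference, every column in this table is derived from the per-class TP/FN/FP/TN counts using the standard confusion-matrix formulas. A quick sketch, reproducing the "vehicle" row above:

```python
# Derive the per-class metrics from the raw TP/FN/FP/TN counts,
# using the "vehicle" row of the example table.
tp, fn, fp, tn = 32648, 3903, 5826, 65129

total          = tp + fn + fp + tn
accuracy       = (tp + tn) / total   # fraction of all decisions that were correct
error_rate     = (fp + fn) / total   # 1 - accuracy
precision      = tp / (tp + fp)      # of the boxes predicted, how many were right
recall         = tp / (tp + fn)      # of the real objects, how many were found
specificity    = tn / (tn + fp)      # true-negative rate
false_pos_rate = fp / (fp + tn)      # 1 - specificity

print(round(accuracy, 4), round(precision, 4), round(recall, 4))
# 0.9095 0.8486 0.8932 -- matching the table
```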
Checking accuracy mAP@IoU=75:
```bash
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
```
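The -iou_thresh parameter raises the minimum intersection-over-union (IoU) a predicted box must have with a ground-truth box to count as a true positive. A minimal sketch of the IoU calculation for axis-aligned boxes (the coordinates are made up for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the intersection rectangle (zero if no overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping -> ~0.333
```

At the default mAP threshold (IoU=0.50) the second pair would not count as a match either, since 0.333 < 0.50; raising the threshold to 0.75 simply makes the matching stricter.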
Recalculating anchors:
It's best to recalculate anchors in DarkMark, as it runs the calculation 100 consecutive times and selects the best anchors from all the results. However, if you want to use the older method built into Darknet:
```bash
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
```
Training a new network:
```bash
darknet detector -map -dont_show train animals.data animals.cfg
```
(Also see the Training section below)
Training
Quick links to relevant sections of the Darknet/YOLO FAQ:
How should I setup my files and directories?
Which configuration file should I use?
What command should I use when training my own network?
The simplest way to annotate and train is with DarkMark, which creates all the necessary Darknet files. This is the recommended way to train a new neural network.
If you prefer to manually setup the various files to train a custom network:
1. Create a new folder:
- Choose a folder to store your files. For this example, we'll create a neural network to detect animals, so the directory will be ~/nn/animals/.
2. Copy a Darknet configuration file:
- Copy a Darknet configuration file as a template. For example, use cfg/yolov4-tiny.cfg. Place this in the folder you created. Now, you should have ~/nn/animals/animals.cfg.
3. Create an animals.names text file:
- Create an animals.names text file in the same folder as the configuration file. You now have ~/nn/animals/animals.names.
4. Edit the animals.names file:
- Edit the animals.names file using a text editor. List the classes you want to detect, with exactly one entry per line, no blank lines, and no comments. For this example, the .names file will contain four lines:
```
dog
cat
bird
horse
```
5. Create an animals.data text file:
- Create an animals.data text file in the same folder. For this example, the .data file will contain:
```
classes = 4
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
```
6. Create a folder for images and annotations:
- Create a folder to store your images and annotations. For example, this could be ~/nn/animals/dataset.
- Each image will need a corresponding .txt file describing the annotations for that image. The format of these .txt annotation files is very specific. You cannot create them by hand, as each annotation requires the exact coordinates. Use DarkMark or similar software to annotate your images. The YOLO annotation format is described in the Darknet/YOLO FAQ.
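The YOLO annotation format itself is simple: one line per object, containing the zero-based class ID followed by the box centre, width, and height, all normalized to 0..1 relative to the image size. A sketch of the conversion from pixel coordinates (the box numbers below are hypothetical):

```python
def to_yolo(class_id, x1, y1, x2, y2, img_w, img_h):
    """Convert a pixel-coordinate box (x1, y1, x2, y2) to a YOLO
    annotation line: class ID, then centre x/y and width/height,
    all normalized to the 0..1 range."""
    xc = (x1 + x2) / 2.0 / img_w
    yc = (y1 + y2) / 2.0 / img_h
    w  = (x2 - x1) / img_w
    h  = (y2 - y1) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x50 pixel box for class 0 ("dog"), centred in a 640x480 image:
print(to_yolo(0, 270, 215, 370, 265, 640, 480))
# 0 0.500000 0.500000 0.156250 0.104167
```

In practice you should still use DarkMark or similar software rather than computing these by hand, since every object in every image needs an accurate box.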
7. Create "train" and "valid" text files:
- Create the "train" and "valid" text files named in the .data file.
- These two text files should list all the images Darknet will use for training and validation (for calculating mAP%).
- Each line should contain exactly one image path and filename. You can use relative or absolute paths.
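As a sketch, the two list files can be generated with a few lines of Python; the filenames match the .data example above, while the image extensions and the 80/20 train/validation split are assumptions for this example:

```python
import random
from pathlib import Path

def write_image_lists(dataset_dir, out_dir, valid_fraction=0.2, seed=42):
    """Write animals_train.txt and animals_valid.txt, one absolute
    image path per line, from all images found in dataset_dir."""
    images = sorted(str(p.resolve())
                    for ext in ("*.jpg", "*.png")
                    for p in Path(dataset_dir).glob(ext))
    random.Random(seed).shuffle(images)  # reproducible shuffle before splitting
    n_valid = int(len(images) * valid_fraction)
    Path(out_dir, "animals_valid.txt").write_text("\n".join(images[:n_valid]) + "\n")
    Path(out_dir, "animals_train.txt").write_text("\n".join(images[n_valid:]) + "\n")
    return len(images) - n_valid, n_valid
```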
8. Modify the .cfg file:
- Use a text editor to modify your .cfg file:
- Make sure batch=64.
- Subdivisions: Depending on the network dimensions and GPU memory, you may need to adjust the subdivisions. Start with subdivisions=1 and refer to the Darknet/YOLO FAQ if it doesn't work.
- max_batches: Set max_batches to 2000 times the number of classes as a good starting value. For this example, we have 4 animals, so max_batches=8000.
- Steps: Set steps to 80% and 90% of max_batches. In this case, we'd use steps=6400,7200.
- Width and Height: These are the network dimensions. The Darknet/YOLO FAQ explains how to calculate the best size.
- Classes: Search for all instances of classes=... and update it with the number of classes in your .names file. In this example, we'd use classes=4.
- Filters: Search for all instances of filters=... in the [convolutional] sections before each [yolo] section. The value to use is (number_of_classes + 5) * 3. For this example, (4 + 5) * 3 = 27, so we'd use filters=27 on the appropriate lines.
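The .cfg arithmetic from the steps above can be collected into a small helper; this is a sketch, and the values are starting points rather than hard rules:

```python
def cfg_values(num_classes):
    """Suggested starting values for a Darknet .cfg file:
    max_batches, the two steps= values, and filters= for the
    [convolutional] layers before each [yolo] section."""
    max_batches = num_classes * 2000
    steps = (int(max_batches * 0.8), int(max_batches * 0.9))
    filters = (num_classes + 5) * 3
    return max_batches, steps, filters

print(cfg_values(4))  # (8000, (6400, 7200), 27)
```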
9. Start training:
- Navigate to the ~/nn/animals/ directory:
```bash
cd ~/nn/animals/
```
- Run the following command to start training:
```bash
darknet detector -map -dont_show train animals.data animals.cfg
```
- Be patient. The best weights will be saved as animals_best.weights. You can track training progress by observing the chart.png file. Refer to the Darknet/YOLO FAQ for additional parameters you might want to use during training.
- If you want more detailed training information, add the --verbose parameter:
```bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
```
Other Tools and Links
DarkMark: For managing your Darknet/YOLO projects, annotating images, verifying annotations, and generating files for training with Darknet.
DarkHelp: For a robust alternative CLI to Darknet, using image tiling, object tracking in videos, and a robust C++ API suitable for commercial applications.
Darknet/YOLO FAQ: A comprehensive resource for answering your questions.
Stéphane's YouTube channel: Numerous tutorials and example videos.
Darknet/YOLO Discord server: Join the community for support and discussions.
Roadmap
Last updated 2024-10-30:
Completed:
Swapped out qsort() for std::sort() during training (some other obscure ones remain).
Removed check_mistakes, getchar(), and system().
Converted Darknet to use the C++ compiler (g++ on Linux, VisualStudio on Windows).
Fixed the Windows build.
Fixed Python support.
Built the darknet library.
Re-enabled labels on predictions ("alphabet" code).
Re-enabled CUDA/GPU code.
Re-enabled CUDNN.
Re-enabled CUDNN half.
Removed hard-coded CUDA architecture.
Improved CUDA version information.
Re-enabled AVX.
Removed old solutions and Makefile.
Made OpenCV non-optional.
Removed dependency on the old pthread library.
Removed STB.
Re-wrote CMakeLists.txt to use the new CUDA detection.
Removed old "alphabet" code and deleted the 700+ images in data/labels.
Enabled out-of-source builds.
Improved version number output.
Implemented performance optimizations related to training (ongoing).
Implemented performance optimizations related to inference (ongoing).
Used pass-by-reference where possible.
Cleaned up .hpp files.
Re-wrote darknet.h.
Avoided casting cv::Mat to void*, using it as a proper C++ object.
Fixed or improved consistency in the use of internal image structure.
Fixed builds for ARM-based Jetson devices. (Original Jetson devices are unlikely to be fixed since they are no longer supported by NVIDIA - no C++17 compiler. New Jetson Orin devices are working).
Fixed the Python API in V3.
Improved support for Python. (Are any Python developers willing to help with this?)
Short-term goals
Replace printf() with std::cout (in progress).
Investigate support for old ZED cameras.
Improve and make command-line parsing more consistent (in progress).
Mid-term goals
Remove all char* code and replace with std::string.
Avoid hiding warnings and clean up compiler warnings (in progress).
Improve the use of cv::Mat instead of the custom image structure in C (in progress).
Replace old list functionality with std::vector or std::list.
Fix support for 1-channel grayscale images.
Add support for N-channel images where N > 3 (e.g., images with additional depth or thermal channels).
Continue ongoing code cleanup (in progress).
Long-term goals
Fix CUDA/CUDNN issues with all GPUs.
Re-write CUDA+cuDNN code.
Consider adding support for non-NVIDIA GPUs.
Add rotated bounding boxes or "angle" support.
Implement keypoints/skeletons.
Add heatmaps (in progress).
Implement segmentation.