Data.gov main repository
This is the main repository for the Data.gov Platform. It is used primarily to track the team's work, but also to house datagov-wide code (GitHub Actions templates, egress, etc.).
If you're seeking documentation for cloud.gov environments, refer to the application repositories.
GitHub Actions and Templates
Several GitHub Actions have been refactored to use templates in this repository. You can find these templates here and examples of invoking them in Inventory and Catalog.
Darknet Object Detection Framework and YOLO
Darknet is an open-source neural network framework built with C, C++, and CUDA.
YOLO (You Only Look Once) is a cutting-edge, real-time object detection system that operates within the Darknet framework.
Read how Hank.ai is supporting the Darknet/YOLO community.
Discover the Darknet/YOLO website.
Explore the Darknet/YOLO FAQ.
Join the Darknet/YOLO Discord server.
Papers
1. YOLOv7 Paper (link to paper)
2. Scaled-YOLOv4 Paper (link to paper)
3. YOLOv4 Paper (link to paper)
4. YOLOv3 Paper (link to paper)
General Information
The Darknet/YOLO framework continues to be both faster and more accurate than other frameworks and versions of YOLO. It is completely free and open source, allowing you to incorporate Darknet/YOLO into your projects and products without any licensing restrictions or fees.
Darknet V3 ("Jazz"), released in October 2024, can process the LEGO dataset videos at up to 1000 FPS using an NVIDIA RTX 3090 GPU. This means each video frame is processed in less than 1 millisecond.
Join the Darknet/YOLO Discord server for help and discussion: https://discord.gg/zSq8rtW
The CPU version of Darknet/YOLO can run on various devices, including Raspberry Pi, cloud & colab servers, desktops, laptops, and high-end training rigs. The GPU version requires a CUDA-capable NVIDIA GPU.
Darknet/YOLO is known to work on Linux, Windows, and Mac. Building instructions are provided below.
Darknet Version
Version 0.x: The original Darknet tool, created by Joseph Redmon from 2013 to 2017.
Version 1.x: The popular Darknet repository maintained by Alexey Bochkovskiy from 2017-2021.
Version 2.x ("OAK"): The Darknet repository sponsored by Hank.ai and maintained by Stéphane Charette starting in 2023. This version introduced a version command.
Version 2.1: The last branch of the version 2 codebase, available in the v2 branch.
Version 3.x ("Jazz"): The latest phase of development, released in October 2024.
Key changes in Version 3.x:
1. Removed many old and unmaintained commands.
2. Significant performance optimizations for both training and inference.
3. Modified legacy C API; applications using the original Darknet API may require minor adjustments.
4. Introduced new Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
5. New apps and sample code in src-examples: https://darknetcv.ai/api/files.html
MSCOCO Pre-trained Weights
Several popular versions of YOLO are pre-trained on the MSCOCO dataset for convenience. This dataset contains 80 classes, listed in the cfg/coco.names text file.
Other simpler datasets and pre-trained weights, such as LEGO Gears and Rolodex, are available for testing Darknet/YOLO. Refer to the Darknet/YOLO FAQ for details.
You can download MSCOCO pre-trained weights from various locations, including this repository:
YOLOv2 (November 2016)
YOLOv2-tiny
YOLOv2-full
YOLOv3 (May 2018)
YOLOv3-tiny
YOLOv3-full
YOLOv4 (May 2020)
YOLOv4-tiny
YOLOv4-full
YOLOv7 (August 2022)
YOLOv7-tiny
YOLOv7-full
Example commands:
`
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
`
Remember that you are encouraged to train your own networks. MSCOCO is primarily used to confirm that everything is working correctly.
Building
The build methods available prior to 2023 have been combined into a single, unified solution. Darknet requires C++17 or newer and OpenCV, and uses CMake to generate the necessary project files.
You don't need to know C++ to build, install, or run Darknet/YOLO, just like you don't need to be a mechanic to drive a car.
Google Colab
The Google Colab instructions are identical to the Linux instructions. Several Jupyter notebooks demonstrating tasks like training a new network are available in the colab subdirectory. You can also follow the Linux instructions below.
Linux CMake Method
1. Optional: If you have a modern NVIDIA GPU, install CUDA or CUDA+cuDNN. Darknet will utilize your GPU for faster image and video processing.
2. If you install or upgrade CUDA or cuDNN at a later date, delete the CMakeCache.txt file from your Darknet build directory to force CMake to re-find all necessary files.
3. Then re-build Darknet.
4. Install CUDA (optional): Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
5. Install cuDNN (optional): Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#cudnn-package-manager-installation-overview to download and install cuDNN.
6. Verify CUDA installation: Ensure you can run nvcc and nvidia-smi. You might need to modify your PATH variable.
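If nvcc or nvidia-smi cannot be found after installing CUDA, the toolkit's bin directory is most likely missing from your PATH. A minimal sketch of the check and the fix, assuming the default installation prefix /usr/local/cuda (adjust the path for your system):
`bash
# confirm the CUDA compiler and driver utilities are visible
nvcc --version
nvidia-smi

# if nvcc is not found, add the CUDA toolkit to the current shell's PATH
# (assumes the default installation prefix /usr/local/cuda)
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
`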
7. Install dependencies and clone Darknet:
`bash
sudo apt-get install build-essential git libopencv-dev cmake
mkdir ~/src
cd ~/src
git clone https://github.com/hank-ai/darknet
cd darknet
mkdir build
cd build
`
8. Configure CMake:
`bash
cmake -DCMAKE_BUILD_TYPE=Release ..
`
9. Build Darknet:
`bash
make -j4 package
`
10. Install Darknet:
`bash
sudo dpkg -i darknet-VERSION.deb
`
11. Test installation:
`bash
darknet version
`
Additional Notes:
If you're using an older version of CMake, upgrade it before running the cmake command:
`bash
sudo apt-get purge cmake
sudo snap install cmake --classic
`
If you are using bash, restart your shell so the updated command path takes effect; fish picks up the change automatically.
To build an RPM installation file instead of a DEB file, modify the CM_package.cmake file.
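If you would rather not edit CM_package.cmake, an alternative (assuming the project uses standard CPack behaviour) is to override the package generator when invoking cpack directly from the build directory:
`bash
cd ~/src/darknet/build
# build an RPM package instead of the default DEB package
cpack -G RPM
`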
Once the installation package is built, use the package manager for your distribution to install it (e.g., sudo dpkg -i darknet-2.0.1-Linux.deb on Debian-based systems).
Windows CMake Method
1. Install necessary tools:
`bash
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
`
2. Modify Visual Studio installation:
- Open "Visual Studio Installer".
- Click "Modify".
- Select "Desktop Development With C++".
- Click "Modify" in the bottom-right corner and then "Yes".
3. Open Developer Command Prompt for VS 2022: Do not use PowerShell.
4. Install Microsoft VCPKG:
`bash
cd c:\
mkdir c:\src
cd c:\src
git clone https://github.com/microsoft/vcpkg
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
`
5. Optional: Install CUDA or CUDA+cuDNN (as in Linux instructions).
6. Delete CMakeCache.txt (as in Linux instructions).
7. Re-build Darknet (as in Linux instructions).
8. Install CUDA (optional): Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
9. Install cuDNN (optional): Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#download-windows to download and install cuDNN.
10. Verify CUDA installation: Ensure you can run nvcc.exe. You might need to modify your PATH variable.
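A quick way to confirm that the CUDA toolkit is reachable from the Developer Command Prompt is shown below; the installation path is the CUDA 12.2 default and should be adjusted to match your version:
`bash
where nvcc.exe
nvcc --version

rem if nvcc.exe is not found, add the CUDA bin directory to PATH for this session
set PATH=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin;%PATH%
`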
11. Unzip and copy cuDNN files: Once downloaded, unzip and copy the bin, include, and lib directories to C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/[version]/. You might need to overwrite some files.
12. Clone Darknet and build:
`bash
cd c:\src
git clone https://github.com/hank-ai/darknet.git
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
13. Copy CUDA DLLs (optional): If you encounter errors about missing CUDA or cuDNN DLLs (e.g., cublas64_12.dll), manually copy the CUDA DLLs to the Darknet.exe output directory:
`bash
copy "C:Program FilesNVIDIA GPU Computing ToolkitCUDAv12.2bin*.dll" src-cliRelease
`
(Make sure you replace the version number with the one you are using.)
14. Re-run the msbuild.exe command to generate the NSIS installation package:
`bash
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
15. Test installation:
`bash
C:\src\darknet\build\src-cli\Release\darknet.exe version
`
16. Run the NSIS installation wizard: This will install the CLI application, required DLLs, libraries, include files, and template configuration files.
Additional Notes:
The cmake command generates a Visual Studio solution file (Darknet.sln). You can use the Visual Studio GUI to build the project instead of msbuild.exe.
The NSIS installation package (e.g., darknet-VERSION.exe) can be found in the build directory.
Using Darknet
CLI
The following is not an exhaustive list of all Darknet commands.
darknet help: Display help information.
darknet version: Check the Darknet version.
Prediction Commands:
V2:
`bash
darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
`
V3:
`bash
darknet02_display_annotated_images cars.cfg image1.jpg
`
DarkHelp:
`bash
DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
`
Output Coordinates:
V2:
`bash
darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
`
V3:
`bash
darknet01_inference_images animals dog.jpg
`
DarkHelp:
`bash
DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
`
Video Processing:
V2:
`bash
darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
`
V3:
`bash
darknet03_display_videos animals.cfg test.mp4
`
DarkHelp:
`bash
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
Webcam Input:
V2:
`bash
darknet detector demo animals.data animals.cfg animals_best.weights -c 0
`
V3:
`bash
darknet08_display_webcam animals
`
Save Results to Video:
V2:
`bash
darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
`
V3:
`bash
darknet05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
`
DarkHelp:
`bash
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
JSON Output:
V2:
`bash
darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
`
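While the command above is running, the -mjpeg_port option serves an MJPEG stream that can typically be viewed in a web browser on the same machine (port 8090 matches the example above):
`bash
# open the live annotated MJPEG stream served on the port passed to -mjpeg_port
firefox http://localhost:8090
`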
V3:
`bash
darknet06_images_to_json animals image1.jpg
`
DarkHelp:
`bash
DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
`
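Assuming DarkHelp's --json output is written to stdout as in the example above, you can capture it in a file and inspect it with standard tools; a minimal sketch, assuming jq is installed:
`bash
DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg > results.json
# pretty-print the captured detection results
jq . results.json
`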
Specify GPU:
V2:
`bash
darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
`
Accuracy Calculation:
mAP:
`bash
darknet detector map driving.data driving.cfg driving_best.weights ...
`
mAP@IoU=75:
`bash
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
`
Recalculate Anchors:
DarkMark (recommended): Use DarkMark to run 100 consecutive calculations and select the best anchors.
Darknet:
`bash
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
`
Train a New Network:
Using DarkMark (recommended): Use DarkMark to create all necessary files for training.
Manual Setup:
1. Create a new folder for your project (e.g., ~/nn/animals/).
2. Copy a configuration file as a template (e.g., cfg/yolov4-tiny.cfg) into the folder.
3. Create an animals.names text file with your class names, one per line.
4. Create an animals.data text file with the following format:
`
classes = 4
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
`
5. Create a folder for images and annotations (e.g., ~/nn/animals/dataset).
6. Annotate your images and generate corresponding .txt annotation files using DarkMark or similar software.
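Each annotation .txt file follows the standard Darknet/YOLO format: one line per object, containing the class index followed by the bounding-box centre x, centre y, width, and height, all normalised to the 0-1 range relative to the image size. A hypothetical annotation file for an image containing two objects might look like this:
`
2 0.493750 0.427083 0.262500 0.537500
0 0.812500 0.362500 0.058333 0.145833
`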
7. Create animals_train.txt and animals_valid.txt files, listing the images to use for training and validation respectively, one image per line.
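A minimal sketch that shuffles the dataset and places roughly 80% of the images in the training list and the remainder in the validation list (the paths match the example above; the 80/20 split is only an illustration):
`bash
cd ~/nn/animals
find ~/nn/animals/dataset -name '*.jpg' | shuf > all_images.txt
count=$(wc -l < all_images.txt)
train_count=$((count * 80 / 100))
head -n "$train_count" all_images.txt > animals_train.txt
tail -n +"$((train_count + 1))" all_images.txt > animals_valid.txt
`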
8. Modify the configuration file:
- Set batch=64.
- Adjust subdivisions as needed.
- Set max_batches=8000 (or 2000 x number of classes).
- Set steps=6400,7200 (80% and 90% of max_batches).
- Set width and height to your network dimensions.
- Update classes with the number of classes in your .names file.
- Update filters in each [convolutional] section before the [yolo] sections: (number_of_classes + 5) * 3.
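The arithmetic in step 8 is easy to get wrong, so here is a small sketch that prints the values to plug into the .cfg for a given number of classes (4 classes, matching the animals example above):
`bash
classes=4
max_batches=$((classes * 2000))      # 2000 iterations per class
step1=$((max_batches * 80 / 100))    # 80% of max_batches
step2=$((max_batches * 90 / 100))    # 90% of max_batches
filters=$(((classes + 5) * 3))       # filters in each [convolutional] before a [yolo] section

echo "classes=$classes"
echo "max_batches=$max_batches"
echo "steps=$step1,$step2"
echo "filters=$filters"
`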
9. Start training:
`bash
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
`
Additional Training Tips:
Use the --verbose parameter for more detailed training information:
`bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
`
Other Tools and Links
DarkMark: Manage Darknet/YOLO projects, annotate images, verify annotations, and generate training files.
DarkHelp: A robust alternative CLI for Darknet, with support for image tiling, object tracking, and a C++ API suitable for commercial applications.
Darknet/YOLO FAQ: Comprehensive resource for answers to common questions.
Stéphane's YouTube channel: Tutorials and example videos on Darknet/YOLO.
Darknet/YOLO Discord server: Community forum for support and discussion.
Roadmap
Completed Tasks:
Replaced qsort() with std::sort() for improved efficiency.
Removed deprecated functions like check_mistakes, getchar(), and system().
Converted Darknet to use the C++ compiler (g++ on Linux, Visual Studio on Windows).
Fixed Windows build issues.
Restored Python support.
Built the Darknet library.
Re-enabled prediction labels (alphabet code).
Re-enabled CUDA/GPU, CUDNN, and CUDNN half support.
Removed hard-coded CUDA architecture.
Improved CUDA version information.
Re-enabled AVX support.
Removed old solutions and Makefile.
Made OpenCV a non-optional dependency.
Removed dependency on the old pthread library.
Removed STB dependency.
Re-wrote CMakeLists.txt to use the new CUDA detection.
Removed old alphabet code and deleted unnecessary images.
Enabled out-of-source builds.
Improved version number output.
Optimized training and inference performance.
Implemented pass-by-reference where applicable.
Cleaned up .hpp files.
Re-wrote darknet.h.
Used cv::Mat as a proper C++ object instead of casting to void*.
Fixed inconsistencies in internal image structure usage.
Fixed build issues for ARM-based Jetson devices (except for unsupported older models).
Fixed Python API in Version 3.
Short-Term Goals:
Replace printf() with std::cout.
Investigate support for older Zed cameras.
Improve and standardize command line parsing.
Mid-Term Goals:
Remove all char* code and replace with std::string.
Eliminate compiler warnings and ensure consistent code style.
Enhance usage of cv::Mat over the custom C image structure.
Replace old list functionality with std::vector or std::list.
Fix support for 1-channel greyscale images.
Add support for N-channel images (e.g., with additional depth or thermal channels).
Ongoing code cleanup.
Long-Term Goals:
Address CUDA/CUDNN issues across all GPUs.
Re-write CUDA+cuDNN code.
Explore support for non-NVIDIA GPUs.
Implement rotated bounding boxes or "angle" support.
Add keypoints/skeletons, heatmaps, and segmentation support.