Ansible
Ansible is a radically simple IT automation system. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. Ansible makes it easy to apply complex changes, such as zero-downtime rolling updates with load balancers. Learn more on the Ansible website.
Design Principles
1. Use Ansible
You can install a released version of Ansible with pip or a package manager. See our installation guide for details on installing Ansible on a variety of platforms.
2. Power Users and Developers
Power users and developers can run the devel branch, which has the latest features and fixes. Although it is reasonably stable, you are more likely to encounter breaking changes when running this branch. We recommend getting involved in the Ansible community if you want to work from devel.
Communication
Join the Ansible forum to ask questions, get help, and interact with the community. For other ways to get in touch, see our guide on Connecting with the Ansible Community.
Contribute to Ansible
Coding Guidelines
Our Coding Guidelines are documented in the Developer Guide. We encourage you to review these sections in particular:
1. Branch Information
2. Roadmap
Based on team and community feedback, an initial roadmap is published for each major or minor version (for example, 2.7, 2.8). The Ansible Roadmap page describes our plans and how to influence the direction of the project.
Authors
Ansible was originally conceived by Michael DeHaan and has benefited from the contributions of over 5000 users (and counting). Thank you to all who have contributed!
Ansible is proudly sponsored by Red Hat, Inc.
License
GNU General Public License v3.0 or later. Refer to COPYING for the complete license text.
Darknet Object Detection Framework and YOLO
!darknet and hank.ai logos
Darknet is an open-source neural network framework developed in C, C++, and CUDA.
YOLO (You Only Look Once) is a state-of-the-art, real-time object detection system that runs within the Darknet framework.
Discover how Hank.ai is contributing to the Darknet/YOLO community: https://darknetcv.ai/
Explore the official Darknet/YOLO website: https://pjreddie.com/darknet/
Consult the comprehensive Darknet/YOLO FAQ: https://pjreddie.com/darknet/yolo/
Join the active Darknet/YOLO Discord server: https://discord.gg/zSq8rtW
Papers
1. YOLOv7 Paper: https://arxiv.org/abs/2207.02696
2. Scaled-YOLOv4 Paper: https://arxiv.org/abs/2011.08036
3. YOLOv4 Paper: https://arxiv.org/abs/2004.10934
4. YOLOv3 Paper: https://arxiv.org/abs/1804.02767
General Information
The Darknet/YOLO framework continues to outperform other frameworks and versions of YOLO in both speed and accuracy.
Its complete freedom and open-source nature allow you to seamlessly integrate Darknet/YOLO into existing projects and products, including commercial ones, without licensing restrictions or fees.
Darknet V3 ("Jazz"), released in October 2024, can process the LEGO dataset videos at up to 1000 FPS on an NVIDIA RTX 3090 GPU, meaning each video frame is processed in 1 millisecond or less.
For any assistance or discussions related to Darknet/YOLO, join the dedicated Discord server: https://discord.gg/zSq8rtW.
The CPU version of Darknet/YOLO is adaptable to various devices, including Raspberry Pi, cloud & colab servers, desktops, laptops, and high-end training rigs. The GPU version of Darknet/YOLO necessitates a CUDA-capable GPU from NVIDIA.
Darknet/YOLO has been validated to work seamlessly on Linux, Windows, and Mac operating systems. Refer to the building instructions outlined below.
Darknet Version
The original Darknet tool, developed by Joseph Redmon between 2013 and 2017, lacked a version number. We consider this to be version 0.x.
The subsequent popular Darknet repository, maintained by Alexey Bochkovskiy from 2017 to 2021, also lacked a version number. We categorize this as version 1.x.
The Darknet repository, sponsored by Hank.ai and managed by Stéphane Charette starting in 2023, introduced a version command for the first time. From 2023 until late 2024, it returned version 2.x "OAK".
The development goals centered on minimizing disruption to existing functionality while familiarizing ourselves with the codebase.
Key improvements in version 2.x:
1. Unified Build Process: Re-wrote the build steps for a unified approach using CMake on both Windows and Linux.
2. C++ Transition: Converted the codebase to leverage the C++ compiler.
3. Enhanced Training Visualization: Improved the chart.png visualization during training.
4. Performance Optimizations: Addressed bugs and implemented performance-related optimizations, primarily focused on reducing training time.
Version 2.1 represents the final branch of this codebase, available in the v2 branch.
The next stage of development commenced in mid-2024 and culminated in the October 2024 release of version 3.x "JAZZ".
You retain the option to check out the previous v2 branch if you require access to specific commands from that version. If you encounter any missing commands, please notify us for investigation and potential re-integration.
Significant changes in version 3.x:
1. Command Pruning: Removed numerous outdated and unmaintained commands.
2. Enhanced Performance: Implemented extensive performance optimizations, both during training and inference.
3. API Modifications: The legacy C API underwent modifications; applications relying on the original Darknet API will necessitate minor adjustments. Refer to the updated documentation for guidance: https://darknetcv.ai/api/api.html
4. New API Introduction: Introduced a new Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
5. Expanded Sample Code: Added new applications and sample code within the src-examples directory: https://darknetcv.ai/api/files.html
MSCOCO Pre-trained Weights
For user convenience, several popular YOLO versions were pre-trained on the MSCOCO dataset. This dataset encompasses 80 classes, which can be found in the cfg/coco.names text file.
Additional simpler datasets and pre-trained weights are readily available for testing Darknet/YOLO, including LEGO Gears and Rolodex. For detailed information, consult the Darknet/YOLO FAQ.
You can obtain the MSCOCO pre-trained weights from various locations, including this repository:
YOLOv2 (November 2016):
yolov2-tiny
yolov2-full
YOLOv3 (May 2018):
yolov3-tiny
yolov3-full
YOLOv4 (May 2020):
yolov4-tiny
yolov4-full
YOLOv7 (August 2022):
yolov7-tiny
yolov7-full
The MSCOCO pre-trained weights are provided for demonstration purposes. The corresponding .cfg and .names files for MSCOCO are located in the cfg directory.
Example commands:
```bash
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
```
It's highly recommended to train your own networks. MSCOCO is typically used to ensure the framework's functionality is operating as expected.
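For reference, the class list mentioned above (cfg/coco.names) is a plain-text file with one class label per line, and the class IDs reported by the network are simply the line indices (0–79 for MSCOCO). A minimal Python sketch of reading such a file — the file name below is illustrative, not part of the repository:

```python
# Minimal sketch: read a Darknet/YOLO .names file into an ordered list of
# class labels. The class ID of each label is its line index.

def load_names(path):
    """Return the class labels, one per non-empty line, in file order."""
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

if __name__ == "__main__":
    # Create a tiny example .names file, then read it back.
    with open("example.names", "w", encoding="utf-8") as f:
        f.write("dog\ncat\nbird\nhorse\n")
    print(load_names("example.names"))  # class ID 0 is "dog", 3 is "horse"
```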
Building
Previous build methods (pre-2023) have been consolidated into a unified solution. Darknet requires C++17 or newer, OpenCV, and utilizes CMake to generate the necessary project files.
Building Darknet/YOLO does not require C++ expertise; analogous to driving a car, you don't need to be a mechanic to utilize it.
Software developers are encouraged to visit https://darknetcv.ai/ for insights into the inner workings of the Darknet/YOLO object detection framework.
Google Colab
The Google Colab instructions mirror the Linux instructions. Several Jupyter notebooks demonstrate specific tasks, such as training a new network.
Explore the notebooks within the colab subdirectory or follow the Linux instructions provided below.
Linux CMake Method
1. Essential Software:
Install the prerequisites: sudo apt-get install build-essential git libopencv-dev cmake
2. Repository Cloning:
Create a source directory: mkdir ~/src && cd ~/src
Clone the repository: git clone https://github.com/hank-ai/darknet && cd darknet
3. Build Directory:
Create a build directory: mkdir build && cd build
4. CMake Configuration:
Configure CMake: cmake -DCMAKE_BUILD_TYPE=Release ..
5. Build Darknet:
Build: make -j4
6. Package Installation:
Create the package: make -j4 package
Install the package: sudo dpkg -i darknet-VERSION.deb
Optional: CUDA or CUDA+cuDNN Installation
For accelerated image and video processing, you can optionally install either CUDA or CUDA+cuDNN.
CUDA Installation:
Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
cuDNN Installation:
Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#cudnn-package-manager-installation-overview to download and install cuDNN.
Post-CUDA Installation:
Ensure you can execute nvcc and nvidia-smi. You might need to modify your PATH variable.
Upgrading CUDA or CUDA+cuDNN:
Delete the CMakeCache.txt file from your Darknet build directory to force CMake to re-find the necessary files.
Re-build Darknet.
CMake Version Upgrade (if necessary):
Purge existing CMake: sudo apt-get purge cmake
Install the latest CMake: sudo snap install cmake --classic
Restart your shell (bash) or ensure the new path is recognized (fish).
Advanced Users:
To build an RPM installation file instead of a DEB file, edit the relevant line in CM_package.cmake before running make -j4 package. Change this line:

```cmake
SET (CPACK_GENERATOR "DEB")
```

to:

```cmake
SET (CPACK_GENERATOR "RPM")
```

Distributions such as CentOS and openSUSE use RPM packages and require the same change.
Once the installation package is built, install it using your distribution's package manager. For example, on Debian-based systems like Ubuntu:
```bash
sudo dpkg -i darknet-2.0.1-Linux.deb
```
Post-Installation:
The installed files include:
- /usr/bin/darknet: Darknet executable. Run darknet version to confirm installation.
- /usr/include/darknet.h: Darknet API for C, C++, and Python developers.
- /usr/include/darknet_version.h: Version information for developers.
- /usr/lib/libdarknet.so: Library for linking in C, C++, and Python development.
- /opt/darknet/cfg/...: Location of all .cfg templates.
Darknet is now successfully built and installed in /usr/bin/. To verify, run darknet version.
Windows CMake Method
1. Prerequisites:
Install the following using Winget:
- Git: winget install Git.Git
- CMake: winget install Kitware.CMake
- NSIS: winget install nsis.nsis
- Visual Studio 2022 Community: winget install Microsoft.VisualStudio.2022.Community
2. Visual Studio Configuration:
Open "Visual Studio Installer" from the Windows Start menu.
Click "Modify".
Select "Desktop Development With C++".
Click "Modify" in the bottom-right corner and then "Yes".
3. Developer Command Prompt:
Open the "Windows Start" menu and select "Developer Command Prompt for VS 2022". Do not use PowerShell for these steps.
4. Microsoft VCPKG Installation (for OpenCV):
Navigate to the root of the C: drive: cd c:\
Create a src directory: mkdir c:\src
Clone VCPKG: cd c:\src then git clone https://github.com/microsoft/vcpkg
Bootstrap VCPKG: cd vcpkg then bootstrap-vcpkg.bat
Integrate VCPKG: .\vcpkg.exe integrate install
Install OpenCV (including dependencies): .\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
5. Optional: CUDA or CUDA+cuDNN Installation (Windows)
For accelerated image and video processing, you can optionally install either CUDA or CUDA+cuDNN.
CUDA Installation:
Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
cuDNN Installation:
Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#download-windows to download and install cuDNN.
Post-CUDA Installation:
Ensure you can execute nvcc.exe. You might need to modify your PATH variable.
Unzip the downloaded cuDNN and copy the bin, include, and lib directories into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/[version]/. You may need to overwrite some files.
Upgrading CUDA or CUDA+cuDNN:
CUDA must be installed after Visual Studio. Re-install CUDA if you upgrade Visual Studio.
6. Cloning and Building Darknet:
Navigate to your source directory: cd c:\src
Clone the repository: git clone https://github.com/hank-ai/darknet.git
Create a build directory: cd darknet then mkdir build
Configure CMake with VCPKG: cd build then cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
Build the solution: msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
Generate the NSIS installation package: msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
7. Handling Missing CUDA/cuDNN DLLs:
If you encounter errors about missing CUDA or cuDNN DLLs (e.g., cublas64_12.dll), manually copy the relevant CUDA .dll files to the same output directory as Darknet.exe. For example:
```bash
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\*.dll" src-cli\Release\
```
Adjust the version number in the command to match your installation.
Re-run the msbuild.exe command to generate the NSIS installation package.
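If you repeat the copy step after every CUDA upgrade, it can be scripted. A hedged Python sketch — the paths passed in are illustrative and must be adjusted to your CUDA version and build directory:

```python
# Hedged sketch: replicate the manual "copy CUDA DLLs next to darknet.exe"
# step, so the CUDA version number only needs to be set in one place.
import glob
import os
import shutil

def copy_dlls(cuda_bin, build_dir):
    """Copy every *.dll from the CUDA bin directory into the build output.

    Returns the list of copied file names.
    """
    copied = []
    for dll in glob.glob(os.path.join(cuda_bin, "*.dll")):
        shutil.copy2(dll, build_dir)
        copied.append(os.path.basename(dll))
    return copied
```

For example, `copy_dlls(r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin", r"src-cli\Release")` mirrors the copy command shown above.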
Advanced Users:
The cmake command generates a Visual Studio solution file (Darknet.sln). If you prefer the Visual Studio GUI, you can load the Darknet project in Visual Studio instead of using command-line tools.
Post-Build Verification:
Verify that C:\src\darknet\build\src-cli\Release\darknet.exe exists. Run C:\src\darknet\build\src-cli\Release\darknet.exe version to confirm.
Installation:
Run the NSIS installation wizard (e.g., darknet-VERSION.exe in the build directory) to install Darknet, libraries, include files, and necessary DLLs.
Post-Installation Verification:
Verify that C:/Program Files/darknet/bin/darknet.exe exists. Run C:/Program Files/darknet/bin/darknet.exe version to confirm.
Using Darknet
CLI
This list does not encompass all Darknet commands.
In addition to the Darknet CLI, consider using the DarkHelp project CLI, which offers an alternative and more advanced interface. Both CLIs can be used together.
For most commands, you'll need a .weights file along with the corresponding .names and .cfg files. You can train your own network or download pre-trained networks.
Pre-trained datasets:
LEGO Gears: Object detection in images.
Rolodex: Text detection in images.
MSCOCO: Standard 80-class object detection.
Common CLI Commands:
1. Help: darknet help
2. Version: darknet version
3. Image Prediction (V2):
darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
4. Image Prediction (V3):
darknet_02_display_annotated_images cars.cfg image1.jpg
5. Image Prediction (DarkHelp):
DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
6. Output Coordinates (V2):
darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
7. Output Coordinates (V3):
darknet_01_inference_images animals dog.jpg
8. Output Coordinates (DarkHelp):
DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
9. Video Processing (V2):
darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
10. Video Processing (V3):
darknet_03_display_videos animals.cfg test.mp4
11. Video Processing (DarkHelp):
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
12. Webcam Processing (V2):
darknet detector demo animals.data animals.cfg animals_best.weights -c 0
13. Webcam Processing (V3):
darknet_08_display_webcam animals
14. Video Saving (V2):
darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
15. Video Saving (V3):
darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
16. Video Saving (DarkHelp):
DarkHelp animals.cfg animals.names animals_best.weights test.mp4
17. JSON Output (V2):
darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
18. JSON Output (V3):
darknet_06_images_to_json animals image1.jpg
19. JSON Output (DarkHelp):
DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
20. GPU Selection (V2):
darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
21. Network Accuracy Check:
darknet detector map driving.data driving.cfg driving_best.weights ...
22. Accuracy Check (mAP@IoU=75):
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
23. Anchor Recalculation (DarkMark recommended):
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
24. Training a New Network:
darknet detector -map -dont_show train animals.data animals.cfg (See training section below)
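These invocations can also be driven from a script. A hedged Python sketch that assembles a darknet detector argument list as a plain list before handing it to subprocess — the file names are illustrative, and darknet must be built and on your PATH for the commented-out run to work:

```python
# Hedged sketch: build Darknet CLI invocations from Python. The argv list
# is assembled by a pure helper so it can be inspected or logged before
# being passed to subprocess.run().
import subprocess

def detector_cmd(action, data, cfg, weights, *extra):
    """Assemble a `darknet detector ...` argument list."""
    return ["darknet", "detector", action, data, cfg, weights, *extra]

if __name__ == "__main__":
    cmd = detector_cmd("test", "animals.data", "animals.cfg",
                       "animals_best.weights", "dog.jpg")
    print(" ".join(cmd))
    # Requires a built/installed darknet on PATH:
    # subprocess.run(cmd, check=True)
```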
Training
Quick Links to Relevant Sections of the Darknet/YOLO FAQ:
Setup: https://pjreddie.com/darknet/yolo/
Configuration File Selection: https://pjreddie.com/darknet/yolo/
Training Command: https://pjreddie.com/darknet/yolo/
The most streamlined approach to annotation and training involves utilizing DarkMark. This is the recommended method for training a new neural network.
Manual Training Setup:
1. Create a Project Folder: For example, ~/nn/animals/.
2. Copy a Configuration Template:
Choose a configuration file (e.g., cfg/yolov4-tiny.cfg).
Place it in the project folder.
Now you have ~/nn/animals/animals.cfg.
3. Create the animals.names File:
Create a text file named animals.names in the project folder.
Edit this file with your desired classes.
Ensure each class is on a separate line, with no blank lines or comments.
For example:
```
dog
cat
bird
horse
```
4. Create the animals.data File:
Create a text file named animals.data in the project folder.
The content should resemble:
```
classes = 4
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
```
5. Create the dataset Folder:
Create a folder for storing your images and annotations. For instance, ~/nn/animals/dataset.
Each image requires a corresponding .txt file that defines its annotations.
You cannot manually create these .txt files; DarkMark or similar tools are necessary to annotate your images and generate these files.
Refer to the Darknet/YOLO FAQ for the YOLO annotation format.
6. Create the animals_train.txt and animals_valid.txt Files:
Create these text files as specified in the animals.data file.
These files list all images to be used for training and validation, respectively.
One image per line, using either relative or absolute paths.
7. Modify the Configuration File (animals.cfg):
Batch: Set batch=64.
Subdivisions: Start with subdivisions=1. Adjust as needed based on network dimensions and GPU memory.
Max Batches: A good starting value is max_batches=2000 * number_of_classes. In this example, max_batches=8000 (4 classes × 2000).
Steps: Set to 80% and 90% of max_batches. In this example, steps=6400,7200.
Width and Height: Define the network dimensions. Refer to the Darknet/YOLO FAQ for guidance.
Classes: Set classes=... to match the number of classes in your .names file (4 in this example).
Filters: In each convolutional layer prior to a yolo layer, set filters=(number_of_classes + 5) * 3. In this example, filters=(4 + 5) * 3 = 27.
8. Start Training:
Navigate to your project folder: cd ~/nn/animals/
Start training: darknet detector -map -dont_show train animals.data animals.cfg
Be patient. The best weights will be saved as animals_best.weights.
Observe the progress of training by viewing the chart.png file.
Consult the Darknet/YOLO FAQ for additional training parameters.
For more detailed training output, add the --verbose flag:
```bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
```
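The arithmetic from step 7 above can be sketched as a small helper. This is only a hedged illustration of this README's rules of thumb, not part of Darknet itself; consult the Darknet/YOLO FAQ before relying on these values:

```python
# Hedged sketch: derive max_batches, steps, and the pre-yolo-layer filters
# count from the number of classes, per the rules of thumb in this README.

def cfg_values(num_classes):
    max_batches = 2000 * num_classes              # 2000 per class
    steps = (int(max_batches * 0.8),              # 80% of max_batches
             int(max_batches * 0.9))              # 90% of max_batches
    filters = (num_classes + 5) * 3               # each conv layer before a yolo layer
    return {"classes": num_classes, "max_batches": max_batches,
            "steps": steps, "filters": filters}

if __name__ == "__main__":
    print(cfg_values(4))
    # 4 classes -> max_batches=8000, steps=(6400, 7200), filters=27
```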
Other Tools and Links
DarkMark: For managing Darknet/YOLO projects, annotating images, verifying annotations, and generating training files. https://darknetcv.ai/darkmark/
DarkHelp: For a robust alternative CLI to Darknet, image tiling, object tracking, and a commercial-friendly C++ API. https://darknetcv.ai/darkhelp/
Darknet/YOLO FAQ: For answers to common questions. https://pjreddie.com/darknet/yolo/
Stéphane's YouTube Channel: For tutorials and example videos. https://www.youtube.com/@stephane-charette
Darknet/YOLO Discord Server: For support questions and community discussions. https://discord.gg/zSq8rtW
Roadmap
Last Updated: 2024-10-30
Completed
Replaced qsort() with std::sort() during training.
Removed check_mistakes, getchar(), and system().
Migrated Darknet to the C++ compiler (g++ on Linux, VisualStudio on Windows).
Resolved Windows build issues.
Re-enabled Python support.
Built the Darknet library.
Re-enabled prediction labels ("alphabet" code).
Re-enabled CUDA/GPU code.
Re-enabled CUDNN.
Re-enabled CUDNN half.
Removed hard-coded CUDA architecture.
Improved CUDA version information.
Re-enabled AVX.
Removed old solutions and Makefile.
Made OpenCV a non-optional dependency.
Removed dependency on the old pthread library.
Removed STB.
Rewrote CMakeLists.txt to use the new CUDA detection.
Removed old "alphabet" code and deleted 700+ images in data/labels.
Enabled out-of-source building.
Improved version number output.
Implemented performance optimizations related to training (ongoing).
Implemented performance optimizations related to inference (ongoing).
Employed pass-by-reference where applicable.
Cleaned up .hpp files.
Rewrote darknet.h.
Avoided casting cv::Mat to void* and used it as a proper C++ object.
Addressed inconsistencies in internal image structure usage.
Fixed build for ARM-based Jetson devices.
New Jetson Orin devices are functional.
Resolved Python API issues in V3.
Short-Term Goals
Swap out printf() with std::cout (in progress).
Investigate old zed camera support.
Improve command-line parsing for consistency (in progress).
Mid-Term Goals
Remove all char* code and replace with std::string.
Eliminate hidden warnings and address compiler warnings (in progress).
Enhance the use of cv::Mat instead of the custom C image structure (in progress).
Replace old list functionality with std::vector or std::list.
Fix support for 1-channel grayscale images.
Add support for N-channel images where N > 3 (e.g., images with depth or thermal channels).
Continue ongoing code cleanup (in progress).
Long-Term Goals
Address CUDA/CUDNN issues across all GPUs.
Rewrite CUDA+cuDNN code.
Explore support for non-NVIDIA GPUs.
Implement rotated bounding boxes or angle support.
Introduce keypoints/skeletons.
Add support for heatmaps (in progress).
Incorporate segmentation.