HmacManager
Summary
HmacManager is an HMAC authentication library for ASP.NET Core applications, providing seamless integration and strong security for your APIs.
Features
HMAC Authentication: HmacManager adds a secure HMAC authentication layer to your ASP.NET Core APIs.
Easy configuration: Simple configuration options let you integrate HMAC authentication quickly.
Customization options: Configuration hooks let you tailor the behavior to your needs.
Enhanced security: HMAC authentication strengthens your API by verifying each request with an HMAC hash.
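To illustrate the mechanism itself (this is generic HMAC, not HmacManager's API): the client computes a keyed hash over the request content and sends it along; the server recomputes the hash with the shared key and rejects any mismatch. A minimal sketch using the openssl CLI, with an illustrative payload and secret:
`bash
# Compute an HMAC-SHA256 signature over a request body with a shared secret.
# The server repeats this computation and compares the two digests.
echo -n '{"orderId":42}' | openssl dgst -sha256 -hmac "my-shared-secret"
`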
Installation
HmacManager is available on NuGet.
`bash
dotnet add package HmacManager
`
Resources
Further reading: Official documentation
Sample code: GitHub repository
Darknet Object Detection Framework and YOLO
Darknet is an open source neural network framework written in C, C++ and CUDA.
YOLO (You Only Look Once) is an advanced real-time object detection system that runs within the Darknet framework.
1. Papers
YOLOv7: Paper link
Scaled-YOLOv4: Paper link
YOLOv4: Paper link
YOLOv3: Paper link
2. General Information
The Darknet/YOLO framework continues to outperform other frameworks and YOLO versions in speed and accuracy.
The framework is completely free and open source. You can integrate Darknet/YOLO into existing projects and products, including commercial products, without licensing or fees.
Darknet V3 ("Jazz"), released in October 2024, can accurately run LEGO dataset video at up to 1000 FPS when using an NVIDIA RTX 3090 GPU, meaning each video frame takes 1 millisecond or less. Read, resized, and processed by Darknet/YOLO in seconds.
Join the Darknet/YOLO Discord server: https://discord.gg/zSq8rtW
The CPU version of Darknet/YOLO can run on simple devices such as Raspberry Pi, cloud servers, Colab servers, desktops, laptops, and high-end training equipment. The GPU version of Darknet/YOLO requires a GPU with NVIDIA CUDA support.
Darknet/YOLO is known to work well on Linux, Windows, and Mac. See build instructions below.
3. Darknet version
Version 0.x: The original Darknet tool written by Joseph Redmon between 2013 and 2017 had no version number.
Version 1.x: The next popular Darknet repository, maintained by Alexey Bochkovskiy between 2017 and 2021, also had no version number.
Version 2.x ("OAK"): The Darknet repository sponsored by Hank.ai and maintained by Stéphane Charette starting in 2023 was the first to include a version command. From 2023 through the end of 2024, it returned version 2.x "OAK".
Version 3.x ("JAZZ"): The next phase of development began in mid-2024 and was released in October 2024. The version command now returns 3.x "JAZZ".
4. MSCOCO pre-training weights
For convenience, several popular versions of YOLO have been pre-trained on the MSCOCO dataset. The dataset contains 80 classes, which are listed in the text file cfg/coco.names.
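Each line of that file is one class name. The first few entries look like this:
`
person
bicycle
car
motorbike
aeroplane
`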
There are other simpler datasets and pre-trained weights available for testing Darknet/YOLO, such as LEGO Gears and Rolodex. See the Darknet/YOLO FAQ for details.
MSCOCO pre-trained weights can be downloaded from a number of different locations or from this repository:
* YOLOv2 (November 2016)
  * YOLOv2-tiny
  * YOLOv2-full
* YOLOv3 (May 2018)
  * YOLOv3-tiny
  * YOLOv3-full
* YOLOv4 (May 2020)
  * YOLOv4-tiny
  * YOLOv4-full
* YOLOv7 (August 2022)
  * YOLOv7-tiny
  * YOLOv7-full
MSCOCO pre-trained weights are intended for demonstration purposes only. The corresponding .cfg and .names files are located in the cfg directory. Example commands:
`bash
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
`
Note that you should generally train your own network; the MSCOCO weights are typically used just to confirm that everything is working correctly.
5. Build
The various build methods used in the past (pre-2023) have been merged into a single unified solution. Darknet requires C++17 or newer and OpenCV, and uses CMake to generate the necessary project files.
You don't need to know C++ to build, install, or run Darknet/YOLO, just like you don't need to be a mechanic to drive a car.
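If you would like to confirm these prerequisites before building, a quick sanity check on a Debian/Ubuntu-style system might look like this (opencv4 is the usual pkg-config module name, though it can vary by distribution):
`bash
g++ --version                      # needs C++17 support
cmake --version
pkg-config --modversion opencv4    # OpenCV development files
`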
5.1 Google Colab
Google Colab instructions are the same as Linux instructions. Several Jupyter notebooks show how to perform certain tasks, such as training a new network.
See the notebook in the colab subdirectory, or follow the Linux instructions below.
5.2 Linux CMake method
Tutorial for building Darknet on Linux:
`bash
sudo apt-get install build-essential git libopencv-dev cmake
mkdir ~/src
cd ~/src
git clone https://github.com/hank-ai/darknet
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4 package
sudo dpkg -i darknet-VERSION.deb
`
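Once the .deb package has been installed, darknet should be on your PATH. A quick way to confirm the install:
`bash
darknet version
`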
5.3 Windows CMake method
These instructions assume a fresh installation of Windows 11 22H2.
`bash
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
`
We then need to modify the Visual Studio installation to include support for C++ applications:
* Click the Windows Start menu and run the Visual Studio Installer.
* Click "Modify".
* Select "Desktop Development With C++".
* Click "Modify" in the lower-right corner, then click "Yes".
After everything is downloaded and installed, click the Windows Start menu again and select "Developer Command Prompt for VS 2022". Do not use PowerShell for these steps, or you will run into problems!
Advanced users:
* In addition to running the developer command prompt, you can also use a normal command prompt or SSH into the device and manually run "C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\VsDevCmd.bat".
Once you have followed the instructions above and are running the developer command prompt (not PowerShell!), run the following commands to install Microsoft vcpkg, which will then be used to build OpenCV:
`bash
cd c:\
mkdir c:\src
cd c:\src
git clone https://github.com/microsoft/vcpkg
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
`
Please be patient with this last step as it takes a long time to run. It requires downloading and building a lot of stuff.
Advanced users:
* Please note that there are many other optional modules you may want to add when building OpenCV. Run .\vcpkg.exe search opencv to see the complete list.
Optional: If you have a modern NVIDIA GPU, you can install CUDA or CUDA+cuDNN at this point. If installed, Darknet will use your GPU to accelerate image (and video) processing. Darknet can run without it, but if you want to train a custom network, either CUDA or CUDA+cuDNN is required.
Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#cudnn-package-manager-installation-overview to download and install cuDNN.
After installing CUDA, make sure you can run nvcc.exe and nvidia-smi.exe; you may need to modify your PATH variable. A quick check is shown below.
After downloading cuDNN, unzip it and copy the bin, include, and lib directories into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\[version]\. You may need to overwrite some files.
If you install CUDA or CUDA+cuDNN at a later time, or if you upgrade to a newer version of the NVIDIA software:
* Delete the CMakeCache.txt file from the Darknet build directory to force CMake to re-find all the necessary files, then re-build Darknet.
* CUDA must be installed after Visual Studio. If you upgrade Visual Studio, remember to re-install CUDA.
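A quick way to confirm that the CUDA toolchain is reachable from the developer command prompt (both tools are installed with the CUDA toolkit and driver, respectively):
`bash
nvcc --version
nvidia-smi
`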
Once all previous steps have been completed successfully, you need to clone Darknet and build it. In this step we also need to tell CMake where vcpkg is located so that it can find OpenCV and other dependencies:
`bash
cd c:\src
git clone https://github.com/hank-ai/darknet.git
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
If you encounter errors about missing CUDA or cuDNN DLLs (for example, cublas64_12.dll), manually copy the CUDA .dll files into the same output directory as darknet.exe. For example:
`bash
copy "C:Program FilesNVIDIA GPU Computing ToolkitCUDAv12.2bin*.dll" src-cliRelease
`
(This is just an example! Check which CUDA version you have installed, and adjust the path for your version.)
After the files are copied, re-run the last msbuild.exe command to generate the NSIS installation package:
`bash
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
Advanced users:
* Please note that the output of the cmake command is a normal Visual Studio solution file, Darknet.sln. If you are a software developer who prefers the Visual Studio GUI over msbuild.exe, you can ignore the command line and load the Darknet project in Visual Studio.
You should now have this file ready to run: C:\src\darknet\build\src-cli\Release\darknet.exe. Run the following command to test it: C:\src\darknet\build\src-cli\Release\darknet.exe version.
To properly install Darknet, libraries, include files, and necessary DLLs, run the NSIS installation wizard built in the previous step. See the file darknet-VERSION.exe in the build directory. For example:
`bash
darknet-2.0.31-win64.exe
`
Installing the NSIS installation package will:
* Create a directory named Darknet, for example C:\Program Files\Darknet.
* Install the CLI application darknet.exe and the other sample applications.
* Install the required third-party .dll files, such as those from OpenCV.
* Install the Darknet .dll, .lib, and .h files needed to use darknet.dll from other applications.
* Install template .cfg files.
You are done! After the installation wizard completes, Darknet will be installed in C:\Program Files\Darknet. Run the following command to test it: "C:\Program Files\Darknet\bin\darknet.exe" version.
If you do not have C:\Program Files\Darknet\bin\darknet.exe, then you only built Darknet; you did not install it! Make sure you go through each panel of the NSIS installation wizard in the previous step.
6. Use Darknet
6.1 CLI
The following is not a complete list of all commands supported by Darknet.
In addition to the Darknet CLI, also note the DarkHelp project CLI, which provides an alternative to the Darknet/YOLO CLI. DarkHelp CLI also has some enhancements not found in Darknet. You can use the Darknet CLI and DarkHelp CLI together, they are not mutually exclusive.
For most of the commands shown below, you will need a .weights file with corresponding .names and .cfg files. You can train the network yourself (highly recommended!), or you can download neural networks that have been trained by others and are freely available on the Internet. Examples of pre-training datasets include:
* LEGO Gears (find objects in images)
* Rolodex (find text in images)
* MSCOCO (standard 80-class object detection)
Commands to run include:
* List some possible commands and options that can be run:
`bash
darknet help
`
* Check version:
`bash
darknet version
`
* Use images for prediction:
`bash
V2: darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
V3: darknet_02_display_annotated_images cars.cfg image1.jpg
DarkHelp: DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
`
* Output coordinates:
`bash
V2: darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
V3: darknet_01_inference_images animals dog.jpg
DarkHelp: DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
`
* Use video:
`bash
V2: darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
V3: darknet_03_display_videos animals.cfg test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
* Reading from webcam:
`bash
V2: darknet detector demo animals.data animals.cfg animals_best.weights -c 0
V3: darknet_08_display_webcam animals
`
* Save results to video:
`bash
V2: darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
V3: darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
* JSON:
`bash
V2: darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
V3: darknet_06_images_to_json animals image1.jpg
DarkHelp: DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
`
* Run on specific GPUs:
`bash
V2: darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
`
* Check the accuracy of the neural network:
`bash
darknet detector map driving.data driving.cfg driving_best.weights
...
Id Name           AvgPrecision     TP     FN     FP     TN Accuracy ErrorRate Precision Recall Specificity FalsePosRate
-- ----           ------------ ------ ------ ------ ------ -------- --------- --------- ------ ----------- ------------
 0 vehicle             91.2495  32648   3903   5826  65129   0.9095    0.0905    0.8486 0.8932      0.9179       0.0821
 1 motorcycle          80.4499   2936    513    569   5393   0.8850    0.1150    0.8377 0.8513      0.9046       0.0954
 2 bicycle             89.0912    570    124    104   3548   0.9475    0.0525    0.8457 0.8213      0.9715       0.0285
 3 person              76.7937   7072   1727   2574  27523   0.8894    0.1106    0.7332 0.8037      0.9145       0.0855
 4 many vehicles       64.3089   1068    509    733  11288   0.9087    0.0913    0.5930 0.6772      0.9390       0.0610
 5 green light         86.8118   1969    239    510   4116   0.8904    0.1096    0.7943 0.8918      0.8898       0.1102
 6 yellow light        82.0390    126     38     30   1239   0.9525    0.0475    0.8077 0.7683      0.9764       0.0236
 7 red light           94.1033   3449    217    451   4643   0.9237    0.0763    0.8844 0.9408      0.9115       0.0885
`
* Check accuracy mAP@IoU=75:
`bash
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
`
* Recalculating anchors is best done in DarkMark, since it runs the calculation 100 consecutive times and selects the best anchors from all of the results. However, if you want to use the older method built into Darknet:
`bash
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
`
* Train new network:
`bash
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
`
6.2 Training
Quick links to relevant parts of the Darknet/YOLO FAQ:
* How should I set up my files and directories?
* Which profile should I use?
* Which command should I use when training my own network?
Create all necessary Darknet files using DarkMark, which is the easiest way to annotate and train. This is definitely the recommended way to train new neural networks.
If you wish to manually set up the various files to train a custom network:
* Create a new folder to store the files. For this example, a neural network will be created to detect animals, so the following directory will be created: ~/nn/animals/.
* Copy one of the Darknet configuration files you want to use as a template. For example, see cfg/yolov4-tiny.cfg. Place it in the folder you created. For this example, we now have ~/nn/animals/animals.cfg.
* Create an animals.names text file in the same folder where you placed the configuration file. For this example, we now have ~/nn/animals/animals.names.
* Use a text editor to edit the animals.names file. List the categories you want to use. You need exactly one entry per line, no blank lines and no comments. For this example, the .names file will contain exactly 4 lines:
`
dog
cat
bird
horse
`
* Create an animals.data text file in the same folder. For this example, the .data file will contain:
`
classes = 4
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
`
* Create a folder to store your images and annotations. For example, this could be ~/nn/animals/dataset. Each image needs a corresponding .txt file describing the annotations for that image. The format of the .txt annotation files is very specific; since every annotation must contain exact coordinates, use DarkMark or similar software to annotate your images rather than writing these files by hand. The YOLO annotation format is described in the Darknet/YOLO FAQ, and a sketch is shown below.
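For reference, each line of a YOLO-format annotation file contains the zero-based class index followed by the annotation's center X, center Y, width, and height, all normalized to the 0-1 range. A sketch of what the .txt file for an image containing one dog and one bird might look like (the coordinates are purely illustrative):
`
0 0.4862 0.5213 0.3124 0.4500
2 0.7510 0.2987 0.1200 0.2400
`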
* Create "train" and "valid" text files named in the .data file. These two text files need to separately list all the images that Darknet must use to train and validate when calculating mAP%. Exactly one image per row. Paths and filenames can be relative or absolute.
* Use a text editor to modify your .cfg file.
* Make sure batch=64.
* Pay attention to subdivisions. Depending on the network size and the amount of memory available on the GPU, you may need to increase subdivisions. The best value to use is 1, so start with that. If you are unable to use 1, please see the Darknet/YOLO FAQ.
* Note max_batches=.... A good value to start with is 2000 times the number of classes. For this example we have 4 animals, so 4 * 2000 = 8000, meaning we will use max_batches=8000.
* Note steps=.... This should be set to 80% and 90% of max_batches. For this example we will use steps=6400,7200, since max_batches is set to 8000.
* Note width=... and height=.... These are the network dimensions. The Darknet/YOLO FAQ explains how to calculate the optimal size to use.
* Search for all instances of classes=... lines and modify them with the number of classes in the .names file. For this example we will use classes=4.
* Search for all instances of the filters=... line in the [convolutional] section immediately before each [yolo] section. The value to use is (number of classes + 5) * 3, which for this example is (4 + 5) * 3 = 27. Therefore, we will use filters=27 on each of those lines. A sketch of all of these edits together is shown after this step.
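Taken together, the edited lines in animals.cfg for this example would look roughly like the sketch below. The width and height values here are only placeholders; calculate the real values as described in the Darknet/YOLO FAQ.
`
batch=64
subdivisions=1
max_batches=8000
steps=6400,7200
width=320
height=256
# in each [yolo] section:
classes=4
# in the [convolutional] section before each [yolo] section:
filters=27
`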
* Start training! Run the following command:
`bash
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
`
* Please be patient. The best weights will be saved as animals_best.weights, and you can monitor training progress by viewing the chart.png file. See the Darknet/YOLO FAQ for additional parameters you may want to use when training a new network.
* If you want to see more details during training, add the --verbose parameter. For example:
`bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
`
7. Other Tools and Links
To manage your Darknet/YOLO project, annotate images, validate your annotations, and generate the necessary files for training with Darknet, see DarkMark.
For a powerful alternative to the Darknet CLI, to use image tiling, object tracking in video, or to get a powerful C++ API that can be easily used in commercial applications, see DarkHelp.
Please see the Darknet/YOLO FAQ for help answering your questions.
Check out the many tutorials and example videos on Stéphane's YouTube channel.
If you have support questions or would like to chat with other Darknet/YOLO users, please join the Darknet/YOLO Discord server.
8. Roadmap
Last updated on 2024-10-30:
* Completed
* Replaced qsort() with std::sort() during training (some other obscure code still exists)
* Get rid of check_mistakes, getchar() and system()
* Convert Darknet to use a C++ compiler (g++ on Linux, Visual Studio on Windows)
* Fix Windows build
* Fix Python support
* Build darknet library
* Re-enable labels in predictions ("alphabet" code)
* Re-enable CUDA/GPU code
* Re-enable CUDNN
* Re-enable CUDNN half
* Don't hardcode the CUDA architecture
* Better CUDA version information
* Re-enable AVX
* Remove old solution and Makefile
* Make OpenCV non-optional
* Remove dependency on old pthread library
* Delete STB
* Rewrite CMakeLists.txt to use new CUDA detection
* Removed old "alphabet" code and deleted over 700 images in data/labels
* Support out-of-source builds
* Better version number output
* Performance optimization related to training (ongoing task)
* Performance optimization related to inference (ongoing task)
* Use pass-by-reference whenever possible
* Clean .hpp files
* Rewrite darknet.h
* Don't cast cv::Mat to void*; use it as a proper C++ object
* Fix or maintain consistent usage of internal image structures
* Fix the build for ARM-based Jetson devices
* Original Jetson devices are unlikely to be fixed, since they are no longer supported by NVIDIA (no C++17 compiler)
* New Jetson Orin devices are working
* Fix Python API in V3
* Need better Python support (any Python developers want to help?)
* Short term goals
* Replace printf() with std::cout (work in progress)
* Revisit old ZED camera support
* Better, more consistent command line parsing (work in progress)
* Mid-term goals
* Remove all char* code and replace it with std::string
* Don't hide warnings and clean up compiler warnings (work in progress)
* Better use of cv::Mat instead of custom image structures in C (work in progress)
* Replace old list functionality with std::vector or std::list
* Fix support for 1-channel grayscale images
* Add support for N-channel images where N > 3 (e.g. images with an extra depth or thermal channel)
* Continuous code cleanup (in progress)
* Long term goals
* Fix CUDA/CUDNN issues on all GPUs
* Rewrite CUDA+cuDNN code
* Consider adding support for non-NVIDIA GPUs
* Rotated bounding box, or some form of "angle" support
* Keypoints/skeletons
* Heatmap (work in progress)
* Segmentation