UniGetUI (formerly WingetUI)
UniGetUI is an intuitive GUI for Windows 10 and 11, designed to simplify the use of common CLI package managers such as WinGet, Scoop, Chocolatey, Pip, Npm, .NET Tool, and PowerShell Gallery.
UniGetUI features
With UniGetUI you can easily download, install, update and uninstall software published on all supported package managers and much more!
Package managers supported by UniGetUI
Please check out the "Supported Package Managers Table" for more details!
Disclaimer
The UniGetUI project is not affiliated with any of the supported package managers and is completely unofficial. Please note that the developer of UniGetUI is not responsible for the software you download. Use with caution!
Notice
The official website of UniGetUI is https://www.marticliment.com/unigetui/. Any other website should be considered unofficial, no matter what they say. In particular, wingetui.com is not the official website of UniGetUI (formerly WingetUI).
Support developers
Your support is vital to the continued development of UniGetUI and is deeply appreciated. Thanks!
Table of contents
1. Installation
There are multiple ways to install UniGetUI; choose whichever installation method you prefer!
* Microsoft Store installation (recommended)
Click here to download the UniGetUI installer.
* Install via Winget
```bash
winget install --exact --id MartiCliment.UniGetUI --source winget
```
* Install via Scoop
Note: There is currently a problem with the Scoop package of UniGetUI. Please do not install UniGetUI via Scoop yet.
```bash
# The current UniGetUI Scoop package is broken. Please do not install UniGetUI via Scoop for the time being.
# scoop bucket add extras
# scoop install extras/wingetui
```
* Install via Chocolatey
```bash
choco install wingetui
```
2. Update UniGetUI
UniGetUI has built-in automatic update functionality. However, you can also update it just like any other package in UniGetUI (since UniGetUI is available through Winget and Scoop).
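For example, if you originally installed via Winget, a manual update amounts to the standard upgrade command (a minimal sketch reusing the package id from the install step above):

```bash
winget upgrade --exact --id MartiCliment.UniGetUI --source winget
```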
3. Features
* Supported package managers
Note: All package managers support the basic install, update, and uninstall operations, as well as checking for updates, finding new packages, and retrieving package details.
| Package Manager | Support | Description |
|---|---|---|
| WinGet | ✅ | |
| Scoop | ✅ | |
| Chocolatey | ✅ | |
| Pip | ✅ | |
| Npm | ✅ | |
| .NET Tool | ✅ | |
| PowerShell Gallery | ✅ | |
Notes:
1. Some packages do not support installation to custom locations or scopes and will ignore this setting.
2. Even if a package manager does not support pre-release versions, some packages may be duplicated in its catalog, with one of the duplicates being the beta version.
3. Some installers have no GUI and will ignore the interactive flag.
* Translate UniGetUI into other languages
To translate UniGetUI into other languages or update old translations, see the UniGetUI Wiki for more information.
* Currently supported languages
Updated: Tue Oct 29 00:13:19 2024
4. Contributing
UniGetUI wouldn't be possible without the help of our dear contributors. From the person who fixed a typo to the person who improved half the code, UniGetUI couldn't exist without them!
Contributors:
*…
5. Screenshots
*…
6. Frequently Asked Questions
* I can't install or upgrade a specific Winget package! What should I do?
This may be an issue with Winget rather than UniGetUI. Please check if the package can be installed/upgraded via PowerShell or command prompt using the command winget upgrade or winget install (as appropriate, for example: winget upgrade --id Microsoft.PowerToys). If this doesn't work, consider asking for help on the Winget project page.
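For example, to test the PowerToys package from the example above directly in a terminal, outside of UniGetUI:

```bash
# Try upgrading the package with Winget itself
winget upgrade --id Microsoft.PowerToys
# Or, if it is not installed yet
winget install --id Microsoft.PowerToys
```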
* Package name is truncated by ellipsis - how to view its full name/ID?
This is a known limitation of Winget. See this issue for more details: microsoft/winget-cli#2603.
* My antivirus software tells me UniGetUI is a virus! /My browser blocks the download of UniGetUI!
A common reason why applications (i.e. executable files) are blocked and/or detected as viruses, even when, like UniGetUI, they contain no malicious code, is that relatively few people use them. Add to that the fact that you're probably downloading something that was only recently released, and in many cases blocking unknown apps is a good precaution against real malware. Since UniGetUI is open source and safe to use, please whitelist the application in the settings of your antivirus software/browser.
* Are Winget/Scoop packages safe?
UniGetUI, Microsoft, and Scoop are not responsible for the packages available for download, which are provided by third parties and could theoretically be compromised. Microsoft has implemented some checks on the software available on Winget to reduce the risk of downloading malware. Even so, it is recommended that you only download software from trusted publishers. Check out the wiki for more information!
7. Command line parameters
See here for a complete list of parameters.
8. Examples
*…
9. License
Apache-2.0 License
Darknet object detection framework and YOLO
(darknet and hank.ai logo)
Darknet is an open source neural network framework written in C, C++ and CUDA.
YOLO (You Only Look Once) is a state-of-the-art real-time object detection system in the Darknet framework.
Read how Hank.ai helps the Darknet/YOLO community
Announcing Darknet V3 "Jazz"
Check out the Darknet/YOLO website
Please read the Darknet/YOLO FAQ
Join the Darknet/YOLO Discord server
Papers
Paper YOLOv7
Paper Scaled-YOLOv4
Paper YOLOv4
Paper YOLOv3
General Information
The Darknet/YOLO framework is faster and more accurate than other frameworks and YOLO versions.
The framework is completely free and open source. You can incorporate Darknet/YOLO into existing projects and products - including commercial products - without licensing or fees.
Darknet V3 ("Jazz"), released in October 2024, can accurately run LEGO dataset videos at up to 1000 FPS when using an NVIDIA RTX 3090 GPU, meaning each video frame is in 1 millisecond or less time to be read, resized and processed by Darknet/YOLO.
If you need help or want to discuss Darknet/YOLO, please join the Darknet/YOLO Discord server: https://discord.gg/zSq8rtW
The CPU version of Darknet/YOLO can run on simple devices such as Raspberry Pi, cloud and colab servers, desktops, laptops and high-end training equipment. The GPU version of Darknet/YOLO requires NVIDIA's CUDA-compatible GPU.
Darknet/YOLO is known to run on Linux, Windows, and Mac. Check out the build instructions below.
Darknet version
The original Darknet tool written by Joseph Redmon in 2013-2017 did not have a version number. We consider this version 0.x.
The next popular Darknet repository, maintained by Alexey Bochkovskiy between 2017 and 2021, also had no version number. We consider this version 1.x.
The Darknet repository sponsored by Hank.ai and maintained by Stéphane Charette since 2023 is the first with a version command. From 2023 through late 2024, it returned version 2.x "OAK".
The goal was to get familiar with the codebase while breaking as little existing functionality as possible. Key changes included:
* Rewrote the build steps so there is one unified way to build on Windows and Linux using CMake.
* Converted the codebase to build with a C++ compiler.
* Enhanced chart.png during training.
* Bug fixes and performance optimizations, mainly aimed at reducing the time needed to train a network.
The last branch of this codebase is version 2.1, in the v2 branch.
The next phase of development started in mid-2024 and was released in October 2024. The version command now returns 3.x "JAZZ". Changes include:
* Removed many old and unmaintained commands. If you need one of them, you can always check out the previous v2 branch; please also let us know so we can investigate adding the missing command back.
* Many performance optimizations, both during training and inference.
* The legacy C API was modified; applications using the original Darknet API need minor adjustments: https://darknetcv.ai/api/api.html
* New Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
* New applications and sample code in src-examples: https://darknetcv.ai/api/files.html
MSCOCO pre-trained weights
For convenience, several popular versions of YOLO come pre-trained on the MSCOCO dataset. This dataset contains 80 classes, which are listed in the text file cfg/coco.names.
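For example, you can peek at the class list from a clone of the repository (a trivial sketch; the path is relative to the repository root):

```bash
# Show the 80 MSCOCO class names used by the pre-trained weights
cat cfg/coco.names
```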
There are several other simpler datasets and pre-trained weights available for testing Darknet/YOLO, such as LEGO Gears and Rolodex. For more information, see the Darknet/YOLO FAQ.
MSCOCO pre-trained weights can be downloaded from a number of different locations and can also be downloaded from this repository:
YOLOv2, November 2016
* YOLOv2-tiny
* YOLOv2-full
YOLOv3, May 2018
* YOLOv3-tiny
* YOLOv3-full
YOLOv4, May 2020
* YOLOv4-tiny
* YOLOv4-full
YOLOv7, August 2022
* YOLOv7-tiny
* YOLOv7-full
MSCOCO pretrained weights are for demonstration purposes only. The corresponding .cfg and .names files for MSCOCO are located in the cfg directory. Example command:
```bash
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
```
Note that you are expected to train your own network; MSCOCO is typically used just to confirm that everything is working properly.
Build
The various build methods available in the past (pre-2023) have been merged into a single unified solution. Darknet requires C++17 or newer, OpenCV, and CMake to generate the necessary project files.
You don't need to know C++ to build, install, or run Darknet/YOLO any more than you need to be a mechanic to drive a car.
Google Colab
Google Colab instructions are the same as Linux instructions. There are several Jupyter notebooks showing how to perform certain tasks, such as training a new network.
Check out the notebook in the colab subdirectory, or follow the Linux instructions below.
Linux CMake method
1. Install dependencies
```bash
sudo apt-get update
sudo apt-get install build-essential git libopencv-dev cmake
```
2. Clone the Darknet repository
```bash
git clone https://github.com/hank-ai/darknet.git
```
3. Create a build directory
```bash
cd darknet
mkdir build
cd build
```
4. Use CMake to configure the build
```bash
cmake -DCMAKE_BUILD_TYPE=Release ..
```
5. Build Darknet
```bash
make -j4
```
6. Install Darknet
```bash
sudo make install
```
7. Test Darknet
```bash
darknet version
```
Windows CMake method
1. Install dependencies
```bash
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
```
2. Install OpenCV
```bash
cd C:\
mkdir C:\src
cd C:\src
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
```
3. Clone the Darknet repository
```bash
cd C:\src
git clone https://github.com/hank-ai/darknet.git
```
4. Create a build directory
```bash
cd darknet
mkdir build
cd build
```
5. Use CMake to configure the build
```bash
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
```
6. Build Darknet using Visual Studio
```bash
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
```
7. Create NSIS installation package
```bash
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
```
8. Run Darknet
```bash
C:\src\darknet\build\src-cli\Release\darknet.exe version
```
Using Darknet
CLI
The following is not a complete list of all commands supported by Darknet.
In addition to the Darknet CLI, note the DarkHelp project CLI, which provides an alternative CLI to Darknet/YOLO. The DarkHelp CLI also has several advanced features not found in Darknet. You can use the Darknet CLI and the DarkHelp CLI together; they are not mutually exclusive.
For most of the commands shown below, you will need the .weights file and the corresponding .names and .cfg files. You can train the network yourself (highly recommended!), or download neural networks that others have trained and made freely available on the Internet. Examples of pre-trained networks include:
LEGO Gears (find objects in images)
Rolodex (find text in image)
MSCOCO (Standard Class 80 Object Detection)
Commands to run include:
List some possible commands and options that can be run:
darknet help
Check version:
darknet version
Use images to make predictions:
V2: darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
V3: darknet_02_display_annotated_images cars.cfg image1.jpg
DarkHelp: DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
Output coordinates:
V2: darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
V3: darknet_01_inference_images animals dog.jpg
DarkHelp: DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
Use video:
V2: darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
V3: darknet_03_display_videos animals.cfg test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
Reading from webcam:
V2: darknet detector demo animals.data animals.cfg animals_best.weights -c 0
V3: darknet_08_display_webcam animals
Save results to video:
V2: darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
V3: darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
JSON:
V2: darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
V3: darknet_06_images_to_json animals image1.jpg
DarkHelp: DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
Run on a specific GPU:
V2: darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
Check the accuracy of the neural network:
```bash
darknet detector map driving.data driving.cfg driving_best.weights ...
Id Name AvgPrecision TP FN FP TN Accuracy ErrorRate Precision Recall Specificity FalsePosRate
-- ---- ------------ ------ ------ ------ ------ -------- --------- --------- ------ ---------- ----------
0 vehicle 91.2495 32648 3903 5826 65129 0.9095 0.0905 0.8486 0.8932 0.9179 0.0821
1 motorcycle 80.4499 2936 513 569 5393 0.8850 0.1150 0.8377 0.8513 0.9046 0.0954
2 bicycle 89.0912 570 124 104 3548 0.9475 0.0525 0.8457 0.8213 0.9715 0.0285
3 person 76.7937 7072 1727 2574 27523 0.8894 0.1106 0.7332 0.8037 0.9145 0.0855
4 many vehicles 64.3089 1068 509 733 11288 0.9087 0.0913 0.5930 0.6772 0.9390 0.0610
5 green light 86.8118 1969 239 510 4116 0.8904 0.1096 0.7943 0.8918 0.8898 0.1102
6 yellow light 82.0390 126 38 30 1239 0.9525 0.0475 0.8077 0.7683 0.9764 0.0236
7 red light 94.1033 3449 217 451 4643 0.9237 0.0763 0.8844 0.9408 0.9115 0.0885
```
Check the accuracy mAP@IoU=75:
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
Recalculating anchors is best done in DarkMark, since it runs the calculation 100 consecutive times and selects the best anchors from all the results. However, if you want to run the older Darknet method:
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
Train a new network:
darknet detector -map -dont_show train animals.data animals.cfg (see also training section below)
Training
Quick links to relevant sections of the Darknet/YOLO FAQ:
How should I set up my files and directories?
Which profile should I use?
Which command should I use when training my own network?
Using DarkMark to create all necessary Darknet files is the easiest way to annotate and train. This is definitely the recommended way to train new neural networks.
If you wish to manually set up the various files to train a custom network:
1. Create a new folder
Create a new folder to store the files. For this example, we'll create a neural network to detect animals, so the directory is ~/nn/animals/.
2. Copy the configuration file
Copy one of the Darknet configuration files you want to use as a template. For example, see cfg/yolov4-tiny.cfg. Place it in the folder you created. For example, now we have ~/nn/animals/animals.cfg.
3. Create .names file
Create an animals.names text file in the same folder where you place the configuration file. For example, now we have ~/nn/animals/animals.names.
4. Edit the .names file
Use a text editor to edit the animals.names file. List the categories you want to use. There must be exactly one entry per line, no blank lines, and no comments. For example, the .names file will contain exactly 4 lines:
`
dog
cat
bird
horse
`
5. Create .data file
Create an animals.data text file in the same folder. For example, a .data file would contain:
`
classes=4
train=/home/username/nn/animals/animals_train.txt
valid=/home/username/nn/animals/animals_valid.txt
names=/home/username/nn/animals/animals.names
backup=/home/username/nn/animals
`
6. Create a dataset folder
Create a folder to store your images and annotations. For example, this might be ~/nn/animals/dataset. Each image needs a corresponding .txt file describing the annotations for that image. The format of these .txt annotation files is very specific. You cannot create them by hand, because each annotation must contain the precise coordinates of the object. Check out DarkMark or similar software to annotate your images. The YOLO annotation format is described in the Darknet/YOLO FAQ.
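For reference, each line of a YOLO-format annotation file holds one object: a zero-based class index followed by the normalized x/y center and width/height of the bounding box, all relative to the image dimensions. A hypothetical file for an image containing a dog (class 0) and a cat (class 1) might look like:

```
0 0.488750 0.569167 0.082500 0.130000
1 0.251875 0.421667 0.058750 0.096667
```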
7. Create “train” and “valid” files
Create "train" and "valid" text files named in the .data file. These two text files need to list all the images that Darknet must use for training and validation when calculating mAP%, respectively. Exactly one image per row. Paths and filenames can be relative or absolute.
8. Modify the .cfg file
Use a text editor to modify your .cfg file.
* Make sure batch=64.
* Pay attention to subdivisions. Depending on the network size and the amount of memory available on the GPU, you may need to increase subdivisions. The optimal value is 1, so start with 1. If 1 doesn't work for you, please see the Darknet/YOLO FAQ.
* Note max_batches=…. A good value to start with is 2000 times the number of classes. For this example we have 4 animals, so 4 * 2000 = 8000, meaning we will use max_batches=8000.
* Note steps=…. This should be set to 80% and 90% of max_batches. For this example, since max_batches is 8000, we will use steps=6400,7200.
* Note width=... and height=.... These are network dimensions. The Darknet/YOLO FAQ explains how to calculate the optimal size to use.
* In the [convolutional] section immediately before each [yolo] section, find the classes=... and filters=... lines. The filters value to use is (number_of_classes + 5) * 3. For this example, (4 + 5) * 3 = 27, so we will use filters=27 on each of those lines (see the arithmetic sketch below).
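As a quick sanity check, here is the arithmetic from the steps above expressed in shell, using this example's 4 classes:

```bash
CLASSES=4
MAX_BATCHES=$(( CLASSES * 2000 ))
echo "max_batches = ${MAX_BATCHES}"                                     # 8000
echo "steps = $(( MAX_BATCHES * 8 / 10 )),$(( MAX_BATCHES * 9 / 10 ))"  # 6400,7200 (80% and 90%)
echo "filters = $(( (CLASSES + 5) * 3 ))"                               # 27
```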
9. Start training
Run the following command:
```bash
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
```
Be patient. The best weights will be saved as animals_best.weights. You can observe the training progress by viewing the chart.png file. See the Darknet/YOLO FAQ for additional parameters you may want to use when training a new network.
If you want to see more details during training, add the --verbose parameter. For example:
```bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
```
Other tools and links
To manage your Darknet/YOLO project, annotate images, validate your annotations, and generate the necessary files required for training with Darknet, check out DarkMark.
For a powerful alternative CLI to Darknet for using image stitching, object tracking in video, or using a powerful C++ API that can be easily used in commercial applications, check out DarkHelp.
Check out the Darknet/YOLO FAQ to see if it can help answer your question.
Check out the many tutorials and example videos on Stéphane’s YouTube channel
If you have support questions or want to chat with other Darknet/YOLO users, please join the Darknet/YOLO Discord server.
Roadmap
Last updated on 2024-10-30:
Completed
Replaced qsort() with std::sort() where used during training (a few other obscure uses remain)
Remove check_mistakes, getchar() and system()
Convert Darknet to use a C++ compiler (g++ on Linux, Visual Studio on Windows)
Fix Windows build
Fix Python support
Build darknet library
Re-enable labels in predictions ("alphabet" code)
Re-enable CUDA/GPU code
Re-enable CUDNN
Re-enable CUDNN half
Don't hardcode the CUDA architecture
Better CUDA version information
Re-enable AVX
Remove old solution and Makefile
Make OpenCV non-optional
Remove dependency on old pthread library
Delete STB
Rewrite CMakeLists.txt to use new CUDA instrumentation
Removed old "alphabet" code and deleted over 700 images in data/labels
Build outside of source code
Have better version number output
Training-related performance optimizations (ongoing tasks)
Performance optimizations related to inference (ongoing tasks)
Use references instead of passing by value whenever possible
Clean .hpp files
Rewrite darknet.h
Don't cast cv::Mat to void*; use it as a proper C++ object
Fix or maintain consistent usage of internal image structures
Fix build for ARM-based Jetson devices
* Original Jetson devices are unlikely to be fixed, since they are no longer supported by NVIDIA (no C++17 compiler)
* New Jetson Orin devices are working
Fix Python API in V3
* Need better Python support (are there any Python developers willing to help?)
Short-term goals
Replace printf() with std::cout (work in progress)
Investigate old ZED camera support
Better, more consistent command line parsing (work in progress)
Mid-term goals
Remove all char codes and replace with std::string
Don't hide warnings and clean up compiler warnings (work in progress)
Use cv::Mat instead of the custom C image structure (work in progress)
Replace old list functions with std::vector or std::list
Fix support for 1-channel grayscale images
Add support for N-channel images where N > 3 (e.g. images with additional depth or thermal channels)
Ongoing code cleanup (in progress)
Long-term goals
Fix CUDA/CUDNN issues on all GPUs
Rewrite CUDA+cuDNN code
Investigate adding support for non-NVIDIA GPUs
Rotated bounding box, or some kind of "angle" support
Keypoints/skeletons
Heatmaps (work in progress)
Segmentation