Marmite
Marmite [Markdown makes sites] is a very simple static site generator.
How to use
Marmite does one simple thing:
Convert Markdown files to HTML.
It also handles generating or copying static files and media files to the output directory.
Install
Install using cargo:
```bash
cargo install marmite
```
Or download the precompiled binaries from the releases page.
Usage
Using it is simple:
```bash
$ marmite folder_with_markdown_files path_to_generated_site
```
The website will be generated in the path_to_generated_site/ directory.
CLI
```bash
❯ marmite --help
Marmite is the easiest static site generator.

Usage: marmite [OPTIONS]

Arguments:

Options:
      --serve           Serve the site with a built-in HTTP server
      --watch           Detect changes and rebuild the site automatically
      --bind            Address to bind the server [default: localhost:8000]
      --config          Path to custom configuration file [default: marmite.yaml]
      --debug           Print debug messages
      --init-templates  Initialize templates in the project
      --start-theme     Initialize a theme with templates and static assets
  -h, --help            Print help
  -V, --version         Print version
```
Get started
Read the tutorial at https://rochacbruno.github.io/marmite/getting-started.html to learn how to get started with Marmite, and spend a few minutes creating your blog.
Documentation
Read more about how to customize templates, add comments, and more at https://rochacbruno.github.io/marmite/.
Summary
Marmite is very simple.
If this simplicity doesn't suit your needs, there are other great static site generators out there. Here are some I recommend:
Jekyll
Hugo
Gatsby
Next.js
Darknet object detection framework and YOLO
Overview
Darknet is an open source neural network framework written in C, C++ and CUDA.
YOLO (You Only Look Once) is a state-of-the-art real-time object detection system running in the Darknet framework.
Important links:
How Hank.ai helps the Darknet/YOLO community
Darknet/YOLO website
Darknet/YOLO FAQ
Darknet/YOLO Discord Server
Papers
YOLOv7 paper
Scaled-YOLOv4 paper
YOLOv4 paper
YOLOv3 paper
General information
The Darknet/YOLO framework consistently outperforms other frameworks and YOLO versions in terms of speed and accuracy.
The framework is completely free and open source. You can integrate Darknet/YOLO into existing projects and products, including commercial products, without licensing or fees.
Darknet V3 ("Jazz"), released in October 2024, can accurately process LEGO dataset videos at up to 1000 FPS when using an NVIDIA RTX 3090 GPU, meaning Darknet/YOLO reads, resizes, and processes each video frame in 1 millisecond or less.
The CPU version of Darknet/YOLO can run on simple devices such as Raspberry Pi, cloud and colab servers, desktops, laptops and high-end training equipment. The GPU version of Darknet/YOLO requires NVIDIA's CUDA-compatible GPU.
Darknet/YOLO runs on Linux, Windows and Mac. See build instructions below.
Darknet version
The original Darknet tool written by Joseph Redmon in 2013-2017 did not have a version number; we consider it version 0.x.
The next popular Darknet repository, maintained by Alexey Bochkovskiy from 2017-2021, also had no version number; we consider it version 1.x.
The Darknet repository sponsored by Hank.ai and maintained by Stéphane Charette starting in 2023 is the first with a version command. From 2023 through the end of 2024, it returned version 2.x "OAK".
The goal during this phase was to break as little existing functionality as possible while getting familiar with the code base. Key changes:
Rewrote the build steps so there is one unified way to build on Windows and Linux using CMake.
Converted the code base to compile with a C++ compiler.
Enhanced chart.png during training.
Bug fixes and performance-related optimizations, mainly aimed at reducing the time required to train a network.
The last branch of this code base is version 2.1, in the v2 branch.
The next phase of development began in mid-2024, with a release in October 2024; the version command now returns 3.x "JAZZ". Changes in this release:
Removed many old and unmaintained commands. If you need to run one of these commands, you can always check out the previous v2 branch; let us know so we can investigate adding back anything that is missing.
Many performance optimizations, including optimizations during training and inference.
Modified old C API; applications using the original Darknet API require minor modifications: https://darknetcv.ai/api/api.html
New Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
New applications and sample code in src-examples: https://darknetcv.ai/api/files.html
MSCOCO pre-trained weights
For convenience, several popular versions of YOLO are available pre-trained on the MSCOCO dataset. This dataset contains 80 classes, which are listed in the text file cfg/coco.names.
There are several other simpler datasets and pre-trained weights available for testing Darknet/YOLO, such as LEGO Gears and Rolodex. For more information, see the Darknet/YOLO FAQ.
MSCOCO pretrained weights can be downloaded from a few different locations or from this repository:
YOLOv2 (November 2016)
- yolov2-tiny.weights
- yolov2-full.weights
YOLOv3 (May 2018)
- yolov3-tiny.weights
- yolov3-full.weights
YOLOv4 (May 2020)
- yolov4-tiny.weights
- yolov4-full.weights
YOLOv7 (August 2022)
- yolov7-tiny.weights
- yolov7-full.weights
MSCOCO pretrained weights are for demonstration purposes only. The corresponding .cfg and .names files (for MSCOCO) are located in the cfg directory. Example command:
```bash
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
```
Note that you are expected to train your own network; MSCOCO is typically used only to confirm that everything is working.
Build
The various build methods from the past (pre-2023) have been merged into a single unified solution. Darknet requires C++17 or newer and OpenCV, and uses CMake to generate the necessary project files.
You don't need to know C++ to build, install, or run Darknet/YOLO, just like you don't need to be a mechanic to drive a car.
Google Colab
Google Colab instructions are the same as Linux instructions. A number of Jupyter notebooks are provided showing how to perform certain tasks, such as training a new network.
See the notebook in the colab subdirectory, or follow the Linux instructions below.
Linux CMake method
1. Install necessary software:
```bash
sudo apt-get install build-essential git libopencv-dev cmake
```
2. Clone the Darknet repository:
```bash
git clone https://github.com/hank-ai/darknet
```
3. Create the build directory:
```bash
cd darknet
mkdir build
cd build
```
4. Use CMake to generate build files:
```bash
cmake -DCMAKE_BUILD_TYPE=Release ..
```
5. Build Darknet:
```bash
make -j4
```
6. Install Darknet (optional):
```bash
make package
sudo dpkg -i darknet-VERSION.deb
```
Notes:
If you have an NVIDIA GPU installed on your system, you can install CUDA or CUDA+cuDNN to accelerate image (and video) processing.
If you install CUDA or CUDA+cuDNN or upgrade the NVIDIA software, you need to delete the CMakeCache.txt file in the build directory and rebuild Darknet.
You can use the darknet version command to check whether Darknet has been installed successfully.
Windows CMake method
1. Install necessary software:
```bash
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
```
2. Modify the Visual Studio installation:
- Open the Windows Start menu and run Visual Studio Installer.
- Click "Modify".
- Select "Desktop development with C++".
- Click "Modify" in the lower-right corner, then click "Yes".
3. Install Microsoft VCPKG:
```bash
cd c:\
mkdir c:\src
cd c:\src
git clone https://github.com/microsoft/vcpkg
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
```
4. Clone the Darknet repository:
```bash
cd c:\src
git clone https://github.com/hank-ai/darknet.git
cd darknet
mkdir build
cd build
```
5. Use CMake to generate build files:
```bash
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
```
6. Build Darknet:
```bash
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
```
7. Install Darknet:
- Run the darknet-VERSION.exe file in the build directory to start the NSIS installation wizard.
Notes:
If you have an NVIDIA GPU installed on your system, you can install CUDA or CUDA+cuDNN to accelerate image (and video) processing.
If you install CUDA or CUDA+cuDNN or upgrade the NVIDIA software, you need to delete the CMakeCache.txt file in the build directory and rebuild Darknet.
You can use the darknet.exe version command to check whether Darknet has been installed successfully.
Using Darknet
CLI
The following is not a complete list of all commands supported by Darknet.
In addition to the Darknet CLI, note the DarkHelp project, which provides an alternative CLI for Darknet/YOLO with several advanced features not available in Darknet itself. You can use the Darknet CLI and the DarkHelp CLI at the same time; they are not mutually exclusive.
For most of the commands shown below, you need to use the .weights file corresponding to the .names and .cfg files. You can train your own network (highly recommended!) or download neural networks that others have trained and posted for free on the Internet. Examples of pre-training datasets include:
LEGO Gears (find objects in images)
Rolodex (find text in image)
MSCOCO (standard 80-category object detection)
Commands to run include:
darknet help: Display help information.
darknet version: Check version.
darknet detector test cars.data cars.cfg cars_best.weights image1.jpg: Prediction on an image (V2).
darknet_02_display_annotated_images cars.cfg image1.jpg: Prediction on an image (V3).
DarkHelp cars.cfg cars.names cars_best.weights image1.jpg: Prediction on an image (DarkHelp).
darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg: Output coordinates (V2).
darknet_01_inference_images animals dog.jpg: Output coordinates (V3).
DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg: Output coordinates (DarkHelp).
darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4: Process a video (V2).
darknet_03_display_videos animals.cfg test.mp4: Process a video (V3).
DarkHelp animals.cfg animals.names animals_best.weights test.mp4: Process a video (DarkHelp).
darknet detector demo animals.data animals.cfg animals_best.weights -c 0: Read from a webcam (V2).
darknet_08_display_webcam animals: Read from a webcam (V3).
darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi: Save results to a video (V2).
darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4: Save results to a video (V3).
DarkHelp animals.cfg animals.names animals_best.weights test.mp4: Save results to a video (DarkHelp).
darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output: Use JSON format (V2).
darknet_06_images_to_json animals image1.jpg: Use JSON format (V3).
DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg: Use JSON format (DarkHelp).
darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4: Run on a specific GPU.
darknet detector map driving.data driving.cfg driving_best.weights ...: Check the accuracy of the neural network.
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75: Check the accuracy at mAP@IoU=75.
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256: Recalculate anchors.
darknet detector -map -dont_show train animals.data animals.cfg: Train a new network.
Training
How do I set up my files and directories?
Which profile should I use?
What commands should I use when training my own network?
The simplest method is to use DarkMark for annotation and training, since it creates all the necessary Darknet files. This is definitely the recommended way to train a new neural network.
If you prefer to manually set up the various files to train a custom network, follow these steps:
1. Create a new folder to store the files. For this example, suppose you are creating a neural network to detect animals, so the following directory is created: ~/nn/animals/.
2. Copy a Darknet configuration file that you want to use as a template. See, for example, cfg/yolov4-tiny.cfg. Place it in the folder you created. For this example, we now have ~/nn/animals/animals.cfg.
3. Create an animals.names text file in the same folder where you place the configuration file. For this example, we now have ~/nn/animals/animals.names.
4. Use a text editor to edit the animals.names file. List the categories you want to use. Each line must have exactly one item, no blank lines or comments. For this example, the .names file will contain the following 4 lines:
```
dog
cat
bird
horse
```
5. Create an animals.data text file in the same folder. For this example, the .data file will contain:
```
classes=4
train=/home/username/nn/animals/animals_train.txt
valid=/home/username/nn/animals/animals_valid.txt
names=/home/username/nn/animals/animals.names
backup=/home/username/nn/animals
```
6. Create a folder to store your images and annotations. For example, this could be ~/nn/animals/dataset. Each image needs a corresponding .txt file that describes its annotations, and the format of these annotation files is very specific. You cannot create these files by hand, because each annotation must contain the exact coordinates of the annotation; use DarkMark or similar software to annotate your images. The YOLO annotation format is described in the Darknet/YOLO FAQ.
7. Create the "train" and "valid" text files named in the .data file. These two text files list all the images Darknet must use for training and for validation (when calculating mAP%), respectively. One image per line; paths and filenames can be relative or absolute.
8. Use a text editor to modify your .cfg file.
- Make sure batch=64.
- Pay attention to subdivisions. Depending on the network dimensions and the amount of memory available on the GPU, you may need to increase subdivisions. The best value is 1, so start with that; if 1 doesn't work for you, see the Darknet/YOLO FAQ.
- Note max_batches=... A good starting value is the number of classes multiplied by 2000. For this example, we have 4 animals, so 4 * 2000 = 8000; this means we use max_batches=8000.
- Note steps=... This should be set to 80% and 90% of max_batches. Since max_batches is set to 8000 in this example, we use steps=6400,7200.
- Note width=... and height=... These are the network dimensions. The Darknet/YOLO FAQ explains how to calculate the optimal size to use.
- Find all classes=... lines and change them to the number of classes in your .names file. For this example, we use classes=4.
- Find all filters=... lines in the [convolutional] section immediately before each [yolo] section. The value to use is (number of classes + 5) * 3. For this example, (4 + 5) * 3 = 27, so we use filters=27 on those lines.
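The arithmetic behind these .cfg edits can be double-checked with a quick shell sketch. The class count is the only input; 4 matches the animals example above:

```bash
#!/usr/bin/env bash
# Sketch: derive max_batches, steps, and filters from the class count,
# following the rules of thumb described above.
classes=4                            # number of lines in animals.names
max_batches=$(( classes * 2000 ))    # classes * 2000
step1=$(( max_batches * 80 / 100 ))  # 80% of max_batches
step2=$(( max_batches * 90 / 100 ))  # 90% of max_batches
filters=$(( (classes + 5) * 3 ))     # (classes + 5) * 3
echo "max_batches=${max_batches} steps=${step1},${step2} filters=${filters}"
```

Running this prints max_batches=8000 steps=6400,7200 filters=27, matching the values used in this example; change `classes` to match your own .names file.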
9. Start training! Run the following command:
```bash
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
```
Please be patient. The best weights will be saved as animals_best.weights, and you can monitor training progress by viewing the chart.png file. See the Darknet/YOLO FAQ for additional parameters you may want to use when training a new network.
If you want to see more details during training, add the --verbose parameter. For example:
```bash
darknet detector -map -dont_show --verbose train animals.data animals.cfg
```
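Step 7 above (the "train" and "valid" list files) can also be scripted. Here is a minimal sketch that assumes your images are .jpg files under dataset/ and uses an 80/20 split; the split ratio is a common convention, not a Darknet requirement:

```bash
#!/usr/bin/env bash
# Sketch: build animals_train.txt and animals_valid.txt from a dataset/ folder.
# The touch line creates stand-in image files so the sketch runs anywhere;
# with a real dataset, delete it and point find at your actual image folder.
mkdir -p dataset
touch dataset/img_{01..10}.jpg

find "$PWD/dataset" -name '*.jpg' | sort > all_images.txt
total=$(wc -l < all_images.txt)
train_count=$(( total * 80 / 100 ))   # 80% of the images go to training

head -n "$train_count" all_images.txt > animals_train.txt
tail -n +"$(( train_count + 1 ))" all_images.txt > animals_valid.txt
```

The resulting files contain one absolute image path per line, which is exactly what the train= and valid= entries in the .data file expect.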
Other tools and links
To manage your Darknet/YOLO project, annotate images, validate your annotations, and generate the necessary files needed to train with Darknet, see DarkMark.
For a powerful alternative CLI to Darknet for working with image tiling, object tracking in video, or a powerful C++ API that can be easily used in commercial applications, see DarkHelp.
Please see the Darknet/YOLO FAQ to see if it can help answer your question.
Check out the many tutorials and example videos on Stéphane's YouTube channel.
If you have support questions or want to chat with other Darknet/YOLO users, please join the Darknet/YOLO Discord server.
Roadmap
Last updated: 2024-10-30
Completed
Replaced qsort() with std::sort() during training (some other obscure replacements still exist)
Remove check_mistakes, getchar() and system()
Convert Darknet to use a C++ compiler (g++ on Linux, Visual Studio on Windows)
Fix Windows build
Fix Python support
Build darknet library
Re-enable labels on predictions ("alphabet" code)
Re-enable CUDA/GPU code
Re-enable CUDNN
Re-enable CUDNN half
Don't hardcode the CUDA architecture
Better CUDA version information
Re-enable AVX
Remove old solution and Makefile
Make OpenCV non-optional
Remove dependency on old pthread library
Delete STB
Rewrite CMakeLists.txt to use new CUDA instrumentation
Removed old "alphabet" code and deleted over 700 images in data/labels
Build outside source code
Have better version number output
Training-related performance optimizations (ongoing tasks)
Performance optimizations related to inference (ongoing tasks)
Pass by reference whenever possible
Clean .hpp files
Rewrite darknet.h
Don't convert cv::Mat to void*, instead use it as a correct C++ object
Fix or keep internal image structures used consistently
Fix build for ARM architecture Jetson devices
- Original Jetson devices are unlikely to be fixed, as they are no longer supported by NVIDIA (no C++17 compiler)
- New Jetson Orin devices are working
Fix Python API in V3
- Need better Python support (are there any Python developers willing to help with this issue?)
Short-term goals
Replace printf() with std::cout (work in progress)
Investigate old ZED camera support
Better and more consistent command line parsing (work in progress)
Mid-term goals
Remove all char* codes and replace with std::string
Don't hide warnings and clean up compiler warnings (work in progress)
Better use of cv::Mat instead of custom image structures in C (work in progress)
Replace old list functions with std::vector or std::list
Fix support for single channel grayscale images
Add support for N-channel images where N > 3 (e.g. images with extra depth or thermal channels)
Ongoing code cleanup (in progress)
Long-term goals
Fix CUDA/CUDNN issues related to all GPUs
Rewrite CUDA+cuDNN code
Consider adding support for non-NVIDIA GPUs
Rotated bounding box, or some form of "angle" support
Keypoints/skeleton
Heatmap (work in progress)
Segmentation