ASP+CSV Intelligent Adaptive Universal Score Query System
Today the editor of Downcodes introduces a query system, developed and published in ASP, that queries data stored in CSV format: the ASP+CSV intelligent adaptive universal score query system. We hope you find it useful.
Although extremely simple, this is a versatile and convenient score query system that can handle almost any single-sheet, two-dimensional Excel-style data table.
Purpose
This system is suitable for precise lookups of data that changes infrequently and has low confidentiality requirements, such as grades, wages, and property or utility bills. Typical usage scenarios:
1. Score queries: schools, educational institutions, public institution examinations, etc.
2. Salary queries: enterprises, schools, and other organizations that publish payroll figures.
3. Property fee queries: enterprises, schools, and other units with property fees to publish.
4. Utility bill queries: residential communities, property management companies, university dormitories, etc.
5. Other queries: class placement, admission results, certificates, and similar rarely-modified data.
Features and advantages
1. Highly versatile: works with almost any two-dimensional table, covering most query needs.
2. Simple and compact: the code is small and easy to modify for various scenarios, such as multi-table joint queries.
3. Flexible: customize a query by changing just a few parameters.
4. Fast to deploy: publishing a set of results can take as little as two to three minutes.
Limitations
1. Not suited to frequent modification: grades, wages, utility bills, and the like are generally published once and not changed afterwards; this system is not appropriate where the data changes often.
2. Two-dimensional tables only: the data file must have a header row first and one record per row thereafter. Other structures are not currently supported.
3. Keep individual data files small: the system does not enforce a limit, but it is recommended to keep each data file under 30,000 records; larger datasets can be split into multiple files that are queried independently.
4. No formulas, pictures, or URLs: these are not supported for the time being.
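To make the "two-dimensional table" constraint concrete, here is a minimal sketch of the kind of exact-match lookup this system performs. The file name, column layout, and records are all hypothetical, and the sketch uses awk rather than the package's actual ASP code, which is not shown here:

```shell
# Hypothetical two-dimensional table: header in the first row,
# one record per row thereafter.
cat > scores.csv <<'EOF'
ExamNo,Name,Chinese,Math,English
1001,Zhang San,85,92,78
1002,Li Si,73,88,90
EOF

# Exact-match lookup on the first column (ExamNo), similar in spirit
# to what the query page does when a user submits a key.
lookup() {
  awk -F, -v key="$1" 'NR == 1 { print; next } $1 == key { print }' scores.csv
}

lookup 1002
```

Running `lookup 1002` prints the header row followed by the single matching record, which is exactly the shape of result a score query page displays.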
Usage suggestions
Upload the files via FTP and the system can be used directly. It is recommended to upload it unchanged first and run a test query.
Front-end access: http://website/directory/ (works as uploaded; no MySQL or other database server is required).
Then open inc/conn.Asp in an editor such as Notepad++ to see how the parameters map to the web pages, and open the bundled sample data file to compare its contents against the query results.
Usage steps
For details, see the html format file in the compressed package.
Example
The following uses the Darknet object detection framework and YOLO as an example, showing how Markdown heading elements and a few layout adjustments can make information display neater and easier for users to read:
Darknet Object Detection Framework and YOLO
(image: darknet and hank.ai logos)
Darknet is an open source neural network framework written in C, C++ and CUDA.
YOLO (You Only Look Once) is a state-of-the-art real-time object detection system running in the Darknet framework.
Papers
Paper YOLOv7
Paper Scaled-YOLOv4
Paper YOLOv4
Paper YOLOv3
General Information
The Darknet/YOLO framework continues to be faster and more accurate than other frameworks and YOLO versions.
The framework is completely free and open source. You can use Darknet/YOLO in existing projects and products, including commercial products, without licensing or fees.
Darknet V3 ("Jazz"), released in October 2024, can run the LEGO dataset videos at up to 1000 FPS when using an NVIDIA RTX 3090 GPU, meaning each video frame is read, resized, and processed by Darknet/YOLO in 1 millisecond or less.
If you need help or want to discuss Darknet/YOLO, please join the Darknet/YOLO Discord server: https://discord.gg/zSq8rtW
The CPU version of Darknet/YOLO can run on simple devices such as Raspberry Pi, cloud & colab servers, desktops, laptops and high-end training equipment. The GPU version of Darknet/YOLO requires NVIDIA's CUDA-capable GPU.
Darknet/YOLO is known to run on Linux, Windows and Mac. Please see the build instructions below.
Darknet Version
The original Darknet tool, written by Joseph Redmon from 2013 to 2017, did not have a version number. We consider this version 0.x.
The next popular Darknet repository, maintained by Alexey Bochkovskiy from 2017 to 2021, also did not have a version number. We consider this version 1.x.
The Darknet repository sponsored by Hank.ai and maintained by Stéphane Charette since 2023 is the first with a version command. From 2023 until late 2024 the version command returned 2.x "OAK".
* Get familiar with the code base while trying to break as little existing functionality as possible.
* Rewrite the build steps for a unified CMake-based build on both Windows and Linux.
* Convert the code base to build with a C++ compiler.
* Enhance chart.png during training.
* Bug fixes and performance optimizations, mainly aimed at reducing the time required to train a network.
The last branch of this code base is version 2.1, in the v2 branch.
The next phase of development began in mid-2024 and was released in October 2024. The version command now returns 3.x "JAZZ". Changes include:
* Removed many old and unmaintained commands. If you need to run one of them, you can always check out the earlier v2 branch; if you do, please let us know so we can investigate adding the missing command back.
* Many performance optimizations, both during training and during inference.
* Modified the old C API; applications using the original Darknet API will need minor modifications: https://darknetcv.ai/api/api.html
* New Darknet V3 C and C++ API: https://darknetcv.ai/api/api.html
* New applications and sample code in src-examples: https://darknetcv.ai/api/files.html
MSCOCO Pre-trained Weights
For convenience, several popular versions of YOLO come pre-trained on the MSCOCO dataset. This dataset contains 80 classes, which are listed in the text file cfg/coco.names.
There are several other simpler datasets and pre-trained weights available for testing Darknet/YOLO, such as LEGO Gears and Rolodex. See the Darknet/YOLO FAQ for details.
MSCOCO pre-trained weights can be downloaded from a number of different locations or from this repository:
YOLOv2, November 2016
* YOLOv2-tiny
* YOLOv2-full
YOLOv3, May 2018
* YOLOv3-tiny
* YOLOv3-full
YOLOv4, May 2020
* YOLOv4-tiny
* YOLOv4-full
YOLOv7, August 2022
* YOLOv7-tiny
* YOLOv7-full
MSCOCO pretrained weights are for demonstration purposes only. The corresponding .cfg and .names files for MSCOCO are located in the cfg directory. Example command:
`
wget --no-clobber https://github.com/hank-ai/darknet/releases/download/v2.0/yolov4-tiny.weights
darknet_02_display_annotated_images coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
darknet_03_display_videos coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights image1.jpg
DarkHelp coco.names yolov4-tiny.cfg yolov4-tiny.weights video1.avi
`
Note that you are expected to train your own network. The MSCOCO weights are usually just used to confirm that everything is working properly.
Building
Various build methods provided in the past (before 2023) have been merged into a unified solution. Darknet requires C++17 or newer, OpenCV, and uses CMake to generate the necessary project files.
You don't need to know C++ to build, install, or run Darknet/YOLO, just like you don't need to be a mechanic to drive a car.
Google Colab
The instructions for Google Colab are the same as for Linux. There are several Jupyter notebooks available that show how to perform certain tasks, such as training a new network.
See the notebook in the colab subdirectory, or follow the Linux instructions below.
Linux CMake Method
Optional: if you have a modern NVIDIA GPU, you can install CUDA or CUDA+cuDNN at this point. If installed, Darknet will use your GPU to accelerate image (and video) processing.
Darknet can run without it, but CUDA or CUDA+cuDNN is required if you want to train a custom network.
Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#cudnn-package-manager-installation-overview to download and install cuDNN.
Once CUDA is installed, make sure you can run nvcc and nvidia-smi. You may need to modify your PATH variable.
If you install CUDA or CUDA+cuDNN at a later time, or you upgrade to a newer version of the NVIDIA software:
Required: you must delete the CMakeCache.txt file from your Darknet build directory to force CMake to re-find all the necessary files.
Required: remember to re-build Darknet.
These instructions assume (but don't necessarily require!) the system is running Ubuntu 22.04. If you are using another distribution, please adjust as needed.
`
sudo apt-get install build-essential git libopencv-dev cmake
mkdir ~/src
cd ~/src
git clone https://github.com/hank-ai/darknet
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j4
make -j4 package
sudo dpkg -i darknet-VERSION.deb
`
If you are using an older version of CMake, then you need to upgrade CMake before running the cmake command above. To upgrade CMake on Ubuntu you can use the following command:
`
sudo apt-get purge cmake
sudo snap install cmake --classic
`
If you use bash as your command shell, you may need to restart your shell. If you use fish it should pick up the new path immediately.
Advanced users:
If you want to build an RPM installation file instead of a DEB file, see the relevant lines in CM_package.cmake. Before running make -j4 package, you need to edit these two lines:
`
SET (CPACK_GENERATOR "DEB")
# SET (CPACK_GENERATOR "RPM")
`
For distributions like CentOS and OpenSUSE, you need to change those two lines in CM_package.cmake to:
`
# SET (CPACK_GENERATOR "DEB")
SET (CPACK_GENERATOR "RPM")
`
To install the installation package, once it has been built, use your distribution's usual package manager. For example, on a Debian-based system (such as Ubuntu):
`
sudo dpkg -i darknet-2.0.1-Linux.deb
`
Installing the .deb package will copy the following files:
/usr/bin/darknet is the usual Darknet executable. Run darknet version from the CLI to confirm it is installed correctly.
/usr/include/darknet.h is the Darknet API for C, C++ and Python developers.
/usr/include/darknet_version.h contains version information for developers.
/usr/lib/libdarknet.so is a library for C, C++ and Python developers to link against.
/opt/darknet/cfg/... is where all .cfg templates are stored.
Now you're done! Darknet has been built and installed into /usr/bin/. Run the following command to test: darknet version.
If you don't have /usr/bin/darknet, that means you only built it and did not install it! Make sure you install the .deb or .rpm file as described above.
Windows CMake Method
These instructions assume you have a clean installation of Windows 11 22H2.
Open a normal cmd.exe command prompt window and run the following command:
`
winget install Git.Git
winget install Kitware.CMake
winget install nsis.nsis
winget install Microsoft.VisualStudio.2022.Community
`
At this point we need to modify the Visual Studio installation to include support for C++ applications:
* Click the "Windows Start" menu and run "Visual Studio Installer".
* Click Modify.
* Select Desktop Development With C++.
* Click Modify in the lower right corner, then click Yes.
Once everything is downloaded and installed, click on the "Windows Start" menu again and select Developer Command Prompt for VS 2022. Do not use PowerShell for these steps; you will run into problems!
Advanced users:
Instead of running the Developer Command Prompt, you can use a normal command prompt or ssh into the device and manually run "C:\Program Files\Microsoft Visual Studio\2022\Community\Common7\Tools\VsDevCmd.bat".
Once you've followed the instructions above and run the Developer Command Prompt (not PowerShell!), run the following command to install Microsoft VCPKG, which will be used to build OpenCV:
`
cd c:\
mkdir c:\src
cd c:\src
git clone https://github.com/microsoft/vcpkg
cd vcpkg
bootstrap-vcpkg.bat
.\vcpkg.exe integrate install
.\vcpkg.exe integrate powershell
.\vcpkg.exe install opencv[contrib,dnn,freetype,jpeg,openmp,png,webp,world]:x64-windows
`
Please be patient with this last step as it may take a long time to run. It requires downloading and building a lot of stuff.
Advanced users:
Note that there are many other optional modules you may want to add when building OpenCV. Run .\vcpkg.exe search opencv to see the complete list.
Optional: if you have a modern NVIDIA GPU, you can install CUDA or CUDA+cuDNN at this point. If installed, Darknet will use your GPU to accelerate image (and video) processing.
Darknet can run without it, but CUDA or CUDA+cuDNN is required if you want to train a custom network.
CUDA must be installed after Visual Studio. If you upgrade Visual Studio, remember to re-install CUDA.
Visit https://developer.nvidia.com/cuda-downloads to download and install CUDA.
Visit https://developer.nvidia.com/rdp/cudnn-download or https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#download-windows to download and install cuDNN.
Once CUDA is installed, make sure you can run nvcc.exe and nvidia-smi.exe. You may need to modify your PATH variable.
Once you download cuDNN, unzip it and copy the bin, include, and lib directories into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/[version]/. You may need to overwrite some files.
If you install CUDA or CUDA+cuDNN at a later time, or you upgrade to a newer version of the NVIDIA software:
Required: you must delete the CMakeCache.txt file from your Darknet build directory to force CMake to re-find all the necessary files.
Required: remember to re-build Darknet.
Once all previous steps are completed successfully, you need to clone Darknet and build it. In this step we also need to tell CMake where vcpkg is located so that it can find OpenCV and other dependencies:
`
cd c:\src
git clone https://github.com/hank-ai/darknet.git
cd darknet
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=C:/src/vcpkg/scripts/buildsystems/vcpkg.cmake ..
msbuild.exe /property:Platform=x64;Configuration=Release /target:Build -maxCpuCount -verbosity:normal -detailedSummary darknet.sln
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
If you get errors about missing CUDA or cuDNN DLLs, such as cublas64_12.dll, manually copy the CUDA .dll files into the same output directory as Darknet.exe. For example:
`
copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2\bin\*.dll" src-cli\Release\
`
(This is an example! Please check to make sure which version you are running, and run the appropriate command for the version you have installed.)
Once the files are copied, re-run the last msbuild.exe command to generate the NSIS installation package:
`
msbuild.exe /property:Platform=x64;Configuration=Release PACKAGE.vcxproj
`
Advanced users:
Please note that the output of the cmake command is a normal Visual Studio solution file, Darknet.sln. If you are a software developer who frequently uses the Visual Studio GUI instead of msbuild.exe to build projects, you can ignore the command line and load the Darknet project in Visual Studio.
You should now have this file that you can run: C:\src\darknet\build\src-cli\Release\darknet.exe. Run the following command to test: C:\src\darknet\build\src-cli\Release\darknet.exe version.
To properly install Darknet, libraries, include files and necessary DLLs, run the NSIS installation wizard built in the last step. See the file darknet-VERSION.exe in the build directory. For example:
`
darknet-2.0.31-win64.exe
`
Installing the NSIS installation package will:
Create a directory called Darknet, for example C:\Program Files\Darknet.
Install the CLI application, darknet.exe and other sample applications.
Install required third-party .dll files, such as those from OpenCV.
Install the necessary Darknet .dll, .lib, and .h files to use darknet.dll from another application.
Install the template .cfg file.
Now you're done! Once the installation wizard is complete, Darknet will be installed into C:\Program Files\Darknet. Run the following command to test: "C:\Program Files\Darknet\bin\darknet.exe" version.
If you don't have C:\Program Files\Darknet\bin\darknet.exe, that means you only built it and did not install it! Make sure you go through each panel of the NSIS installation wizard as described in the previous step.
Using Darknet
CLI
The following is not a complete list of all commands supported by Darknet.
In addition to the Darknet CLI, also note the DarkHelp project, which provides an alternative CLI for Darknet/YOLO. The DarkHelp CLI also has some advanced features not available directly in Darknet. You can use the Darknet CLI and the DarkHelp CLI together; they are not mutually exclusive.
For most of the commands shown below, you will need to use the .weights file for the corresponding .names and .cfg files. You can train your own network (highly recommended!), or download neural networks from the Internet that have been trained by others and are freely available. Examples of pre-training datasets include:
LEGO Gears (find objects in images)
Rolodex (find text in image)
MSCOCO (standard 80-category target detection)
Commands to run include:
List some commands and options that can be run:
`
darknet help
`
Check version:
`
darknet version
`
Use images to make predictions:
`
V2: darknet detector test cars.data cars.cfg cars_best.weights image1.jpg
V3: darknet_02_display_annotated_images cars.cfg image1.jpg
DarkHelp: DarkHelp cars.cfg cars.names cars_best.weights image1.jpg
`
Output coordinates:
`
V2: darknet detector test animals.data animals.cfg animals_best.weights -ext_output dog.jpg
V3: darknet_01_inference_images animals dog.jpg
DarkHelp: DarkHelp --json animals.cfg animals.names animals_best.weights dog.jpg
`
Use video:
`
V2: darknet detector demo animals.data animals.cfg animals_best.weights -ext_output test.mp4
V3: darknet_03_display_videos animals.cfg test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
Reading from webcam:
`
V2: darknet detector demo animals.data animals.cfg animals_best.weights -c 0
V3: darknet_08_display_webcam animals
`
Save results to video:
`
V2: darknet detector demo animals.data animals.cfg animals_best.weights test.mp4 -out_filename res.avi
V3: darknet_05_process_videos_multithreaded animals.cfg animals.names animals_best.weights test.mp4
DarkHelp: DarkHelp animals.cfg animals.names animals_best.weights test.mp4
`
JSON:
`
V2: darknet detector demo animals.data animals.cfg animals_best.weights test50.mp4 -json_port 8070 -mjpeg_port 8090 -ext_output
V3: darknet_06_images_to_json animals image1.jpg
DarkHelp: DarkHelp --json animals.names animals.cfg animals_best.weights image1.jpg
`
Run on specific GPU:
`
V2: darknet detector demo animals.data animals.cfg animals_best.weights -i 1 test.mp4
`
Check the accuracy of the neural network:
`
darknet detector map driving.data driving.cfg driving_best.weights ...
Id Name AvgPrecision TP FN FP TN Accuracy ErrorRate Precision Recall Specificity FalsePosRate
-- ---- ------------ ------ ------ ------ ------ -------- --------- --------- ------ ---------- ----------
0 vehicle 91.2495 32648 3903 5826 65129 0.9095 0.0905 0.8486 0.8932 0.9179 0.0821
1 motorcycle 80.4499 2936 513 569 5393 0.8850 0.1150 0.8377 0.8513 0.9046 0.0954
2 bicycle 89.0912 570 124 104 3548 0.9475 0.0525 0.8457 0.8213 0.9715 0.0285
3 person 76.7937 7072 1727 2574 27523 0.8894 0.1106 0.7332 0.8037 0.9145 0.0855
4 many vehicles 64.3089 1068 509 733 11288 0.9087 0.0913 0.5930 0.6772 0.9390 0.0610
5 green light 86.8118 1969 239 510 4116 0.8904 0.1096 0.7943 0.8918 0.8898 0.1102
6 yellow light 82.0390 126 38 30 1239 0.9525 0.0475 0.8077 0.7683 0.9764 0.0236
7 red light 94.1033 3449 217 451 4643 0.9237 0.0763 0.8844 0.9408 0.9115 0.0885
`
Check accuracy mAP@IoU=75:
`
darknet detector map animals.data animals.cfg animals_best.weights -iou_thresh 0.75
`
Recalculating anchors is best done in DarkMark, since it runs the calculation 100 consecutive times and selects the best anchors from all of them. But if you want to run the older calculation in Darknet:
`
darknet detector calc_anchors animals.data -num_of_clusters 6 -width 320 -height 256
`
Train a new network:
`
darknet detector -map -dont_show train animals.data animals.cfg (also see the training section below)
`
Training
Quick link to the relevant section of the Darknet/YOLO FAQ:
* How do I set up my files and directories?
* Which profile should I use?
* What commands should I use when training my own network?
Use DarkMark to create all necessary Darknet files, which is the easiest way to annotate and train. This is definitely the recommended way to train new neural networks.
If you want to manually set up the various files to train a custom network:
1. Create a new folder to store the files. In this example, a neural network will be created to detect animals, so the following directory will be created: ~/nn/animals/.
2. Copy one of the Darknet configuration files you want to use as a template. For example, see cfg/yolov4-tiny.cfg. Place it in the folder you created. In this example, we now have ~/nn/animals/animals.cfg.
3. In the same folder where you placed the configuration file, create an animals.names text file. In this case, we now have ~/nn/animals/animals.names.
4. Use a text editor to edit the animals.names file. List the categories you want to use. You need to have exactly one entry per line, no blank lines, and no comments. In this example, the .names file will contain exactly 4 lines:
`
dog
cat
bird
horse
`
5. Create an animals.data text file in the same folder. In this example, the .data file will contain:
`
classes = 4
train = /home/username/nn/animals/animals_train.txt
valid = /home/username/nn/animals/animals_valid.txt
names = /home/username/nn/animals/animals.names
backup = /home/username/nn/animals
`
6. Create a folder to store your images and annotations. For example, this could be ~/nn/animals/dataset. Each image requires a corresponding .txt file with annotations describing that image. The format of .txt comment files is very specific. You cannot create these files manually because each annotation needs to contain the exact coordinates of the annotation. Please refer to DarkMark or other similar software to annotate your images. The YOLO annotation format is described in the Darknet/YOLO FAQ.
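As a rough sketch of what a Darknet/YOLO annotation .txt file looks like (the Darknet/YOLO FAQ remains the authoritative description): each line describes one object, giving the class index followed by the bounding box's x-center, y-center, width, and height, all normalized to the 0-1 range relative to the image size. The class indices and coordinates below are purely illustrative:

```
0 0.4883 0.5512 0.2031 0.3320
3 0.7100 0.2900 0.0850 0.1200
```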
7. Create "train" and "valid" text files named in the .data file. These two text files need to list all the images that Darknet must use for training and validation (when calculating mAP%) respectively. There is exactly one image per row. Paths and filenames can be relative or absolute.
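For example, a hypothetical animals_train.txt (the image file names are illustrative) might contain:

```
/home/username/nn/animals/dataset/image_0001.jpg
/home/username/nn/animals/dataset/image_0002.jpg
/home/username/nn/animals/dataset/image_0003.jpg
```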
8. Use a text editor to modify your .cfg file.
* Make sure batch=64.
* Pay attention to subdivisions. Depending on the network dimensions and the amount of memory available on your GPU, you may need to increase subdivisions. The best value to use is 1, so start with that. If 1 doesn't work for you, see the Darknet/YOLO FAQ.
* Note max_batches=... A good value to start with is 2000 times the number of classes. In this example we have 4 animals, so 4 × 2000 = 8000, meaning we will use max_batches=8000.
* Note steps=... This should be set to 80% and 90% of max_batches. In this example we will use steps=6400,7200, since max_batches is set to 8000.
* Pay attention to width=... and height=... These are the network dimensions. The Darknet/YOLO FAQ explains how to calculate the optimal size to use.
* Search for all instances of the line classes=... and change them to the number of classes in your .names file. In this example we will use classes=4.
* In the [convolutional] section immediately before each [yolo] section, search for the line filters=... The value to use is (number of classes + 5) × 3. In this example, (4 + 5) × 3 = 27, so we will use filters=27 on those lines.
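Putting those edits together, the relevant lines of the hypothetical ~/nn/animals/animals.cfg might look like the fragment below. The width and height values are illustrative only; the other numbers follow the arithmetic above.

```
batch=64
subdivisions=1
width=416
height=416
max_batches=8000
steps=6400,7200

[convolutional]
filters=27    # (4 classes + 5) * 3

[yolo]
classes=4
```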
9. Start training! Run the following command:
`
cd ~/nn/animals/
darknet detector -map -dont_show train animals.data animals.cfg
`
Please wait. The best weights will be saved as animals_best.weights. You can observe the progress of training by viewing the chart.png file. See the Darknet/YOLO FAQ for additional parameters you may want to use when training a new network.
If you want to see more details during training, add the --verbose parameter. For example:
`
darknet detector -map -dont_show --verbose train animals.data animals.cfg
`
Other Tools and Links
To manage your Darknet/YOLO project, annotate images, validate your annotations, and generate the necessary files for training with Darknet, see DarkMark.
For a robust Darknet alternative CLI for using image tiling for object tracking in video, or for a robust C++ API that can be easily used in commercial applications, see DarkHelp.
Please check out the Darknet/YOLO FAQ and see if it helps answer your question.
Please watch the many tutorials and example videos on Stéphane's YouTube channel.
If you have support questions, or would like to chat with other Darknet/YOLO users, please join the Darknet/YOLO Discord server.
Roadmap
Last updated: 2024-10-30
Completed
Replaced qsort() used during training with std::sort() (some other less common ones still exist)
Remove check_mistakes, getchar() and system()
Convert Darknet to use a C++ compiler (g++ on Linux, Visual Studio on Windows)
Fix Windows build
Fix Python support
Build darknet library
Re-enable labels in predictions ("alphabet" code)
Re-enable CUDA/GPU code
Re-enable CUDNN
Re-enable CUDNN half
Don't hardcode the CUDA architecture
Better CUDA version information
Re-enable AVX
Remove old solution and Makefile
Make OpenCV non-optional
Remove dependency on old pthread library
Delete STB
Rewrite CMakeLists.txt to use new CUDA detection
Removed old "alphabet" code and deleted over 700 images in data/labels
Support out-of-source builds
Better version number output
Training-related performance optimizations (ongoing tasks)
Performance optimizations related to inference (ongoing tasks)
Use pass-by-reference instead of pass-by-value wherever possible
Clean .hpp files
Rewrite darknet.h
Don't cast cv::Mat to void*; use it as a proper C++ object
Fix or make consistent how the internal image structure is used
Fix build for ARM-based Jetson devices
* Since NVIDIA no longer supports the original Jetson devices (no C++17 compiler), they are unlikely to be fixed
* New Jetson Orin devices are working
Fix Python API in V3
Better Python support needed (any Python developers want to help with this?)
Short-term goals
Replace printf() with std::cout (work in progress)
Revisit old ZED camera support
Better, more consistent command line parsing (work in progress)
Mid-term goals
Remove all char codes and replace with std::string
Don't hide warnings, clean up compiler warnings (work in progress)
Use cv::Mat instead of the custom C image structure (work in progress)
Replace old list functions with std::vector or std::list
Fix support for 1-channel grayscale images
Add support for N-channel images where N > 3 (e.g. images with additional depth or thermal channels)
Ongoing code cleanup (in progress)
Long-term goals
Fix CUDA/CUDNN issues for all GPUs
Rewrite CUDA+cuDNN code
Consider adding support for non-NVIDIA GPUs
Rotated bounding box, or some form of "angle" support
Key points / skeletons
Heatmaps (work in progress)
Segmentation
In this example, Markdown heading elements were used and the content was logically supplemented, polished, and re-typeset. Formatting touches such as numbered lists, bullet points, and line breaks make the information display more orderly and easier for users to check.
I hope this article is helpful to everyone!