Hyperparameter Tuning

Algorithms:
- Exhaustive search
- Heuristic search
  - Anneal
  - Evolution
  - Hyperband
  - PBT
- Bayesian optimization
  - BOHB
  - DNGO
  - GP
  - Metis
  - SMAC
  - TPE

Neural Architecture Search

Algorithms:
- Multi-trial
  - Grid Search
  - Policy Based RL
  - Random
  - Regularized Evolution
  - TPE
- One-shot
  - DARTS
  - ENAS
  - FBNet
  - ProxylessNAS
  - SPOS

Model Compression

Algorithms:
- Pruning
  - Level
  - L1 Norm
  - Taylor FO Weight
  - Movement
  - AGP
  - Auto Compress
  - More...
- Quantization
  - Naive
  - QAT
  - LSQ
  - Observer
  - DoReFa
  - BNN
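The pruning algorithms listed above (Level, L1 Norm, and so on) are variations on one idea: rank weights by an importance score and zero out the least important ones until a target sparsity is reached. Below is a minimal sketch of magnitude-based ("level") pruning in plain NumPy; it is not NNI's pruner API, and the function name and shapes are illustrative only.

```python
import numpy as np

def level_prune(weights, sparsity):
    """Magnitude ('level') pruning sketch: zero out the smallest-magnitude
    fraction of entries, given a target sparsity in [0, 1)."""
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)          # number of entries to prune
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold     # keep strictly larger magnitudes
    return weights * mask, mask

# Toy example: prune half the entries of a random 4x4 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned, mask = level_prune(w, sparsity=0.5)
```

In a real pruner the mask would be applied during or after training, and a "speedup" step would then shrink the model to exploit the zeros.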
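The hyperparameter tuners above all drive the same trial loop: propose a configuration, run a trial, report a metric, repeat. As a minimal illustration of that loop in plain Python (not NNI's API, and with a toy `evaluate` function standing in for a real training run), a random-search tuner can be sketched as:

```python
import random

# Toy search space: each key maps to the values a trial may take.
SEARCH_SPACE = {
    "lr": [1e-4, 1e-3, 1e-2, 1e-1],
    "batch_size": [16, 32, 64],
}

def sample(space, rng):
    """Propose one configuration by sampling each dimension independently."""
    return {name: rng.choice(values) for name, values in space.items()}

def evaluate(config):
    """Stand-in for a real training run; returns a score to maximize."""
    return -abs(config["lr"] - 1e-2) - abs(config["batch_size"] - 32) / 100

def random_search(space, n_trials=20, seed=0):
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = sample(space, rng)   # tuner proposes a configuration
        score = evaluate(config)      # trial runs and reports a metric
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(SEARCH_SPACE)
```

Smarter tuners (TPE, SMAC, Hyperband, ...) replace the independent sampling step with a model of past trials, but plug into the same loop.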
Supported Frameworks
- PyTorch
- TensorFlow
- Scikit-learn
- XGBoost
- LightGBM
- MXNet
- Caffe2
- More...

Training Services
- Local machine
- Remote SSH servers
- Azure Machine Learning (AML)
- Kubernetes based
  - OpenPAI
  - Kubeflow
  - FrameworkController
  - AdaptDL
  - PAI DLC
- Hybrid training services

Tutorials
- HPO
- NAS
- Compression
  - Pruning
  - Pruning Speedup
  - Quantization
  - Quantization Speedup
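An experiment ties these pieces together: a search space, a trial command, a tuner, and a training service are named in a single configuration. The fragment below is a sketch in the style of NNI's v2 YAML experiment config; treat the exact field names, the `train.py` trial script, and the values as assumptions to verify against the NNI documentation.

```yaml
# Sketch of an NNI experiment config (field names assumed from the v2 schema).
searchSpace:
  lr:
    _type: loguniform
    _value: [0.00001, 0.1]
  batch_size:
    _type: choice
    _value: [16, 32, 64]
trialCommand: python train.py   # hypothetical trial script
trialConcurrency: 2
maxTrialNumber: 20
tuner:
  name: TPE
  classArgs:
    optimize_mode: maximize
trainingService:
  platform: local               # swap for remote/AML/Kubernetes services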
Resources
- NNI Documentation Homepage
- NNI Installation Guide
- NNI Examples
- Python API Reference
- Releases (Change Log)
- Related Research and Publications
- YouTube channel of NNI
- Bilibili space of NNI
- Webinar: Introducing Retiarii, a deep learning exploratory-training framework on NNI
- Community Discussions
Contribution guidelines
If you want to contribute to NNI, be sure to review the contribution guidelines, which include instructions for submitting feedback, best coding practices, and the code of conduct.
We use GitHub issues to track feature requests and bugs.
Please use NNI Discussion for general questions and new ideas.
For questions about specific use cases, please go to Stack Overflow.
You are also welcome to join the discussions in the following IM groups.
- Gitter
- WeChat
Over the past few years, NNI has received thousands of pieces of feedback through GitHub issues, as well as pull requests from hundreds of contributors.
We appreciate all contributions from the community that help NNI thrive.
Test status

Essentials
- Fast test
- Full test - HPO
- Full test - NAS
- Full test - compression

Training services
- Local - linux
- Local - windows
- Remote - linux to linux
- Remote - windows to windows
- OpenPAI
- FrameworkController
- Kubeflow
- Hybrid
- AzureML
Related Projects
Aiming at openness and advancing state-of-the-art technology, Microsoft Research (MSR) has also released a few other open source projects.
- OpenPAI: an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premises, cloud, and hybrid environments at various scales.
- FrameworkController: an open source general-purpose Kubernetes Pod Controller that orchestrates all kinds of applications on Kubernetes with a single controller.
- MMdnn: a comprehensive, cross-framework solution to convert, visualize, and diagnose deep neural network models. The "MM" in MMdnn stands for model management, and "dnn" is an acronym for deep neural network.
- SPTAG: Space Partition Tree And Graph (SPTAG) is an open source library for large-scale approximate nearest neighbor search over vectors.
- nn-Meter: an accurate inference latency predictor for DNN models on diverse edge devices.
We encourage researchers and students to leverage these projects to accelerate AI development and research.
License
The entire codebase is under MIT license.