A higher-level Neural Network library for microcontrollers.

License

NotificationsYou must be signed in to change notification settings

majianjia/nnom

Repository files navigation

Build StatusLicenseDOI

NNoM is a high-level inference Neural Network library specifically for microcontrollers.

[English Manual] [中文简介 (Chinese Introduction)]

Highlights

  • Deploy a Keras model to an NNoM model with one line of code.
  • Supports complex structures: Inception, ResNet, DenseNet, Octave Convolution...
  • User-friendly interfaces.
  • High-performance backend selections.
  • Onboard pre-compiling: zero interpreter performance loss at runtime.
  • Onboard evaluation tools: runtime analysis, Top-k, confusion matrix...

The structure of NNoM is shown below:

More details are available in the Development Guide.

Discussions are welcome via issues. Pull requests are welcome. QQ/TIM group: 763089399.

Latest Updates - v0.4.x

Recurrent Layers (RNN) (0.4.1)

Recurrent layers (Simple RNN, GRU, LSTM) are implemented in version 0.4.1. They support the stateful and return_sequences options.
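The return_sequences option mirrors its Keras counterpart: it controls whether the layer yields the hidden state at every timestep or only the final one. A minimal, self-contained sketch of the idea (pure floating-point Python for illustration only; NNoM's fixed-point kernels work differently):

```python
import math

def simple_rnn(inputs, w_x, w_h, return_sequences=False, state=0.0):
    """One-unit simple RNN: h_t = tanh(w_x * x_t + w_h * h_{t-1})."""
    outputs = []
    h = state
    for x in inputs:
        h = math.tanh(w_x * x + w_h * h)
        outputs.append(h)
    # return_sequences=True yields the hidden state at every timestep;
    # False yields only the final state.
    return outputs if return_sequences else outputs[-1]

seq = [1.0, 0.5, -0.25]
full = simple_rnn(seq, w_x=0.5, w_h=0.1, return_sequences=True)
last = simple_rnn(seq, w_x=0.5, w_h=0.1, return_sequences=False)
assert last == full[-1]          # the final state is the same either way
assert len(full) == len(seq)     # one output per timestep
```

The stateful option keeps the final hidden state between calls (here, by passing the returned state back in as the `state` argument) instead of resetting it to zero for each new sequence.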

New Structured Interface (0.4.0)

NNoM provides a new layer interface called the Structured Interface, with functions marked by the _s suffix. It uses a single C structure to provide all the configuration for a layer. Unlike the Layer API, which is human-friendly, the structured API is more machine-friendly.

Per-Channel Quantisation (0.4.0)

The new structured API supports per-channel quantisation (per-axis) and dilations for convolutional layers.
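Per-channel quantisation assigns each output channel its own power-of-two scale instead of one scale for the whole tensor, so a small-valued channel is not starved of resolution by a large-valued one. A self-contained sketch of the underlying arithmetic for 8-bit (Q7) fixed point — the helper names are illustrative, not NNoM's implementation:

```python
import math

def q7_shift(max_abs):
    """Number of fractional bits for Q7 given the largest magnitude."""
    int_bits = math.ceil(math.log2(max_abs)) if max_abs > 0 else 0
    return 7 - int_bits

def quantise(values, shift):
    """Saturating Q7 quantisation with the given fractional shift."""
    return [max(-128, min(127, round(v * (1 << shift)))) for v in values]

# Two output channels with very different ranges:
channels = [[0.02, -0.05, 0.04], [3.1, -2.4, 1.8]]

# Per-tensor: one shift chosen from the global maximum, so the
# small-valued channel 0 loses nearly all of its resolution.
global_shift = q7_shift(max(abs(v) for ch in channels for v in ch))

# Per-channel: each channel gets its own shift.
per_channel_shifts = [q7_shift(max(abs(v) for v in ch)) for ch in channels]

q_global = quantise(channels[0], global_shift)            # coarse: [1, -2, 1]
q_per_ch = quantise(channels[0], per_channel_shifts[0])   # fine: [41, -102, 82]
```

With a single global shift, all three values of channel 0 collapse into just a couple of quantisation steps; with its own shift the channel keeps meaningful resolution.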

New Scripts (0.4.0)

From 0.4.0, NNoM uses the structured interface by default to generate the model header weights.h. The script corresponding to the structured interface is nnom.py, while the Layer Interface corresponds to nnom_utils.py.

Licenses

NNoM has been released under the Apache License 2.0 since nnom-V0.2.0. License and copyright information can be found within the code.

Why NNoM?

The aim of NNoM is to provide a lightweight, user-friendly, and flexible interface for fast deployment on MCUs.

Nowadays, neural networks are wider [1], deeper [2], and denser [3].

[1] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., ... & Rabinovich, A. (2015). Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1-9).

[2] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

[3] Huang, G., Liu, Z., Van Der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4700-4708).

After 2014, neural network development focused more on structure optimisation to improve efficiency and performance, which is especially important for small-footprint platforms such as MCUs. However, the available NN libraries for MCUs are too low-level, which makes them very difficult to use with these complex structures.

Therefore, we built NNoM to help embedded developers deploy NN models directly to MCUs faster and more simply.

NNoM manages the structure, memory, and everything else for the developer. All you need to do is feed in your new measurements and collect the results.

Installing

NNoM can be installed as a Python package:

pip install git+https://github.com/majianjia/nnom@master

NNoM requires TensorFlow version <= 2.14. There are multiple options for installing it; see the TensorFlow documentation.

For example:

pip install 'tensorflow-cpu<=2.14.1'

NOTE: TensorFlow 2.14 supports Python versions up to 3.11; Python 3.12 is not supported.

Accessing C files

The C headers and source code in NNoM are distributed in the nnom_core Python package. You can find its location by running the following command.

python -c "import nnom_core; print(nnom_core.__path__[0])"

In your build system, add the inc/ and port/ directories as include directories, and compile the src/*.c files.
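For instance, the integration could look like the following CMake fragment. This is an illustrative sketch, not an official build script; the NNOM_DIR path is hypothetical and should be set to whatever the python -c command above prints on your machine:

```cmake
# Hypothetical integration sketch; set NNOM_DIR to the path printed by
# python -c "import nnom_core; print(nnom_core.__path__[0])"
set(NNOM_DIR "/path/to/site-packages/nnom_core")

file(GLOB NNOM_SOURCES "${NNOM_DIR}/src/*.c")
add_library(nnom STATIC ${NNOM_SOURCES})

# Expose inc/ and port/ to anything that links against the library.
target_include_directories(nnom PUBLIC "${NNOM_DIR}/inc" "${NNOM_DIR}/port")
```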

Documentation

Guides

5 min to NNoM Guide

The temporary guide

Porting and optimising Guide

RT-Thread Guide (Chinese)

RT-Thread-MNIST example (Chinese)

Performance

Several articles have compared NNoM with other well-known MCU AI tools, such as TensorFlow Lite and STM32Cube.AI.

Raphael Zingg et al. from Zurich University of Applied Sciences compared NNoM with TFLite, Cube.AI, and e-AI in their article "Artificial Intelligence on Microcontrollers": https://blog.zhaw.ch/high-performance/2020/05/14/artificial-intelligence-on-microcontrollers/

(Figure: performance comparison of NNoM, TFLite, Cube.AI, and e-AI)

Butt Usman Ali from Politecnico di Torino made the comparison below in the thesis "On the deployment of Artificial Neural Networks (ANN) in low cost embedded systems".

(Figure: performance comparison of NNoM, TFLite, and Cube.AI)

Both articles show that NNoM is not only comparable with other popular NN frameworks but often achieves faster inference times and sometimes a smaller memory footprint.

Note: These graphs and tables are credited to their authors. Please refer to the original papers for details and copyright.

Examples

Documented examples

Please check the examples and choose one to start with.

Available Operations

[API Manual]

*Note: NNoM now supports both HWC and CHW formats. Some operations might not support both formats currently; please check the tables for the current status.*

Core Layers

| Layers | Struct API | Layer API | Comments |
|---|---|---|---|
| Convolution | conv2d_s() | Conv2D() | Support 1/2D, support dilations (New!) |
| ConvTransposed (New!) | conv2d_trans_s() | Conv2DTrans() | Under Dev. |
| Depthwise Conv | dwconv2d_s() | DW_Conv2D() | Support 1/2D |
| Fully-connected | dense_s() | Dense() | |
| Lambda | lambda_s() | Lambda() | Single input / single output anonymous operation |
| Batch Normalization | N/A | N/A | This layer is merged into the last Conv by the script |
| Flatten | flatten_s() | Flatten() | |
| Reshape (New!) | reshape_s() | N/A | |
| SoftMax | softmax_s() | SoftMax() | Softmax only has layer API |
| Activation | N/A | Activation() | A layer instance for activation |
| Input/Output | input_s()/output_s() | Input()/Output() | |
| Up Sampling | upsample_s() | UpSample() | |
| Zero Padding | zeropadding_s() | ZeroPadding() | |
| Cropping | cropping_s() | Cropping() | |

RNN Layers

| Layers | Status | Struct API | Comments |
|---|---|---|---|
| Recurrent NN Layer (New!) | Alpha | rnn_s() | Layer wrapper of RNN |
| Simple Cell (New!) | Alpha | simple_cell_s() | |
| GRU Cell (New!) | Alpha | gru_cell_s() | Gated Recurrent Network |
| LSTM Cell (New!) | Alpha | lstm_s() | Long Short-Term Memory |

Activations

Activation can be used as a layer by itself, or can be attached to the previous layer as an "actail" to reduce memory cost.

There is currently no structured API for activations, since an activation is not usually used as a standalone layer.

| Activation | Struct API | Layer API | Activation API | Comments |
|---|---|---|---|---|
| ReLU | N/A | ReLU() | act_relu() | |
| Leaky ReLU (New!) | N/A | LeakyReLU() | act_leaky_relu() | |
| Adv ReLU (New!) | N/A | N/A | act_adv_relu() | Advanced ReLU: slope, max, threshold |
| TanH | N/A | TanH() | act_tanh() | |
| Hard TanH (New!) | N/A | TanH() | | Backend only |
| Sigmoid | N/A | Sigmoid() | act_sigmoid() | |
| Hard Sigmoid (New!) | N/A | N/A | N/A | Backend only |

Pooling Layers

| Pooling | Struct API | Layer API | Comments |
|---|---|---|---|
| Max Pooling | maxpool_s() | MaxPool() | |
| Average Pooling | avgpool_s() | AvgPool() | |
| Sum Pooling | sumpool_s() | SumPool() | |
| Global Max Pooling | global_maxpool_s() | GlobalMaxPool() | |
| Global Average Pooling | global_avgpool_s() | GlobalAvgPool() | |
| Global Sum Pooling | global_sumpool_s() | GlobalSumPool() | Dynamic output shift |

Matrix Operations Layers

| Matrix | Struct API | Layer API | Comments |
|---|---|---|---|
| Concatenate | concat_s() | Concat() | Concatenate along any axis |
| Multiplication | mult_s() | Mult() | |
| Addition | add_s() | Add() | |
| Subtraction | sub_s() | Sub() | |

Dependencies

NNoM now uses its local pure-C backend implementation by default, so no special dependencies are needed.

However, you will need to enable libc for dynamic memory allocation (malloc(), free()) and memset(), or port them to the equivalent memory methods in your system.

Optimization

CMSIS-NN/DSP is an optimized backend for ARM Cortex-M4/7/33/35P. You can select it for up to 5x performance compared to the default C backend. NNoM will use the equivalent method in CMSIS-NN if the conditions are met.

Please check the Porting and optimising Guide for details.

Known Issues

The converter does not support implicitly defined activations

The script currently does not support implicit activations:

x = Dense(32, activation="relu")(x)

Use the explicit activation instead.

x = Dense(32)(x)
x = ReLU()(x)

Tips - improving accuracy

  • Attaching a BatchNormalization layer after each convolutional layer limits the activation range and thus helps quantisation. BN adds no extra computation in NNoM.
  • Don't train for too many epochs. A large number of epochs increases extreme values in the activations, which lowers the quantisation resolution.
  • Leave enough data at the bottleneck: do not compress the data too much before the output of a model, as information will be lost when it is quantised.
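The second tip follows directly from how fixed-point ranges work: a single extreme activation forces a wider representable range, so the fractional bits available for typical values shrink. A self-contained sketch of this effect for 8-bit (Q7) quantisation (the numbers and helper names are illustrative, not taken from NNoM):

```python
import math

def q7_shift(max_abs):
    """Fractional bits for Q7 given the largest magnitude to represent."""
    return 7 - math.ceil(math.log2(max_abs))

def q7_error(v, shift):
    """Absolute error after quantising v to Q7 with the given shift."""
    q = max(-128, min(127, round(v * (1 << shift))))
    return abs(v - q / (1 << shift))

typical = 0.9    # a typical activation magnitude
outlier = 14.0   # a hypothetical extreme value produced by over-training

# Without the outlier, the range fits in [-1, 1) and all 7 fractional
# bits are usable; with it, the shift drops and resolution is lost.
err_tight = q7_error(typical, q7_shift(1.0))
err_wide = q7_error(typical, q7_shift(outlier))
assert err_wide > err_tight
```

The same reasoning motivates the first tip: BatchNormalization keeps activations in a narrow, well-behaved range, which keeps the quantisation step small.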

Contacts

Jianjia Ma (majianjia@live.com)

Also contact me for field support.

Citation is required in publications

Please contact me using above details if you have any problem.

Example:

@software{jianjia_ma_2020_4158710,
  author       = {Jianjia Ma},
  title        = {{A higher-level Neural Network library on Microcontrollers (NNoM)}},
  month        = oct,
  year         = 2020,
  publisher    = {Zenodo},
  version      = {v0.4.2},
  doi          = {10.5281/zenodo.4158710},
  url          = {https://doi.org/10.5281/zenodo.4158710}
}
