Larq

An Open-Source Library for Training Binarized Neural Networks


Larq is an open-source deep learning library for training neural networks with extremely low precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32 bits, 16 bits, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This prohibits many applications in resource-constrained environments. Larq is the first step towards solving this. It is designed to provide an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs) and is based on the tf.keras interface. Note that efficient inference using a trained BNN requires the use of an optimized inference engine; we provide these for several platforms in Larq Compute Engine.

Larq is part of a family of libraries for BNN development; you can also check out Larq Zoo for pretrained models and Larq Compute Engine for deployment on mobile and edge devices.

Getting Started

To build a QNN, Larq introduces the concept of quantized layers and quantizers. A quantizer defines the way of transforming a full precision input to a quantized output and the pseudo-gradient method used for the backwards pass. Each quantized layer requires an input_quantizer and a kernel_quantizer that describe the way of quantizing the incoming activations and weights of the layer respectively. If both input_quantizer and kernel_quantizer are None the layer is equivalent to a full precision layer.
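For intuition, here is a minimal sketch of a quantizer on its own, using larq's built-in SteSign class (the quantizer behind the "ste_sign" string alias used below); the forward pass binarizes values to -1 or +1, while the backward pass applies the straight-through pseudo-gradient:

import tensorflow as tf
import larq

quantizer = larq.quantizers.SteSign(clip_value=1.0)

x = tf.Variable([-1.7, -0.3, 0.4, 2.1])
with tf.GradientTape() as tape:
    y = quantizer(x)

# Forward pass: every element is mapped to -1 or +1.
print(y)  # [-1., -1., 1., 1.]

# Backward pass: the straight-through estimator passes the gradient
# through unchanged where |x| <= clip_value and zeroes it elsewhere.
print(tape.gradient(y, x))  # [0., 1., 1., 0.]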

You can define a simple binarized fully-connected Keras model using the Straight-Through Estimator in the following way:

import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        # First layer: keep the incoming pixels full precision and
        # binarize only the kernel with the straight-through sign.
        larq.layers.QuantDense(
            512,
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
        ),
        # Second layer: binarize both the incoming activations
        # and the kernel.
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)

These quantized layers can be used inside a Keras model or with a custom training loop.

Examples

Check out our examples of how to train a Binarized Neural Network in just a few lines of code.
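As a minimal sketch of such an example (reusing the binarized model defined above with the standard Keras MNIST loader; the optimizer, batch size, and epoch count here are illustrative choices, not prescribed by Larq):

import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1].
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# Compile and train the binarized model defined above.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(train_images, train_labels, batch_size=64, epochs=5)
model.evaluate(test_images, test_labels)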

Installation

Before installing Larq, please install:

  • Python version 3.7, 3.8, 3.9, or 3.10
  • TensorFlow version 1.14, 1.15, 2.0, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7, 2.8, 2.9, or 2.10:
    pip install tensorflow  # or tensorflow-gpu

You can install Larq with Python's pip package manager:

pip install larq
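To verify the installation, a quick sanity check (an illustrative one-liner, not from the official docs) is to import both packages and print the TensorFlow version:

python -c "import larq; import tensorflow as tf; print(tf.__version__)"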

About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.

