MingSun-Tse/Efficient-Deep-Learning


A collection of recent methods on DNN compression and acceleration. There are mainly 5 kinds of methods for efficient DNNs:

  • neural architecture re-design or search (NAS)
    • maintain accuracy with less cost (e.g., fewer #Params, #FLOPs): MobileNet, ShuffleNet, etc.
    • maintain cost with more accuracy: Inception, ResNeXt, Xception, etc.
  • pruning (including structured and unstructured; a minimal sketch is given below)
  • quantization
  • matrix/low-rank decomposition
  • knowledge distillation (KD)

Note: this repo is mostly about pruning (with the lottery ticket hypothesis, or LTH, as a sub-topic), KD, and quantization. For other topics such as NAS, see the more comprehensive collections listed under Related Repos and Websites at the end of this file. Pull requests adding pertinent papers are welcome.
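As a concrete illustration of the pruning family above, here is a minimal sketch of unstructured (magnitude-based) pruning. It is only a sketch assuming a PyTorch model; the function `magnitude_prune` and its `sparsity` parameter are illustrative names, not an implementation from any paper listed here.

```python
# Minimal sketch of unstructured magnitude pruning (illustrative only).
import torch.nn as nn


def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> dict:
    """Zero out the smallest-magnitude weights of every Linear/Conv2d layer.

    `sparsity` is the fraction of weights set to zero per layer
    (e.g., 0.9 keeps 10% of the weights). Returns per-layer binary masks
    so the zeros can be kept fixed during later fine-tuning.
    """
    masks = {}
    for name, module in model.named_modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.data
            k = int(sparsity * w.numel())
            if k == 0:
                continue
            # Per-layer threshold: the k-th smallest absolute weight.
            threshold = w.abs().flatten().kthvalue(k).values
            mask = (w.abs() > threshold).float()
            module.weight.data.mul_(mask)  # apply the mask in place
            masks[name] = mask
    return masks


if __name__ == "__main__":
    # Toy usage: prune a small MLP to roughly 80% sparsity.
    model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    masks = magnitude_prune(model, sparsity=0.8)
    total = sum(m.numel() for m in masks.values())
    zeros = sum((m == 0).sum().item() for m in masks.values())
    print(f"overall sparsity: {zeros / total:.2%}")
```

In practice the returned masks would be re-applied after every optimizer step while fine-tuning the pruned model. Structured pruning instead removes whole filters or channels, so the speedup is realizable on standard hardware without sparse kernels; see the papers under "Actual Acceleration via Sparsity" below.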

Other repos:

  • LTH (lottery ticket hypothesis) and its broader version, pruning at initialization (PaI), are now at the frontier of network pruning. We single out the PaI papers into a separate repo, Awesome-Pruning-at-Initialization. Welcome to check it out!
  • Awesome-Efficient-ViT: a curated list of efficient vision transformers.

About abbreviations: in the lists below, o stands for oral, s for spotlight, b for best paper, and w for workshop.

Surveys

Papers [Pruning and Quantization]

1980s, 1990s

2000s

2011

2013

2014

2015

2016

2017

2018

2019

2020

2021

2022

2023


Papers [Actual Acceleration via Sparsity]


Papers [Lottery Ticket Hypothesis (LTH)]

For LTH and other Pruning at Initialization papers, please refer to Awesome-Pruning-at-Initialization.


Papers [Bayesian Compression]

Papers [Knowledge Distillation (KD)]

Before 2014

2014

2016

2017

2018

2019

2020

2021

2022

Papers [AutoML (NAS etc.)]

Papers [Interpretability]

Workshops

Books & Courses

Lightweight DNN Engines/APIs

Related Repos and Websites
