# model-compression

Here are 339 public repositories matching this topic...

nni

Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.

  • Updated Mar 15, 2025
  • Python

[CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc.

  • Updated Sep 7, 2025
  • Python

Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab.

  • Updated Jan 22, 2024
  • Python

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.

  • Updated Mar 31, 2023
  • Python

Awesome Knowledge-Distillation. Knowledge-distillation papers (2014–2021), organized by category.

  • Updated May 30, 2023

A curated list of neural network pruning resources.

  • Updated Apr 4, 2024

A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously improving. PRs adding works (papers, repositories) the repo has missed are welcome.

  • Updated Mar 4, 2025

micronet, a model compression and deploy library. Compression: 1) quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"), low-bit (≤2b) / ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT); 2) pruning: normal, reg…

  • Updated May 6, 2025
  • Python
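The quantization schemes the entry above lists share one core operation: mapping float weights onto a small integer grid via a scale factor. As a minimal sketch of symmetric per-tensor int8 post-training quantization (illustrative only, not micronet's actual API):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor PTQ: map floats onto [-127, 127]
    with a single scale derived from the max absolute weight."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

QAT differs from this in that the round-and-clip step is simulated during training (with a straight-through gradient), so the network learns to tolerate the quantization error rather than receiving it after the fact.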

A PyTorch implementation for flexibly exploring deep and shallow knowledge distillation (KD) experiments.

  • Updated Mar 25, 2023
  • Python

PyTorch implementation of various Knowledge Distillation (KD) methods.

  • Updated Nov 25, 2021
  • Python
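Most of the KD methods these implementations cover build on Hinton-style distillation: soften both teacher and student logits with a temperature T, then penalize the KL divergence between the two distributions. A minimal NumPy sketch (the temperature and the T² scaling are illustrative defaults, not any particular repo's settings):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T flattens the distribution."""
    z = np.asarray(z, dtype=np.float64) / T
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def distillation_kl(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

t = [5.0, 1.0, -2.0]
loss_same = distillation_kl(t, t)              # identical logits -> zero loss
loss_diff = distillation_kl(t, [1.0, 5.0, -2.0])
```

In practice this term is combined with the ordinary cross-entropy on hard labels via a weighting coefficient; the repos above differ mainly in what intermediate signals (logits, features, attention maps) they distill.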

A toolkit to optimize Keras and TensorFlow ML models for deployment, including quantization and pruning.

  • Updated Dec 15, 2025
  • Python

Efficient computing methods developed by Huawei Noah's Ark Lab

  • Updated Nov 5, 2024
  • Jupyter Notebook
channel-pruning

Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)

  • Updated May 2, 2024
  • Python
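Channel pruning removes entire output channels, so the resulting network stays dense and runs fast on standard hardware, unlike unstructured weight pruning. The ICCV'17 paper above selects channels with LASSO regression and least-squares reconstruction; as a simplified stand-in, a plain L1-magnitude selection criterion can be sketched as:

```python
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """Rank output channels of a conv weight (out_ch, in_ch, kH, kW)
    by L1 norm and keep the top fraction. A simplified magnitude
    criterion, not the paper's LASSO-based channel selection."""
    norms = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(weight.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # indices of kept channels
    return weight[keep], keep

# Toy 4-channel conv weight; later channels have larger magnitudes.
w = np.arange(4 * 2 * 3 * 3, dtype=np.float32).reshape(4, 2, 3, 3)
pruned, kept = prune_channels(w, keep_ratio=0.5)
```

After pruning one layer, the corresponding input channels of the next layer must be removed as well, and the network is typically fine-tuned to recover accuracy.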

Collection of recent methods on (deep) neural network compression and acceleration.

  • Updated Apr 4, 2025

[CVPR 2024] DeepCache: Accelerating Diffusion Models for Free

  • Updated Jun 27, 2024
  • Python

TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.

  • Updated Aug 21, 2025
  • Python

A list of high-quality (newest) AutoML works and lightweight models, including 1) Neural Architecture Search, 2) Lightweight Structures, 3) Model Compression, Quantization and Acceleration, 4) Hyperparameter Optimization, and 5) Automated Feature Engineering.

  • Updated Jun 19, 2021


