Orthogonal Over-Parameterized Training (OPT)

Implementation of Orthogonal Over-Parameterized Training (OPT), CVPR 2021.


By Weiyang Liu, Rongmei Lin, Zhen Liu, James Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller

License

OPT is released under the MIT License (refer to the LICENSE file for details).

Contents

  1. Introduction
  2. Citation
  3. Short Video Introduction
  4. Requirements
  5. Usage

Introduction

The inductive bias of a neural network is largely determined by its architecture and its training algorithm. Training a neural network effectively is therefore crucial for good generalization. We propose a novel orthogonal over-parameterized training (OPT) framework that can provably minimize the hyperspherical energy, which characterizes the diversity of neurons on a hypersphere. See our previous work, MHE, for an in-depth introduction to hyperspherical energy.
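For intuition, here is a minimal NumPy sketch (not code from this repository) of the hyperspherical energy as defined in MHE, using the inverse-distance (Riesz, s=1) kernel; the function name and the choice to count each unordered neuron pair once are our own conventions.

import numpy as np

def hyperspherical_energy(W):
    """Hyperspherical energy (s=1 kernel) of the neurons given as rows of W:
    sum of inverse Euclidean distances between the unit-normalized neurons."""
    W_hat = W / np.linalg.norm(W, axis=1, keepdims=True)   # project neurons onto the unit hypersphere
    diff = W_hat[:, None, :] - W_hat[None, :, :]           # all pairwise differences
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(W.shape[0], k=1)                  # each unordered pair counted once
    return float(np.sum(1.0 / dist[iu]))

Lower energy corresponds to more uniformly spread (more diverse) neurons on the hypersphere.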

By maintaining the minimum hyperspherical energy during training, OPT can greatly improve empirical generalization. Specifically, OPT fixes the randomly initialized weights of the neurons and learns an orthogonal transformation that is applied to these neurons. We consider multiple ways to learn such an orthogonal transformation, including unrolling orthogonalization algorithms, applying orthogonal parameterization, and designing orthogonality-preserving gradient descent. For better scalability, we propose the stochastic OPT, which applies the orthogonal transformation stochastically to partial dimensions of the neurons.
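As a concrete illustration of the idea (a hypothetical NumPy-only sketch, not the repository's TensorFlow implementation): fix a randomly initialized weight matrix, parameterize an orthogonal matrix via the Cayley transform of a learnable skew-symmetric matrix, and apply it to the fixed neurons. Because the transformation is orthogonal, the pairwise geometry of the neurons on the hypersphere, and hence the hyperspherical energy, is unchanged.

import numpy as np

rng = np.random.default_rng(0)
d, n = 16, 64                          # neuron dimension, number of neurons
W0 = rng.standard_normal((n, d))       # randomly initialized neurons (kept fixed in OPT)

# Learnable parameter M; A = M - M^T is skew-symmetric, and the Cayley
# transform R = (I - A)(I + A)^{-1} is always orthogonal. During training,
# gradients would flow only into M.
M = 0.1 * rng.standard_normal((d, d))
A = M - M.T
I = np.eye(d)
R = (I - A) @ np.linalg.inv(I + A)

W = W0 @ R.T                           # effective neurons: w_i = R v_i

def pairwise_dists(X):
    X_hat = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-normalize each neuron
    return np.linalg.norm(X_hat[:, None, :] - X_hat[None, :, :], axis=-1)

print(np.allclose(R.T @ R, I))                             # R is orthogonal
print(np.allclose(pairwise_dists(W0), pairwise_dists(W)))  # spherical geometry (and energy) preserved

The Cayley parameterization is only one of the options listed above; the repository also provides Gram-Schmidt, Householder reflection, Lowdin's symmetric orthogonalization, orthogonality-preserving gradient descent, and orthogonality regularization variants.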

Our OPT is accepted to CVPR 2021 as an oral presentation, and the full paper is available on arXiv.

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Liu2021OPT,
    title={Orthogonal Over-Parameterized Training},
    author={Liu, Weiyang and Lin, Rongmei and Liu, Zhen and Rehg, James M. and Paull, Liam and Xiong, Li and Song, Le and Weller, Adrian},
    booktitle={CVPR},
    year={2021}
}

Short Video Introduction

We also provide a short video introduction to help interested readers quickly go over our work and understand the essence of OPT. Please click the following figure to watch the YouTube video.

[OPT_talk: thumbnail linking to the YouTube video]

Requirements

  1. Python 3.7
  2. TensorFlow 1.14.0

Usage

This repository provides both OPT and S-OPT implementations on CIFAR-100 as a demonstration.

Part 1: Clone the repository

git clone https://github.com/wy1iu/OPT.git

Part 2: Download the official CIFAR-100 training and testing data (python version)

wget https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
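You will likely need to extract the downloaded archive before training. A minimal Python sketch is below; the target directory is an assumption, so check where the training scripts expect the CIFAR-100 data and adjust accordingly.

import tarfile

# Extract cifar-100-python.tar.gz into the current directory; the exact
# location expected by the training scripts may differ, so adjust as needed.
with tarfile.open("cifar-100-python.tar.gz", "r:gz") as archive:
    archive.extractall(".")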

Part 3: Train and test with the following commands in the corresponding folders.

# Run Cayley Parameterization OPT
cd opt_cp
python train.py

# Run Gram-Schmidt OPT
cd opt_gs
python train.py

# Run Householder Reflection OPT
cd opt_hr
python train.py

# Run Lowdin’s Symmetric OPT
cd opt_ls
python train.py

# Run Orthogonality-Preserving Gradient Descent OPT
cd opt_ogd
python train.py

# Run Orthogonality Regularization OPT
cd opt_or
python train.py

# Run Stochastic OPT (Gram-Schmidt)
cd sopt_gs
python train.py

Contact
