
[TCAD'23] TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference

jha-lab/transcode


This repository contains the simulation code for the paper "TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference," published in the IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.

Table of Contents

Environment setup
Run DynaProp
Run Co-design
Developer
Cite this work
License

Environment setup

Clone this repository and initialize sub-modules

git clone https://github.com/JHA-Lab/transcode.git
cd ./transcode/
git submodule init
git submodule update

Setup python environment

To set up the Python environment, please follow the instructions in the txf_design-space and acceltran repositories.

Run DynaProp

To evaluate DynaProp while training transformer models, run the following commands:

cd ./dynaprop/
python run_evaluation.py --max_evaluation_threshold <tau_I> --max_train_threshold <tau_T>
cd ..

Here, <tau_I> and <tau_T> are the evaluation and training pruning thresholds, respectively. For more information on the possible inputs to the simulation script, use:

cd ./dynaprop/
python3 run_evaluation.py --help
cd ..
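DynaProp's central operation is pruning low-magnitude values against a threshold during evaluation and training. A minimal, illustrative sketch of magnitude-based threshold pruning (this is not the repository's implementation; the function name and values are hypothetical):

```python
import numpy as np

def threshold_prune(x, tau):
    """Zero out entries of x whose magnitude falls below tau.

    Illustrative sketch of magnitude-based pruning; DynaProp applies
    this idea inside transformer evaluation and training loops.
    """
    mask = np.abs(x) >= tau
    # Return the pruned tensor and the achieved sparsity fraction.
    return x * mask, 1.0 - mask.mean()

x = np.array([0.5, -0.01, 0.2, 0.003, -0.4])
pruned, sparsity = threshold_prune(x, tau=0.05)
# Entries with |value| < 0.05 are zeroed; 2 of 5 entries pruned.
```

A higher threshold yields more sparsity (and thus more acceleration on a sparsity-aware accelerator) at the cost of accuracy, which is the trade-off the <tau_I> and <tau_T> knobs control.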

Run Co-design

To run hardware-software co-design over the AccelTran and FlexiBERT 2.0 design spaces, use the following command:

cd ./co-design/
python run_co-design.py
cd ..

For more information on the possible inputs to the co-design script, use:

cd ./co-design/
python3 run_co-design.py --help
cd ..
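Conceptually, co-design searches the transformer and accelerator design spaces jointly for the best model-hardware pair. A toy sketch of such a joint search (the design spaces, objective, and all names below are hypothetical placeholders, far smaller and simpler than the real FlexiBERT 2.0 and AccelTran spaces):

```python
import itertools

# Hypothetical design spaces (illustrative only; the real spaces are
# defined in the txf_design-space and acceltran sub-modules).
model_space = {"hidden_dim": [128, 256], "num_layers": [2, 4]}
accel_space = {"num_pes": [64, 128], "buffer_kb": [256, 512]}

def surrogate_score(model, accel):
    # Stand-in objective: trade an accuracy proxy off against a latency
    # proxy that shrinks with more processing elements (PEs).
    accuracy_proxy = model["hidden_dim"] * model["num_layers"]
    latency_proxy = accuracy_proxy / (accel["num_pes"] * 0.5)
    return accuracy_proxy - 10.0 * latency_proxy

def exhaustive_codesign(model_space, accel_space):
    """Enumerate every (model, accelerator) pair and keep the best one."""
    best = None
    for mv in itertools.product(*model_space.values()):
        for av in itertools.product(*accel_space.values()):
            model = dict(zip(model_space, mv))
            accel = dict(zip(accel_space, av))
            s = surrogate_score(model, accel)
            if best is None or s > best[0]:
                best = (s, model, accel)
    return best

score, model, accel = exhaustive_codesign(model_space, accel_space)
```

Real co-design frameworks replace this exhaustive loop with a search strategy that scales to large spaces, but the joint optimization over both spaces is the same idea.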

Developer

Shikhar Tuli. For any questions, comments or suggestions, please reach out at stuli@princeton.edu.

Cite this work

Cite our previous works that define the hardware (AccelTran) and software (FlexiBERT) design spaces using the following BibTeX entries:

@article{tuli2023acceltran,
  author={Tuli, Shikhar and Jha, Niraj K.},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  title={AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers},
  year={2023},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TCAD.2023.3273992}}

@article{tuli2023flexibert,
  author={Tuli, Shikhar and Dedhia, Bhishma and Tuli, Shreshth and Jha, Niraj K.},
  title={{FlexiBERT}: Are Current Transformer Architectures Too Homogeneous and Rigid?},
  year={2023},
  volume={77},
  doi={10.1613/jair.1.13942},
  journal={Journal of Artificial Intelligence Research},
  numpages={32}}

If you use the provided co-design scripts, please cite our paper:

@article{tuli2023transcode,
  title={{TransCODE}: Co-design of Transformers and Accelerators for Efficient Training and Inference},
  author={Tuli, Shikhar and Jha, Niraj K.},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year={2023}}

License

BSD-3-Clause. Copyright (c) 2022, Shikhar Tuli and Jha Lab. All rights reserved.

See License file for more details.
