@UpstageAI · Neural network compression · Computer vision · Google Developer Expert for Machine Learning
- UpstageAI
- Republic of Korea
- https://scholar.google.com/citations?user=onGHuFsAAAAJ
- @sseung0703
- Welcome to my GitHub page. I received my Ph.D. from Inha University in South Korea.
My research areas are machine learning and deep learning, especially making convolutional neural networks lightweight through techniques such as knowledge distillation and filter pruning (both sketched briefly below).
You can find my curriculum vitae here.
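Since knowledge distillation is a recurring theme on this page, here is a minimal soft-target distillation sketch in TensorFlow 2, following the classic formulation of Hinton et al. (2015). It is an illustrative example, not code from any of the repositories below, and the temperature value is an arbitrary assumption.

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Cross-entropy between temperature-softened teacher probabilities
    and student log-probabilities (soft-target knowledge distillation)."""
    teacher_probs = tf.nn.softmax(teacher_logits / temperature)
    student_log_probs = tf.nn.log_softmax(student_logits / temperature)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    loss = -tf.reduce_sum(teacher_probs * student_log_probs, axis=-1)
    return tf.reduce_mean(loss) * temperature ** 2
```

In practice this term is mixed with the ordinary hard-label cross-entropy, e.g. `alpha * ce_loss + (1 - alpha) * distillation_loss(...)`, where `alpha` is a tunable weight.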
- TensorFlow (1.x and 2.x): Professional
- PyTorch: Upper intermediate
- JAX: Upper intermediate
- Google Developer Expert for Machine Learning since May 2022
- Leader of a deep learning paper study group: link
- Major contributor to the implementation project for Putting NeRF on a Diet in the 🤗 HuggingFace X GoogleAI Flax/JAX Community Week Event (won 2nd prize! 😆)
- Reviewer for CVPR, ICCV, ECCV, and other venues.
- "Fast Filter Pruning via Coarse-to-Fine Neural Architecture Search and Contrastive Knowledge Transfer" on IEEE TNNLS (2023) [paper]
- "Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning" on ECCV2022 [paper] [code]
- "Interpretable Embedding Procedure Knowledge Transfer via Stacked Principal Component Analysis and Graph Neural Network" on AAAI2021 [paper] [code]
- "Knowledge Transfer via Decomposing Essential Information in Convolutional Neural Networks" on IEEE TNNLS (2020) [paper] [TF1 code,TF2 code]
- "Filter Pruning and Re-Initialization via Latent Space Clustering" on IEEE Access (2020) [paper]
- "Transformation of Non-Euclidean Space to Euclidean Space for Efficient Learning of Singular Vectors" on IEEE Access (2020) [paper]
- "Graph-based Knowledge Distillation by Multi-head Attention Network." on BMVC2019 oral [paper] [code]
- "Self-supervised Knowledge Distillation Using Singular Value Decomposition" on ECCV2018 [paper] [TF1 code,TF2 code]
- "CFA: Coupled-hypersphere-based Feature Adaptation for Target-Oriented Anomaly Localization" on IEEE Access (2022) [paper] [code]
- "Balanced knowledge distillation for one-stage object detector" on Neurocomputing (2022) [paper]
- "Vision Transformer for Small-Size Datasets" on arxiv preprint [paper] [code]
- "Contextual Gradient Scaling for Few-Shot Learning" on WACV2022 [paper] [code]
- "Zero-Shot Knowledge Distillation Using Label-Free Adversarial Perturbation With Taylor Approximation" on IEEE Access (2021) [paper] [code]
- "Channel Pruning Via Gradient Of Mutual Information For Light-Weight Convolutional Neural Networks" on ICIP 2020 [paper]
- "Real-time purchase behavior recognition system based on deep learning-based object detection and tracking for an unmanned product cabinet" on ESWA (2020) [paper]
- "Metric-Based Regularization and Temporal Ensemble for Multi-Task Learning using Heterogeneous Unsupervised Tasks" on ICCVW2019 [paper]
- "MUNet: macro unit-based convolutional neural network for mobile devices" on CVPRW2018 [paper]
Pinned
- KD_methods_with_TF: Knowledge distillation methods implemented with TensorFlow (currently 11 (+1) methods, with more to be added)
- Knowledge_distillation_via_TF2.0: Code for recent knowledge distillation algorithms and benchmark results via the TF2.0 low-level API
- codestella/putting-nerf-on-a-diet: Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis Implementation
- Zero-shot_Knowledge_Distillation: Zero-Shot Knowledge Distillation in Deep Networks (ICML 2019)