Vansh Gupta (psycoplankton)

Undergraduate at IIT BHU pursuing a dual degree in Engineering Physics
  • Indian Institute of Technology Varanasi (BHU)
  • Varanasi

psycoplankton/README.md

Aspiring Researcher | Coder | Problem Solver

psycoplankton


GITHUB TROPHIES 🏆

Trophies


💫 About Me

  • 🎓 Undergraduate at IIT BHU, pursuing Engineering Physics
  • 🤖 Passionate about Representation Learning, Natural Language Processing, Probabilistic ML, and Reinforcement Learning
  • 👯 Open to research collaborations in LLMs, Generative Models, and Quantum Machine Learning
  • 📫 Reach me at:

🧠 Experience & Research

🔬 Research Intern — Language and Speech Lab, NTU Singapore

Oct 2024 – Present
Exposure: LLM Fine-Tuning, PEFT, LoRA, LLaMA Adapters, Emotion Extraction, Prompt Tuning
  • Researched low-resource LLM fine-tuning methods and emotion extraction from text.
  • Working on synthetic interviews for depression detection using the DAIC-WOZ dataset, fine-tuned on Reddit data and integrated with emotion detection.
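The low-rank adaptation (LoRA) idea behind this kind of fine-tuning work can be sketched in a few lines: the pretrained weight matrix W stays frozen, and only a small low-rank update B·A is trained. A minimal NumPy illustration (shapes, names, and the scaling factor here are hypothetical, not the actual NTU setup):

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                          # hidden size and LoRA rank (r << d)
W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-initialised
alpha = 4.0                          # LoRA scaling factor

def lora_forward(x):
    # Frozen path plus scaled low-rank update: (W + (alpha / r) * B @ A) @ x
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
# With B initialised to zero, the adapted layer matches the frozen layer exactly,
# so training starts from the pretrained model's behaviour.
assert np.allclose(lora_forward(x), W @ x)
```

The appeal for low-resource settings is the parameter count: the trainable matrices hold 2·r·d values instead of d².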


🧠 Research Intern — Visual Computing and Data Analytics Lab, IIT BHU

Aug 2023 – Apr 2024
Exposure: GANs, Graph Neural Networks, Fuzzy Logic, DeepWalk, Node2Vec, Struc2Vec
  • Re-designed GraphGAN with Wasserstein loss, improving accuracy from 84.7% to 88.57%.
  • Implemented DeepWalk, Node2Vec, and Struc2Vec for node embeddings.
  • Developed a Fuzzy Pre-processing Layer using a modified K-Means algorithm to boost accuracy to 88.95%.
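The Wasserstein reformulation used in such a GAN redesign replaces the usual log-loss with a critic that separates the mean scores of real and generated samples. A minimal sketch of the two objectives (generic WGAN losses in NumPy with made-up scores, not the GraphGAN code itself):

```python
import numpy as np

def wasserstein_critic_loss(real_scores, fake_scores):
    # The critic minimises E[D(fake)] - E[D(real)]: it pushes real scores up
    # and fake scores down. The (estimated) Wasserstein distance is the negative.
    return np.mean(fake_scores) - np.mean(real_scores)

def wasserstein_generator_loss(fake_scores):
    # The generator minimises -E[D(fake)]: it wants fakes to score highly.
    return -np.mean(fake_scores)

real = np.array([0.9, 0.8, 1.1])   # hypothetical critic scores on real samples
fake = np.array([0.1, 0.2, 0.0])   # hypothetical critic scores on generated samples

# A critic that separates the two distributions yields a negative critic loss.
assert wasserstein_critic_loss(real, fake) < 0
```

In practice the critic must also be kept (approximately) 1-Lipschitz, e.g. via weight clipping or a gradient penalty, which this sketch omits.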


⚙️ Machine Learning Engineer Intern — BingeClip.AI

Sep 2024 – Present
Exposure: Super Resolution, Quantization, Knowledge Distillation, LipSync, Mixed Precision Training
  • Worked on inference optimization with Quantization, Knowledge Distillation, and Batch Inference.
  • Applied Mixed Precision Training and Post-training Quantization on CodeFormer.
  • Reduced inference time by 25% and total forward pass time by 50%.
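Post-training quantization of this kind can be illustrated with symmetric per-tensor int8 weight quantization: store weights as int8 plus one fp32 scale, and dequantize on the fly. A generic NumPy sketch (not the actual CodeFormer pipeline):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127].
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than fp32; the price is a bounded rounding error
# of at most half a quantization step per weight.
assert q.dtype == np.int8
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Real pipelines typically also quantize activations with calibration data; this sketch covers weights only.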


🧪 IBM Research Intern [AI 4 Code Team]

May 2025 – Present

  • Contributing to AI for Code tooling and research problems.
  • Exploring techniques for intelligent code understanding and generation.

🧠 Publications

Enriching Pre-Training Using Fuzzy Logic

  • Accepted and published at an IEEE conference
  • Focused on enhancing language representation by integrating fuzzy logic into the pre-training phase.

🌐 Socials


💻 Tech Stack


📊 GitHub Stats

Vansh's GitHub Stats


Vansh's GitHub Activity Graph


🔝 Top Contributed Repo

Pinned

  1. GPT-Decoded (Public)

    An implementation of the GPT (Generative Pretrained Transformer) model, from scratch, which produces Shakespearean text by training on dialogues written by Shakespeare, along with the GPT encoder.

    Jupyter Notebook

  2. Rupee-vs-Dollar-Time-Series-Forecasting (Public)

    An autoregressive forecasting implementation of an LSTM network, the NBEATS architecture, ARIMA and SARIMAX regressions, and the Autoformer architecture on rupee-dollar exchange rates using pytorch, pytorch…

    Jupyter Notebook

