🎯 Focusing
Ph.D. student. Research Interests: LLM-Agents, Vision-Language.
- UESTC | TongYi Laboratory
- Sichuan ⇌ Beijing
- (UTC +08:00)
- https://zchoi.github.io/
👻 I'm Haonan, a Ph.D. student at the Center for Future Media, UESTC.
- 🦾 Python / C++ / Jupyter / PyTorch
- 🤔 LLM-based Agents / Vision & Language / Multimodal Learning
- 🌱 Attending courses & doing research at UESTC
- 🍙 Homepage: Link
- 🙋‍♂️ CV: Link (Last updated: 2024.2)
Pinned
- Awesome-Embodied-Robotics-and-Agent (Public): A curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥
- RainBowLuoCS/MMEvol (Public): 🔥🔥🔥 Code for "Empowering Multimodal Large Language Models with Evol-Instruct"
- S2-Transformer (Public): [IJCAI 2022] Official PyTorch code for the paper "S2 Transformer for Image Captioning"
- RainBowLuoCS/OpenOmni (Public): OpenOmni: Official implementation of "Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Real-Time Self-Aware Emotional Speech Synthesis"