Highlights
- 2024.12 ~ 2026.03: Machine Learning Engineer at GenON, Korea
- 2024.09 ~ 2024.11: Backend Engineer at Tmax WAPL, Korea
- 2023.02 ~ 2024.09: Machine Learning Research Engineer at AgileSoDA, Korea
- 2021.03 ~ 2023.02: M.S. in Mechanical Design and Production Engineering, Konkuk University, Korea
  - 2021.03 ~ 2023.02: Research Student at SiM Lab. (Smart intelligent Manufacturing system Laboratory)
- 2017.03 ~ 2021.02: B.S. in Department of Mechanical Engineering, Konkuk University, Korea
  - 2019.11 ~ 2021.02: Research Intern at SiM Lab. (Smart intelligent Manufacturing system Laboratory)
  - 2018.06 ~ 2019.11: Research Intern at MRV Lab. (Medical Robotics and Virtual Reality Laboratory)
- Paper Review: PagedAttention
- Code Review: Deep Dive into vLLM's Architecture and Implementation Analysis of OpenAI-Compatible Serving (2/2)
- Code Review: Deep Dive into vLLM's Architecture and Implementation Analysis of OpenAI-Compatible Serving (1/2)
- System Design Interview Volume 2 (9)
- System Design Interview Volume 2 (8)
Pinned
- vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
- PyCon_KR_2025_Tutorial_vLLM (archived): 🐍 PyCon Korea 2025 Tutorial: A Deep Dive into vLLM's OpenAI-Compatible Server 🐍