Jason Huang (@jason-huang03)

I am an undergraduate student in IIIS (Yao Class) at Tsinghua University. I am currently interested in generative models and machine learning systems.
  • Tsinghua University
  • Beijing, China

Organizations

  • @thu-nics
  • @thu-ml

Pinned

  1. thu-ml/SageAttention

    Quantized attention that achieves 2-5x and 3-11x speedups over FlashAttention and xformers, respectively, without losing end-to-end metrics across language, image, and video models. (A toy sketch of the idea appears after this list.)

    CUDA · 2k stars · 152 forks

  2. thu-ml/SpargeAttn

    SpargeAttention: a training-free sparse attention that can accelerate inference for any model. (A block-sparse sketch appears after this list.)

    CUDA · 649 stars · 48 forks

  3. SPH_Project

    An SPH (smoothed-particle hydrodynamics) fluid simulation, featuring large-scale simulation, rigid-fluid coupling, and high-viscosity fluids.

    Python · 178 stars · 13 forks

  4. mit-han-lab/llm-awq

    [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. (A toy scaling sketch appears after this list.)

    Python · 3.1k stars · 264 forks

  5. thu-nics/MoA

    [CoLM'25] The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression".

    Python · 139 stars · 7 forks

  6. mit-han-lab/omniserve

    [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Serving with Unified Sparse Attention

    C++ · 717 stars · 48 forks
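
The quantized-attention idea behind SageAttention can be illustrated in a few lines of PyTorch. This is a toy sketch, not the repo's fused CUDA kernel: the helper names (quantize_int8, quantized_attention) are mine, layout and precision details are simplified, and only the core arithmetic (smooth K, quantize Q/K to INT8 per token, dequantize before the softmax) is shown.

    import torch

    def quantize_int8(x):
        # Per-token symmetric INT8 quantization along the head dimension;
        # the float scale is saved so the matmul result can be dequantized.
        scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
        return torch.round(x / scale).clamp(-127, 127).to(torch.int8), scale

    def quantized_attention(q, k, v):
        # Smoothing K by its per-head mean adds the same constant to every
        # score in a row, which softmax cancels, but it makes K much easier
        # to quantize accurately.
        k = k - k.mean(dim=-2, keepdim=True)
        q_int, q_scale = quantize_int8(q)
        k_int, k_scale = quantize_int8(k)
        scores = q_int.float() @ k_int.float().transpose(-2, -1)
        scores = scores * q_scale * k_scale.transpose(-2, -1)  # dequantize
        attn = torch.softmax(scores / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v  # V stays in full precision in this sketch

    # Toy usage: (batch, heads, seq_len, head_dim).
    q, k, v = (torch.randn(1, 8, 128, 64) for _ in range(3))
    out = quantized_attention(q, k, v)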
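For the sparse-attention repos (SpargeAttn, MoA), here is a minimal block-sparse sketch, assuming the common recipe of scoring mean-pooled query/key blocks and keeping only the top fraction of key blocks per query block. The actual methods differ in how they choose the mask, and real kernels skip the masked blocks entirely rather than materializing full scores as done here.

    import torch

    def block_sparse_attention(q, k, v, block=32, keep=0.25):
        # Score each (query-block, key-block) pair using mean-pooled
        # queries/keys, keep the top `keep` fraction of key blocks per
        # query block, and mask the rest. seq_len must divide by `block`.
        B, H, N, D = q.shape
        nb = N // block
        q_pool = q.view(B, H, nb, block, D).mean(3)
        k_pool = k.view(B, H, nb, block, D).mean(3)
        blk = q_pool @ k_pool.transpose(-2, -1) / D ** 0.5   # (B,H,nb,nb)
        top = blk.topk(max(1, int(keep * nb)), dim=-1).indices
        mask = torch.zeros_like(blk, dtype=torch.bool).scatter_(-1, top, True)
        # Expand the block-level mask to token resolution.
        tok = mask.repeat_interleave(block, 2).repeat_interleave(block, 3)
        scores = q @ k.transpose(-2, -1) / D ** 0.5
        scores = scores.masked_fill(~tok, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

    q, k, v = (torch.randn(1, 8, 256, 64) for _ in range(3))
    out = block_sparse_attention(q, k, v)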
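AWQ's core observation is that a small fraction of weight channels matter disproportionately because their input activations are large; scaling those channels up before quantization (and folding the inverse scale into the preceding op) preserves them. Below is a toy per-channel version with the scaling exponent fixed at 0.5 where the real method grid-searches it; the function names are illustrative, not the repo's API.

    import torch

    def awq_style_quantize(weight, act_mag, n_bits=4, alpha=0.5):
        # weight: (out_features, in_features); act_mag: per-input-channel
        # activation magnitudes gathered from calibration data.
        s = act_mag.clamp(min=1e-8) ** alpha        # protect salient channels
        w_scaled = weight * s                       # scale up before quantizing
        qmax = 2 ** (n_bits - 1) - 1
        step = w_scaled.abs().amax(dim=1, keepdim=True) / qmax
        w_q = torch.round(w_scaled / step).clamp(-qmax - 1, qmax)
        return w_q * step / s                       # dequantize, fold scale back

    w = torch.randn(256, 512)
    act = torch.rand(512) * 4 + 0.1                 # fake calibration statistics
    w_hat = awq_style_quantize(w, act)
    print((w - w_hat).abs().mean())                 # mean quantization error

At serving time the inverse scale would be folded into the previous layer's output instead of back into the weight; folding it back here just makes the error easy to compare.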

