README.md: +2 lines changed (2 additions & 0 deletions)
@@ -281,6 +281,8 @@ python3 download_pdfs.py # The code is generated by Doubao AI
|2024.12|🔥🔥[**Flex Attention**] FLEX ATTENTION: A PROGRAMMING MODEL FOR GENERATING OPTIMIZED ATTENTION KERNELS(@pytorch)|[[pdf]](https://arxiv.org/pdf/2412.05496)|[[attention-gym]](https://github.com/pytorch-labs/attention-gym)| ⭐️⭐️|
|2025.02| 🔥🔥🔥[**SeerAttention**] SeerAttention: Learning Intrinsic Sparse Attention in Your LLMs(@microsoft)|[[pdf]](https://arxiv.org/abs/2410.13276)|[[SeerAttention]](https://github.com/microsoft/SeerAttention)| ⭐️⭐️⭐️|
|2025.03|[**Slim attention**] Slim attention: cut your context memory in half without loss of accuracy, K-cache is all you need for MHA(@OpenMachine.ai)|[[pdf]](https://arxiv.org/pdf/2503.05840)|[[OpenMchine]](https://github.com/OpenMachine-ai/transformer-tricks)| ⭐️⭐️⭐️|
+|2025.05|🔥🔥[**SageAttention-3**] SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-bit Training(@thu-ml)|[[pdf]](https://arxiv.org/pdf/2505.11594)|[[SageAttention]](https://github.com/thu-ml/SageAttention)| ⭐️⭐️|
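
The Flex Attention entry above refers to PyTorch's FlexAttention programming model, in which attention variants are expressed as small `score_mod`/`mask_mod` callbacks that get compiled into one fused kernel. The sketch below is only a minimal orientation, assuming PyTorch 2.5+ on a CUDA device; the shapes, the relative-position bias constant, and the function names `rel_bias`/`causal` are illustrative choices, not part of any of the listed papers.

```python
# Minimal FlexAttention sketch (assumes PyTorch 2.5+ and a CUDA device).
import torch
from torch.nn.attention.flex_attention import flex_attention, create_block_mask

B, H, S, D = 2, 4, 256, 64  # batch, heads, sequence length, head dim (arbitrary)
q = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, H, S, D, device="cuda", dtype=torch.float16)

# score_mod rewrites individual attention scores; here a simple
# relative-position bias (the 0.01 scale is an arbitrary example value).
def rel_bias(score, b, h, q_idx, kv_idx):
    return score + (kv_idx - q_idx) * 0.01

# mask_mod declares which (q, kv) pairs may attend; FlexAttention turns this
# into a sparse block mask so fully-masked blocks are skipped entirely.
def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

block_mask = create_block_mask(causal, B, H, S, S)
out = flex_attention(q, k, v, score_mod=rel_bias, block_mask=block_mask)
print(out.shape)  # torch.Size([2, 4, 256, 64])
```

For production use the call is typically wrapped in `torch.compile` so the callbacks are fused into a single optimized kernel; the pytorch-labs/attention-gym repository linked in the table collects further `score_mod`/`mask_mod` examples.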