Pull requests: unslothai/unsloth
Pull requests list
Add Qwen2.5 Coder model support to registry and chat templates
#3436 opened Oct 11, 2025 by Samama-Intellixcore
[Part2] Reinstate llama.cpp Compatibility and GGUF Conversion with Multiple Quantizations and Automated Ollama Modelfile Creation
#3356 opened Sep 23, 2025 by rolandtannous
[intel] change windows to remove windows-triton for intel xpu
#3168 opened Aug 15, 2025 by leizhenyuan
Phi‑2 support: partial RoPE, deterministic dropout, loader dispatch, and smoke test
#3125 opened Aug 9, 2025 by MagellaX
fix(issue 2950): properly handle spaces in file paths when invoking commands
#2951 opened Jul 13, 2025 by detjonmataj
Fix llama.cpp quantize location and execution on Windows.
#2894 opened Jul 7, 2025 by simpolism
Avoid materializing the entire logit matrix for logp calculations.
#2772 opened Jun 19, 2025 by zkpranav
Fix beam search for Llama models by adding reorder_cache method
#2753 opened Jun 17, 2025 by amrothemich