peft-fine-tuning-llm
Here are 139 public repositories matching this topic...
Practical course about Large Language Models.
- Updated Dec 8, 2025 - Jupyter Notebook
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
- Updated Oct 5, 2025 - Jupyter Notebook
[SIGIR'24] The official implementation code of MOELoRA.
- Updated Jul 22, 2024 - Python
Repository for fine-tuning the Qwen image model.
- Updated Dec 11, 2025 - Jupyter Notebook
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
- Updated Mar 11, 2025 - Python
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
- Updated Apr 28, 2024
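DoRA reparameterizes the merged weight as a learnable per-column magnitude times a column-normalized direction built from the frozen weight plus a low-rank update. A minimal pure-Python sketch of that update rule, with illustrative toy matrices (not taken from the official repo):

```python
import math

def matmul(X, Y):
    """Plain nested-list matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def column_norms(W):
    """Euclidean norm of each column of W."""
    return [math.sqrt(sum(W[i][j] ** 2 for i in range(len(W))))
            for j in range(len(W[0]))]

def dora_merge(W0, B, A, m):
    """DoRA merge: W' = m * (W0 + B@A) / ||W0 + B@A||_col, column-wise."""
    BA = matmul(B, A)
    V = [[W0[i][j] + BA[i][j] for j in range(len(W0[0]))]
         for i in range(len(W0))]
    norms = column_norms(V)
    return [[m[j] * V[i][j] / norms[j] for j in range(len(V[0]))]
            for i in range(len(V))]

# Toy example: 2x2 frozen weight, rank-1 adapter.
W0 = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.0]]
A = [[0.0, 1.0]]
m = [1.0, 1.0]  # in DoRA, m is initialized to the column norms of W0
W = dora_merge(W0, B, A, m)
```

With m fixed at 1.0, every column of the merged weight has unit norm; training m separately lets the magnitude and direction of each column adapt independently, which is the core idea of the paper.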
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
- Updated Aug 25, 2024 - Python
【Hands-on AIGC starter notes — the AIGC skyscraper】Practical guides and hands-on experience covering large language models (LLMs), efficient fine-tuning (SFT), retrieval-augmented generation (RAG), agents, automatic PPT generation, role-playing, text-to-image (Stable Diffusion), OCR, speech recognition (ASR), text-to-speech (TTS), portrait segmentation, vision-language models (VLM), AI face swapping, text-to-video, image-to-video (SVD), AI motion transfer, AI virtual try-on, digital humans, omni-modal understanding (Omni), AI music generation, and more.
- Updated Apr 26, 2025
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
- Updated Jun 4, 2024 - Python
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
- Updated Sep 23, 2024 - Python
CRE-LLM: A Domain-Specific Chinese Relation Extraction Framework with Fine-tuned Large Language Model
- Updated Apr 27, 2024 - Python
Memory-efficient fine-tuning; supports fine-tuning 7B models within 24 GB of GPU memory.
- Updated May 26, 2024 - Python
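Rough, hedged arithmetic for why a 7B model needs memory-efficient techniques to fit on a 24 GB card. The per-parameter costs below are typical assumptions, not figures from the repo: mixed-precision full fine-tuning with Adam costs roughly 16 bytes/parameter (fp16 weights and gradients, fp32 master weights plus two optimizer moments), while a 4-bit quantized frozen base with a small LoRA adapter costs about 0.5 bytes/parameter plus negligible adapter overhead; activations are ignored:

```python
params = 7e9  # 7B parameters

# Assumed full fine-tuning cost: ~16 bytes per parameter with Adam.
full_ft_gb = params * 16 / 1e9        # ~112 GB: far beyond a 24 GB GPU

# Assumed 4-bit quantized frozen base: ~0.5 bytes per parameter.
qlora_weights_gb = params * 0.5 / 1e9  # ~3.5 GB

fits_24gb = qlora_weights_gb < 24
```

Under these assumptions, the frozen 4-bit base leaves most of a 24 GB card free for adapter states, activations, and batch size, which is what makes 7B-on-24GB fine-tuning plausible.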
[ICML 2025] Fast and Low-Cost Genomic Foundation Models via Outlier Removal.
- Updated Jun 19, 2025 - Python
AI community tutorial covering LoRA/QLoRA LLM fine-tuning, training GPT-2 from scratch, generative model architectures, content safety and control, model distillation, DreamBooth, transfer learning, and more, with practice on real projects.
- Updated Dec 23, 2024 - Jupyter Notebook
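The LoRA technique that tutorials like this one teach augments a frozen weight with a scaled low-rank update: h = W0·x + (alpha/r)·B(A·x). A minimal pure-Python sketch with illustrative shapes and values (assumptions, not code from any listed repo):

```python
def matvec(W, x):
    """Nested-list matrix-vector product."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def lora_forward(W0, A, B, x, alpha, r):
    """h = W0 @ x + (alpha / r) * B @ (A @ x)."""
    base = matvec(W0, x)             # frozen pretrained path
    delta = matvec(B, matvec(A, x))  # rank-r adapter path
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

W0 = [[1.0, 0.0], [0.0, 1.0]]  # 2x2 frozen weight
A = [[1.0, 1.0]]               # r x d_in with r = 1
B = [[0.0], [2.0]]             # d_out x r; B is zero-initialized in LoRA,
                               # so the adapter is a no-op at the start
x = [3.0, 4.0]
h = lora_forward(W0, A, B, x, alpha=2.0, r=1)  # -> [3.0, 32.0]
```

Only A and B receive gradients during training; merging B@A back into W0 afterwards removes any inference-time overhead.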
An ultra-lightweight neural machine translation model fine-tuned specifically for Persian-to-English tasks, leveraging efficient PEFT (LoRA) techniques to deliver strong performance while staying fast and highly resource-efficient for real-world deployment.
- Updated Dec 7, 2025 - Jupyter Notebook
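Back-of-the-envelope arithmetic for why LoRA keeps a model like this resource-efficient: a full d_out × d_in weight has d_out·d_in trainable parameters, while a rank-r adapter trains only r·(d_in + d_out). The dimensions below are illustrative assumptions, not the actual architecture of the repo's model:

```python
def lora_trainable_params(d_in, d_out, r):
    """Trainable parameters of a rank-r LoRA adapter on one weight matrix."""
    return r * (d_in + d_out)

d_in = d_out = 1024  # hypothetical projection size
r = 8                # a commonly used LoRA rank

full = d_in * d_out                           # 1,048,576 params if trained fully
lora = lora_trainable_params(d_in, d_out, r)  # 16,384 params with LoRA
ratio = full / lora                           # 64x fewer trainable parameters
```

At this (assumed) size, each adapted matrix trains 64x fewer parameters, which is what makes the optimizer state and checkpoints small enough for lightweight deployment.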
High-quality image generation model - trained on an NVIDIA A100.
- Updated Jul 27, 2024 - Python
A Python library for efficient and flexible cycle-consistency training of transformer models via iterative back-translation. Memory- and compute-efficient techniques such as PEFT adapter switching allow 7.5x larger models to be trained on the same hardware.
- Updated Jan 13, 2025 - Python
Train your own Mini Language Model!
- Updated Oct 1, 2025 - Jupyter Notebook
Mistral and Mixtral (MoE) from scratch
- Updated May 27, 2024 - Python
A no-code toolkit to finetune LLMs on your local GPU—just upload data, pick a task, and deploy later. Perfect for hackathons or prototyping, with automatic hardware detection and a guided React interface.
- Updated Nov 26, 2025 - Python