prefix-tuning
Here are 9 public repositories matching this topic...
SMASHED is a toolkit designed to apply transformations to samples in datasets, such as field extraction, tokenization, prompting, batching, and more. Supports Hugging Face datasets, torchdata iterables, or simple lists of dictionaries.
Updated May 24, 2024 - Python
Example code for prefix-tuning GPT/GPT-NeoX models and for inference with trained prefixes
Updated Mar 22, 2023 - Python
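The repository above provides example code; as a general, hedged illustration of what prefix-tuning looks like in practice, here is a minimal sketch using the Hugging Face peft library. The gpt2 model id, the 20 virtual tokens, and the generation call are illustrative assumptions, not details taken from that repository.

```python
# Minimal prefix-tuning sketch with Hugging Face peft (placeholder model and settings).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PrefixTuningConfig, TaskType, get_peft_model

model_name = "gpt2"  # assumption: any causal LM (GPT-2, GPT-NeoX, ...) follows the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# The trainable "prefix" is a set of virtual tokens prepended to each layer's key/value states.
peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,  # prefix length; a typical small default
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prefix parameters are trainable

# Inference with a trained prefix works like ordinary generation:
inputs = tokenizer("Prefix-tuning keeps the base model frozen and", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```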
Parameter-efficient automation of data wrangling tasks with prefix-tuning and the T5 language model.
Updated Sep 28, 2022 - Python
Master's thesis on "Comparing Modular Approaches for Parameter-Efficient Fine-Tuning"
Updated Jan 7, 2024 - Python
Comparing QLoRA, Prompt & Prefix Tuning on Mistral-7B for medical instruction-following
Updated Jun 28, 2025 - Jupyter Notebook
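For readers comparing these methods, a rough QLoRA-style setup with transformers and peft is sketched below. The Mistral model id, 4-bit quantization settings, and LoRA hyperparameters are illustrative assumptions, not the configuration used in the repository above.

```python
# Illustrative QLoRA-style setup: 4-bit frozen base model plus LoRA adapters (assumed hyperparameters).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, TaskType, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # quantize the frozen base weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",             # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed values
    target_modules=["q_proj", "v_proj"],     # attention projections in Mistral-style models
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```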
Mitigating bias in pre-trained language models with Prefix-Tuning: word embeddings are altered through contextual orthogonal training, achieving debiasing while training only a minimal number of parameters.
Updated Oct 23, 2023 - Python
Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks.
Updated Feb 15, 2024 - Python
A practical introduction to Transformer fine-tuning, designed for participants of the International Olympiad in Artificial Intelligence – covering LoRA, Adapters, and the limitations of parameter-efficient methods.
Updated Jun 14, 2025
Build a production-grade, modular pipeline for fine-tuning large language models with LoRA on domain-specific tasks (e.g., legal QA, medical summarization, financial reasoning).
Updated Oct 20, 2025
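Several entries above build LoRA pipelines; the following sketch shows how a LoRA adapter is typically created, saved, and reattached to a frozen base model with peft. The gpt2 base model, hyperparameters, and adapter directory are placeholder assumptions, not taken from any of these repositories.

```python
# Sketch: create a LoRA adapter, save it, and reload it onto a fresh base model (placeholder names).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")           # placeholder base model
lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16,                                        # assumed hyperparameters
    target_modules=["c_attn"],                                 # GPT-2 attention projection
)
model = get_peft_model(base, lora)

# In a real pipeline a Trainer run would go here; only the small adapter
# weights are then stored, separately from the frozen base, per domain task.
model.save_pretrained("adapters/legal_qa")                     # hypothetical directory

# At serving time, load the base once and attach the task-specific adapter:
fresh_base = AutoModelForCausalLM.from_pretrained("gpt2")
served = PeftModel.from_pretrained(fresh_base, "adapters/legal_qa")
```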