# p-tuning

Here are 11 public repositories matching this topic...

We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-tuning) for easy use. We welcome open-source enthusiasts to open any meaningful PR on this repo and integrate as many LLM-related technologies as possible. (We have built a fine-tuning platform that makes it easy for researchers to get started with large models, and we welcome any meaningful PRs from open-source enthusiasts!)

  • Updated Dec 12, 2023
  • Jupyter Notebook

Fine-tuning of the ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B models on concrete downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

  • Updated Dec 12, 2023
  • Python

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (see the sketch after this entry).

  • Updated Nov 16, 2023
  • Python
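
For context, deep prompt tuning (the idea behind P-Tuning v2) trains soft prompt vectors that are injected at every transformer layer instead of only at the input embeddings. A minimal sketch, assuming the Hugging Face PEFT library and a placeholder GPT-2 backbone (neither is taken from the repo above):

```python
# Deep prompt tuning via PEFT's prefix tuning: trainable key/value
# prefixes are prepended at every attention layer; the backbone stays frozen.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder backbone

config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=30,  # prefix length injected at each layer
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the prefix parameters are trainable
```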

A novel method for tuning language models. Code and datasets for the paper "GPT Understands, Too" (a minimal sketch of the idea follows this entry).

  • Updated Oct 6, 2022
  • Python
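
For context, P-tuning (from "GPT Understands, Too") learns continuous prompt embeddings produced by a small prompt encoder while the language model itself stays frozen. A minimal sketch, assuming the Hugging Face PEFT library and an illustrative BERT classifier rather than this repo's own code:

```python
# P-tuning: a small prompt encoder (MLP/LSTM) emits continuous "virtual
# token" embeddings that are prepended to the input sequence; only the
# prompt encoder is trained, the backbone model stays frozen.
from transformers import AutoModelForSequenceClassification
from peft import PromptEncoderConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased"  # placeholder backbone for illustration
)

config = PromptEncoderConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,    # length of the learned continuous prompt
    encoder_hidden_size=128,  # hidden size of the prompt encoder
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```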

Work with LLMs easily through OpenAI- and LangChain-compatible interfaces, with support for 文心一言 (ERNIE Bot), 讯飞星火 (iFLYTEK Spark), 腾讯混元 (Tencent Hunyuan), 智谱 ChatGLM, and more.

  • Updated Sep 24, 2024
  • Jupyter Notebook

Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval".

  • Updated Aug 7, 2023
  • Python

Pipelines for Fine-Tuning LLMs using SFT and RLHF

  • Updated Oct 7, 2025
  • Python

P-Tuning v2 integrated with MRC (machine reading comprehension) for NER.

  • Updated Mar 30, 2023
  • Python

This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also includes hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers get started with NeMo LLM Service and Guardrails.

  • Updated Mar 7, 2024
  • Jupyter Notebook

Comparison of different PEFT adaptation methods for fine-tuning on downstream tasks and benchmarks (see the sketch after this entry).

  • Updated Feb 15, 2024
  • Python
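
For context, such comparisons are straightforward because PEFT exposes each adaptation method as a config object passed to the same entry point. A hedged sketch with placeholder GPT-2 backbones and illustrative hyperparameters, not values taken from the repo above:

```python
# Swapping PEFT methods behind one interface to compare their
# trainable-parameter budgets; model name and hyperparameters are
# illustrative assumptions, not values from the repository above.
from transformers import AutoModelForCausalLM
from peft import (
    LoraConfig,
    PrefixTuningConfig,
    PromptEncoderConfig,
    TaskType,
    get_peft_model,
)

CONFIGS = {
    "lora": LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16),
    "prefix-tuning": PrefixTuningConfig(
        task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20
    ),
    "p-tuning": PromptEncoderConfig(
        task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20
    ),
}

for name, config in CONFIGS.items():
    base = AutoModelForCausalLM.from_pretrained("gpt2")  # fresh placeholder backbone
    peft_model = get_peft_model(base, config)
    print(name)
    peft_model.print_trainable_parameters()
```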

Reproduction of the prompt-learning method P-Tuning v2 from the paper "P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks"; models used: DeBERTa and ChatGLM2; additional task: RACE.

  • Updated May 11, 2025
  • Python

Improve this page

Add a description, image, and links to the p-tuning topic page so that developers can more easily learn about it.


Add this topic to your repo

To associate your repository with the p-tuning topic, visit your repo's landing page and select "manage topics."


