# pretrained-language-model

Here are 143 public repositories matching this topic...

YAYI 2 is a new-generation open-source large language model developed by Zhongke Wenge (中科闻歌), pretrained on more than 2 trillion tokens of high-quality, multilingual corpus data. (Repo for YaYi 2 Chinese LLMs)

  • Updated Apr 7, 2024
  • Python

An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks

  • Updated Nov 16, 2023
  • Python
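
The "deep prompt tuning" above (the P-Tuning v2 idea) prepends trainable vectors at every layer of a frozen transformer, not only at the input. Below is a minimal plain-PyTorch sketch of a simplified variant that concatenates prompt tokens at each layer's input and strips them afterwards (the actual method injects them as past key-values); the module layout and all sizes are illustrative assumptions, not the repo's API:

```python
import torch
import torch.nn as nn

class DeepPromptEncoder(nn.Module):
    """A frozen transformer backbone where every layer additionally
    attends to its own small set of trainable prompt vectors."""
    def __init__(self, d_model=256, n_heads=4, n_layers=4, prompt_len=8):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        # One independent trainable prompt per layer.
        self.prompts = nn.Parameter(torch.randn(n_layers, prompt_len, d_model) * 0.02)
        for p in self.layers.parameters():
            p.requires_grad = False  # backbone frozen; only prompts get gradients

    def forward(self, x):                      # x: (batch, seq, d_model)
        b, plen = x.size(0), self.prompts.size(1)
        for prompt, layer in zip(self.prompts, self.layers):
            x = torch.cat([prompt.unsqueeze(0).expand(b, -1, -1), x], dim=1)
            x = layer(x)[:, plen:]             # run the layer, drop the prompt slots
        return x

model = DeepPromptEncoder()
out = model(torch.randn(2, 10, 256))           # (2, 10, 256)
```

Only the `prompts` tensor receives gradients, which is what keeps the method's trainable-parameter count tiny compared to full fine-tuning.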

A plug-and-play library for parameter-efficient tuning (Delta Tuning)

  • Updated Sep 19, 2024
  • Python
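
Delta tuning freezes the pretrained weights and trains only a small "delta" module attached to them. Here is a minimal sketch of one common flavor, a low-rank additive delta (LoRA-style), in plain PyTorch; the library's actual API differs, and all names and sizes are illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and learns a low-rank additive delta."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                  # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")              # only A and B, ~2% of the base
```

Zero-initializing `B` makes the wrapped layer behave exactly like the frozen original at step 0, so training starts from the pretrained model's behavior.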

Chinese Legal LLaMA (LLaMA for the Chinese legal domain)

  • Updated Aug 28, 2024
  • Python

word2vec, sentence2vec, machine reading comprehension, dialog system, text classification, pretrained language model (e.g., XLNet, BERT, ELMo, GPT), sequence labeling, information retrieval, information extraction (i.e., entity, relation, and event extraction), knowledge graph, text generation, network embedding

  • Updated Jan 11, 2021
  • OpenEdge ABL

Code associated with the "Don't Stop Pretraining" ACL 2020 paper

  • Updated Nov 15, 2021
  • Python
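
The "Don't Stop Pretraining" recipe (domain-adaptive pretraining) simply continues masked-language-model pretraining on unlabeled in-domain text before fine-tuning on the end task. A minimal sketch assuming Hugging Face transformers and datasets; `domain_corpus.txt` and all hyperparameters are placeholders:

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# One raw-text document per line; the file name is a placeholder.
ds = load_dataset("text", data_files={"train": "domain_corpus.txt"})["train"]
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=256),
            batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dapt-roberta",
                           per_device_train_batch_size=8,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm_probability=0.15),
)
trainer.train()  # the resulting checkpoint is then fine-tuned on the end task
```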

The official repository of the dots.llm1 base and instruct models proposed by rednote-hilab.

  • Updated Aug 20, 2025

ACL'2023: DiffusionBERT: Improving Generative Masked Language Models with Diffusion Models

  • Updated Feb 17, 2024
  • Python
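
DiffusionBERT treats masking as the forward process of a discrete diffusion model: tokens are progressively replaced by [MASK], and a BERT-style denoiser learns to reverse the chain. A toy sketch of the forward step with a simple linear schedule (the paper uses a more refined, token-aware schedule); all names and ids are illustrative:

```python
import torch

def forward_mask(x0, t, T, mask_id):
    """Step t of T: each token is independently replaced by [MASK]
    with probability t / T (a simple linear schedule)."""
    replace = torch.rand(x0.shape) < (t / T)
    return torch.where(replace, torch.full_like(x0, mask_id), x0)

x0 = torch.tensor([[5, 17, 42, 8, 99]])          # toy token ids
x3 = forward_mask(x0, t=3, T=10, mask_id=0)      # ~30% of positions masked
```

Generation then runs the learned reverse process from a fully masked sequence, unmasking a few positions per step.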

MWPToolkit is an open-source framework for math word problem (MWP) solvers.

  • Updated Sep 28, 2022
  • Python

[ACM Computing Surveys 2025] This repository collects awesome surveys, resources, and papers on Lifelong Learning with Large Language Models. (Updated regularly)

  • Updated May 30, 2025

[NeurIPS 2023] This is the code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`.

  • Updated Nov 2, 2023
  • Python
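
The paper's core idea is to condition generation on sampled attributes (topic, length, style, ...) rather than a single fixed prompt, which diversifies the synthetic training data and makes its biases explicit. A minimal sketch of attribute-conditioned prompt construction; the attribute inventory and template are made-up placeholders, and the resulting strings would be sent to any LLM API:

```python
import itertools
import random

# Hypothetical attribute inventory; the paper derives these per dataset.
ATTRIBUTES = {
    "topic":  ["sports", "finance", "health"],
    "length": ["one sentence", "a short paragraph"],
    "style":  ["formal", "casual"],
}

def build_prompt(label: str, attrs: dict) -> str:
    return (f"Write {attrs['length']} of {attrs['style']} news text "
            f"about {attrs['topic']} expressing a {label} sentiment.")

def sample_prompts(label: str, n: int) -> list[str]:
    combos = list(itertools.product(*ATTRIBUTES.values()))
    random.shuffle(combos)              # spread coverage over the attribute space
    keys = list(ATTRIBUTES)
    return [build_prompt(label, dict(zip(keys, c))) for c in combos[:n]]

for p in sample_prompts("positive", 3):
    print(p)
```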

EMNLP'23 survey: a curation of awesome papers and resources on refreshing large language models (LLMs) without expensive retraining.

  • Updated Dec 12, 2023

Worth-reading papers and related resources on the attention mechanism, Transformers, and pretrained language models (PLMs) such as BERT.

  • Updated Mar 27, 2021

The official code for "TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting" (ICLR 2024). TEMPO is one of the first open-source time series foundation models for forecasting (v1.0).

  • Updated Feb 23, 2025
  • Python
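
A prompt-based time series transformer of this kind typically slices the series into patches, embeds each patch as a token, and prepends trainable prompt vectors before the backbone. A toy sketch of that pipeline in plain PyTorch; TEMPO itself additionally decomposes the series into trend, seasonal, and residual components and builds on a GPT backbone, both of which this sketch omits, and all sizes are assumptions:

```python
import torch
import torch.nn as nn

class TinyPatchForecaster(nn.Module):
    """Embeds series patches, prepends a trainable prompt, and
    forecasts the next patch from the final position."""
    def __init__(self, patch_len=16, d_model=128, n_heads=4,
                 n_layers=2, prompt_len=4):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, patch_len)

    def forward(self, series):                       # (batch, n_patches * patch_len)
        b = series.size(0)
        patches = series.view(b, -1, self.patch_len)
        h = torch.cat([self.prompt.unsqueeze(0).expand(b, -1, -1),
                       self.embed(patches)], dim=1)
        h = self.encoder(h)
        return self.head(h[:, -1])                   # next patch_len values

model = TinyPatchForecaster()
forecast = model(torch.randn(2, 64))                 # 64 observed steps -> next 16
```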

[NeurIPS 2021] COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining

  • Updated Jul 25, 2023
  • Python
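
COCO-LM's "correcting" objective replaces ELECTRA-style binary replaced-token detection with token recovery: an auxiliary generator corrupts some positions, and the main model must output the original token at every position, copying clean tokens and correcting replaced ones. A toy sketch of that supervision signal, with random corruption standing in for the learned generator and all shapes illustrative:

```python
import torch
import torch.nn.functional as F

def corrupt(tokens, vocab_size, rate=0.15):
    """Randomly swap a fraction of tokens; a stand-in for the small
    auxiliary generator that produces plausible corruptions."""
    noise = torch.randint_like(tokens, vocab_size)
    flip = torch.rand(tokens.shape) < rate
    return torch.where(flip, noise, tokens)

def correction_loss(logits, original_ids):
    """Supervise every position (copy if clean, correct if replaced),
    unlike MLM, which scores only the masked positions."""
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           original_ids.reshape(-1))
```

The "contrasting" half adds a sequence-level contrastive loss between the original and corrupted views, which this sketch leaves out.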


