Transformers.js

State-of-the-art Machine Learning for the web. Run 🤗 Transformers directly in your browser, with no need for a server!

Transformers.js is designed to be functionally equivalent to Hugging Face's transformers python library, meaning you can run the same pretrained models using a very similar API. These models support common tasks in different modalities, such as:

  • 📝 Natural Language Processing: text classification, named entity recognition, question answering, language modeling, summarization, translation, multiple choice, and text generation.
  • 🖼️ Computer Vision: image classification, object detection, and segmentation.
  • 🗣️ Audio: automatic speech recognition and audio classification.
  • 🐙 Multimodal: zero-shot image classification.

Transformers.js uses ONNX Runtime to run models in the browser. The best part is that you can easily convert your pretrained PyTorch, TensorFlow, or JAX models to ONNX using 🤗 Optimum.

For more information, check out the full documentation.

Quick tour

It's super simple to translate from existing code! Just like the Python library, we support the pipeline API. Pipelines group together a pretrained model with preprocessing of inputs and postprocessing of outputs, making it the easiest way to run models with the library.

Python (original):

```python
from transformers import pipeline

# Allocate a pipeline for sentiment-analysis
pipe = pipeline('sentiment-analysis')
out = pipe('I love transformers!')
# [{'label': 'POSITIVE', 'score': 0.999806941}]
```

JavaScript (ours):

```javascript
import { pipeline } from '@xenova/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');
// [{'label': 'POSITIVE', 'score': 0.999817686}]
```

You can also use a different model by specifying the model id or path as the second argument to the pipeline function. For example:

```javascript
// Use a different model for sentiment-analysis
let pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment');
```

Installation

To install via NPM, run:

```bash
npm i @xenova/transformers
```

Alternatively, you can use it in vanilla JS, without any bundler, by using a CDN or static hosting. For example, using ES Modules, you can import the library with:

```html
<script type="module">
  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';
</script>
```
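Putting this together, an entire page that runs a pipeline client-side might look like the following minimal sketch (the element id and the output shown in the comment are illustrative; top-level await in module scripts requires a modern browser):

```html
<!DOCTYPE html>
<html>
<body>
  <p id="status">Loading model...</p>
  <script type="module">
    import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.17.2';

    // Download (and cache) the model, then run inference entirely in the browser.
    const pipe = await pipeline('sentiment-analysis');
    const out = await pipe('I love transformers!');

    // e.g. [{ label: 'POSITIVE', score: 0.99... }]
    document.getElementById('status').textContent = JSON.stringify(out);
  </script>
</body>
</html>
```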

Examples

Want to jump straight in? Get started with one of our sample applications/templates:

| Name | Description | Links |
|------|-------------|-------|
| Whisper Web | Speech recognition w/ Whisper | code, demo |
| Doodle Dash | Real-time sketch-recognition game | blog, code, demo |
| Code Playground | In-browser code completion website | code, demo |
| Semantic Image Search (client-side) | Search for images with text | code, demo |
| Semantic Image Search (server-side) | Search for images with text (Supabase) | code, demo |
| Vanilla JavaScript | In-browser object detection | video, code, demo |
| React | Multilingual translation website | code, demo |
| Text to speech (client-side) | In-browser speech synthesis | code, demo |
| Browser extension | Text classification extension | code |
| Electron | Text classification application | code |
| Next.js (client-side) | Sentiment analysis (in-browser inference) | code, demo |
| Next.js (server-side) | Sentiment analysis (Node.js inference) | code, demo |
| Node.js | Sentiment analysis API | code |
| Demo site | A collection of demos | code, demo |

Check out the Transformers.js template on Hugging Face to get started in one click!

Custom usage

By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:

Settings

```javascript
import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set location of .wasm files. Defaults to use a CDN.
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```

For a full list of available settings, check out the API Reference.

Convert your models to ONNX

We recommend using our conversion script to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses 🤗 Optimum to perform conversion and quantization of your model.

```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```

For example, convert and quantize bert-base-uncased using:

```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```

This will save the following files to ./models/:

```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
```

For the full list of supported architectures, see the Optimum documentation.
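To consume the converted model, you can serve the ./models/ folder as static files and point the library at it. A minimal sketch, assuming the folder above is available at the default env.localModelPath ('/models/'); the fill-mask prompt and output shape are illustrative:

```javascript
import { pipeline, env } from '@xenova/transformers';

// Only resolve models under env.localModelPath (defaults to '/models/'),
// which is where the conversion script above wrote bert-base-uncased/.
env.allowRemoteModels = false;

// The model id resolves to /models/bert-base-uncased/ on your server.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
const out = await unmasker('The goal of life is [MASK].');
// e.g. [{ token_str: 'happiness', score: 0.1, ... }, ...]
```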

Supported tasks/models

Here is the list of all tasks and architectures currently supported by Transformers.js. If you don't see your task/model listed here, or it is not yet supported, feel free to open up a feature request here.

To find compatible models on the Hub, select the "transformers.js" library tag in the filter menu (or visit this link). You can refine your search by selecting the task you're interested in (e.g., text-classification).

Tasks

Natural Language Processing

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Fill-Mask | fill-mask | Masking some of the words in a sentence and predicting which words should replace those masks. | ✅ (docs) (models) |
| Question Answering | question-answering | Retrieve the answer to a question from a given text. | ✅ (docs) (models) |
| Sentence Similarity | sentence-similarity | Determining how similar two texts are. | ✅ (docs) (models) |
| Summarization | summarization | Producing a shorter version of a document while preserving its important information. | ✅ (docs) (models) |
| Table Question Answering | table-question-answering | Answering a question about information from a given table. | ❌ |
| Text Classification | text-classification or sentiment-analysis | Assigning a label or class to a given text. | ✅ (docs) (models) |
| Text Generation | text-generation | Producing new text by predicting the next word in a sequence. | ✅ (docs) (models) |
| Text-to-text Generation | text2text-generation | Converting one text sequence into another text sequence. | ✅ (docs) (models) |
| Token Classification | token-classification or ner | Assigning a label to each token in a text. | ✅ (docs) (models) |
| Translation | translation | Converting text from one language to another. | ✅ (docs) (models) |
| Zero-Shot Classification | zero-shot-classification | Classifying text into classes that are unseen during training. | ✅ (docs) (models) |
| Feature Extraction | feature-extraction | Transforming raw data into numerical features that can be processed while preserving the information in the original dataset. | ✅ (docs) (models) |
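Each ID in the table above can be passed directly to the pipeline function. As a quick sketch, here is the translation task; the checkpoint and FLORES-200 language codes are illustrative (any translation model on the Hub tagged for transformers.js will work):

```javascript
import { pipeline } from '@xenova/transformers';

// Multilingual translation (model id is illustrative).
const translator = await pipeline('translation', 'Xenova/nllb-200-distilled-600M');

const output = await translator('Life is like a box of chocolates.', {
  src_lang: 'eng_Latn', // source language
  tgt_lang: 'fra_Latn', // target language
});
// e.g. [{ translation_text: 'La vie est comme une boîte de chocolat.' }]
```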

Vision

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Depth Estimation | depth-estimation | Predicting the depth of objects present in an image. | ✅ (docs) (models) |
| Image Classification | image-classification | Assigning a label or class to an entire image. | ✅ (docs) (models) |
| Image Segmentation | image-segmentation | Divides an image into segments where each pixel is mapped to an object. This task has multiple variants such as instance segmentation, panoptic segmentation and semantic segmentation. | ✅ (docs) (models) |
| Image-to-Image | image-to-image | Transforming a source image to match the characteristics of a target image or a target image domain. | ✅ (docs) (models) |
| Mask Generation | mask-generation | Generate masks for the objects in an image. | ❌ |
| Object Detection | object-detection | Identify objects of certain defined classes within an image. | ✅ (docs) (models) |
| Video Classification | n/a | Assigning a label or class to an entire video. | ❌ |
| Unconditional Image Generation | n/a | Generating images with no condition in any context (like a prompt text or another image). | ❌ |
| Image Feature Extraction | image-feature-extraction | Transforming raw data into numerical features that can be processed while preserving the information in the original image. | ✅ (docs) (models) |
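The vision task IDs plug into pipeline the same way. A minimal sketch of object detection; the DETR checkpoint, image URL, and threshold are illustrative:

```javascript
import { pipeline } from '@xenova/transformers';

// Object detection with a DETR checkpoint (model id is illustrative).
const detector = await pipeline('object-detection', 'Xenova/detr-resnet-50');

const img = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
const output = await detector(img, { threshold: 0.9 });
// e.g. [{ score: 0.99, label: 'cat', box: { xmin, ymin, xmax, ymax } }, ...]
```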

Audio

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Audio Classification | audio-classification | Assigning a label or class to a given audio. | ✅ (docs) (models) |
| Audio-to-Audio | n/a | Generating audio from an input audio source. | ❌ |
| Automatic Speech Recognition | automatic-speech-recognition | Transcribing a given audio into text. | ✅ (docs) (models) |
| Text-to-Speech | text-to-speech or text-to-audio | Generating natural-sounding speech given text input. | ✅ (docs) (models) |
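A minimal sketch of automatic speech recognition; the Whisper checkpoint and audio URL are illustrative:

```javascript
import { pipeline } from '@xenova/transformers';

// Speech-to-text with a Whisper checkpoint (model id is illustrative).
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');

const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// e.g. { text: ' And so my fellow Americans...' }
```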

Tabular

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Tabular Classification | n/a | Classifying a target category (a group) based on set of attributes. | ❌ |
| Tabular Regression | n/a | Predicting a numerical value given a set of attributes. | ❌ |

Multimodal

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Document Question Answering | document-question-answering | Answering questions on document images. | ✅ (docs) (models) |
| Image-to-Text | image-to-text | Output text from a given image. | ✅ (docs) (models) |
| Text-to-Image | text-to-image | Generates images from input text. | ❌ |
| Visual Question Answering | visual-question-answering | Answering open-ended questions based on an image. | ❌ |
| Zero-Shot Audio Classification | zero-shot-audio-classification | Classifying audios into classes that are unseen during training. | ✅ (docs) (models) |
| Zero-Shot Image Classification | zero-shot-image-classification | Classifying images into classes that are unseen during training. | ✅ (docs) (models) |
| Zero-Shot Object Detection | zero-shot-object-detection | Identify objects of classes that are unseen during training. | ✅ (docs) (models) |
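A minimal sketch of zero-shot image classification; the CLIP checkpoint, image URL, and candidate labels are illustrative:

```javascript
import { pipeline } from '@xenova/transformers';

// Zero-shot image classification with a CLIP checkpoint (model id is illustrative).
const classifier = await pipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch32');

const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url, ['tiger', 'horse', 'dog']);
// e.g. [{ label: 'tiger', score: 0.98 }, ...]
```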

Reinforcement Learning

| Task | ID | Description | Supported? |
|------|----|-------------|------------|
| Reinforcement Learning | n/a | Learning from actions by interacting with an environment through trial and error and receiving rewards (negative or positive) as feedback. | ❌ |

Models

  1. ALBERT (from Google Research and the Toyota Technological Institute at Chicago) released with the paper ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut.
  2. Audio Spectrogram Transformer (from MIT) released with the paper AST: Audio Spectrogram Transformer by Yuan Gong, Yu-An Chung, James Glass.
  3. BART (from Facebook) released with the paper BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov and Luke Zettlemoyer.
  4. BEiT (from Microsoft) released with the paper BEiT: BERT Pre-Training of Image Transformers by Hangbo Bao, Li Dong, Furu Wei.
  5. BERT (from Google) released with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
  6. Blenderbot (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
  7. BlenderbotSmall (from Facebook) released with the paper Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
  8. BLOOM (from BigScience workshop) released by the BigScience Workshop.
  9. CamemBERT (from Inria/Facebook/Sorbonne) released with the paper CamemBERT: a Tasty French Language Model by Louis Martin*, Benjamin Muller*, Pedro Javier Ortiz Suárez*, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah and Benoît Sagot.
  10. Chinese-CLIP (from OFA-Sys) released with the paper Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou.
  11. CLAP (from LAION-AI) released with the paper Large-scale Contrastive Language-Audio Pretraining with Feature Fusion and Keyword-to-Caption Augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov.
  12. CLIP (from OpenAI) released with the paper Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever.
  13. CLIPSeg (from University of Göttingen) released with the paper Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker.
  14. CodeGen (from Salesforce) released with the paper A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong.
  15. CodeLlama (from MetaAI) released with the paper Code Llama: Open Foundation Models for Code by Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve.
  16. ConvBERT (from YituTech) released with the paper ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan.
  17. ConvNeXT (from Facebook AI) released with the paper A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie.
  18. ConvNeXTV2 (from Facebook AI) released with the paper ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie.
  19. DeBERTa (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
  20. DeBERTa-v2 (from Microsoft) released with the paper DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen.
  21. Decision Transformer (from Berkeley/Facebook/Google) released with the paper Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch.
  22. DeiT (from Facebook) released with the paper Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou.
  23. Depth Anything (from University of Hong Kong and TikTok) released with the paper Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao.
  24. DETR (from Facebook) released with the paper End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko.
  25. DINOv2 (from Meta AI) released with the paper DINOv2: Learning Robust Visual Features without Supervision by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski.
  26. DistilBERT (from HuggingFace), released together with the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter by Victor Sanh, Lysandre Debut and Thomas Wolf. The same method has been applied to compress GPT2 into DistilGPT2, RoBERTa into DistilRoBERTa, Multilingual BERT into DistilmBERT and a German version of DistilBERT.
  27. DiT (from Microsoft Research) released with the paper DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei.
  28. Donut (from NAVER), released together with the paper OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park.
  29. DPT (from Intel Labs) released with the paper Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun.
  30. EfficientNet (from Google Brain) released with the paper EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan, Quoc V. Le.
  31. ELECTRA (from Google Research/Stanford University) released with the paper ELECTRA: Pre-training text encoders as discriminators rather than generators by Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning.
  32. ESM (from Meta AI) are transformer protein language models. ESM-1b was released with the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. ESM-1v was released with the paper Language models enable zero-shot prediction of the effects of mutations on protein function by Joshua Meier, Roshan Rao, Robert Verkuil, Jason Liu, Tom Sercu and Alexander Rives. ESM-2 and ESMFold were released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido, Alexander Rives.
  33. Falcon (from Technology Innovation Institute) by Almazrouei, Ebtesam and Alobeidli, Hamza and Alshamsi, Abdulaziz and Cappelli, Alessandro and Cojocaru, Ruxandra and Debbah, Merouane and Goffinet, Etienne and Heslow, Daniel and Launay, Julien and Malartic, Quentin and Noune, Badreddine and Pannier, Baptiste and Penedo, Guilherme.
  34. FastViT (from Apple) released with the paper FastViT: A Fast Hybrid Vision Transformer using Structural Reparameterization by Pavan Kumar Anasosalu Vasu, James Gabriel, Jeff Zhu, Oncel Tuzel and Anurag Ranjan.
  35. FLAN-T5 (from Google AI) released in the repository google-research/t5x by Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Zhao, Yanping Huang, Andrew Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
  36. GLPN (from KAIST) released with the paper Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim.
  37. GPT Neo (from EleutherAI) released in the repository EleutherAI/gpt-neo by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy.
  38. GPT NeoX (from EleutherAI) released with the paper GPT-NeoX-20B: An Open-Source Autoregressive Language Model by Sid Black, Stella Biderman, Eric Hallahan, Quentin Anthony, Leo Gao, Laurence Golding, Horace He, Connor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, Samuel Weinbach.
  39. GPT-2 (from OpenAI) released with the paper Language Models are Unsupervised Multitask Learners by Alec Radford*, Jeffrey Wu*, Rewon Child, David Luan, Dario Amodei** and Ilya Sutskever**.
  40. GPT-J (from EleutherAI) released in the repository kingoflolz/mesh-transformer-jax by Ben Wang and Aran Komatsuzaki.
  41. GPTBigCode (from BigCode) released with the paper SantaCoder: don't reach for the stars! by Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra.
  42. HerBERT (from Allegro.pl, AGH University of Science and Technology) released with the paper KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, Ireneusz Gawlik.
  43. Hubert (from Facebook) released with the paper HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed.
  44. LongT5 (from Google AI) released with the paper LongT5: Efficient Text-To-Text Transformer for Long Sequences by Mandy Guo, Joshua Ainslie, David Uthus, Santiago Ontanon, Jianmo Ni, Yun-Hsuan Sung, Yinfei Yang.
  45. LLaMA (from The FAIR team of Meta AI) released with the paper LLaMA: Open and Efficient Foundation Language Models by Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample.
  46. Llama2 (from The FAIR team of Meta AI) released with the paper Llama2: Open Foundation and Fine-Tuned Chat Models by Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom.
  47. M2M100 (from Facebook) released with the paper Beyond English-Centric Multilingual Machine Translation by Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, Naman Goyal, Tom Birch, Vitaliy Liptchinsky, Sergey Edunov, Edouard Grave, Michael Auli, Armand Joulin.
  48. MarianMT Machine translation models trained using OPUS data by Jörg Tiedemann. The Marian Framework is being developed by the Microsoft Translator Team.
  49. mBART (from Facebook) released with the paper Multilingual Denoising Pre-training for Neural Machine Translation by Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer.
  50. mBART-50 (from Facebook) released with the paper Multilingual Translation with Extensible Multilingual Pretraining and Finetuning by Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan.
  51. Mistral (from Mistral AI) by The Mistral AI team: Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
  52. MMS (from Facebook) released with the paper Scaling Speech Technology to 1,000+ Languages by Vineel Pratap, Andros Tjandra, Bowen Shi, Paden Tomasello, Arun Babu, Sayani Kundu, Ali Elkahky, Zhaoheng Ni, Apoorv Vyas, Maryam Fazel-Zarandi, Alexei Baevski, Yossi Adi, Xiaohui Zhang, Wei-Ning Hsu, Alexis Conneau, Michael Auli.
  53. MobileBERT (from CMU/Google Brain) released with the paper MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices by Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou.
  54. MobileViT (from Apple) released with the paper MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer by Sachin Mehta and Mohammad Rastegari.
  55. MobileViTV2 (from Apple) released with the paper Separable Self-attention for Mobile Vision Transformers by Sachin Mehta and Mohammad Rastegari.
  56. MPNet (from Microsoft Research) released with the paper MPNet: Masked and Permuted Pre-training for Language Understanding by Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu.
  57. MPT (from MosaicML) released with the repository llm-foundry by the MosaicML NLP Team.
  58. MT5 (from Google AI) released with the paper mT5: A massively multilingual pre-trained text-to-text transformer by Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel.
  59. NLLB (from Meta) released with the paper No Language Left Behind: Scaling Human-Centered Machine Translation by the NLLB team.
  60. Nougat (from Meta AI) released with the paper Nougat: Neural Optical Understanding for Academic Documents by Lukas Blecher, Guillem Cucurull, Thomas Scialom, Robert Stojnic.
  61. OPT (from Meta AI) released with the paper OPT: Open Pre-trained Transformer Language Models by Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen et al.
  62. OWL-ViT (from Google AI) released with the paper Simple Open-Vocabulary Object Detection with Vision Transformers by Matthias Minderer, Alexey Gritsenko, Austin Stone, Maxim Neumann, Dirk Weissenborn, Alexey Dosovitskiy, Aravindh Mahendran, Anurag Arnab, Mostafa Dehghani, Zhuoran Shen, Xiao Wang, Xiaohua Zhai, Thomas Kipf, and Neil Houlsby.
  63. OWLv2 (from Google AI) released with the paper Scaling Open-Vocabulary Object Detection by Matthias Minderer, Alexey Gritsenko, Neil Houlsby.
  64. Phi (from Microsoft) released with the papers Textbooks Are All You Need by Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee and Yuanzhi Li, and Textbooks Are All You Need II: phi-1.5 technical report by Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar and Yin Tat Lee.
  65. Qwen2 (from the Qwen team, Alibaba Group) released with the paper Qwen Technical Report by Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou and Tianhang Zhu.
  66. ResNet (from Microsoft Research) released with the paper Deep Residual Learning for Image Recognition by Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun.
  67. RoBERTa (from Facebook), released together with the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach by Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov.
  68. RoFormer (from ZhuiyiTechnology), released together with the paper RoFormer: Enhanced Transformer with Rotary Position Embedding by Jianlin Su and Yu Lu and Shengfeng Pan and Bo Wen and Yunfeng Liu.
  69. SegFormer (from NVIDIA) released with the paper SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers by Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo.
  70. Segment Anything (from Meta AI) released with the paper Segment Anything by Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alex Berg, Wan-Yen Lo, Piotr Dollar, Ross Girshick.
  71. SigLIP (from Google AI) released with the paper Sigmoid Loss for Language Image Pre-Training by Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, Lucas Beyer.
  72. SpeechT5 (from Microsoft Research) released with the paper SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for Spoken Language Processing by Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, Zhihua Wei, Yao Qian, Jinyu Li, Furu Wei.
  73. SqueezeBERT (from Berkeley) released with the paper SqueezeBERT: What can computer vision teach NLP about efficient neural networks? by Forrest N. Iandola, Albert E. Shaw, Ravi Krishna, and Kurt W. Keutzer.
  74. StableLm (from Stability AI) released with the paper StableLM 3B 4E1T (Technical Report) by Jonathan Tow, Marco Bellagente, Dakota Mahan, Carlos Riquelme Ruiz, Duy Phung, Maksym Zhuravinskyi, Nathan Cooper, Nikhil Pinnaparaju, Reshinth Adithyan, and James Baicoianu.
  75. Starcoder2 (from BigCode team) released with the paper StarCoder 2 and The Stack v2: The Next Generation by Anton Lozhkov, Raymond Li, Loubna Ben Allal, Federico Cassano, Joel Lamy-Poirier, Nouamane Tazi, Ao Tang, Dmytro Pykhtar, Jiawei Liu, Yuxiang Wei, Tianyang Liu, Max Tian, Denis Kocetkov, Arthur Zucker, Younes Belkada, Zijian Wang, Qian Liu, Dmitry Abulkhanov, Indraneil Paul, Zhuang Li, Wen-Ding Li, Megan Risdal, Jia Li, Jian Zhu, Terry Yue Zhuo, Evgenii Zheltonozhskii, Nii Osae Osae Dade, Wenhao Yu, Lucas Krauß, Naman Jain, Yixuan Su, Xuanli He, Manan Dey, Edoardo Abati, Yekun Chai, Niklas Muennighoff, Xiangru Tang, Muhtasham Oblokulov, Christopher Akiki, Marc Marone, Chenghao Mou, Mayank Mishra, Alex Gu, Binyuan Hui, Tri Dao, Armel Zebaze, Olivier Dehaene, Nicolas Patry, Canwen Xu, Julian McAuley, Han Hu, Torsten Scholak, Sebastien Paquet, Jennifer Robinson, Carolyn Jane Anderson, Nicolas Chapados, Mostofa Patwary, Nima Tajbakhsh, Yacine Jernite, Carlos Muñoz Ferrandis, Lingming Zhang, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries.
  76. Swin Transformer (from Microsoft) released with the paper Swin Transformer: Hierarchical Vision Transformer using Shifted Windows by Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, Baining Guo.
  77. Swin2SR (from University of Würzburg) released with the paper Swin2SR: SwinV2 Transformer for Compressed Image Super-Resolution and Restoration by Marcos V. Conde, Ui-Jin Choi, Maxime Burchi, Radu Timofte.
  78. T5 (from Google AI) released with the paper Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
  79. T5v1.1 (from Google AI) released in the repository google-research/text-to-text-transfer-transformer by Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu.
  80. Table Transformer (from Microsoft Research) released with the paper PubTables-1M: Towards Comprehensive Table Extraction From Unstructured Documents by Brandon Smock, Rohith Pesala, Robin Abraham.
  81. TrOCR (from Microsoft), released together with the paper TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
  82. UniSpeech (from Microsoft Research) released with the paper UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data by Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang.
  83. UniSpeechSat (from Microsoft Research) released with the paper UNISPEECH-SAT: UNIVERSAL SPEECH REPRESENTATION LEARNING WITH SPEAKER AWARE PRE-TRAINING by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen, Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.
  84. Vision Transformer (ViT) (from Google AI) released with the paper An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale by Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, Neil Houlsby.
  85. ViTMatte (from HUST-VL) released with the paper ViTMatte: Boosting Image Matting with Pretrained Plain Vision Transformers by Jingfeng Yao, Xinggang Wang, Shusheng Yang, Baoyuan Wang.
  86. VITS (from Kakao Enterprise) released with the paper Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech by Jaehyeon Kim, Jungil Kong, Juhee Son.
  87. Wav2Vec2 (from Facebook AI) released with the paper wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations by Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli.
  88. Wav2Vec2-BERT (from Meta AI) released with the paper Seamless: Multilingual Expressive and Streaming Speech Translation by the Seamless Communication team.
  89. WavLM (from Microsoft Research) released with the paper WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing by Sanyuan Chen, Chengyi Wang, Zhengyang Chen, Yu Wu, Shujie Liu, Zhuo Chen, Jinyu Li, Naoyuki Kanda, Takuya Yoshioka, Xiong Xiao, Jian Wu, Long Zhou, Shuo Ren, Yanmin Qian, Yao Qian, Jian Wu, Michael Zeng, Furu Wei.
  90. Whisper (from OpenAI) released with the paper Robust Speech Recognition via Large-Scale Weak Supervision by Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever.
  91. XLM (from Facebook) released together with the paper Cross-lingual Language Model Pretraining by Guillaume Lample and Alexis Conneau.
  92. XLM-RoBERTa (from Facebook AI), released together with the paper Unsupervised Cross-lingual Representation Learning at Scale by Alexis Conneau*, Kartikay Khandelwal*, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer and Veselin Stoyanov.
  93. YOLOS (from Huazhong University of Science & Technology) released with the paper You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
