Hugging Face Transformers Library


State-of-the-art pretrained models for inference and training

Transformers is a library of pretrained text, computer vision, audio, video, and multimodal models for inference and training. Use Transformers to fine-tune models on your data, build inference applications, and for generative AI use cases across multiple modalities.

There are over 500K Transformers model checkpoints on the Hugging Face Hub that you can use.

Explore the Hub today to find a model and use Transformers to help you get started right away.

Installation

Transformers works with Python 3.9+, PyTorch 2.0+, TensorFlow 2.6+, and Flax 0.4.1+.

Create and activate a virtual environment with venv or uv, a fast Rust-based Python package and project manager.

```shell
# venv
python -m venv .my-env
source .my-env/bin/activate

# uv
uv venv .my-env
source .my-env/bin/activate
```

Install Transformers in your virtual environment.

```shell
# pip
pip install transformers

# uv
uv pip install transformers
```

Install Transformers from source if you want the latest changes in the library or are interested in contributing. However, the latest version may not be stable. Feel free to open an issue if you encounter an error.

```shell
git clone https://github.com/huggingface/transformers.git
cd transformers
pip install .
```

Quickstart

Get started with Transformers right away with the Pipeline API. The Pipeline is a high-level inference class that supports text, audio, vision, and multimodal tasks. It handles preprocessing the input and returns the appropriate output.

Instantiate a pipeline and specify the model to use for text generation. The model is downloaded and cached so you can easily reuse it. Finally, pass some text to prompt the model.

```py
from transformers import pipeline

pipeline = pipeline(task="text-generation", model="Qwen/Qwen2.5-1.5B")
pipeline("the secret to baking a really good cake is ")
[{'generated_text': 'the secret to baking a really good cake is 1) to use the right ingredients and 2) to follow the recipe exactly. the recipe for the cake is as follows: 1 cup of sugar, 1 cup of flour, 1 cup of milk, 1 cup of butter, 1 cup of eggs, 1 cup of chocolate chips. if you want to make 2 cakes, how much sugar do you need? To make 2 cakes, you will need 2 cups of sugar.'}]
```

To chat with a model, the usage pattern is the same. The only difference is you need to construct a chat history (the input to Pipeline) between you and the system.

Tip

You can also chat with a model directly from the command line.

```shell
transformers-cli chat --model_name_or_path Qwen/Qwen2.5-0.5B-Instruct
```
```py
import torch
from transformers import pipeline

chat = [
    {"role": "system", "content": "You are a sassy, wise-cracking robot as imagined by Hollywood circa 1986."},
    {"role": "user", "content": "Hey, can you tell me any fun things to do in New York?"}
]

pipeline = pipeline(task="text-generation", model="meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=torch.bfloat16, device_map="auto")
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```
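
To keep the conversation going, append the messages returned by the pipeline, plus a new user turn, to the same chat list and call the pipeline again. A minimal sketch building on the snippet above (the follow-up question is only an example):

```py
# The pipeline returns the full conversation as a list of message dicts,
# so it can be extended and fed back in for the next turn.
chat = response[0]["generated_text"]
chat.append({"role": "user", "content": "What else should I check out while I'm there?"})
response = pipeline(chat, max_new_tokens=512)
print(response[0]["generated_text"][-1]["content"])
```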

Expand the examples below to see how Pipeline works for different modalities and tasks.

Automatic speech recognition
```py
from transformers import pipeline

pipeline = pipeline(task="automatic-speech-recognition", model="openai/whisper-large-v3")
pipeline("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac")
{'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'}
```
Image classification

```py
from transformers import pipeline

pipeline = pipeline(task="image-classification", model="facebook/dinov2-small-imagenet1k-1-layer")
pipeline("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
[{'label': 'macaw', 'score': 0.997848391532898},
 {'label': 'sulphur-crested cockatoo, Kakatoe galerita, Cacatua galerita', 'score': 0.0016551691805943847},
 {'label': 'lorikeet', 'score': 0.00018523589824326336},
 {'label': 'African grey, African gray, Psittacus erithacus', 'score': 7.85409429227002e-05},
 {'label': 'quail', 'score': 5.502637941390276e-05}]
```
Visual question answering

```py
from transformers import pipeline

pipeline = pipeline(task="visual-question-answering", model="Salesforce/blip-vqa-base")
pipeline(
    image="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/idefics-few-shot.jpg",
    question="What is in the image?",
)
[{'answer': 'statue of liberty'}]
```

Why should I use Transformers?

  1. Easy-to-use state-of-the-art models:

    • High performance on natural language understanding & generation, computer vision, audio, video, and multimodal tasks.
    • Low barrier to entry for researchers, engineers, and developers.
    • Few user-facing abstractions with just three classes to learn.
    • A unified API for using all our pretrained models (see the sketch after this list).
  2. Lower compute costs, smaller carbon footprint:

    • Share trained models instead of training from scratch.
    • Reduce compute time and production costs.
    • Dozens of model architectures with 1M+ pretrained checkpoints across all modalities.
  3. Choose the right framework for every part of a model's lifetime:

    • Train state-of-the-art models in 3 lines of code.
    • Move a single model between PyTorch/JAX/TF2.0 frameworks at will.
    • Pick the right framework for training, evaluation, and production.
  4. Easily customize a model or an example to your needs:

    • We provide examples for each architecture to reproduce the results published by its original authors.
    • Model internals are exposed as consistently as possible.
    • Model files can be used independently of the library for quick experiments.
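
As a rough illustration of the unified API point above, the same `Auto*` classes load any compatible checkpoint from the Hub behind a common interface. A minimal sketch, with the checkpoint name chosen only as an example:

```py
# Minimal sketch of the unified Auto* API: the same two calls work for any
# compatible checkpoint on the Hub (the model name here is only an example).
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("Transformers keeps the interface consistent across models.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch_size, num_labels)
```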

Why shouldn't I use Transformers?

  • This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
  • The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like Accelerate.
  • The example scripts are only examples. They may not work out-of-the-box on your specific use case, and you'll need to adapt the code for it to work.

100 projects using Transformers

Transformers is more than a toolkit for using pretrained models; it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.

To celebrate Transformers reaching 100,000 stars, we wanted to put the spotlight on the community with the awesome-transformers page, which lists 100 incredible projects built with Transformers.

If you own or use a project that you believe should be part of the list, please open a PR to add it!

Example models

You can test most of our models directly on their Hub model pages.

Expand each modality below to see a few example models for various use cases.

Audio
Computer vision
Multimodal
NLP
  • Masked word completion with ModernBERT (see the sketch after this list)
  • Named entity recognition with Gemma
  • Question answering with Mixtral
  • Summarization with BART
  • Translation with T5
  • Text generation with Llama
  • Text classification with Qwen
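
For instance, the masked word completion entry above maps onto the fill-mask pipeline task. A minimal sketch; the checkpoint name and prompt are assumptions, so substitute any fill-mask model from the Hub:

```py
from transformers import pipeline

# Sketch of masked word completion; the checkpoint name below is an assumed
# example, and any fill-mask model from the Hub can be used instead.
fill_mask = pipeline(task="fill-mask", model="answerdotai/ModernBERT-base")
print(fill_mask("Plants create [MASK] through a process known as photosynthesis."))
```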

Citation

We now have a paper you can cite for the 🤗 Transformers library:

```bibtex
@inproceedings{wolf-etal-2020-transformers,
    title = "Transformers: State-of-the-Art Natural Language Processing",
    author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
    month = oct,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
    pages = "38--45"
}
```
