Facebook AI Research Sequence-to-Sequence Toolkit written in Python.


nng555/fairseq-ssmba

 
 



Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks.

We provide reference implementations of various sequence modeling papers:


We also provide pre-trained models for translation and language modeling with a convenient torch.hub interface:

```python
en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de.single_model')
en2de.translate('Hello world', beam=5)  # 'Hallo Welt'
```

See the PyTorch Hub tutorials for translation and RoBERTa for more examples.

Requirements and Installation

  • PyTorch version >= 1.5.0
  • Python version >= 3.6
  • For training new models, you'll also need an NVIDIA GPU and NCCL
  • To install fairseq and develop locally:
```shell
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./

# on MacOS:
# CFLAGS="-stdlib=libc++" pip install --editable ./

# to install the latest stable release (0.10.x)
# pip install fairseq
```
  • For faster training install NVIDIA's apex library:
```shell
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" \
  --global-option="--deprecated_fused_adam" --global-option="--xentropy" \
  --global-option="--fast_multihead_attn" ./
```
  • For large datasets install PyArrow: pip install pyarrow
  • If you use Docker make sure to increase the shared memory size, either with --ipc=host or --shm-size as command line options to nvidia-docker run.
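To make the Docker note above concrete, here is a hedged sketch of a container launch with host shared memory enabled; the image tag and mount path are illustrative placeholders, not prescribed by this README:

```shell
# Illustrative only: image and data path are placeholders.
# --ipc=host avoids "shared memory" errors from PyTorch DataLoader workers;
# alternatively, pass e.g. --shm-size=8g instead.
nvidia-docker run --ipc=host -it --rm \
  -v /path/to/data:/data \
  pytorch/pytorch:latest bash
```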

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
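The getting-started workflow in the documentation can be sketched with fairseq's command-line tools (installed with the package). The corpus paths, language pair, and hyperparameters below are illustrative placeholders, not a recipe from this README:

```shell
# 1) Binarize a tokenized parallel corpus (expects data/train.de, data/train.en, etc.)
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid \
    --destdir data-bin

# 2) Train a transformer model on the binarized data
fairseq-train data-bin \
    --arch transformer --optimizer adam --lr 0.0005 \
    --max-tokens 4096 --criterion label_smoothed_cross_entropy
```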

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
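As a hedged illustration of evaluating a trained model on a binarized test set with fairseq's CLI (the checkpoint path and options are placeholders):

```shell
# Generate translations from the test split of a binarized dataset
fairseq-generate data-bin \
    --path checkpoints/checkpoint_best.pt \
    --batch-size 64 --beam 5 --remove-bpe
```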

We also have more detailed READMEs to reproduce results from specific papers:

Join the fairseq community

License

fairseq(-py) is MIT-licensed. The license applies to the pre-trained models as well.

Citation

Please cite as:

```
@inproceedings{ott2019fairseq,
  title     = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author    = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year      = {2019},
}
```
