

We have started a new project, Eole, available on GitHub.

It is a spin-off of OpenNMT-py in terms of features, but a lot of the internals have been revamped.

Eole handles NMT, LLMs, and encoders, as well as a new concept of Estimator within an NMT model. See this post and this news.

If you are a developer, switch now. If you are a user only, the first PyPI versions will be published shortly.

OpenNMT-py: Open-Source Neural Machine Translation and (Large) Language Models

Build Status | Documentation | Gitter | Forum

OpenNMT-py is the PyTorch version of the OpenNMT project, an open-source (MIT) neural machine translation (and beyond!) framework. It is designed to be research friendly to try out new ideas in translation, language modeling, summarization, and many other NLP tasks. Some companies have proven the code to be production ready.

We love contributions! Please look at issues marked with the contributions welcome tag.

Before raising an issue, make sure you read the requirements and the Full Documentation examples.

Unless there is a bug, please use the Forum or Gitter to ask questions.


For beginners:

There is a step-by-step, fully explained tutorial (thanks to Yasmin Moslem): Tutorial

Please try to read and/or follow it before raising beginner issues.

Otherwise, you can just have a look at the Quickstart steps.


New:

  • You will need PyTorch v2, preferably v2.2, which fixes some scaled_dot_product_attention issues.
  • LLM support with converters for: Llama (+ Mistral), OpenLlama, Redpajama, MPT-7B, Falcon.
  • Support for 8-bit and 4-bit quantization along with LoRA adapters, with or without checkpointing.
  • You can finetune 7B and 13B models on a single RTX GPU with 24 GB of VRAM using 4-bit quantization.
  • Inference can be forced in 4/8-bit using the same layer quantization as in finetuning.
  • Tensor parallelism when the model does not fit in one GPU's memory (both training and inference).
  • Once your model is finetuned, you can run inference either with OpenNMT-py or faster with CTranslate2.
  • MMLU evaluation script, see results here.
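As an illustration of the quantization + LoRA options above, a finetuning config might contain a fragment like the following. This is a hedged sketch: the option names follow the OpenNMT-py v3 YAML config format, but the exact layer lists and hyperparameter values here are placeholders, not a tested recipe.

```yaml
# Hypothetical excerpt of a finetuning YAML config (values are placeholders)
quant_layers: ['linear_values', 'linear_query', 'linear_keys', 'final_linear', 'w_1', 'w_2']
quant_type: "bnb_NF4"          # 4-bit NormalFloat via bitsandbytes; 8-bit variants also exist
lora_layers: ['linear_values', 'linear_query', 'linear_keys', 'final_linear']
lora_rank: 2
lora_alpha: 8
lora_dropout: 0.05
```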

For all use cases including NMT, you can now use multi-query instead of multi-head attention (faster at training and inference) and remove biases from all Linear layers (QKV as well as FeedForward modules).
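In a YAML config, that combination could look like this (a sketch; the option names are assumed from the OpenNMT-py v3 config format):

```yaml
multiquery: true     # multi-query attention: keys/values share a single head
add_qkvbias: false   # no bias on the Q/K/V linear projections
add_ffnbias: false   # no bias in the FeedForward linear modules
```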

If you used previous versions of OpenNMT-py, you can check the Changelog or the Breaking Changes.


Tutorials:


Setup

Using docker

To facilitate setup and reproducibility, some docker images are made available via the GitHub Container Registry: https://github.com/OpenNMT/OpenNMT-py/pkgs/container/opennmt-py

You can adapt the workflow and build your own image(s) depending on specific needs by using build.sh and Dockerfile in the docker directory of the repo.

docker pull ghcr.io/opennmt/opennmt-py:3.4.3-ubuntu22.04-cuda12.1

Example one-liner to run a container and open a bash shell within it:

docker run --rm -it --runtime=nvidia ghcr.io/opennmt/opennmt-py:test-ubuntu22.04-cuda12.1

Note: you need to have the Nvidia Container Toolkit (formerly nvidia-docker) installed to properly take advantage of the CUDA/GPU features.

Depending on your needs you can add various flags:

  • -p 5000:5000 to forward some exposed port from your container to your host;
  • -v /some/local/directory:/some/container/directory to mount some local directory to some container directory;
  • --entrypoint some_command to directly run some specific command as the container entry point (instead of the default bash shell).
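For example, these flags can be combined into a single (hypothetical) invocation that forwards port 5000, mounts a local data directory, and runs nvidia-smi as the entry point to check GPU visibility; the local path is a placeholder:

```shell
docker run --rm -it --runtime=nvidia \
  -p 5000:5000 \
  -v /path/to/local/data:/data \
  --entrypoint nvidia-smi \
  ghcr.io/opennmt/opennmt-py:3.4.3-ubuntu22.04-cuda12.1
```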

Installing locally

OpenNMT-py requires:

  • Python >= 3.8
  • PyTorch >= 2.0 <2.2

Install OpenNMT-py from pip:

pip install OpenNMT-py

or from the source:

git clone https://github.com/OpenNMT/OpenNMT-py.git
cd OpenNMT-py
pip install -e .

Note: if you encounter a MemoryError during installation, try to use pip with --no-cache-dir.

(Optional) Some advanced features (e.g. working with pretrained models or specific transforms) require extra packages; you can install them with:

pip install -r requirements.opt.txt

Manual installation of some dependencies

Apex is highly recommended for fast performance (especially the legacy FusedAdam optimizer and FusedRMSNorm).

git clone https://github.com/NVIDIA/apex
cd apex
pip3 install -v --no-build-isolation --config-settings --build-option="--cpp_ext --cuda_ext --deprecated_fused_adam --xentropy --fast_multihead_attn" ./
cd ..

Flash attention:

As of Oct. 2023, flash attention 1 has been upstreamed to PyTorch v2, but it is recommended to use flash attention 2 with v2.3.1 for sliding-window attention support.

When using regular position_encoding=True, or Rotary with max_relative_positions=-1, OpenNMT-py will try to use an optimized dot-product path.

If you want to use flash attention, then you need to manually install it first:

pip install flash-attn --no-build-isolation

If flash attention 2 is not installed, then F.scaled_dot_product_attention from PyTorch 2.x will be used instead.

When using max_relative_positions > 0, or ALiBi with max_relative_positions=-2, OpenNMT-py will use its legacy code for matrix multiplications.

Flash attention and F.scaled_dot_product_attention are a bit faster and save some GPU memory.

AWQ:

If you want to run inference or quantize an AWQ model you will need AutoAWQ.

For AutoAWQ: pip install autoawq

Documentation & FAQs

Full HTML Documentation

FAQs

Acknowledgements

OpenNMT-py is run as a collaborative open-source project. The project was incubated by Systran and Harvard NLP in 2016 in Lua and ported to PyTorch in 2017.

Current maintainers (since 2018):

François Hernandez
Vincent Nguyen (Seedfall)

Citation

If you are using OpenNMT-py for academic work, please cite the initial system demonstration paper published in ACL 2017:

@misc{klein2018opennmt,
      title={OpenNMT: Neural Machine Translation Toolkit},
      author={Guillaume Klein and Yoon Kim and Yuntian Deng and Vincent Nguyen and Jean Senellart and Alexander M. Rush},
      year={2018},
      eprint={1805.11462},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
