This repository was archived by the owner on Nov 1, 2024. It is now read-only.

Repo for external large-scale work


facebookresearch/metaseq

A codebase for working with Open Pre-trained Transformers (OPT), originally forked from fairseq.

Community Integrations

Using OPT with 🤗 Transformers

The OPT 125M--66B models are now available in Hugging Face Transformers. You can access them under the facebook organization on the Hugging Face Hub.
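As a minimal sketch of the Transformers integration (assuming `transformers` and PyTorch are installed; the prompt below is an arbitrary example), greedy generation with the smallest checkpoint looks like:

```python
# Sketch: load the smallest OPT checkpoint from the Hugging Face Hub
# and run greedy decoding. Assumes `transformers` and `torch` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Hello, I am a language model", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Larger checkpoints (up to 66B) load the same way by swapping the model name, though they need correspondingly more memory.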

Using OPT-175B with Alpa

The OPT 125M--175B models are now supported in the Alpa project, which enables serving OPT-175B with more flexible parallelisms on older generations of GPUs, such as 40GB A100, V100, T4, M60, etc.

Using OPT with Colossal-AI

The OPT models are now supported in Colossal-AI, which helps users efficiently and quickly deploy OPT model training and inference, reducing large AI model budgets and the labor cost of learning and deployment.

Using OPT with CTranslate2

The OPT 125M--66B models can be executed with CTranslate2, which is a fast inference engine for Transformer models. The project integrates the SmoothQuant technique to allow 8-bit quantization of OPT models. See the usage example to get started.

Using OPT with FasterTransformer

The OPT models can be served with FasterTransformer, a highly optimized inference framework written and maintained by NVIDIA. We provide instructions to convert OPT checkpoints into FasterTransformer format and a usage example with some benchmark results.

Using OPT with DeepSpeed

The OPT models can be finetuned using DeepSpeed. See the DeepSpeed-Chat example to get started.
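For illustration only (the values below are generic assumptions, not taken from metaseq or DeepSpeed-Chat), a minimal DeepSpeed config enabling fp16 and ZeRO stage 3 with CPU parameter offload might look like:

```json
{
  "train_batch_size": 32,
  "gradient_accumulation_steps": 1,
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_param": { "device": "cpu" }
  }
}
```

Batch size and offload settings would need tuning to the checkpoint size and available hardware.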

Getting Started in Metaseq

Follow the setup instructions here to get started.

Documentation on workflows

Background Info

Support

If you have any questions, bug reports, or feature requests regarding either the codebase or the models released in the projects section, please don't hesitate to post on our GitHub Issues page.

Please remember to follow our Code of Conduct.

Contributing

We welcome PRs from the community!

You can find information about contributing to metaseq in our Contributing document.

The Team

Metaseq is currently maintained by the CODEOWNERS: Susan Zhang, Naman Goyal, Punit Singh Koura, Moya Chen, Kurt Shuster, David Esiobu, Igor Molybog, Peter Albert, Andrew Poulton, Nikolay Bashlykov, Binh Tang, Uriel Singer, Yuchen Zhang, Armen Aghajanyan, Lili Yu, and Adam Polyak.

License

The majority of metaseq is licensed under the MIT license; however, portions of the project are available under separate license terms.
