LLaVA-VL/LLaVA-NeXT


LLaVA-NeXT: Open Large Multimodal Models

llava_next-blog

llava_onevision-demo · llava_next-video_demo · llava_next-interleave_demo · Openbayes Demo

llava_video-checkpoints · llava_onevision-checkpoints · llava_next-interleave_checkpoints · llava_next-image_checkpoints

Release Notes

  • [2025/08/29] 🔥 LLaVA-Critic-R1. We release LLaVA-Critic-R1, a family of generative critic VLMs trained through GRPO using pairwise critic data. LLaVA-Critic-R1 not only demonstrates strong critic capability, but also achieves state-of-the-art policy performance at the 7B scale. Refer to LLaVA-Critic-R1 for more training details.


  • [2024/10/04] 🔥 LLaVA-Video (formerly LLaVA-NeXT-Video) has undergone a major upgrade! We are excited to release LLaVA-Video-178K, a high-quality synthetic dataset for video instruction tuning. This dataset includes:

    • 178,510 caption entries
    • 960,792 open-ended Q&A pairs
    • 196,198 multiple-choice Q&A items

    Along with this, we're also releasing the LLaVA-Video 7B/72B models, which deliver competitive performance on the latest video benchmarks, including Video-MME, LongVideoBench, and Dream-1K.


  • [2024/09/13] 🔥🚀 LLaVA-OneVision-Chat. The new LLaVA-OV-Chat (7B/72B) significantly improves the chat experience of LLaVA-OV. 📄

  • [2024/08/06] 🔥🚀 LLaVA-OneVision (OV)! The new LLaVA-OV models (0.5B/7B/72B) achieve new state-of-the-art performance across single-image, multi-image, and video benchmarks, sometimes rivaling top commercial models on 47 diverse benchmarks. 📄 Explore more:

    • [Paper]: In-depth insights and newly emerging scenarios, i.e., strong video understanding through task transfer from images.
    • [LLaVA-OV Doc]: Model inference and evaluation guidance.
    • [Scripts]: Start training models on your single-image/multi-image/video data.
  • [2024/07/16] 🔥 LLaVA-NeXT-Video has been upgraded. The new 32B model achieves the best open-source performance on several video benchmarks, including Video-MME. Please refer to this page for details and to llava_next-video_demo for a demo.

  • [2024/06/23] 🔥 LLaVA-NeXT-Interleave is released. We utilize an image-text interleaved format to unify multi-image, video, and 3D tasks in one LLM and achieve SoTA performance on a wide range of benchmarks. Check out the paper, blog, and checkpoints to see new capabilities and improved performance! We have released 0.5b, 7b, and 7b-dpo models.

  • [2024/05/25] 🔥 Wondering "What Else Influences Visual Instruction Tuning Beyond Data?" Our new blog summarizes empirical explorations that ablate the various design choices for improving LMMs beyond the instruction data itself. Meanwhile, we open-source the high-quality data recaptioned with LLaVA-NeXT-34B on [COCO] [LCS] [CC3M].

    • Architectures (LMM & Vision Encoder)
    • Visual Representations (Resolution & # Tokens)
    • Training Strategies (High-quality data & Trainable modules)
  • [2024/05/10] 🔥 LLaVA-NeXT (Stronger) models are released, with support for stronger LLMs including LLaMA-3 (8B) and Qwen-1.5 (72B/110B). Check out the [blog] and [checkpoints] to see the improved performance!

  • [2024/05/10] 🔥 LLaVA-NeXT (Video) is released. The image-only-trained LLaVA-NeXT model is surprisingly strong on video tasks with zero-shot modality transfer. DPO training with AI feedback on videos can yield significant improvement. [Blog], [checkpoints] and [sglang].

  • [2024/01/30] 🔥 LLaVA-NeXT is out! With additional scaling to LLaVA-1.5, LLaVA-NeXT-34B outperforms Gemini Pro on some benchmarks. It can now process 4x more pixels and perform more tasks/applications than before. Check out the blog post, and explore the demo! Models are available in the Model Zoo. Training/eval data and scripts coming soon.

More
  • [2024/03/10] 🔥 Releasing LMMs-Eval, a highly efficient evaluation pipeline we used when developing LLaVA-NeXT. It supports the evaluation of LMMs on dozens of public datasets and allows new dataset onboarding, making the development of new LMMs much faster. [Blog] [Codebase]

  • [2023/11/10] LLaVA-Plus is released: Learning to Use Tools for Creating Multimodal Agents, with LLaVA-Plus (LLaVA that Plugs and Learns to Use Skills). [Project Page] [Demo] [Code] [Paper]

  • [2023/11/02] LLaVA-Interactive is released: Experience the future of human-AI multimodal interaction with an all-in-one demo for Image Chat, Segmentation, Generation, and Editing. [Project Page] [Demo] [Code] [Paper]

  • [2023/10/26] 🔥 LLaVA-1.5 with LoRA achieves performance comparable to full-model finetuning, with a reduced GPU RAM requirement (ckpts, script). We also provide a doc on how to finetune LLaVA-1.5 on your own dataset with LoRA.

  • [2023/10/12] Check out the Korean LLaVA (Ko-LLaVA), created by ETRI, who has generously supported our research! [🤗 Demo]

  • [2023/10/05] 🔥 LLaVA-1.5 is out! It achieves SoTA on 11 benchmarks with just simple modifications to the original LLaVA, utilizes all public data, completes training in ~1 day on a single 8-A100 node, and surpasses methods like Qwen-VL-Chat that use billion-scale data. Check out the technical report, and explore the demo! Models are available in the Model Zoo. The training data and scripts of LLaVA-1.5 are released here, and evaluation scripts are released here!

  • [2023/09/26] LLaVA is enhanced with reinforcement learning from human feedback (RLHF) to improve fact grounding and reduce hallucination. Check out the new SFT and RLHF checkpoints at the [LLaVA-RLHF] project.

  • [2023/09/22] LLaVA is accepted by NeurIPS 2023 as an oral presentation, and LLaVA-Med is accepted by the NeurIPS 2023 Datasets and Benchmarks Track as a spotlight presentation.

  • [2023/11/06] Support for Intel dGPU and CPU platforms. More details here.

  • [2023/10/12] LLaVA is now supported in llama.cpp with 4-bit / 5-bit quantization support!

  • [2023/10/11] The training data and scripts of LLaVA-1.5 are released here, and evaluation scripts are released here!

  • [2023/10/10] Roboflow Deep Dive: First Impressions with LLaVA-1.5.

  • [2023/09/20] We summarize our empirical study of training 33B and 65B LLaVA models in a note. Further, if you are interested in a comprehensive review of the evolution and trends of multimodal foundation models, please check out our recent survey paper "Multimodal Foundation Models: From Specialists to General-Purpose Assistants".

  • [2023/07/19] 🔥 We release a major upgrade, including support for LLaMA-2, LoRA training, 4-/8-bit inference, higher resolution (336x336), and a lot more. We release LLaVA Bench for benchmarking open-ended visual chat with results from Bard and Bing-Chat. We also support and verify training with RTX 3090 and RTX A6000. Check out LLaVA-from-LLaMA-2 and our model zoo!
  • [2023/06/26] CVPR 2023 Tutorial on Large Multimodal Models: Towards Building and Surpassing Multimodal GPT-4! Please check out [Slides] [Notes] [YouTube] [Bilibili].
  • [2023/06/11] We released the preview for the most requested feature: DeepSpeed and LoRA support! Please see the documentation here.
  • [2023/06/01] We released LLaVA-Med: Large Language and Vision Assistant for Biomedicine, a step towards building biomedical domain large language and vision models with GPT-4 level capabilities. Check out the paper and page.
  • [2023/05/06] We are releasing LLaVA-Lightning-MPT-7B-preview, based on MPT-7B-Chat! See here for more details.
  • [2023/05/02] 🔥 We are releasing LLaVA-Lightning! Train a lite multimodal GPT-4 with just $40 in 3 hours! See here for more details.
  • [2023/04/27] Thanks to community effort, LLaVA-13B with 4-bit quantization allows you to run on a GPU with as little as 12GB VRAM! Try it out here.
  • [2023/04/17] 🔥 We released LLaVA: Large Language and Vision Assistant. We propose visual instruction tuning, towards building large language and vision models with GPT-4 level capabilities. Check out the paper and demo.

Usage and License Notices: This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained using the dataset (e.g. the Llama-1/2 community license for LLaMA-2 and Vicuna-v1.5, the Tongyi Qianwen RESEARCH LICENSE AGREEMENT, and the Llama-3 Research License). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints is in compliance with all applicable laws and regulations.

Models & Scripts

Installation

1. Clone this repository and navigate to the LLaVA-NeXT folder:

   git clone https://github.com/LLaVA-VL/LLaVA-NeXT
   cd LLaVA-NeXT

2. Install the inference package:

   conda create -n llava python=3.10 -y
   conda activate llava
   pip install --upgrade pip  # Enable PEP 660 support.
   pip install -e ".[train]"

Project Navigation

Please check out the following pages for more inference & evaluation details.

- LLaVA-OneVision: Easy Task Transfer

- LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild

- LLaVA-NeXT: A Strong Zero-shot Video Understanding Model

- LLaVA-NeXT: Tackling Multi-image, Video, and 3D in Large Multimodal Models

SGLang for Speeding Up Inference and Deployment

We use SGLang to speed up inference and deployment of LLaVA-NeXT. You can serve LLaVA-NeXT as a backend API service with SGLang.
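As a rough illustration, a client request to such a backend could be assembled as below. This is a minimal sketch only: the endpoint path, port, checkpoint name, and payload schema are assumptions modeled on a generic OpenAI-style chat API, not taken from the SGLang or LLaVA-NeXT documentation.

```python
import json

# Assumed endpoint; the actual SGLang serving address and route may differ.
SERVER_URL = "http://localhost:30000/v1/chat/completions"

def build_chat_payload(model: str, question: str, image_url: str) -> dict:
    """Assemble an OpenAI-style multimodal chat payload (assumed schema)."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "max_tokens": 128,
    }

payload = build_chat_payload(
    "lmms-lab/llava-onevision-qwen2-7b-ov",  # example checkpoint name
    "What is shown in this image?",
    "https://example.com/demo.png",          # placeholder image URL
)
print(json.dumps(payload, indent=2))
```

The payload would then be sent with an HTTP POST (e.g. `requests.post(SERVER_URL, json=payload)`); see the sglang examples below for the actually supported routes.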

Prepare Environment: Follow the instructions in the sglang repository.

LLaVA-NeXT/OneVision

Check out the HTTP POST/GET and SRT usage at sglang/examples/runtime/llava_onevision.

LLaVA-NeXT (Video)

Launch and Run on (K) Nodes:

  • Go to the sglang project:
    cd PATH_TO/sglang
  • First node:
    bash examples/usage/llava_video/srt_example_llava_v.sh K 0 YOUR_VIDEO_PATH YOUR_MODEL_PATH FRAMES_PER_VIDEO
    (e.g. bash examples/usage/llava_video/srt_example_llava_v.sh K 0 examples/usage/llava_video/videos/Q98Z4OTh8RwmDonc.mp4 lmms-lab/LLaVA-NeXT-Video-7B-DPO 16)
  • Second node:
    bash examples/usage/llava_video/srt_example_llava_v.sh K 1 YOUR_VIDEO_PATH YOUR_MODEL_PATH FRAMES_PER_VIDEO
  • The K-th node:
    bash examples/usage/llava_video/srt_example_llava_v.sh K K-1 YOUR_VIDEO_PATH YOUR_MODEL_PATH FRAMES_PER_VIDEO
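The per-node commands follow a simple pattern: the same script is invoked on every node with the node rank running from 0 to K-1. A small sketch that generates the command list (the script path and argument order come from the steps above; the video and model paths are placeholders):

```python
# Generate the SGLang launch command for each of K nodes.
# The script path and argument order follow the README steps;
# concrete paths/model names are placeholders.
def launch_commands(k: int, video_path: str, model_path: str, frames: int) -> list:
    script = "examples/usage/llava_video/srt_example_llava_v.sh"
    # Node rank runs from 0 (first node) to k-1 (last node).
    return [
        f"bash {script} {k} {rank} {video_path} {model_path} {frames}"
        for rank in range(k)
    ]

for cmd in launch_commands(3, "YOUR_VIDEO_PATH", "lmms-lab/LLaVA-NeXT-Video-7B-DPO", 16):
    print(cmd)
```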

Citation

If you find this project useful for your research and applications, please cite the related papers/blogs using this BibTeX:

@article{li2024llava,
  title={LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models},
  author={Li, Feng and Zhang, Renrui and Zhang, Hao and Zhang, Yuanhan and Li, Bo and Li, Wei and Ma, Zejun and Li, Chunyuan},
  journal={arXiv preprint arXiv:2407.07895},
  year={2024}
}

@misc{li2024llavanext-ablations,
  title={LLaVA-NeXT: What Else Influences Visual Instruction Tuning Beyond Data?},
  url={https://llava-vl.github.io/blog/2024-05-25-llava-next-ablations/},
  author={Li, Bo and Zhang, Hao and Zhang, Kaichen and Guo, Dong and Zhang, Yuanhan and Zhang, Renrui and Li, Feng and Liu, Ziwei and Li, Chunyuan},
  month={May},
  year={2024}
}

@misc{li2024llavanext-strong,
  title={LLaVA-NeXT: Stronger LLMs Supercharge Multimodal Capabilities in the Wild},
  url={https://llava-vl.github.io/blog/2024-05-10-llava-next-stronger-llms/},
  author={Li, Bo and Zhang, Kaichen and Zhang, Hao and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Yuanhan and Liu, Ziwei and Li, Chunyuan},
  month={May},
  year={2024}
}

@misc{zhang2024llavanext-video,
  title={LLaVA-NeXT: A Strong Zero-shot Video Understanding Model},
  url={https://llava-vl.github.io/blog/2024-04-30-llava-next-video/},
  author={Zhang, Yuanhan and Li, Bo and Liu, Haotian and Lee, Yong Jae and Gui, Liangke and Fu, Di and Feng, Jiashi and Liu, Ziwei and Li, Chunyuan},
  month={April},
  year={2024}
}

@misc{liu2024llavanext,
  title={LLaVA-NeXT: Improved reasoning, OCR, and world knowledge},
  url={https://llava-vl.github.io/blog/2024-01-30-llava-next/},
  author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Li, Bo and Zhang, Yuanhan and Shen, Sheng and Lee, Yong Jae},
  month={January},
  year={2024}
}

@misc{liu2023improvedllava,
  title={Improved Baselines with Visual Instruction Tuning},
  author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
  publisher={arXiv:2310.03744},
  year={2023}
}

@misc{liu2023llava,
  title={Visual Instruction Tuning},
  author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
  publisher={NeurIPS},
  year={2023}
}

Acknowledgement

  • Vicuna: the codebase we built upon, and our base model Vicuna-13B that has the amazing language capabilities!
  • The LLaVA-NeXT project is currently maintained by the team along with our contributors (listed alphabetically by first name): Bo Li, Dong Guo, Feng Li, Hao Zhang, Kaichen Zhang, Renrui Zhang, and Yuanhan Zhang, led by Chunyuan Li and with guidance and help from Haotian Liu.
  • The lmms-eval framework and its core contributors, including Peiyuan Zhang, Fanyi Pu, Joshua Adrian Cahyono, and Kairui Hu, for their support on the evaluation side.

Related Projects

For future project ideas, please check out:
