internvl
Here are 4 public repositories matching this topic...
Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO on 500+ LLMs (Qwen3, Qwen3-MoE, Llama4, InternLM3, GLM4, Mistral, Yi1.5, DeepSeek-R1, ...) and 200+ MLLMs (Qwen2.5-VL, Qwen2.5-Omni, Qwen2-Audio, Ovis2, InternVL3, Llava, MiniCPM-V-2.6, GLM4v, Xcomposer2.5, DeepSeek-VL2, Phi4, GOT-OCR2, ...).
Updated Apr 28, 2025 - Python
A higher-performance OpenAI-compatible LLM service than vLLM serve: a pure C++ implementation built on GRPS + TensorRT-LLM + Tokenizers.cpp, supporting chat and function calling, AI agents, distributed multi-GPU inference, multimodal capabilities, and a Gradio chat interface.
Updated Apr 18, 2025 - Python
Are VLMs Ready for Autonomous Driving? An Empirical Study from the Reliability, Data, and Metric Perspectives
Updated Feb 22, 2025 - Python
Streamlit App Combining Vision, Language, and Audio AI Models
Updated Jan 27, 2025 - Python