deepglint/unicom: Large-Scale Visual Representation Model


UNICOM & MLCD

Arxiv | Arxiv | Hugging Face

This repository focuses on building foundational visual models for large multimodal language models using large-scale datasets such as LAION400M and COYO700M. We employ sample-to-cluster contrastive learning to optimize performance. Our models serve primarily as vision towers for multimodal large language models such as LLaVA.

We adopted the official LLaVA-NeXT codebase and the official training dataset LLaVA-NeXT-Data to evaluate the foundational visual models.
The language model is Qwen2.5-7B.

| Vision Tower | RoPE2D | ChartQA | DocVQA | InfoVQA | OCRBench | MMMU |
|---|---|---|---|---|---|---|
| CLIP (ViT-L-14-336px) | × | 66.52 | 75.21 | 38.88 | 525.00 | 44.20 |
| SigLIP (ViT-SO400M-384px) | × | 69.28 | 76.71 | 41.38 | 554.00 | 46.78 |
| DFN5B (ViT-H-14-378px) | × | 64.36 | 70.87 | 38.59 | 473.00 | 48.00 |
| MLCD (ViT-L-14-336px) | × | 67.84 | 76.46 | 43.48 | 531.00 | 44.30 |
| MLCD (ViT-bigG-14-336px) | ✓ | 71.07 | 79.63 | 44.38 | 572.00 | 46.78 |
| MLCD (ViT-bigG-14-448px) | ✓ | 73.80 | 83.34 | 46.59 | 582.00 | 46.00 |

The results of the ImageNet linear probe are as follows:

| Model Name | ImageNet Linear Probe | Hugging Face |
|---|---|---|
| MLCD-ViT-B-32-224px | 79.1 | HF: MLCD-ViT-B-32-224px |
| MLCD-ViT-L-14-336px | 86.3 | HF: MLCD-ViT-L-14-336px |
| MLCD-ViT-bigG-14-224px | 87.1 | HF: MLCD-ViT-bigG-14-224px |

Quickstart Example

Here is an example of how to use the MLCDVisionModel from the Transformers library for feature extraction. Please note that this requires the transformers library from the master branch. We will update this with a specific version number in the future.

```python
# pip install git+https://github.com/huggingface/transformers@v4.51.3-MLCD-preview
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, MLCDVisionModel

# Load model and processor
model = MLCDVisionModel.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")
processor = AutoProcessor.from_pretrained("DeepGlint-AI/mlcd-vit-bigG-patch14-448")

# Process single image
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

# Generate outputs
with torch.no_grad():
    outputs = model(**inputs)

# Get visual features
features = outputs.last_hidden_state

print(f"Extracted features shape: {features.shape}")
```
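If a single vector per image is needed, for example for retrieval or a linear probe like the ImageNet numbers above, the token-level output can be pooled. A minimal sketch, assuming the usual ViT convention that the class token sits at index 0 of last_hidden_state (an assumption, not something this README specifies):

```python
# Pooling sketch; the class-token position (index 0) is an assumption about
# the token layout, not something stated in this README.
cls_embedding = features[:, 0]           # (batch, hidden): class-token embedding
mean_embedding = features.mean(dim=1)    # (batch, hidden): mean over all tokens
print(cls_embedding.shape, mean_embedding.shape)
```

Either vector can then feed a linear classifier or a nearest-neighbour retrieval index.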

Latest News

🤗 [2025/08] RICE was accepted as a highlight paper at ICCV 2025.
🤗 [2025/04] MLCD bigG has been merged into the Transformers library and can be accessed here.
💖 [2025/02] We have released the MLCD-bigG-14-448px model, which has demonstrated excellent performance within the LLaVA-NeXT framework. You can reproduce these results from here [1], [2].
🎅 [2024/12] We have launched MLCD-Seg-7B, achieving scores of 85.3/81.5 on RefCOCO [testA/B], 82.9/75.6 on RefCOCO+ [testA/B], and 80.5 on RefCOCOg [test].
🤖 [2024/11] We have launched MLCD-Embodied-7B, which reaches the level of GPT-4V in embodied capabilities and possesses excellent general understanding abilities.
🤗 [2024/10] We released MLCD-NeXT-7B on Hugging Face.
🏰 [2024/07] MLCD was accepted to ECCV 2024.
🌍 [2023/03] UNICOM was accepted to ICLR 2023.

MLCD-Embodied

Hugging Face

More details about MLCD-Embodied can be found in the MLCD-Embodied file.

Comparison with LLaVA OneVision-7B and GPT-4

| Dataset | Split | MLCD-Embodied-7B | LLaVA OneVision-7B | GPT-4v | GPT-4o |
|---|---|---|---|---|---|
| Vision Encoder | - | MLCD-ViT-L-14-336px | SigLIP | - | - |
| ChartQA | test | 83.0 | 80.0 | 78.5 | 85.7 |
| DocVQA | test | 91.6 | 87.5 | 88.4 | 92.8 |
| InfoVQA | val | 73.9 | 70.7 | - | - |
| InfoVQA | test | 70.0 | 68.8 | - | - |
| MMMU | val | 47.3 | 48.8 | 56.8 | 69.1 |
| MMStar | test | 58.5 | 61.7 | 57.1 | 63.9 |
| OCRBench | - | 749.0 | 697.0 | 656.0 | 805.0 |
| RealWorldQA | test | 68.9 | 66.3 | 61.4 | 58.6 |
| SeedBench | image | 74.9 | 75.4 | 49.9 | 76.2 |
| MME | test | 578/1603 | 418/1580 | 517/1409 | - |

Multi-Label Cluster Discrimination (MLCD)

Arxiv | Hugging Face

More details about MLCD can be found in the MLCD.md file.

While CLIP models have shown excellence in many tasks via image-text contrastive learning, they often struggle with encoding complex semantic structures within images. To address this limitation, we introduce Multi-Label Cluster Discrimination (MLCD).

MLCD improves upon traditional approaches by clustering the LAION dataset, which contains billions of images, into one million centers and assigning each image its multiple closest clusters as labels. This technique accounts for the presence of multiple objects within a single image. We also introduce a novel multi-label classification loss that handles positive and negative class losses separately, minimizing label ambiguity. Our experiments demonstrate that MLCD achieves state-of-the-art performance in linear probe evaluations. Moreover, MLCD shows significant potential when integrated with multimodal large language models. The following two figures compare the evaluation performance of our model on MLLM benchmarks and the ImageNet linear probe; the model used is ViT-L-14@336px.

[Figure 1: MLLM benchmark comparison] [Figure 2: ImageNet linear probe comparison]
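As a rough illustration of the clustering-and-labelling idea described above, the sketch below assigns each image embedding its k closest cluster centers as pseudo-labels and reduces the positive and negative logits separately. It is a minimal toy sketch under our own assumptions (function names, margin, and scale values are illustrative), not the released training code:

```python
# Toy sketch of multi-label cluster discrimination; all names and
# hyper-parameters here are illustrative, not the official implementation.
import torch
import torch.nn.functional as F

def assign_multi_labels(embeddings, centers, k=3):
    """Assign each embedding the indices of its k nearest cluster centers."""
    sims = F.normalize(embeddings, dim=-1) @ F.normalize(centers, dim=-1).T
    return sims.topk(k, dim=-1).indices                     # (batch, k)

def multi_label_loss(logits, positive_idx, margin=0.3, scale=32.0):
    """Reduce positive and negative logits separately before summing,
    so the k positive clusters of an image do not compete with each other."""
    pos_mask = torch.zeros_like(logits, dtype=torch.bool)
    pos_mask.scatter_(1, positive_idx, True)
    pos_term = F.softplus(-scale * (logits[pos_mask] - margin)).mean()
    neg_term = F.softplus(scale * logits[~pos_mask]).mean()
    return pos_term + neg_term

# Example with random data (the paper clusters LAION into one million centers).
emb = torch.randn(8, 512)
centers = torch.randn(1000, 512)
labels = assign_multi_labels(emb, centers, k=3)
logits = F.normalize(emb, dim=-1) @ F.normalize(centers, dim=-1).T
print(multi_label_loss(logits, labels))
```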

UNICOM

Arxiv | Google Drive

For image representation:

  1. ImageNet pretraining is not universal enough to generalize to diverse open-world objects.
  2. Supervised learning is not scalable because manual annotation of large-scale training data is time-consuming, costly, and even infeasible.
  3. Instance discrimination methods (e.g., CLIP) can hardly encode the semantic structure of the training data, because instance-wise contrastive learning always treats two samples as a negative pair regardless of their semantic similarity.

UNICOM demonstrates superior performance in image retrieval, thanks to its ability to cluster 400 million images into one million pseudo classes using joint textual and visual features extracted by the CLIP model. Additionally, our use of a margin-based softmax loss (ArcFace) and random partial class/feature selection (PartialFC) enhances the robustness and compactness of the feature embedding. Our method outperforms state-of-the-art unsupervised and supervised image retrieval approaches, making it a powerful tool for researchers and practitioners in the field.
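For illustration only, the following sketch combines an ArcFace-style additive angular margin with random partial sampling of the pseudo-class centers in the spirit of PartialFC. The sampling scheme, margin, and scale values are our own simplifications, not the released training code:

```python
# Toy margin-based softmax over a sampled subset of pseudo-class centers;
# names and hyper-parameters are illustrative simplifications.
import torch
import torch.nn.functional as F

def partial_margin_softmax_loss(features, labels, centers,
                                sample_ratio=0.1, margin=0.5, scale=64.0):
    num_classes = centers.size(0)
    # Keep every positive class, then pad with randomly chosen negatives.
    positives = labels.unique()
    num_sampled = max(int(num_classes * sample_ratio), positives.numel())
    perm = torch.randperm(num_classes)
    sampled = torch.cat([positives, perm[~torch.isin(perm, positives)]])[:num_sampled]

    # Remap the global labels to positions inside the sampled subset.
    remap = torch.full((num_classes,), -1, dtype=torch.long)
    remap[sampled] = torch.arange(sampled.numel())
    local_labels = remap[labels]

    # Cosine logits with an additive angular margin on the target class.
    cos = F.normalize(features, dim=-1) @ F.normalize(centers[sampled], dim=-1).T
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(local_labels, sampled.numel()).bool()
    cos = torch.where(target, torch.cos(theta + margin), cos)
    return F.cross_entropy(scale * cos, local_labels)

# Example with random data (the paper uses one million pseudo classes).
feats = torch.randn(8, 512)
centers = torch.randn(10_000, 512)
labels = torch.randint(0, 10_000, (8,))
print(partial_margin_softmax_loss(feats, labels, centers))
```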

For detailed instructions, please refer to the UNICOM Documentation.

Contributors

Thanks so much to all of our amazing contributors!

jiankangdeng (Jiankang Deng), daixiangzi, anxiangsir (Xiang An), yiyexy, xiaranqing, SNHIPOW (Athinklo), tanhuajie, ZhaoYan-ai, wkzhang636

Dataset Contributors

This project would not have been possible without the invaluable contributions of the following individuals, who have been instrumental in data scraping and collection.
Thank you to all the contributors for their hard work and dedication!

| Contributor | Email |
|---|---|
| Bin Qin | skyqin@gmail.com |
| Lan Wu | bah-wl@hotmail.com |
| Haiqiang Jiang | haiqiangjiang@deepglint.com |
| Yuling Wu | yulingwu@deepglint.com |

Citation

```bibtex
@inproceedings{yinxie_2025_rice,
  title={Region-based Cluster Discrimination for Visual Representation Learning},
  author={Xie, Yin and Yang, Kaicheng and An, Xiang and Wu, Kun and Zhao, Yongle and Deng, Weimo and Ran, Zimin and Wang, Yumeng and Feng, Ziyong and Roy, Miles and Ismail, Elezi and Deng, Jiankang},
  booktitle={ICCV},
  year={2025}
}
@inproceedings{anxiang_2024_mlcd,
  title={Multi-label Cluster Discrimination for Visual Representation Learning},
  author={An, Xiang and Yang, Kaicheng and Dai, Xiangzi and Feng, Ziyong and Deng, Jiankang},
  booktitle={ECCV},
  year={2024}
}
@inproceedings{anxiang_2023_unicom,
  title={Unicom: Universal and Compact Representation Learning for Image Retrieval},
  author={An, Xiang and Deng, Jiankang and Yang, Kaicheng and Li, Jiawei and Feng, Ziyong and Guo, Jia and Yang, Jing and Liu, Tongliang},
  booktitle={ICLR},
  year={2023}
}
@inproceedings{anxiang_2022_partialfc,
  title={Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC},
  author={An, Xiang and Deng, Jiankang and Guo, Jia and Feng, Ziyong and Zhu, XuHan and Yang, Jing and Liu, Tongliang},
  booktitle={CVPR},
  year={2022}
}
@inproceedings{deng_2019_arcface,
  title={Arcface: Additive angular margin loss for deep face recognition},
  author={Deng, Jiankang and Guo, Jia and Xue, Niannan and Zafeiriou, Stefanos},
  booktitle={CVPR},
  year={2019}
}
```

Acknowledgement

We extend our deepest gratitude to the creators and contributors of the following projects:

  1. llava-next: The comprehensive codebase for training Vision-Language Models (VLMs).
  2. lmms-eval: The robust tool for evaluating Vision-Language Models (VLMs).
  3. OpenEQA: A wonderful benchmark for Embodied Question Answering.
  4. RoboVQA: Provides a high-level reasoning model and dataset for robotics.

Their exceptional work has been instrumental to our research and development efforts.

