# multi-modal-fusion

Here are 28 public repositories matching this topic...

[Paper][AAAI 2025] (MyGO) Tokenization, Fusion, and Augmentation: Towards Fine-grained Multi-modal Entity Representation

  • Updated Dec 19, 2024
  • Python

[IEEE TCYB 2023] The first large-scale tracking dataset built by fusing RGB and event cameras.

  • Updated Feb 14, 2025
  • Python

Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Zeta

  • Updated Jan 27, 2025
  • Python

This repository contains the source code for our paper "Husformer: A Multi-Modal Transformer for Multi-Modal Human State Recognition". For more details, please refer to our paper at https://arxiv.org/abs/2209.15182.

  • Updated Jul 2, 2023
  • Python

Code for J. Wang, J. Li, Y. Shi, J. Lai and X. Tan, "AM3Net: Adaptive Mutual-learning-based Multimodal Data Fusion Network," in IEEE TCSVT, 2022. Experiments were conducted on hyperspectral and LiDAR data (the Houston and Trento datasets) and on multispectral and synthetic aperture radar data (the grss-dfc-2007 datasets).

  • Updated Mar 27, 2023
  • Python

PyTorch training code for multi-modal image fusion.

  • Updated Nov 30, 2023
  • Python

[Paper][SIGIR 2024] NativE: Multi-modal Knowledge Graph Completion in the Wild

  • Updated Aug 12, 2024
  • Python

[IV'24] The official implementation of UniBEV

  • Updated Jun 26, 2024
  • Python

[Paper][LREC-COLING 2024] Unleashing the Power of Imbalanced Modality Information for Multi-modal Knowledge Graph Completion

  • Updated Apr 16, 2024
  • Python

The open-source implementation of the model from "Scaling Vision Transformers to 22 Billion Parameters"

  • Updated Jan 27, 2025
  • Python

[CVPR 2023 Workshop@NFVLR] Official PyTorch implementation of "Learning CLIP Guided Visual-Text Fusion Transformer for Video-based Pedestrian Attribute Recognition"

  • Updated Jun 11, 2024
  • Python

Adaptive Confidence Multi-View Hashing

  • Updated Dec 13, 2023
  • Python

[Paper][ICLR 2025] Multiple Heads are Better than One: Mixture of Modality Knowledge Experts for Entity Representation Learning

  • Updated Mar 14, 2025
  • Python

The official implementation of "TFormer: A throughout fusion transformer for multi-modal skin lesion diagnosis"

  • Updated Jan 29, 2024
  • Python

IEEE 802.11n CSI and camera synchronization toolkit.

  • Updated Dec 25, 2024
  • C

[CHI 2021] Hidden emotion detection using multi-modal signals

  • Updated Sep 30, 2021
  • Python

Multi-Modal Attention-based Hierarchical Graph Neural Network for Object Interaction Recommendation in the Internet of Things (IoT)

  • Updated Dec 15, 2021
  • Python
