ChiShengChen/MUSE_EEG

arXiv: 2406.16910

The official repository implementing Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning. We provide a new series of EEG encoders and a similarity-keeping contrastive learning framework that reach the state of the art on the EEG-image zero-shot classification task.

(Figure: paper_img_eeg_music_com_c, framework overview)

Multimodal Similarity-Keeping ContrastivE (MUSE) Learning

(Figure: paper_img_eeg_clip_corr) Details of MUSE. (a) The contrastive learning loss is computed from the EEG encoding and the image encoding. (b)(c) The similarity-keeping loss is derived from the self-batch similarity of each input modality.
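A minimal PyTorch sketch of this two-part objective. The function name muse_loss, the temperature tau, the weight lambda_sk, and the MSE form of the similarity-keeping term are illustrative assumptions, not necessarily the paper's exact formulation:

import torch
import torch.nn.functional as F

def muse_loss(eeg_emb, img_emb, tau=0.07, lambda_sk=1.0):
    """Contrastive + similarity-keeping loss for a batch of paired EEG/image embeddings."""
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)

    # (a) symmetric InfoNCE-style contrastive loss between EEG and image embeddings
    logits = eeg @ img.t() / tau
    labels = torch.arange(eeg.size(0), device=eeg.device)
    contrastive = 0.5 * (F.cross_entropy(logits, labels) +
                         F.cross_entropy(logits.t(), labels))

    # (b)(c) keep the self-batch similarity structures of both modalities aligned
    sim_eeg = eeg @ eeg.t()
    sim_img = img @ img.t()
    similarity_keeping = F.mse_loss(sim_eeg, sim_img)  # MSE is an assumed choice here

    return contrastive + lambda_sk * similarity_keeping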

New EEG encoder series

(Figure: paper_img_model_c, the proposed EEG encoder architectures)

Performance

(Figure: performance comparison)

Datasets

Many thanks to the authors for sharing these datasets!

  1. Things-EEG2

The data we use is the "Raw EEG data" provided here.

EEG pre-processing

Script path

  • ./preprocessing/

Data path

  • raw data: ./Data/Things-EEG2/Raw_data/
  • preprocessed EEG data: ./Data/Things-EEG2/Preprocessed_data_250Hz/

Steps

  1. Pre-process the EEG data of each subject (a minimal sketch follows this list)

    • Modify preprocessing_utils.py as you need:
      • choose channels
      • epoching
      • baseline correction
      • resample to 250 Hz
      • sort by condition
      • Multivariate Noise Normalization (z-score is also OK)
    • Run python preprocessing.py for each subject (one run per subject); note that you need to modify the default in parser.add_argument('--sub', default=<Your_Subject_Want_to_Preprocessing>, type=int).
    • The output files will be saved in ./Data/Things-EEG2/Preprocessed_data_250Hz/.
  2. Get the center images of each test condition (for testing, to contrast with EEG features)

    • Get the images from the original THINGS dataset, but discard the images used in the EEG test sessions.
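As referenced in step 1, here is a minimal per-subject preprocessing sketch using MNE-Python. The file path, raw format, and epoch window are illustrative assumptions; the real values live in preprocessing_utils.py, and the per-channel z-score stands in for multivariate noise normalization:

import mne

# Hypothetical path and format; see preprocessing_utils.py for the real loader.
raw = mne.io.read_raw('./Data/Things-EEG2/Raw_data/sub-01/raw.fif', preload=True)

# Epoching with baseline correction (the window is an assumption).
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id,
                    tmin=-0.2, tmax=0.8, baseline=(None, 0), preload=True)

# Resample to 250 Hz.
epochs.resample(250)

# Simple per-channel z-score as a stand-in for multivariate noise normalization.
x = epochs.get_data()  # shape: (n_epochs, n_channels, n_times)
x = (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)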

Image features from pre-trained models

Script path

  • ./clipvit_feature_extraction/

Data path (following the original dataset settings)

  • raw images: ./Data/Things-EEG2/Image_set/image_set/
  • preprocessed EEG data: ./Data/Things-EEG2/Preprocessed_data_250Hz/
  • features of each image: ./Data/Things-EEG2/DNN_feature_maps/full_feature_maps/model/pretrained-True/
  • packaged features: ./Data/Things-EEG2/DNN_feature_maps/pca_feature_maps/model/pretrained-True/
  • features of condition centers: ./Data/Things-EEG2/Image_set/

Steps

  1. Obtain feature maps from each pre-trained model with obtain_feature_maps_xxx.py (clip, vit, resnet, ...); a sketch of the CLIP case follows this list.
  2. Package all the feature maps into one .npy file with feature_maps_xxx.py.
  3. Obtain feature maps of the center images with center_fea_xxx.py:
    • save the feature maps of each center image into center_all_image_xxx.npy
    • save the feature maps of each condition into center_xxx.npy (used in training)
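A minimal sketch of step 1 for the CLIP case, using the openai/CLIP package. The glob pattern and output filename are illustrative assumptions; the repository's obtain_feature_maps_xxx.py scripts define the actual layout:

import glob
import numpy as np
import torch
import clip
from PIL import Image

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model, preprocess = clip.load('ViT-B/32', device=device)  # pre-trained CLIP ViT

feats = []
# Hypothetical glob; the real scripts follow the dataset's directory layout.
for path in sorted(glob.glob('./Data/Things-EEG2/Image_set/image_set/**/*.jpg', recursive=True)):
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feats.append(model.encode_image(image).cpu().numpy())

np.save('./Data/Things-EEG2/DNN_feature_maps/full_feature_maps/clip_features.npy',
        np.concatenate(feats, axis=0))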

Training and testing

Script path

  • ./model/main_train.py
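At test time, the zero-shot classification contrasts EEG embeddings with the condition-center image features. A minimal sketch of that evaluation step; the function and array names are illustrative, and the actual logic lives in ./model/main_train.py:

import numpy as np

def zero_shot_accuracy(eeg_feats, center_feats, labels):
    """Classify each EEG embedding as the condition whose center image feature is most similar."""
    e = eeg_feats / np.linalg.norm(eeg_feats, axis=1, keepdims=True)
    c = center_feats / np.linalg.norm(center_feats, axis=1, keepdims=True)
    preds = (e @ c.T).argmax(axis=1)  # cosine similarity -> nearest condition center
    return float((preds == labels).mean())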

Star History

(Star history chart)

Reference

The code is modified based on NICE_EEG.

Citation

We hope this code is helpful. We would appreciate it if you cite our paper and the GitHub repository.

@article{chen2024mind,
  title={Mind's Eye: Image Recognition by EEG via Multimodal Similarity-Keeping Contrastive Learning},
  author={Chen, Chi-Sheng and Wei, Chun-Shu},
  journal={arXiv preprint arXiv:2406.16910},
  year={2024}
}
