Electrical Engineering and Systems Science > Signal Processing

arXiv:2205.03239 (eess)
[Submitted on 29 Apr 2022]

Title: Multichannel Synthetic Preictal EEG Signals to Enhance the Prediction of Epileptic Seizures

Abstract: Epilepsy is a chronic neurological disorder affecting 1\% of people worldwide. Deep learning (DL)-based electroencephalogram (EEG) analysis offers the possibility of accurate epileptic seizure (ES) prediction, thereby benefiting patients suffering from epilepsy. To identify the preictal region that precedes seizure onset, a large number of annotated EEG signals are required to train DL algorithms. However, the scarcity of seizure onsets leads to a significant shortage of training data. To overcome this data insufficiency, we propose a preictal artificial signal synthesis algorithm based on a generative adversarial network (GAN) to generate synthetic multichannel preictal EEG samples. A high-quality single-channel architecture, selected through visual and statistical evaluations, is used to train the generators of the multichannel samples. The effectiveness of the synthetic samples is evaluated by comparing ES prediction performance without and with synthetic preictal sample augmentation. With 10$\times$ synthetic sample augmentation, the leave-one-seizure-out cross-validation ES prediction accuracy improves from 73.0\% to 78.0\%, and the corresponding area under the receiver operating characteristic curve improves from 0.676 to 0.704. These results indicate that synthetic preictal samples are effective for enhancing ES prediction performance.
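The abstract describes the GAN-based augmentation only at a high level; the actual single-channel and multichannel architectures are given in the paper, not here. Purely as an illustration of the general pattern (train a GAN on real preictal windows, then enlarge the training set 10$\times$ with synthetic samples), the following is a minimal sketch in PyTorch. The fully connected toy networks and the assumed shapes (18 channels, 1280-sample windows, 100-dimensional latent space) are hypothetical and are not taken from the paper.

```python
# Minimal, generic sketch (not the authors' architecture) of GAN-based
# preictal EEG augmentation. Assumed shapes: 18 channels, 1280-sample windows.
import torch
import torch.nn as nn

CHANNELS, WINDOW, LATENT = 18, 1280, 100  # assumed, not from the paper


class Generator(nn.Module):
    """Maps a latent vector to a synthetic multichannel EEG window."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 256),
            nn.ReLU(),
            nn.Linear(256, CHANNELS * WINDOW),
            nn.Tanh(),  # outputs in [-1, 1], like amplitude-normalized EEG
        )

    def forward(self, z):
        return self.net(z).view(-1, CHANNELS, WINDOW)


class Discriminator(nn.Module):
    """Scores whether a window looks like a real preictal sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(CHANNELS * WINDOW, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # raw logit; BCEWithLogitsLoss applies sigmoid
        )

    def forward(self, x):
        return self.net(x)


def augment(real_preictal, factor=10, epochs=200):
    """Train a small GAN on real preictal windows and return the set
    enlarged by `factor` with synthetic samples (10x in the paper)."""
    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    n = real_preictal.size(0)

    for _ in range(epochs):
        fake = G(torch.randn(n, LATENT))

        # Discriminator step: real -> 1, fake -> 0
        opt_d.zero_grad()
        loss_d = bce(D(real_preictal), torch.ones(n, 1)) + \
                 bce(D(fake.detach()), torch.zeros(n, 1))
        loss_d.backward()
        opt_d.step()

        # Generator step: try to fool the discriminator
        opt_g.zero_grad()
        loss_g = bce(D(fake), torch.ones(n, 1))
        loss_g.backward()
        opt_g.step()

    with torch.no_grad():
        synthetic = G(torch.randn(factor * n, LATENT))
    return torch.cat([real_preictal, synthetic], dim=0)
```

A seizure-prediction classifier would then be trained on the augmented preictal set together with the interictal data and evaluated with leave-one-seizure-out cross validation, as in the paper.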
Comments: 10 pages, 10 figures, 4 tables; accepted to IEEE Transactions on Biomedical Engineering
Subjects: Signal Processing (eess.SP); Machine Learning (cs.LG)
Cite as: arXiv:2205.03239 [eess.SP]
 (or arXiv:2205.03239v1 [eess.SP] for this version)
 https://doi.org/10.48550/arXiv.2205.03239
arXiv-issued DOI via DataCite
Related DOI: https://doi.org/10.1109/TBME.2022.3171982

Submission history

From: Yankun Xu [view email]
[v1] Fri, 29 Apr 2022 03:33:47 UTC (7,715 KB)
Access Paper:

  • View PDF
  • TeX Source
  • Other Formats