1
Schilling A, Gerum R, Boehm C, Rasheed J, Metzner C, Maier A, Reindl C, Hamer H, Krauss P. Deep learning based decoding of single local field potential events. Neuroimage 2024; 297:120696. PMID: 38909761. DOI: 10.1016/j.neuroimage.2024.120696. Received 01/18/2023; Revised 06/12/2024; Accepted 06/18/2024. Open access.
Abstract
How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. Thus, we here demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an autoencoder network to reduce the dimensions of single local field potential (LFP) events and create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; hence, LFP shapes can be used to determine the direction of information flux in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding which so far had only been demonstrated for multi-channel population coding.
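The pipeline the abstract describes (autoencoder compression of single LFP events, clustering of the low-dimensional codes, and decoding of the cluster centroids back to waveform space) can be sketched as follows. This is a minimal, dependency-free illustration on synthetic waveforms, not the authors' implementation: the closed-form optimum of a linear autoencoder (PCA via SVD) stands in for their trained deep autoencoder, and a hand-rolled k-means stands in for their clustering step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-trial "LFP events": noisy copies of two prototype
# waveforms (illustrative stand-ins for real recorded events).
t = np.linspace(0, 1, 64)
protos = np.stack([
    np.sin(2 * np.pi * 3 * t) * np.exp(-3 * t),
    -np.sin(2 * np.pi * 5 * t) * np.exp(-5 * t),
])
labels = rng.integers(0, 2, size=200)
X = protos[labels] + 0.1 * rng.normal(size=(200, len(t)))

# Dimensionality reduction: the closed-form optimum of a *linear*
# autoencoder (PCA) stands in for a trained deep autoencoder.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
W = Vt[:2].T                  # 64-dim event -> 2-dim code
Z = (X - mu) @ W              # low-dimensional codes

# Minimal k-means (k = 2) on the codes.
centroids = Z[rng.choice(len(Z), 2, replace=False)]
for _ in range(20):
    d = ((Z[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    assign = d.argmin(axis=1)
    centroids = np.stack([
        Z[assign == k].mean(axis=0) if np.any(assign == k) else centroids[k]
        for k in range(2)
    ])

# "Decode" the cluster centroids back to waveform space to recover the
# prototypical event shapes, as in the paper's reverse-engineering step.
prototypes_hat = centroids @ W.T + mu
```

With well-separated event classes, the decoded centroids closely track the underlying prototype waveforms; a nonlinear autoencoder would be needed when the event manifold is curved.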
Affiliation(s)
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Richard Gerum
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Department of Physics and Center for Vision Research, York University, Toronto, Canada
- Claudia Boehm
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Jwan Rasheed
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Claus Metzner
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Caroline Reindl
- Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Hajo Hamer
- Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Patrick Krauss
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
2
Yue H, Chen Z, Guo W, Sun L, Dai Y, Wang Y, Ma W, Fan X, Wen W, Lei W. Research and application of deep learning-based sleep staging: Data, modeling, validation, and clinical practice. Sleep Med Rev 2024; 74:101897. PMID: 38306788. DOI: 10.1016/j.smrv.2024.101897. Received 10/02/2023; Revised 12/30/2023; Accepted 01/04/2024.
Abstract
Over the past few decades, researchers have attempted to simplify and accelerate the process of sleep stage classification through various approaches; however, only a few such approaches have gained widespread acceptance. Artificial intelligence technology, particularly deep learning, is a promising route to earning the trust of the sleep medicine community in automated sleep-staging systems, thus facilitating its application in clinical practice and its integration into daily life. We aimed to comprehensively review the latest methods that apply deep learning to enhance sleep staging efficiency and accuracy. Starting from the requisite "data" for constructing deep learning algorithms, we elucidated the current landscape of this domain and summarized the fundamental modeling process, encompassing signal selection, data pre-processing, model architecture, classification tasks, and performance metrics. Furthermore, we reviewed the applications of automated sleep staging in scenarios such as sleep-disorder screening, diagnostic procedures, and health monitoring and management. Finally, we conducted an in-depth analysis and discussion of the challenges and future directions in intelligent sleep staging, particularly focusing on large-scale sleep datasets, interdisciplinary collaborations, and human-computer interactions.
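The fundamental modeling process the review summarizes (signal selection, pre-processing into fixed-length epochs, feature extraction, and stage classification) can be illustrated with a toy two-stage example. The sketch below makes strong simplifying assumptions and is not any method from the review: the epochs are synthetic single-channel signals, the features are hand-crafted band powers, and a nearest-centroid rule stands in for a deep network. Only the 30-s epoch length and the delta (0.5-4 Hz) / alpha (8-12 Hz) band definitions follow standard sleep-scoring convention.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, epoch_s = 100, 30                 # 100 Hz sampling; 30-s epochs (AASM convention)
n = fs * epoch_s
t = np.arange(n) / fs

def make_epoch(stage):
    # Synthetic single-channel EEG: alpha-dominated wake ("W") vs
    # delta-dominated deep sleep ("N3"). Illustrative amplitudes only.
    if stage == "W":
        sig = np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))         # ~10 Hz alpha
    else:
        sig = 2.0 * np.sin(2 * np.pi * 1.5 * t + rng.uniform(0, 2 * np.pi))  # ~1.5 Hz delta
    return sig + 0.5 * rng.normal(size=n)

def band_power(x, lo, hi):
    """Relative spectral power in [lo, hi) Hz from an FFT periodogram."""
    f = np.fft.rfftfreq(len(x), 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[(f >= lo) & (f < hi)].sum() / p.sum()

def features(x):
    # Pre-processing/feature step: delta and alpha relative power.
    return np.array([band_power(x, 0.5, 4), band_power(x, 8, 12)])

stages = ["W", "N3"] * 50
X = np.stack([features(make_epoch(s)) for s in stages])
y = np.array([s == "N3" for s in stages])

# Classification step: nearest-centroid rule (deep-network stand-in).
c_w, c_n3 = X[~y].mean(axis=0), X[y].mean(axis=0)
pred = np.linalg.norm(X - c_n3, axis=1) < np.linalg.norm(X - c_w, axis=1)
accuracy = (pred == y).mean()
```

Real sleep staging replaces each stand-in with the harder version: multi-channel PSG signals, learned representations instead of two band powers, five stages instead of two, and subject-wise validation.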
Affiliation(s)
- Huijun Yue
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Zhuqi Chen
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Wenbin Guo
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Lin Sun
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Yidan Dai
- School of Computer Science, South China Normal University, Guangzhou, People's Republic of China
- Yiming Wang
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Wenjun Ma
- School of Computer Science, South China Normal University, Guangzhou, People's Republic of China
- Xiaomao Fan
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, People's Republic of China
- Weiping Wen
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China; Department of Otolaryngology, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
- Wenbin Lei
- Otorhinolaryngology Hospital, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, People's Republic of China
3
Khan SU, Jan SU, Koo I. Robust Epileptic Seizure Detection Using Long Short-Term Memory and Feature Fusion of Compressed Time-Frequency EEG Images. Sensors (Basel) 2023; 23:9572. PMID: 38067944. PMCID: PMC10708722. DOI: 10.3390/s23239572. Received 10/11/2023; Revised 11/27/2023; Accepted 11/28/2023.
Abstract
Epilepsy is a prevalent neurological disorder with considerable risks, including physical impairment and irreversible brain damage from seizures. Given these challenges, prompt and accurate seizure detection is urgent. Traditionally, experts have relied on manual EEG signal analysis for seizure detection, which is labor-intensive and prone to human error. Recognizing this limitation, deep learning methods have emerged as a promising avenue, offering more refined diagnostic precision. However, many models focus narrowly on specific domains, which can diminish their robustness and precision in complex real-world environments. This paper presents a model that integrates salient features from the time-frequency domain with pivotal statistical attributes derived from EEG signals. The fusion process combines essential statistics, including the mean, median, and variance, with the rich information in compressed continuous wavelet transform (CWT) time-frequency images processed using autoencoders. This multidimensional feature set provides a robust foundation for the subsequent analytic steps. A long short-term memory (LSTM) network, optimized for the well-known Bonn Epilepsy dataset, classifies the fused features. Preliminary evaluations underscore the strength of the proposed model: 100% accuracy in most of the binary classifications, over 95% accuracy in the three-class and four-class tasks, and over 93.5% in the five-class classification.
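The feature-fusion idea can be sketched as follows: statistical attributes (mean, median, variance) are concatenated with a compressed time-frequency representation and fed to a classifier. This is not the paper's implementation: a hand-rolled Morlet scalogram plays the role of the CWT images, simple mean-pooling over time replaces the autoencoder compression, a logistic-regression head stands in for the LSTM, and the "seizure" and "normal" signals are synthetic, with only the 173.61 Hz sampling rate taken from the Bonn dataset.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, dur = 173.61, 4.0                 # Bonn EEG sampling rate; 4-s analysis windows
n = int(fs * dur)
t = np.arange(n) / fs

def make_signal(seizure):
    # Toy stand-ins: rhythmic high-amplitude ~3 Hz activity for "seizure",
    # broadband noise for "normal". Not real Bonn data.
    x = rng.normal(size=n)
    if seizure:
        x += (4 + rng.normal()) * np.sin(2 * np.pi * 3 * t)
    return x

def morlet_scalogram(x, freqs, w=5.0):
    """Magnitude time-frequency image via complex-Morlet convolution."""
    rows = []
    for f in freqs:
        s = w * fs / (2 * np.pi * f)                       # scale in samples
        k = np.arange(-int(4 * s), int(4 * s) + 1)
        wavelet = np.exp(1j * w * k / s) * np.exp(-k ** 2 / (2 * s ** 2)) / np.sqrt(s)
        rows.append(np.abs(np.convolve(x, wavelet, mode="same")))
    return np.stack(rows)                                  # (n_freqs, n_samples)

def fused_features(x):
    stats = np.array([x.mean(), np.median(x), x.var()])    # statistical branch
    tf = morlet_scalogram(x, freqs=[2, 4, 8, 16, 32])      # time-frequency branch
    compressed = tf.mean(axis=1)                           # crude "compression" step
    return np.concatenate([stats, compressed])             # fused feature vector

y = np.array([0, 1] * 30)
X = np.stack([fused_features(make_signal(bool(lbl))) for lbl in y])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)                  # standardize features

# Logistic-regression head trained by gradient descent (LSTM stand-in).
wgt, b = np.zeros(Xn.shape[1]), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(Xn @ wgt + b)))
    g = p - y
    wgt -= 0.1 * Xn.T @ g / len(y)
    b -= 0.1 * g.mean()
accuracy = ((p > 0.5) == y.astype(bool)).mean()
```

The point of the fusion step is that the two branches fail differently: raw statistics are cheap but blind to rhythm, while the time-frequency branch captures the oscillatory signature of seizure activity; concatenating both gives the downstream classifier complementary evidence.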
Affiliation(s)
- Shafi Ullah Khan
- Department of Electrical Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea
- Sana Ullah Jan
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, UK
- Insoo Koo
- Department of Electrical Electronic and Computer Engineering, University of Ulsan, Ulsan 44610, Republic of Korea