Yoo SH, Santosa H, Kim CS, Hong KS. Decoding Multiple Sound-Categories in the Auditory Cortex by Neural Networks: An fNIRS Study. Front Hum Neurosci 2021;15:636191. [PMID: 33994978; PMCID: PMC8113416; DOI: 10.3389/fnhum.2021.636191]
[Received: 12/01/2020] [Accepted: 03/31/2021]
Abstract
This study aims to decode the hemodynamic responses (HRs) evoked by multiple sound-categories using functional near-infrared spectroscopy (fNIRS). Six different sound-categories were used as stimuli: English, non-English, annoying, nature, music, and gunshot. Oxy-hemoglobin (HbO) concentration changes were measured over both hemispheres of the auditory cortex while 18 healthy subjects listened to 10-s blocks of the six sound-categories. Long short-term memory (LSTM) networks were used as the classifier. The classification accuracy was 20.38 ± 4.63% for six-class classification. Although the LSTM networks performed only slightly above the chance level of 16.67% (1/6), it is noteworthy that the data could be classified subject-wise without feature selection.
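To illustrate the kind of model the abstract describes, the following is a minimal sketch of an LSTM forward pass used to map a multichannel HbO time series to six class logits. The channel count, window length, and hidden size are illustrative assumptions (the paper's actual montage and hyperparameters are not given here), and the weights are random rather than trained, so the output class is meaningless; the sketch only shows the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions, not the paper's actual configuration:
N_CHANNELS = 16    # assumed fNIRS channels over both auditory cortices
N_TIMESTEPS = 100  # assumed samples per 10-s stimulus block
N_CLASSES = 6      # English, non-English, annoying, nature, music, gunshot
HIDDEN = 32        # assumed LSTM hidden size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Randomly initialised LSTM weights (input, forget, cell, and output
# gates stacked row-wise), plus a linear read-out layer.
W = rng.standard_normal((4 * HIDDEN, N_CHANNELS)) * 0.1
U = rng.standard_normal((4 * HIDDEN, HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)
W_out = rng.standard_normal((N_CLASSES, HIDDEN)) * 0.1

def lstm_classify(x):
    """x: (time, channels) HbO series -> class logits of shape (N_CLASSES,)."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for t in range(x.shape[0]):
        z = W @ x[t] + U @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # cell state update
        h = o * np.tanh(c)           # hidden state update
    return W_out @ h                 # logits from final hidden state

trial = rng.standard_normal((N_TIMESTEPS, N_CHANNELS))  # synthetic trial
logits = lstm_classify(trial)
print(logits.shape)  # (6,)
```

In a trained version, the logits would be passed through a softmax and the weights fit per subject, which is what classifying "subject-wise without feature selection" refers to: the raw HbO time series is fed directly to the network.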