1
Li KH, Chien CY, Tai SY, Chan LP, Chang NC, Wang LF, Ho KY, Lien YJ, Ho WH. Prognosis Prediction of Sudden Sensorineural Hearing Loss Using Ensemble Artificial Intelligence Learning Models. Otol Neurotol 2024; 45:759-764. [PMID: 38918073 DOI: 10.1097/mao.0000000000004241]
Abstract
OBJECTIVE We used simple variables to construct prognostic prediction ensemble learning models for patients with sudden sensorineural hearing loss (SSNHL). STUDY DESIGN Retrospective study. SETTING Tertiary medical center. PATIENTS 1,572 patients with SSNHL. INTERVENTION Prognostic. MAIN OUTCOME MEASURES We selected four variables, namely, age, days after onset of hearing loss, vertigo, and type of hearing loss. We also compared the accuracy of ensemble learning models based on the boosting, bagging, AdaBoost, and stacking algorithms. RESULTS We enrolled 1,572 patients with SSNHL; 73.5% of them showed improvement and 26.5% did not. Significant between-group differences were noted in terms of age (p = 0.011), days after onset of hearing loss (p < 0.001), and concurrent vertigo (p < 0.001), indicating that the patients who responded to treatment were younger, presented sooner after onset, and had fewer vertigo symptoms. Among the ensemble learning models, the AdaBoost algorithm achieved the highest accuracy (82.89%), precision (86.66%), F1 score (89.20), and area under the receiver operating characteristic curve (0.79) in tests on a dataset with 10 independent runs. Furthermore, Gini scores indicated that age and days after onset are the two key parameters of the predictive model. CONCLUSIONS The AdaBoost model is an effective model for predicting SSNHL outcome. The use of simple parameters can increase its practicality and applicability in remote medical care. Moreover, age may be a key factor influencing prognosis.
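As a rough illustration of the approach this abstract describes, the snippet below trains an AdaBoost classifier on four synthetic stand-in features (age, days after onset, vertigo, hearing-loss type). The data, thresholds, and outcome rule are invented for the sketch and do not reproduce the study's cohort:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1572  # cohort size reported in the abstract
# Hypothetical features: age, days after onset, vertigo (0/1), audiogram type (0-3)
X = np.column_stack([
    rng.normal(55, 15, n),    # age
    rng.integers(0, 30, n),   # days after onset of hearing loss
    rng.integers(0, 2, n),    # concurrent vertigo
    rng.integers(0, 4, n),    # type of hearing loss
])
# Synthetic outcome loosely tied to age and treatment delay, as the abstract suggests
y = ((X[:, 0] / 100 + X[:, 1] / 30 + rng.normal(0, 0.4, n)) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          random_state=0, stratify=y)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
importances = clf.feature_importances_  # impurity-based, akin to the Gini scores in the abstract
print(f"accuracy={acc:.3f}  AUC={auc:.3f}")
```

On this synthetic signal the classifier separates the classes well; the point of the sketch is only the four-feature AdaBoost setup, not the reported performance figures.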
Affiliation(s)
- Yu-Jui Lien
- Department of Healthcare Administration and Medical Informatics, Kaohsiung Medical University, Kaohsiung, Taiwan
2
Wang SY, Barrette LX, Ng JJ, Sangal NR, Cannady SB, Brody RM, Bur AM, Brant JA. Predicting reoperation and readmission for head and neck free flap patients using machine learning. Head Neck 2024; 46:1999-2009. [PMID: 38357827 DOI: 10.1002/hed.27690]
Abstract
BACKGROUND To develop machine learning (ML) models predicting unplanned readmission and reoperation among patients undergoing free flap reconstruction for head and neck (HN) surgery. METHODS Data were extracted from the 2012-2019 NSQIP database. eXtreme Gradient Boosting (XGBoost) was used to develop ML models predicting 30-day readmission and reoperation based on demographic and perioperative factors. Models were validated using 2019 data and evaluated. RESULTS Four-hundred and sixty-six (10.7%) of 4333 included patients were readmitted within 30 days of initial surgery. The ML model demonstrated 82% accuracy, 63% sensitivity, 85% specificity, and AUC of 0.78. Nine-hundred and four (18.3%) of 4931 patients underwent reoperation within 30 days of index surgery. The ML model demonstrated 62% accuracy, 51% sensitivity, 64% specificity, and AUC of 0.58. CONCLUSION XGBoost was used to predict 30-day readmission and reoperation for HN free flap patients. Findings may be used to assist clinicians and patients in shared decision-making and improve data collection in future database iterations.
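The XGBoost pipeline summarized above can be approximated with scikit-learn's gradient boosting, used here as a stand-in for the xgboost package, on an imbalanced synthetic dataset. The class weights mimic the ~10.7% readmission rate; everything else is illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# Imbalanced synthetic stand-in for the ~10.7% readmission outcome
X, y = make_classification(n_samples=4000, n_features=20, weights=[0.89],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)   # recall on readmitted patients
specificity = tn / (tn + fp)   # recall on non-readmitted patients
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  AUC={auc:.2f}")
```

Sensitivity and specificity are derived from the confusion matrix exactly as the abstract reports them; with real NSQIP-style data the feature engineering would dominate the work.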
Affiliation(s)
- Stephanie Y Wang
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Louis-Xavier Barrette
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jinggang J Ng
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Neel R Sangal
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Steven B Cannady
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Robert M Brody
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Corporal Michael J. Crescenz VAMC, Philadelphia, Pennsylvania, USA
- Andrés M Bur
- Department of Otolaryngology - Head and Neck Surgery, University of Kansas Medical Center, Kansas City, Kansas, USA
- Jason A Brant
- Department of Otolaryngology - Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Corporal Michael J. Crescenz VAMC, Philadelphia, Pennsylvania, USA
3
Shon S, Lim K, Chae M, Lee H, Choi J. Predicting Sudden Sensorineural Hearing Loss Recovery with Patient-Personalized Seigel's Criteria Using Machine Learning. Diagnostics (Basel) 2024; 14:1296. [PMID: 38928711 PMCID: PMC11202901 DOI: 10.3390/diagnostics14121296]
Abstract
BACKGROUND Accurate prognostic prediction is crucial for managing idiopathic sudden sensorineural hearing loss (ISSHL). Previous ISSHL prognosis models often overlooked individual variability in hearing damage by relying on fixed frequency domains. This study aims to develop models predicting ISSHL prognosis one month after treatment, focusing on patient-specific hearing impairments. METHODS Patient-Personalized Seigel's Criteria (PPSC) were developed to account for patient-specific hearing impairment related to ISSHL criteria. We performed a statistical test to assess the shift in recovery assessment when applying PPSC. The dataset of 581 patients comprised demographic information, health records, laboratory testing, onset and treatment details, and hearing levels. To reduce the models' reliance on hearing-level features, we used only the average hearing level of the impaired frequencies. We then developed, evaluated, and interpreted the models. RESULTS The chi-square test (p-value: 0.106) indicated that the shift in recovery assessment was not statistically significant. The soft-voting ensemble model was most effective, achieving an area under the receiver operating characteristic curve (AUROC) of 0.864 (95% CI: 0.801-0.927), with model interpretation based on SHapley Additive exPlanations (SHAP) values. CONCLUSIONS With PPSC providing a hearing assessment comparable to traditional Seigel's criteria, the developed models successfully predicted ISSHL recovery one month post-treatment by considering patient-specific impairments.
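A minimal sketch of the soft-voting ensemble idea, assuming scikit-learn and wholly synthetic data; the member models and features below are illustrative choices, not the study's:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in sized like the 581-patient dataset
X, y = make_classification(n_samples=581, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# "soft" voting averages the members' predicted probabilities
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB())],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
print(f"AUROC = {auroc:.3f}")
```

Soft voting requires every member to expose `predict_proba`, which is why probabilistic base learners are used here.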
Affiliation(s)
- Sanghyun Shon
- Department of Biomedical Informatics, Korea University College of Medicine, Seoul 02708, Republic of Korea
- Kanghyeon Lim
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Ansan-si 15355, Republic of Korea
- Minsu Chae
- Department of Biomedical Informatics, Korea University College of Medicine, Seoul 02708, Republic of Korea
- Hwamin Lee
- Department of Biomedical Informatics, Korea University College of Medicine, Seoul 02708, Republic of Korea
- June Choi
- Department of Biomedical Informatics, Korea University College of Medicine, Seoul 02708, Republic of Korea
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Ansan-si 15355, Republic of Korea
4
Wu Y, Yao J, Xu XM, Zhou LL, Salvi R, Ding S, Gao X. Combination of static and dynamic neural imaging features to distinguish sensorineural hearing loss: a machine learning study. Front Neurosci 2024; 18:1402039. [PMID: 38933814 PMCID: PMC11201293 DOI: 10.3389/fnins.2024.1402039]
Abstract
Purpose Sensorineural hearing loss (SNHL) is the most common form of sensory deprivation and is often unrecognized by patients, inducing not only auditory but also nonauditory symptoms. Data-driven classifier modeling combining static and dynamic neural imaging features could be used to distinguish SNHL individuals from healthy controls (HCs). Methods We conducted hearing evaluations, neurological scale tests and resting-state MRI on 110 SNHL patients and 106 HCs. A total of 1,267 static and dynamic imaging characteristics were extracted from the MRI data, and three feature-selection methods were computed: the Spearman rank correlation test, least absolute shrinkage and selection operator (LASSO), and the t test combined with LASSO. Linear, polynomial, radial basis function (RBF) kernel and sigmoid support vector machine (SVM) models were chosen as classifiers with fivefold cross-validation. The receiver operating characteristic curve, area under the curve (AUC), sensitivity, specificity and accuracy were calculated for each model. Results SNHL subjects had higher hearing thresholds at each frequency, as well as worse performance on cognitive and emotional evaluations, than HCs. The brain regions selected using LASSO based on static and dynamic features were consistent with the between-group analysis, including auditory and nonauditory areas. The AUCs of the four SVM models (linear, polynomial, RBF and sigmoid) were 0.8075, 0.7340, 0.8462 and 0.8562, respectively. The RBF and sigmoid SVMs had relatively higher accuracy, sensitivity and specificity. Conclusion Our research draws attention to the static and dynamic alterations underlying hearing deprivation. Machine learning-based models may provide useful biomarkers for the classification and diagnosis of SNHL.
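The four-kernel SVM comparison with fivefold cross-validation can be sketched as follows; the synthetic data merely stand in for the selected imaging features, and the sample size echoes the 110 + 106 subjects:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the selected static/dynamic imaging features
X, y = make_classification(n_samples=216, n_features=50, n_informative=10,
                           random_state=0)

results = {}
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    # Standardization matters for kernel SVMs, hence the pipeline
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    results[kernel] = scores.mean()
print(results)
```

`cross_val_score` uses the SVM's decision function for the ROC AUC scorer, so `probability=True` is not needed here.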
Affiliation(s)
- Yuanqing Wu
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jun Yao
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Xiao-Min Xu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Lei-Lei Zhou
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Richard Salvi
- Center for Hearing and Deafness, University at Buffalo, The State University of New York, Buffalo, NY, United States
- Shaohua Ding
- Department of Radiology, The Affiliated Taizhou People's Hospital of Nanjing Medical University, Taizhou School of Clinical Medicine, Nanjing Medical University, Taizhou, China
- Xia Gao
- Department of Otorhinolaryngology Head and Neck Surgery, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
5
Wang CT, Chen TM, Lee NT, Fang SH. AI Detection of Glottic Neoplasm Using Voice Signals, Demographics, and Structured Medical Records. Laryngoscope 2024. [PMID: 38864282 DOI: 10.1002/lary.31563]
Abstract
OBJECTIVE This study investigated whether artificial intelligence (AI) models combining voice signals, demographics, and structured medical records can distinguish glottic neoplasm from benign voice disorders. METHODS We used a primary dataset containing 2-3 s of the vowel "ah", demographics, and 26 items of structured medical records (e.g., symptoms, comorbidity, smoking and alcohol consumption, vocal demand) from 60 patients with pathology-proven glottic neoplasm (i.e., squamous cell carcinoma, carcinoma in situ, and dysplasia) and 1940 patients with benign voice disorders. The validation dataset comprised data from 23 patients with glottic neoplasm and 1331 patients with benign disorders. The AI model combined convolutional neural networks, gated recurrent units, and attention layers. We used 10-fold cross-validation (training-validation-testing: 8-1-1) and preserved the percentage between neoplasm and benign disorders in each fold. RESULTS The AI model using voice signals alone reached an area under the ROC curve (AUC) of 0.631; adding demographics increased this to 0.807. The highest AUC of 0.878 was achieved when combining voice, demographics, and medical records (sensitivity: 0.783, specificity: 0.816, accuracy: 0.815). External validation yielded an AUC of 0.785 (voice plus demographics; sensitivity: 0.739, specificity: 0.745, accuracy: 0.745). Subanalysis showed that AI had higher sensitivity but lower specificity than human assessment (p < 0.01). The accuracy of AI detection with additional medical records was comparable to human assessment (82% vs. 83%, p = 0.78). CONCLUSIONS Voice signal alone was insufficient for AI differentiation between glottic neoplasm and benign voice disorders, but additional demographics and medical records notably improved AI performance and approximated the prediction accuracy of humans. LEVEL OF EVIDENCE NA Laryngoscope, 2024.
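The stratification step the authors describe, preserving the neoplasm-to-benign ratio in every fold, is what scikit-learn's `StratifiedKFold` provides. The sketch below uses the reported 60/1940 class counts with placeholder features:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# 60 neoplasm cases among 2000 patients, as in the primary dataset
y = np.array([1] * 60 + [0] * 1940)
X = np.zeros((len(y), 1))  # placeholder features; only the labels matter here

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_rates = []
for train_idx, test_idx in skf.split(X, y):
    fold_rates.append(y[test_idx].mean())
print(fold_rates)  # every held-out fold keeps the 3% positive rate
```

With 60 positives split across 10 folds, each test fold receives exactly 6 positives out of 200 samples, so the class balance is preserved exactly rather than approximately.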
Affiliation(s)
- Chi-Te Wang
- Department of Otolaryngology Head and Neck Surgery, Far Eastern Memorial Hospital, Taipei, Taiwan
- Center of Artificial Intelligence, Far Eastern Memorial Hospital, Taipei, Taiwan
- Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan
- Tsai-Min Chen
- Graduate Program of Data Science, National Taiwan University and Academia Sinica, Taipei, Taiwan
- Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan
- Nien-Ting Lee
- Center of Artificial Intelligence, Far Eastern Memorial Hospital, Taipei, Taiwan
- Shih-Hau Fang
- Department of Electrical Engineering, Yuan Ze University, Taoyuan, Taiwan
- Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan
6
Chen H, Ma X, Rives H, Serpedin A, Yao P, Rameau A. Trust in Machine Learning Driven Clinical Decision Support Tools Among Otolaryngologists. Laryngoscope 2024; 134:2799-2804. [PMID: 38230948 DOI: 10.1002/lary.31260]
Abstract
BACKGROUND Machine learning driven clinical decision support tools (ML-CDST) are on the verge of being integrated into clinical settings, including in Otolaryngology-Head & Neck Surgery. In this study, we investigated whether such CDST may influence otolaryngologists' diagnostic judgement. METHODS Otolaryngologists were recruited virtually across the United States for this experiment on human-AI interaction. Participants were shown 12 different video-stroboscopic exams from patients with previously diagnosed laryngopharyngeal reflux or vocal fold paresis and asked to determine the presence of disease. They were then exposed to a random diagnosis purportedly resulting from an ML-CDST and given the opportunity to revise their diagnosis. The ML-CDST output was presented with no explanation, a general explanation, or a specific explanation of its logic. The ML-CDST impact on diagnostic judgement was assessed with McNemar's test. RESULTS Forty-five participants were recruited. When participants reported less confidence (268 observations), they were significantly (p = 0.001) more likely to change their diagnostic judgement after exposure to ML-CDST output compared to when they reported more confidence (238 observations). Participants were more likely to change their diagnostic judgement when presented with a specific explanation of the CDST logic (p = 0.048). CONCLUSIONS Our study suggests that otolaryngologists are susceptible to accepting ML-CDST diagnostic recommendations, especially when less confident. Otolaryngologists' trust in ML-CDST output is increased when accompanied with a specific explanation of its logic. LEVEL OF EVIDENCE 2 Laryngoscope, 134:2799-2804, 2024.
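McNemar's test, used above to compare paired pre/post-exposure judgements, reduces to an exact binomial test on the discordant pairs. The sketch below uses only the standard library, and the counts are hypothetical:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from the discordant-pair counts:
    b = pairs that changed one way, c = pairs that changed the other way."""
    n = b + c
    k = min(b, c)
    # Under H0 the discordant pairs split 50/50: double the binomial tail
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Hypothetical counts: 15 clinicians revised toward the CDST, 4 away from it
p_value = mcnemar_exact(15, 4)
print(f"p = {p_value:.4f}")
```

The exact form is preferable to the chi-square approximation when the discordant counts are small, as they often are in reader studies of this size.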
Affiliation(s)
- Hannah Chen
- Sean Parker Institute for the Voice, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Xiaoyue Ma
- Division of Biostatistics, Department of Population Health Sciences, Weill Cornell Medical College, New York, New York, USA
- Hal Rives
- Sean Parker Institute for the Voice, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Aisha Serpedin
- Sean Parker Institute for the Voice, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Peter Yao
- Sean Parker Institute for the Voice, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
- Anaïs Rameau
- Sean Parker Institute for the Voice, Department of Otolaryngology-Head and Neck Surgery, Weill Cornell Medicine, New York, New York, USA
7
Dou Z, Li Y, Deng D, Zhang Y, Pang A, Fang C, Bai X, Bing D. Pure tone audiogram classification using deep learning techniques. Clin Otolaryngol 2024. [PMID: 38745553 DOI: 10.1111/coa.14170]
Abstract
OBJECTIVE Pure tone audiometry has played a critical role in audiology as the initial diagnostic tool, offering vital insights for subsequent analyses. This study aims to develop a robust deep learning framework capable of accurately classifying audiograms across various commonly encountered tasks. DESIGN, SETTING, AND PARTICIPANTS This single-centre retrospective study was conducted in accordance with the STROBE guidelines. A total of 12 518 audiograms were collected from 6259 patients aged between 4 and 96 years, who underwent pure tone audiometry testing between February 2018 and April 2022 at Tongji Hospital, Tongji Medical College, Wuhan, China. Three experienced audiologists independently annotated the audiograms, labelling the hearing loss in degrees, types and configurations of each audiogram. MAIN OUTCOME MEASURES A deep learning framework was developed and utilised to classify audiograms across three tasks: determining the degrees of hearing loss, identifying the types of hearing loss, and categorising the configurations of audiograms. The classification performance was evaluated using four commonly used metrics: accuracy, precision, recall and F1-score. RESULTS The deep learning method consistently outperformed alternative methods, including K-Nearest Neighbors, ExtraTrees, Random Forest, XGBoost, LightGBM, CatBoost and FastAI Net, across all three tasks. It achieved the highest accuracy rates, ranging from 96.75% to 99.85%. Precision values fell within the range of 88.93% to 98.41%, while recall values spanned from 89.25% to 98.38%. The F1-score also exhibited strong performance, ranging from 88.99% to 98.39%. 
CONCLUSIONS This study demonstrated that a deep learning approach could accurately classify audiograms into their respective categories and could contribute to assisting doctors, particularly those lacking audiology expertise or experience, in better interpreting pure tone audiograms, enhancing diagnostic accuracy in primary care settings, and reducing the misdiagnosis rate of hearing conditions. In scenarios involving large-scale audiological data, the automated classification system could be used as a research tool to efficiently provide a comprehensive overview and statistical analysis. In the era of mobile audiometry, our deep learning framework can also help patients quickly and reliably understand their self-tested audiograms, potentially encouraging timely consultations with audiologists for further evaluation and intervention.
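The four reported metrics generalize to multi-class audiogram tasks via macro averaging. A toy example with hypothetical labels, assuming scikit-learn:

```python
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

# Hypothetical labels for a 3-class task (e.g. mild / moderate / severe loss)
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
y_pred = [0, 0, 1, 2, 1, 2, 2, 1, 2, 0]

acc = accuracy_score(y_true, y_pred)
# Macro averaging weights every class equally, regardless of prevalence
prec = precision_score(y_true, y_pred, average="macro")
rec = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
print(acc, prec, rec, f1)
```

Here class 0 is classified perfectly while classes 1 and 2 are partly confused, so the macro scores sit slightly above the 0.8 accuracy would suggest for the minority errors.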
Affiliation(s)
- Zhiyong Dou
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Yingqiang Li
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dongzhou Deng
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yunxue Zhang
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Anran Pang
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Cong Fang
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Xiang Bai
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China
- Dan Bing
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
8
Wang Y, Yao X, Wang D, Ye C, Xu L. A machine learning screening model for identifying the risk of high-frequency hearing impairment in a general population. BMC Public Health 2024; 24:1160. [PMID: 38664666 PMCID: PMC11044481 DOI: 10.1186/s12889-024-18636-1]
Abstract
BACKGROUND Hearing impairment (HI) has become a major public health issue in China. Currently, due to the limitations of primary health care, the gold standard for HI diagnosis (the pure-tone hearing test) is not suitable for large-scale use in community settings. Therefore, the purpose of this study was to develop a cost-effective HI screening model for the general population using machine learning (ML) methods and data gathered from community-based scenarios, aiming to improve the hearing-related health outcomes of community residents. METHODS This study recruited 3371 community residents from 7 health centres in Zhejiang, China. Sixty-eight indicators derived from questionnaire surveys and routine haematological tests were collected and used for modelling. Seven commonly used ML models (naive Bayes (NB), K-nearest neighbours (KNN), support vector machine (SVM), random forest (RF), eXtreme Gradient Boosting (XGBoost), boosting, and least absolute shrinkage and selection operator (LASSO) regression) were adopted and compared to develop the final high-frequency hearing impairment (HFHI) screening model for community residents. The model was constructed with a nomogram to obtain a risk score for the probability of an individual suffering from HFHI. According to the risk score, the population was divided into three risk strata (low, medium and high), and the risk factor characteristics of each dimension under the different strata were identified. RESULTS Among all the algorithms used, the LASSO-based model achieved the best performance on the validation set, attaining an area under the curve (AUC) of 0.868 (95% confidence interval (CI): 0.847-0.889) and reaching precision, specificity and F-score values all greater than 80%. Five demographic indicators, 7 disease-related features, 5 behavioural factors, 2 environmental exposures, 2 hearing cognitive factors, and 13 blood test indicators were identified in the final screening model.
A total of 91.42% (1235/1129) of the subjects in the high-risk group were confirmed to have HI by audiometry, which was 3.99 times greater than that in the low-risk group (22.91%, 301/1314). The high-risk population was mainly characterized as older, low-income and low-educated males, especially those with multiple chronic conditions, noise exposure, poor lifestyle, abnormal blood indices (e.g., red cell distribution width (RDW) and platelet distribution width (PDW)) and liver function indicators (e.g., triglyceride (TG), indirect bilirubin (IBIL), aspartate aminotransferase (AST) and low-density lipoprotein (LDL)). An HFHI nomogram was further generated to improve the operability of the screening model for community applications. CONCLUSIONS The HFHI risk screening model developed based on ML algorithms can more accurately identify residents with HFHI by categorizing them into the high-risk groups, which can further help to identify modifiable and immutable risk factors for residents at high risk of HI and promote their personalized HI prevention or intervention.
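LASSO-style selection, the best performer above, can be sketched with an L1-penalized logistic regression that zeroes out uninformative coefficients. The 68 synthetic indicators and the regularization strength `C` below are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# 68 candidate indicators, only a few truly informative, echoing the study design
X, y = make_classification(n_samples=3371, n_features=68, n_informative=8,
                           random_state=0)
X = StandardScaler().fit_transform(X)  # put coefficients on a comparable scale

# The L1 penalty drives uninformative coefficients to exactly zero (LASSO behaviour);
# C = 0.05 is an illustrative choice, not the study's tuned value
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
lasso.fit(X, y)
selected = np.flatnonzero(lasso.coef_[0])
print(f"{len(selected)} of 68 indicators kept:", selected)

# A linear risk score for one individual is the fitted log-odds,
# which is exactly what a nomogram tabulates point-by-point
risk_score = float(X[0] @ lasso.coef_[0] + lasso.intercept_[0])
```

The surviving coefficients are the model's "selected indicators"; a nomogram is then just a graphical rendering of this weighted sum.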
Affiliation(s)
- Yi Wang
- Department of Epidemiology and Biostatistics, School of Public Health, Hangzhou Normal University, Hangzhou, 311121, Zhejiang, China
- Hangzhou Center for Disease Control and Prevention, Hangzhou, Zhejiang, China
- Xinmeng Yao
- Department of Epidemiology and Biostatistics, School of Public Health, Hangzhou Normal University, Hangzhou, 311121, Zhejiang, China
- Dahui Wang
- Department of Health Management, School of Public Health, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Chengyin Ye
- Department of Health Management, School of Public Health, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Liangwen Xu
- Department of Epidemiology and Biostatistics, School of Public Health, Hangzhou Normal University, Hangzhou, 311121, Zhejiang, China
9
Ghasemzadeh H, Hillman RE, Mehta DD. Toward Generalizable Machine Learning Models in Speech, Language, and Hearing Sciences: Estimating Sample Size and Reducing Overfitting. J Speech Lang Hear Res 2024; 67:753-781. [PMID: 38386017 PMCID: PMC11005022 DOI: 10.1044/2023_jslhr-23-00273]
Abstract
PURPOSE Many studies using machine learning (ML) in speech, language, and hearing sciences rely upon cross-validations with single data splitting. This study's first purpose is to provide quantitative evidence that would incentivize researchers to instead use the more robust data splitting method of nested k-fold cross-validation. The second purpose is to present methods and MATLAB code to perform power analysis for ML-based analysis during the design of a study. METHOD First, the significant impact of different cross-validations on ML outcomes was demonstrated using real-world clinical data. Then, Monte Carlo simulations were used to quantify the interactions among the employed cross-validation method, the discriminative power of features, the dimensionality of the feature space, the dimensionality of the model, and the sample size. Four different cross-validation methods (single holdout, 10-fold, train-validation-test, and nested 10-fold) were compared based on the statistical power and confidence of the resulting ML models. Distributions of the null and alternative hypotheses were used to determine the minimum required sample size for obtaining a statistically significant outcome (5% significance) with 80% power. Statistical confidence of the model was defined as the probability of correct features being selected for inclusion in the final model. RESULTS ML models generated based on the single holdout method had very low statistical power and confidence, leading to overestimation of classification accuracy. Conversely, the nested 10-fold cross-validation method resulted in the highest statistical confidence and power while also providing an unbiased estimate of accuracy. The required sample size using the single holdout method could be 50% higher than what would be needed if nested k-fold cross-validation were used. 
Statistical confidence in the model based on nested k-fold cross-validation was as much as four times higher than the confidence obtained with the single holdout-based model. A computational model, MATLAB code, and lookup tables are provided to assist researchers with estimating the minimum sample size needed during study design. CONCLUSION The adoption of nested k-fold cross-validation is critical for unbiased and robust ML studies in the speech, language, and hearing sciences. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25237045.
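The nested k-fold procedure the authors recommend, an inner loop for hyperparameter tuning wrapped in an outer loop for unbiased evaluation, is a few lines in scikit-learn. The model, grid, and data here are illustrative (the paper's own code is in MATLAB):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Inner loop: 10-fold hyperparameter search on each outer training split
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=10)
# Outer loop: 10-fold estimate of accuracy, untouched by the tuning above
outer_scores = cross_val_score(inner, X, y, cv=10)
print(f"nested-CV accuracy: {outer_scores.mean():.3f} ± {outer_scores.std():.3f}")
```

Because the outer test folds never influence the inner search, the resulting mean is the unbiased accuracy estimate the paper contrasts with single-holdout results.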
Affiliation(s)
- Hamzeh Ghasemzadeh
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston
- Department of Surgery, Harvard Medical School, Boston, MA
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing
- Robert E. Hillman
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston
- Department of Surgery, Harvard Medical School, Boston, MA
- Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA
- MGH Institute of Health Professions, Boston, MA
- Daryush D. Mehta
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston
- Department of Surgery, Harvard Medical School, Boston, MA
- Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA
- MGH Institute of Health Professions, Boston, MA
10
Aghakhani A, Yousefi M, Yekaninejad MS. Machine Learning Models for Predicting Sudden Sensorineural Hearing Loss Outcome: A Systematic Review. Ann Otol Rhinol Laryngol 2024; 133:268-276. [PMID: 37864312 DOI: 10.1177/00034894231206902]
Abstract
BACKGROUND Machine learning (ML) models have been applied in various healthcare fields, including audiology, to predict disease outcomes. The prognosis of sudden sensorineural hearing loss is difficult to predict due to the variable course of the disease. Hence, researchers have attempted to utilize ML models to predict the outcome of patients with sudden sensorineural hearing loss. The objectives of this study were to review the performance of these machine learning models and assess their applicability in real-world settings. METHODS A systematic search was conducted in PubMed, Web of Science and Scopus. Only studies that built machine learning prediction models were included; studies that used algorithms such as logistic regression only for the purpose of adjusting for confounding variables were excluded. The risk of bias was assessed using the Prediction model Risk of Bias Assessment Tool (PROBAST). RESULTS After screening, a total of 7 papers were eligible for synthesis. In total, these studies built 48 ML models. The most commonly utilized algorithms were logistic regression, support vector machine (SVM) and boosting. The area under the receiver operating characteristic curve ranged between 0.59 and 0.915. All of the included studies had a high risk of bias; hence there are concerns regarding their applicability. CONCLUSION Although these models showed great performance and promising results, future studies are still needed before these models can be applied in real-world settings. Future studies should employ multiple cohorts, different feature selection methods, and external validation to further validate the models' applicability.
Affiliation(s)
- Amirhossein Aghakhani
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran
- Milad Yousefi
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Mir Saeed Yekaninejad
- Department of Epidemiology and Biostatistics, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran

11
Zhu M, Gong Q. EEG spectral and microstate analysis originating residual inhibition of tinnitus induced by tailor-made notched music training. Front Neurosci 2023; 17:1254423. [PMID: 38148944] [PMCID: PMC10750374] [DOI: 10.3389/fnins.2023.1254423]
Abstract
Tailor-made notched music training (TMNMT) is a promising therapy for tinnitus. Residual inhibition (RI) is one of the few interventions that can temporarily suppress tinnitus, making it a useful technique for tinnitus research and for exploring tinnitus mechanisms. In this study, the RI effect of TMNMT on tinnitus was investigated using behavioral tests, EEG spectral analysis, and EEG microstate analysis; to our knowledge, this is the first study to investigate the RI effect of TMNMT. A total of 44 participants with tinnitus were assigned, in a single-blind manner, to a TMNMT group (22 participants; ECnm, NMnm, and RInm denote EEG recordings with eyes closed before, during, and after stimulation with TMNMT music, respectively) or a placebo control group (22 participants; ECpb, PBpb, and RIpb denote the corresponding recordings for placebo music). Behavioral tests, spectral analysis (covering the delta, theta, alpha, beta, and gamma frequency bands), and microstate analysis (involving four microstate classes, A to D) were employed to evaluate the RI effect of TMNMT. According to the behavioral tests, TMNMT produced stronger inhibition and a longer inhibition time than placebo. Spectral analysis showed that the RI effect of TMNMT significantly increased the power spectral density (PSD) of the delta and theta bands and significantly decreased the PSD of the alpha2 band. Microstate analysis showed that the RI effect of TMNMT was associated with shorter durations (microstates B and C), higher occurrence (microstates A, C, and D), higher coverage (microstate A), and higher transition probabilities (A to B, A to D, and D to A).
By comparison, the RI effect of placebo significantly decreased the PSD of the alpha2 band, with shorter durations (microstates C and D), higher occurrence (microstates B and C), lower coverage (microstates C and D), and higher transition probabilities (A to B and B to A). The intensity of tinnitus symptoms was significantly positively correlated with the duration of microstate B in five subgroups (ECnm, NMnm, RInm, ECpb, PBpb). Our study provides experimental evidence for, and practical applications of, TMNMT as a novel music therapy for tinnitus. The stronger RI observed with TMNMT supports its potential application in tinnitus treatment. Furthermore, the temporal dynamics of EEG microstates serve as novel functional and trait markers of synchronous brain activity that contribute to a deeper understanding of the neural mechanism underlying TMNMT treatment for tinnitus.
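The band-limited comparisons above rest on estimating power spectral density per frequency band. As a rough illustration of the idea (not the authors' pipeline; the band edges, sampling rate, and synthetic signal below are all assumptions), a periodogram-based band power can be computed in a few lines:

```python
import numpy as np

def band_psd(signal, fs, band):
    """Mean periodogram power spectral density of `signal` within `band` (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

# Illustrative band edges (the alpha1/alpha2 split used in the study is assumed here)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha2": (10, 13)}

rng = np.random.default_rng(0)
fs = 250  # Hz, a typical EEG sampling rate (assumption)
t = np.arange(fs * 4) / fs
# Synthetic "EEG": a 6 Hz (theta-band) oscillation plus noise
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

powers = {name: band_psd(eeg, fs, b) for name, b in BANDS.items()}
# The injected 6 Hz component dominates the theta band
```

In a real analysis, Welch averaging over epochs would replace the raw periodogram, but the band-masking step is the same.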
Affiliation(s)
- Min Zhu
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- School of Medicine, Shanghai University, Shanghai, China

12
Huang GJ, Luo MS, Lu BQ, Li SH. Noninvasive prognostic factors and web predictive tools for idiopathic sudden sensorineural hearing loss. Am J Otolaryngol 2023; 44:103965. [PMID: 37413817] [DOI: 10.1016/j.amjoto.2023.103965]
Abstract
PURPOSE To provide an accessible reference prognosis for idiopathic sudden sensorineural hearing loss (ISSNHL) patients with or without anxiety, we identified independent prognostic factors and developed practical predictive tools that require no invasive tests. METHODS Patients with ISSNHL at our center were enrolled from June 2013 to December 2018. Univariate and multivariate logistic regression analyses were performed to identify independent prognostic factors of complete recovery and overall recovery for ISSNHL, which were then used to develop web nomograms. Discrimination, calibration, and clinical benefit were used to evaluate the performance of the nomograms. RESULTS A total of 704 ISSNHL patients were enrolled. Multivariate logistic regression analysis showed that age, time of onset, gender, affected ear, and degree and type of hearing loss were independent prognostic factors of complete recovery; age, time of onset, affected ear, and type of hearing loss were independent prognostic factors of overall recovery. Web predictive nomograms were developed with excellent discrimination, calibration, and clinical value. CONCLUSION Based on patient data of considerable size, independent noninvasive prognostic factors of complete and overall recovery from ISSNHL were identified. Integrating these factors, practical web predictive nomograms were developed. Using the nomograms, clinicians can provide reference data (the predicted recovery rate) to support prognostic consultation of ISSNHL patients, especially those with anxiety.
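A web nomogram of this kind is, underneath, a multivariable logistic regression whose linear predictor is converted into a recovery probability. The sketch below illustrates that idea on synthetic data; the predictor names echo the paper's factors, but the data, coefficients, and example patients are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 600

# Hypothetical noninvasive predictors (names mirror the paper's factors, data are synthetic)
age = rng.uniform(18, 80, n)
onset_days = rng.uniform(0, 30, n)           # time from onset to treatment
ascending_audiogram = rng.integers(0, 2, n)  # audiogram type, binarized for this sketch

# Synthetic ground truth: younger age, earlier treatment, ascending curve favor recovery
logit = 2.0 - 0.03 * age - 0.08 * onset_days + 1.0 * ascending_audiogram
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, onset_days, ascending_audiogram])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# A "web nomogram" query: predicted recovery probability for a single patient
p_young_early = model.predict_proba([[30, 2, 1]])[0, 1]
p_old_late = model.predict_proba([[75, 28, 0]])[0, 1]
```

The nomogram rendering itself is just a graphical lookup of this predicted probability.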
Affiliation(s)
- Guan-Jiang Huang
- Department of Otorhinolaryngology Head and Neck Surgery, Zhongshan Hospital of Traditional Chinese Medicine, Affiliated to Guangzhou University of Chinese Medicine, Zhongshan, Guangdong, China
- Meng-Si Luo
- Department of Anesthesiology, Zhongshan Hospital of Traditional Chinese Medicine, Affiliated to Guangzhou University of Chinese Medicine, Zhongshan, Guangdong, China
- Biao-Qing Lu
- Department of Otorhinolaryngology Head and Neck Surgery, Zhongshan Hospital of Traditional Chinese Medicine, Affiliated to Guangzhou University of Chinese Medicine, Zhongshan, Guangdong, China
- Shao-Hua Li
- Department of Otorhinolaryngology Head and Neck Surgery, Zhongshan Hospital of Traditional Chinese Medicine, Affiliated to Guangzhou University of Chinese Medicine, Zhongshan, Guangdong, China

13
Balan JR, Rodrigo H, Saxena U, Mishra SK. Explainable machine learning reveals the relationship between hearing thresholds and speech-in-noise recognition in listeners with normal audiograms. J Acoust Soc Am 2023; 154:2278-2288. [PMID: 37823779] [DOI: 10.1121/10.0021303]
Abstract
Some individuals complain of listening-in-noise difficulty despite having a normal audiogram. In this study, machine learning is applied to examine the extent to which hearing thresholds can predict speech-in-noise recognition among normal-hearing individuals. The specific goals were to (1) compare the performance of one standard model (GAM, generalized additive model) and four machine learning models (ANN, artificial neural network; DNN, deep neural network; RF, random forest; XGBoost, extreme gradient boosting), and (2) examine the relative contribution of individual audiometric frequencies and demographic variables in predicting speech-in-noise recognition. Archival data included thresholds (0.25-16 kHz) and speech recognition thresholds (SRTs) from listeners with clinically normal audiograms (n = 764 participants or 1528 ears; age, 4-38 years old). Among the machine learning models, XGBoost performed significantly better than the other methods (mean absolute error; MAE = 1.62 dB). ANN and RF yielded similar performance (MAE = 1.68 and 1.67 dB, respectively), whereas, surprisingly, DNN showed relatively poorer performance (MAE = 1.94 dB). The MAE for GAM was 1.61 dB. SHapley Additive exPlanations revealed that age and the thresholds at 16 kHz and 12.5 kHz, in order of importance, contributed most to SRT. These results suggest the importance of hearing in the extended high frequencies for predicting speech-in-noise recognition in listeners with normal audiograms.
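A minimal sketch of the model-comparison setup, on synthetic data: scikit-learn's GradientBoostingRegressor stands in for XGBoost here (the same boosted-tree family), and mean absolute error on a held-out split is the comparison metric, as in the study. The feature names and effect sizes are assumptions, loosely echoing the finding that extended-high-frequency thresholds carry most of the signal:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000

# Synthetic stand-in for the archival data: thresholds (dB HL) plus age.
# The 12.5/16 kHz columns carry most of the signal, echoing the paper's finding.
age = rng.uniform(4, 38, n)
thr_1k = rng.normal(5, 5, n)
thr_12k5 = rng.normal(10, 8, n)
thr_16k = rng.normal(15, 10, n)
srt = 0.04 * age + 0.02 * thr_1k + 0.08 * thr_12k5 + 0.10 * thr_16k + rng.normal(0, 1, n)

X = np.column_stack([age, thr_1k, thr_12k5, thr_16k])
X_tr, X_te, y_tr, y_te = train_test_split(X, srt, test_size=0.25, random_state=0)

# GradientBoostingRegressor stands in for XGBoost (boosted trees); RF as in the study
models = {
    "boosted_trees": GradientBoostingRegressor(random_state=0),
    "random_forest": RandomForestRegressor(random_state=0),
}
maes = {name: mean_absolute_error(y_te, m.fit(X_tr, y_tr).predict(X_te))
        for name, m in models.items()}
```

The study's SHAP step would then attribute each prediction to the input features; permutation importance is a simpler stand-in when SHAP is unavailable.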
Affiliation(s)
- Jithin Raj Balan
- Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Hansapani Rodrigo
- School of Mathematical and Statistical Sciences, The University of Texas Rio Grande Valley, Edinburg, Texas 78539, USA
- Udit Saxena
- Department of Audiology and Speech-Language Pathology, Gujarat Medical Education and Research Society, Medical College and Hospital, Ahmedabad, 380060, India
- Srikanta K Mishra
- Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas 78712, USA

14
Amanian A, Heffernan A, Ishii M, Creighton FX, Thamboo A. The Evolution and Application of Artificial Intelligence in Rhinology: A State of the Art Review. Otolaryngol Head Neck Surg 2023; 169:21-30. [PMID: 35787221] [PMCID: PMC11110957] [DOI: 10.1177/01945998221110076]
Abstract
OBJECTIVE To provide a comprehensive overview of the applications of artificial intelligence (AI) in rhinology, highlight its limitations, and propose strategies for its integration into surgical practice. DATA SOURCES Medline, Embase, CENTRAL, Ei Compendex, IEEE, and Web of Science. REVIEW METHODS English-language studies from inception until January 2022 focusing on any application of AI in rhinology were included. Study selection was performed independently by 2 authors; discrepancies were resolved by the senior author. Studies were categorized by rhinology theme, and data collection comprised the type of AI utilized, sample size, and outcomes, including accuracy and precision, among others. CONCLUSIONS Overall, 5435 articles were identified. Following abstract and title screening, 130 articles underwent full-text review, and 59 were selected for analysis; 11 were from the gray literature. Articles were stratified into image processing, segmentation, and diagnostics (n = 27); rhinosinusitis classification (n = 14); treatment and disease outcome prediction (n = 8); optimizing surgical navigation and phase assessment (n = 3); robotic surgery (n = 2); olfactory dysfunction (n = 2); and diagnosis of allergic rhinitis (n = 3). Most AI studies were published from 2016 onward (n = 45). IMPLICATIONS FOR PRACTICE This state of the art review highlights the increasing applications of AI in rhinology. Next steps entail multidisciplinary collaboration to ensure data integrity, ongoing validation of AI algorithms, and integration into clinical practice. Future research should be tailored to the interplay of AI with robotics and surgical education.
Affiliation(s)
- Ameen Amanian
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Austin Heffernan
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada
- Masaru Ishii
- Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Andrew Thamboo
- Division of Otolaryngology–Head and Neck Surgery, Department of Surgery, University of British Columbia, Vancouver, Canada

15
Uhm TW, Yi S, Choi SW, Oh SJ, Kong SK, Lee IW, Lee HM. Hearing recovery prediction and prognostic factors of idiopathic sudden sensorineural hearing loss: a retrospective analysis with a deep neural network model. Braz J Otorhinolaryngol 2023; 89:101273. [PMID: 37307713] [PMCID: PMC10391245] [DOI: 10.1016/j.bjorl.2023.04.001]
Abstract
OBJECTIVE Idiopathic Sudden Sensorineural Hearing Loss (ISSHL) is an otologic emergency, and early prediction of prognosis may facilitate proper treatment. We therefore investigated prognostic factors for predicting recovery in patients with ISSHL treated with a combined treatment method, using machine learning models. METHODS We retrospectively reviewed the medical records of 298 patients with ISSHL at a tertiary medical institution between January 2015 and September 2020. Fifty-two variables were analyzed to predict hearing recovery. Recovery was defined using Siegel's criteria, and the patients were categorized into recovery and non-recovery groups. Recovery was predicted by various machine learning models, and prognostic factors were analyzed using the difference in the loss function. RESULTS There were significant differences between the recovery and non-recovery groups in age, hypertension, previous hearing loss, ear fullness, duration of hospital admission, initial hearing level of the affected and unaffected ears, and post-treatment hearing level. The deep neural network model showed the highest predictive performance (accuracy, 88.81%; area under the receiver operating characteristic curve, 0.9448). The initial hearing levels of the affected and unaffected ears and the post-treatment (2-week) hearing level of the affected ear were significant factors for predicting prognosis. CONCLUSION The deep neural network model showed the highest predictive performance for recovery in patients with ISSHL, and several factors with prognostic value were identified. Further studies using a larger patient population are warranted. LEVEL OF EVIDENCE Level 4.
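The evaluation loop described here, training a neural network classifier and reporting accuracy and AUC on held-out data, can be sketched as follows. The predictors mirror the abstract's significant factors, but the data and effect sizes are synthetic assumptions, and a small scikit-learn MLP stands in for the paper's deep neural network:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 800

# Synthetic stand-ins for the paper's strongest predictors (hypothetical effect sizes)
initial_affected = rng.normal(70, 20, n)             # initial hearing level, affected ear (dB HL)
initial_unaffected = rng.normal(15, 8, n)
post_2wk = initial_affected - rng.uniform(0, 40, n)  # hearing level 2 weeks post-treatment

# Larger early gain and milder initial loss favor recovery (1 = recovery, Siegel's criteria)
logit = 0.15 * (initial_affected - post_2wk) - 0.05 * (initial_affected - 50)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([initial_affected, initial_unaffected, post_2wk])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)

acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```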
Affiliation(s)
- Tae Woong Uhm
- Department of Statistics, Pukyong National University, Busan, Republic of Korea
- Seongbaek Yi
- Department of Statistics, Pukyong National University, Busan, Republic of Korea
- Sung Won Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
- Se Joon Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
- Soo Keun Kong
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
- Il Woo Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Yangsan Hospital, Yangsan, Gyeongnam, Republic of Korea
- Hyun Min Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Yangsan Hospital, Yangsan, Gyeongnam, Republic of Korea

16
Li Y, Zhou X, Dou Z, Deng D, Bing D. Clinical features and prognosis of pediatric idiopathic sudden sensorineural hearing loss: A bi-center retrospective study. Front Neurol 2023; 14:1121656. [PMID: 37006497] [PMCID: PMC10050692] [DOI: 10.3389/fneur.2023.1121656]
Abstract
OBJECTIVE Limited research has focused on the clinical features of sudden sensorineural hearing loss (SSNHL) in pediatric patients. This study aimed to investigate the relationship between clinical features and the baseline hearing severity and outcomes of SSNHL in the pediatric population. METHODS We conducted a bi-center retrospective observational study of 145 SSNHL patients aged 18 years or younger who were recruited between November 2013 and October 2022. Data extracted from medical records, audiograms, complete blood count (CBC), and coagulation tests were assessed for their relationship with severity (the initial hearing threshold) and outcomes (recovery rate, hearing gain, and the final hearing threshold). RESULTS A lower lymphocyte count (P = 0.004) and a higher platelet-to-lymphocyte ratio (PLR) (P = 0.041) were found in the patients with profound initial hearing loss than in the less severe group. Vertigo (β = 13.932, 95% CI: 4.082-23.782, P = 0.007) and lymphocyte count (β = -6.686, 95% CI: -10.919 to -2.454, P = 0.003) were significantly associated with the initial hearing threshold. In the multivariate logistic model, the probability of recovery was higher for patients with ascending and flat audiograms than for those with descending audiograms (ascending: OR 8.168, 95% CI 1.450-70.143, P = 0.029; flat: OR 3.966, 95% CI 1.341-12.651, P = 0.015). Patients with tinnitus had a 3.2-fold increase in the probability of recovery (OR 3.222, 95% CI 1.241-8.907, P = 0.019), while the baseline hearing threshold (OR 0.968, 95% CI 0.936-0.998, P = 0.047) and the delay to the onset of therapy (OR 0.942, 95% CI 0.890-0.977, P = 0.010) were negatively associated with the odds of recovery. CONCLUSIONS The present study showed that accompanying tinnitus, the severity of initial hearing loss, the time elapsed before therapy, and the audiogram configuration may be related to the prognosis of pediatric SSNHL, while the presence of vertigo, lower lymphocyte counts, and higher PLR were associated with greater severity.
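The odds ratios quoted in such abstracts are the exponentiated coefficients of the logistic model, with confidence limits derived from each coefficient's standard error. A minimal sketch (the coefficient and standard error below are hypothetical, chosen only to land near the tinnitus OR reported above):

```python
import numpy as np

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic-regression coefficient and its standard error."""
    return np.exp(beta), (np.exp(beta - z * se), np.exp(beta + z * se))

# Hypothetical numbers: a coefficient of 1.17 gives OR = exp(1.17) ≈ 3.22,
# in the same ballpark as the abstract's tinnitus OR 3.222 (95% CI 1.241-8.907)
or_tinnitus, (lo, hi) = odds_ratio_ci(beta=1.17, se=0.52)
```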
Affiliation(s)
- Yingqiang Li
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xiaowei Zhou
- Otological Department, The First People's Hospital of Foshan, Foshan, China
- Zhiyong Dou
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Dongzhou Deng
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dan Bing
- Department of Otolaryngology-Head and Neck Surgery, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Correspondence: Dan Bing

17
Liu X, Teng L, Zuo W, Zhong S, Xu Y, Sun J. Deafness gene screening based on a multilevel cascaded BPNN model. BMC Bioinformatics 2023; 24:56. [PMID: 36803022] [PMCID: PMC9942297] [DOI: 10.1186/s12859-023-05182-7]
Abstract
Sudden sensorineural hearing loss is a common condition in otolaryngology. Existing studies have shown that it is closely associated with mutations in genes for inherited deafness. To identify deafness-associated genes, researchers have mostly relied on biological experiments, which are accurate but time-consuming and laborious. In this paper, we propose a computational method based on machine learning to predict deafness-associated genes. The model cascades several basic backpropagation neural networks (BPNNs) into a multilevel BPNN model, which showed a stronger ability to screen deafness-associated genes than a conventional BPNN. A total of 211 of 214 deafness-associated genes from the deafness variant database (DVD v9.0) were used as positive data, and 2,110 genes extracted from chromosomes were used as negative data to train our model. The test achieved a mean AUC higher than 0.98. Furthermore, to illustrate the model's predictive performance on suspected deafness-associated genes, we analyzed the remaining 17,711 genes in the human genome and screened the 20 highest-scoring genes as highly suspected deafness-associated genes. Three of these 20 predicted genes have been mentioned as deafness-associated genes in the literature. The analysis shows that our approach can screen highly suspected deafness-associated genes from a large number of genes, and our predictions could be valuable for future research and discovery of deafness-associated genes.
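The exact cascade topology is not specified in this summary; one plausible reading, sketched below on synthetic data, is a stacking-style cascade in which several small BPNNs (multilayer perceptrons trained by backpropagation) score each sample and a second-level BPNN combines their scores. All data, sizes, and the cascade wiring are assumptions for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n, d = 1200, 12  # synthetic "gene feature" matrix: 12 numeric features per gene

X = rng.standard_normal((n, d))
# Label depends on the first three features, plus noise (purely synthetic)
y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Level 1: several small BPNNs trained on the raw features
level1 = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=1500, random_state=s).fit(X_tr, y_tr)
          for s in range(3)]

def level1_scores(X):
    """Stack each level-1 network's positive-class probability as a new feature."""
    return np.column_stack([m.predict_proba(X)[:, 1] for m in level1])

# Level 2: a BPNN cascaded on top of the level-1 scores
level2 = MLPClassifier(hidden_layer_sizes=(4,), max_iter=1500, random_state=0)
level2.fit(level1_scores(X_tr), y_tr)

auc = roc_auc_score(y_te, level2.predict_proba(level1_scores(X_te))[:, 1])
```

In practice the level-2 inputs should come from out-of-fold predictions to avoid optimistic bias; training both levels on the same split, as here, keeps the sketch short.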
Affiliation(s)
- Xiao Liu
- School of Microelectronics and Communication Engineering, Chongqing University, 174 Shapingba District, Chongqing, 400044, China
- Li Teng
- School of Microelectronics and Communication Engineering, Chongqing University, 174 Shapingba District, Chongqing, 400044, China
- Wenqi Zuo
- Department of Otolaryngology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Shixun Zhong
- Department of Otolaryngology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Yuzhong District, Chongqing, 400016, China
- Yuqiao Xu
- School of Microelectronics and Communication Engineering, Chongqing University, 174 Shapingba District, Chongqing, 400044, China
- Jing Sun
- School of Microelectronics and Communication Engineering, Chongqing University, 174 Shapingba District, Chongqing, 400044, China

18
Ensemble filters with harmonize PSO-SVM algorithm for optimal hearing disorder prediction. Neural Comput Appl 2023; 35:10473-10496. [PMID: 36747886] [PMCID: PMC9894525] [DOI: 10.1007/s00521-023-08244-2]
Abstract
Detecting a hearing disorder early is critical for reducing the effects of hearing loss, and approaches that make the most of the remaining hearing ability can then be implemented to support the development of human communication. Recently, the explosive growth of dataset features has made it more complex for audiologists to decide on the proper treatment for a patient. In many cases, data with irrelevant features and improper classifier parameters strongly degrade the accuracy of an audiometry system. These two problems are interdependent: classification accuracy can worsen if feature selection and parameter tuning are conducted independently. Although filter algorithms can eliminate irrelevant features, they do not account for feature interdependence and can therefore select significant features poorly. Improper kernel parameter settings may also degrade accuracy. In this paper, an ensemble of filter feature-selection methods based on Information Gain (IG), Gain Ratio (GR), Chi-squared (CS), and Relief-F (RF), harmonized with joint optimization of Particle Swarm Optimization (PSO) and a Support Vector Machine (SVM), is presented to mitigate these problems. Ensemble filters are utilized so that the initially dominant features relevant to classification are retained; PSO and the SVM are then optimized simultaneously to reach the optimal solution. Results on a standard Audiology dataset show that the proposed method achieves 96.50% accuracy with the optimal solution, compared to a classical SVM, indicating that the proposed method is effective in handling high-dimensional data for hearing disorder prediction.
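A reduced sketch of such a pipeline on synthetic data: two filter scores (chi-squared, plus a mutual-information score standing in for IG/GR/Relief-F, which slot in the same way) are ensembled by taking the union of their top-ranked features, and a tiny manual grid over (C, gamma) stands in for the PSO search. All data, ranks, and parameter grids are assumptions:

```python
import numpy as np
from sklearn.feature_selection import chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n, d, k = 400, 20, 6  # 20 candidate features; keep the union of each filter's top 6

X = rng.random((n, d))  # non-negative features, as chi2 requires
# Only the first three features carry signal (synthetic ground truth)
y = (X[:, 0] + X[:, 1] - X[:, 2] + 0.3 * rng.standard_normal(n) > 0.5).astype(int)

# Ensemble of filters: union of each filter's top-k ranked features
rank_chi2 = np.argsort(chi2(X, y)[0])[::-1][:k]
rank_mi = np.argsort(mutual_info_classif(X, y, random_state=0))[::-1][:k]
selected = np.union1d(rank_chi2, rank_mi)

# PSO would search (C, gamma) here; a small manual grid stands in for it
best = max(
    ((C, g, cross_val_score(SVC(C=C, gamma=g), X[:, selected], y, cv=3).mean())
     for C in (0.1, 1, 10) for g in ("scale", 0.1)),
    key=lambda t: t[2],
)
```

The key design point from the paper survives even in this toy form: feature selection and SVM parameter choice are evaluated together, not in isolation.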
19
Mathews S, Dham R, Dutta A, Jose A. Computational Intelligence in Otorhinolaryngology. J Mar Med Soc 2023. [DOI: 10.4103/jmms.jmms_159_22]
20
Machine Learning in the Management of Lateral Skull Base Tumors: A Systematic Review. J Otorhinolaryngol Hear Balance Med 2022. [DOI: 10.3390/ohbm3040007]
Abstract
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous articles have summarized the current state of ML application to the management and diagnosis of lateral skull base (LSB) tumors. We therefore present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify English-language literature pertaining to the use of ML techniques in LSB tumor surgery. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles applying ML techniques to LSB tumor surgery has increased significantly since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated growth could significantly augment the surgical outcomes and management of LSB tumors.
21
Yuan H, Liu CC, Ma PW, Chen JW, Wang WL, Gao W, Lu PH, Ding XR, Lun YQ, Lu LJ. Systemic steroid administration combined with intratympanic steroid injection in the treatment of a unilateral sudden hearing loss prognosis prediction model: A retrospective observational study. Front Neurol 2022; 13:976393. [PMID: 36203999] [PMCID: PMC9530985] [DOI: 10.3389/fneur.2022.976393]
Abstract
Idiopathic sudden sensorineural hearing loss (ISSNHL) is an emergency ear disease, defined as a sensorineural hearing loss of at least 30 dB over at least three consecutive frequencies occurring within 72 h. Because its etiology, pathogenesis, and prognostic factors remain unclear, current treatment methods are not ideal. Previous studies have developed prognostic models to predict hearing recovery from ISSNHL, but few have incorporated serum biochemical indicators into such models. The aim of this study was to explore the factors influencing the prognosis of combination therapy (combined intratympanic and systemic use of steroids, CT) for ISSNHL, drawing on patient demographics, pre-treatment serum biochemical indicators, and clinical features of ISSNHL; a new prediction model was developed from these factors. From November 2015 to April 2022, 430 patients who underwent CT for ISSNHL at the Department of Otorhinolaryngology Head and Neck Surgery, Tangdu Hospital, Air Force Medical University, were reviewed retrospectively. We found significant differences in age (P = 0.018), glucose (P = 0.035), white blood cell count (WBC) (P = 0.021), vertigo (P < 0.001), and audiogram type (P < 0.001) across therapeutic outcomes. Multivariate logistic regression analysis showed that age (OR = 0.715, P = 0.023), WBC (OR = 0.527, P = 0.01), platelet-to-lymphocyte ratio (PLR) (OR = 0.995, P = 0.038), vertigo (OR = 0.48, P = 0.004), course (time from onset to treatment) (OR = 0.681, P = 0.016), and audiogram type (OR = 0.409, P < 0.001) were independent risk factors for ISSNHL prognosis. Based on these independent risk factors, a predictive model and nomogram were developed to predict hearing outcomes in ISSNHL patients. The area under the curve (AUC) of the model was 0.773 (95% CI = 0.730-0.812), indicating useful predictive ability.
The calibration curve indicated good agreement between the observed therapeutic effectiveness and the predicted probability. The model and nomogram can predict the hearing prognosis of ISSNHL patients treated with CT and can help medical staff make the best clinical decision. This study has been registered under the number ChiCTR2200061379.
22
Iliadou E, Su Q, Kikidis D, Bibas T, Kloukinas C. Profiling hearing aid users through big data explainable artificial intelligence techniques. Front Neurol 2022; 13:933940. [PMID: 36090867] [PMCID: PMC9459083] [DOI: 10.3389/fneur.2022.933940]
Abstract
Debilitating hearing loss (HL) affects ~6% of the human population. Only 20% of the people in need of a hearing assistive device will eventually seek and acquire one, and the number who are satisfied with their hearing aids (HAids) and continue using them in the long term is even lower. Understanding the personal, behavioral, environmental, or other factors that correlate with optimal HAid fitting and with users' experience of HAids is a significant step toward improving patient satisfaction and quality of life while reducing the societal and financial burden. In SMART BEAR we address this need by making use of the capacity of modern HAids to dynamically log their operation and by combining this information with a large amount of information about the medical, environmental, and social context of each HAid user. We are studying hearing rehabilitation through 12-month continuous monitoring of HL patients, collecting data such as participants' demographics, audiometric and medical data, cognitive and mental status, habits, and preferences through a set of medical devices and wearables, as well as through face-to-face and remote clinical assessments and fitting/fine-tuning sessions. Descriptive, AI-based analysis and assessment of the relationships between heterogeneous data and HL-related parameters will help clinical researchers better understand the overall health profiles of HL patients and identify patterns or relations that may prove essential for future clinical trials. In addition, the future state and behavior (e.g., HAid satisfaction and HAid usage) of the patients will be predicted with time-dependent machine learning models to help clinical researchers decide on the nature of interventions.
Explainable Artificial Intelligence (XAI) techniques will be leveraged to better understand the factors that play a significant role in the success of a hearing rehabilitation program, constructing patient profiles. This is a conceptual paper describing the upcoming data collection process and the proposed framework for building a comprehensive profile of patients with HL in the context of the EU-funded SMART BEAR project. Such patient profiles can be invaluable in HL treatment: they can help identify the characteristics that make patients more prone to dropping out and abandoning their HAids, to using their HAids for too little of the day, or to being dissatisfied with their HAid experience. They can also help decrease the number of remote sessions needed with an audiologist for counseling and/or HAid fine-tuning, as well as the number of manual changes of HAid program (an indication of poor sound quality and poor adaptation of the HAid configuration to patients' real needs and daily challenges), leading to reduced healthcare costs.
Affiliation(s)
- Eleftheria Iliadou, 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Qiqi Su, Department of Computer Science, University of London, London, United Kingdom
- Dimitrios Kikidis, 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Thanos Bibas, 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Christos Kloukinas, Department of Computer Science, University of London, London, United Kingdom
23
Formeister EJ, Baum RT, Sharon JD. Supervised machine learning models for classifying common causes of dizziness. Am J Otolaryngol 2022; 43:103402. [PMID: 35221115 DOI: 10.1016/j.amjoto.2022.103402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 02/13/2022] [Indexed: 11/25/2022]
Abstract
PURPOSE The objective of this study was to use a supervised machine learning (ML) platform and a national dataset to identify factors important in classifying common types of dizziness. METHODS Using established clinical criteria and responses to the balance and dizziness supplement of the 2016 National Health Interview Survey (n = 33,028), case definitions for vestibular migraine (VM), benign paroxysmal positional vertigo (BPPV), Ménière's disease (MD), persistent postural-perceptual dizziness (PPPD), superior canal dehiscence (SCD), and bilateral vestibular hypofunction (BVH) were generated. One hundred thirty-six variables consisting of sociodemographic characteristics and medical comorbidities were used to develop decision tree models to predict these common types of dizziness. RESULTS The one-year prevalence of dizziness in the U.S. was 16.8% (5562 respondents). VM was highly prevalent, representing 4.0% of the overall respondents (n = 1327). ML decision tree models correctly classified all 6 dizziness subtypes with high accuracy (sensitivity range, 70-92%; specificity range, 89-99%) using responses to questions about functional limitations due to dizziness, such as falls due to dizziness and modification of social activities due to dizziness. CONCLUSIONS In a large population-based dataset, supervised ML models accurately predicted dizziness subtypes from responses to questions that do not pertain to dizziness symptoms alone.
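To illustrate the kind of tree model this abstract describes, here is a minimal sketch of a depth-1 decision tree ("stump") whose split is chosen by Gini impurity. The feature names and toy rows are invented for illustration and are not the NHIS survey data or the authors' model.

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_stump(X, y):
    """Return (feature_index, threshold) minimizing weighted Gini impurity."""
    n = len(y)
    best = (None, None, float("inf"))
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            left = [y[i] for i in range(n) if X[i][j] <= t]
            right = [y[i] for i in range(n) if X[i][j] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / n
            if score < best[2]:
                best = (j, t, score)
    return best[0], best[1]

# Toy rows: [falls_due_to_dizziness, modified_social_activities] (hypothetical coding)
X = [[1, 1], [1, 0], [0, 0], [0, 0], [1, 1], [0, 1]]
y = ["BPPV", "BPPV", "none", "none", "BPPV", "none"]
feat, thr = best_stump(X, y)  # here the "falls" feature separates the classes
```

Real decision-tree software grows such splits recursively; the single-split search above is the core step.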
24
Prediction of hearing recovery in unilateral sudden sensorineural hearing loss using artificial intelligence. Sci Rep 2022; 12:3977. [PMID: 35273267 PMCID: PMC8913667 DOI: 10.1038/s41598-022-07881-2] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Accepted: 02/28/2022] [Indexed: 11/08/2022] Open
Abstract
Despite the significance of predicting the prognosis of idiopathic sudden sensorineural hearing loss (ISSNHL), no predictive models have been established. This study used artificial intelligence to develop prognosis models to predict recovery from ISSNHL. We retrospectively reviewed the medical data of 453 patients with ISSNHL (men, 220; women, 233; mean age, 50.3 years) who underwent treatment at a tertiary hospital between January 2021 and December 2019 and were followed up after 1 month. According to Siegel's criteria, 203 patients recovered within 1 month. Demographic characteristics, clinical and laboratory data, and pure-tone audiometry were analyzed. Logistic regression (baseline), a support vector machine, extreme gradient boosting, a light gradient boosting machine, and a multilayer perceptron were used. The primary outcome was the area under the receiver operating characteristic curve (AUROC); secondary outcomes were the area under the precision-recall curve, the Brier score, balanced accuracy, and the F1 score. The light gradient boosting machine model had the best AUROC and balanced accuracy. Together with the multilayer perceptron, it was also significantly superior to logistic regression in terms of AUROC. Using the SHapley Additive exPlanation (SHAP) method, we found that the initial audiogram shape is the most important prognostic factor. Machine/deep learning methods were successfully established to predict the prognosis of ISSNHL.
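SHAP values require a dedicated library; as a rough stand-in for the feature-importance idea, the sketch below computes permutation importance: how much a model's accuracy drops when one feature's values are rearranged. The model and data are invented toy examples, not the study's LightGBM model, and a fixed reversal is used instead of a random shuffle so the result is deterministic.

```python
def accuracy(model, X, y):
    """Fraction of rows the model labels correctly."""
    return sum(model(row) == yi for row, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, permuted_order):
    """Accuracy drop after rearranging one feature column."""
    base = accuracy(model, X, y)
    Xp = [row[:] for row in X]
    col = [X[i][feature] for i in permuted_order]
    for i, v in enumerate(col):
        Xp[i][feature] = v
    return base - accuracy(model, Xp, y)

# Toy model: predict recovery (1) when feature 0 (a hypothetical
# "ascending audiogram" flag) is 1; feature 1 is ignored noise.
model = lambda row: 1 if row[0] == 1 else 0
X = [[1, 5], [1, 3], [0, 9], [0, 2]]
y = [1, 1, 0, 0]

drop0 = permutation_importance(model, X, y, 0, [3, 2, 1, 0])  # informative feature
drop1 = permutation_importance(model, X, y, 1, [3, 2, 1, 0])  # irrelevant feature
```

Scrambling the informative feature destroys the accuracy, while scrambling the noise feature changes nothing, which is the intuition behind importance rankings such as the audiogram-shape result above.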
25
Lan L, Chen YC, Shang S, Lu L, Xu JJ, Yin X, Wu Y, Cai Y. Topological features of limbic dysfunction in chronicity of tinnitus with intact hearing: New hypothesis for 'noise-cancellation' mechanism. Prog Neuropsychopharmacol Biol Psychiatry 2022; 113:110459. [PMID: 34666066 DOI: 10.1016/j.pnpbp.2021.110459] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 10/11/2021] [Accepted: 10/12/2021] [Indexed: 12/17/2022]
Abstract
PURPOSE Reorganization of the limbic regions, extending to the general cognitive network, is believed to occur in the chronicity of tinnitus, with particular 'hubs' contributing to a 'noise-cancellation' mechanism. To test this hypothesis, we investigated the topological brain networks of tinnitus in different periods. METHODS Resting-state functional magnetic resonance images were obtained from 32 patients with acute tinnitus, 41 patients with chronic tinnitus, and 60 age- and gender-matched healthy controls (HC). The topological features of their brain networks were explored using graph theory analysis. RESULTS Common small-world attributes were compared among the three groups; all showed significantly increased values of Cp, Lp, and λ (all p < 0.05). Significantly increased nodal centralities in the left superior frontal gyrus and the right precuneus, and significantly decreased nodal centralities in the right inferior temporal gyrus, were observed in acute tinnitus patients compared with HC. For chronic tinnitus patients, there were significantly increased nodal centralities in the left hippocampus, amygdala, and temporal pole, but decreased nodal centralities in the right inferior temporal gyrus. Additionally, significantly higher nodal centralities were found in the bilateral medial superior frontal gyrus in acute tinnitus patients compared with chronic tinnitus patients. Furthermore, alterations in rich-club organization were found in acute and chronic tinnitus patients compared with HC, with increased functional connections between rich-club nodes and peripheral nodes in patients with tinnitus. CONCLUSIONS Brain network topological properties were altered across prefrontal-limbic-subcortical regions in tinnitus. The identified hubs in tinnitus might indicate an emotional and cognitive burden in the 'noise-cancellation' mechanism.
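The small-world measures reported here (clustering coefficient Cp and characteristic path length Lp) are standard graph-theory quantities. A minimal sketch on a hand-made toy network is below; the paper computed these on fMRI-derived brain networks, not on adjacency lists like this one.

```python
from collections import deque

def clustering_coefficient(adj):
    """Mean local clustering coefficient (Cp) of an undirected graph."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbors contribute 0
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def characteristic_path_length(adj):
    """Mean shortest-path length (Lp) over all ordered node pairs, via BFS."""
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

# Toy 4-node network: a triangle (0-1-2) with a pendant node 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cp = clustering_coefficient(adj)
lp = characteristic_path_length(adj)
```

A network is "small-world" when Cp is high relative to a random graph while Lp stays comparably short; λ in the abstract refers to the normalized path-length ratio.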
Affiliation(s)
- Liping Lan, Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, China
- Yu-Chen Chen, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Song'an Shang, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Liyan Lu, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jin-Jing Xu, Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Xindao Yin, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Yuanqing Wu, Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Yuexin Cai, Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, Guangdong Province, China
26
Lan L, Liu Y, Wu Y, Xu ZG, Xu JJ, Song JJ, Salvi R, Yin X, Chen YC, Cai Y. Specific brain network predictors of interventions with different mechanisms for tinnitus patients. EBioMedicine 2022; 76:103862. [PMID: 35104784 PMCID: PMC8814370 DOI: 10.1016/j.ebiom.2022.103862] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Revised: 01/17/2022] [Accepted: 01/18/2022] [Indexed: 01/22/2023] Open
Abstract
BACKGROUND The aberrant brain network that gives rise to the phantom sound of tinnitus is believed to determine the effectiveness of tinnitus therapies involving neuromodulation with repetitive transcranial magnetic stimulation (rTMS) and sound therapy utilizing tailor-made notch music training (TMNMT). To test this hypothesis, we determined how effective rTMS and TMNMT were in ameliorating tinnitus in patients with different functional brain networks. METHODS Resting-state functional MRI was used to construct brain functional networks in patients with tinnitus (41 males/45 females, mean age 49.53±11.19 years) and gender-matched healthy controls (22 males/35 females, mean age 46.23±10.23 years) with independent component analysis (ICA). A 2 × 2 analysis of variance with treatment outcomes (effective group, EG/ineffective group, IG) and treatment types (rTMS/TMNMT) was used to test the interaction between outcomes and treatment types associated with functional network connections (FNCs). FINDINGS The optimal neuroimaging indicator of response to rTMS (AUC 0.804, sensitivity 0.700, specificity 0.913) was the FNC in the salience network-right frontoparietal network (SN-RFPN), while that for response to TMNMT (AUC 0.764, sensitivity 0.864, specificity 0.667) was the combination of FNCs in the auditory network-salience network (AUN-SN) and auditory network-cerebellar network (AUN-CN). INTERPRETATION For tinnitus patients with higher FNCs in the SN-RFPN, rTMS is the recommended treatment, whereas for patients with lower FNCs in the AUN-SN and AUN-CN, TMNMT would be the better choice. These results indicate that brain network-based measures aid in the selection of the optimal form of treatment for a patient, contributing to advances in precision medicine. FUNDING Yuexin Cai is supported by the Key R&D Program of Guangdong Province, China (Grant No. 2018B030339001), the National Natural Science Foundation of China (82071062), the Natural Science Foundation of Guangdong Province (2021A1515012038), the Fundamental Research Funds for the Central Universities (20ykpy91), and the Sun Yat-Sen Clinical Research Cultivating Program (SYS-Q-201903). Yu-Chen Chen is supported by the Medical Science and Technology Development Foundation of the Nanjing Department of Health (No. ZKX20037) and the Natural Science Foundation of Jiangsu Province (No. BK20211008).
Affiliation(s)
- Liping Lan, Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou, Guangdong 510120, China
- Yin Liu, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Yuanqing Wu, Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Zhen-Gui Xu, Department of Otolaryngology, Nanjing Pukou Central Hospital, Pukou Branch Hospital of Jiangsu Province Hospital, Nanjing, China
- Jin-Jing Xu, Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
- Jae-Jin Song, Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Bundang Hospital, Seongnam-si, South Korea
- Richard Salvi, Center for Hearing and Deafness, University at Buffalo, The State University of New York, Buffalo, United States
- Xindao Yin, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Yu-Chen Chen, Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing 210006, China
- Yuexin Cai, Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, 107 West Yanjiang Road, Guangzhou, Guangdong 510120, China; Shenshan Medical Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, China
27
Wu H, Wan W, Jiang H, Xiong Y. Prognosis of Idiopathic Sudden Sensorineural Hearing Loss: The Nomogram Perspective. Ann Otol Rhinol Laryngol 2022; 132:5-12. [PMID: 35081764 DOI: 10.1177/00034894221075114] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVE The aim of this study was to create a nomogram for accurately predicting the prognosis of idiopathic sudden sensorineural hearing loss (ISSNHL) and to provide a reference for clinical treatment. METHODS Three hundred and twenty-three patients with ISSNHL admitted from September 2014 to November 2020 were included, and their clinical data were retrospectively reviewed. Prognostic factors for ISSNHL were assessed based on univariate and multivariate logistic regression analysis and used to create a nomogram. Nomogram performance in terms of predictive and discriminatory ability was evaluated by calculating the concordance index (C-index) and generating calibration plots. RESULTS The overall hearing improvement rate was 41.4%, comprising complete recovery (13.3%), marked recovery (17.0%), and slight recovery (11.1%). Multivariate logistic regression analysis showed that age, vertigo symptoms, interval between onset and treatment, low-density lipoprotein level, and type of hearing loss were independent predictors of ISSNHL prognosis. A nomogram based on these 5 factors had a C-index of 0.798 (95% confidence interval 0.750-0.845). CONCLUSIONS Age, vertigo, interval between onset and treatment, low-density lipoprotein level, and type of hearing loss are closely associated with hearing recovery. The nomogram may enable prediction of the prognosis of ISSNHL and facilitate clinical decision-making.
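The C-index used to evaluate this nomogram is, for a binary outcome, the fraction of (improved, not-improved) patient pairs in which the improved patient received the higher predicted risk of improvement (ties count half), which equals the AUC. A minimal sketch with invented toy scores:

```python
def c_index(scores, outcomes):
    """Concordance index for a binary outcome: fraction of
    (positive, negative) pairs ranked correctly; ties count 0.5."""
    pos = [s for s, o in zip(scores, outcomes) if o == 1]
    neg = [s for s, o in zip(scores, outcomes) if o == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

# Toy nomogram-style scores (invented), 1 = hearing improvement
scores = [0.9, 0.8, 0.7, 0.4, 0.4, 0.2]
outcomes = [1, 1, 0, 1, 0, 0]
cx = c_index(scores, outcomes)
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect discrimination, so the reported 0.798 indicates good separation of improving from non-improving patients.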
Affiliation(s)
- Huadong Wu, Department of Otolaryngology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Wei Wan, Department of Otolaryngology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China
- Hongqun Jiang, Department of Otolaryngology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China; Otorhinolaryngology Institute of Jiangxi Province, Nanchang, Jiangxi, China
- Yuanping Xiong, Department of Otolaryngology, The First Affiliated Hospital of Nanchang University, Nanchang, Jiangxi, China; Otorhinolaryngology Institute of Jiangxi Province, Nanchang, Jiangxi, China
28
Frequency-specific prediction model of hearing outcomes in patients with idiopathic sudden sensorineural hearing loss. Eur Arch Otorhinolaryngol 2022; 279:4727-4733. [PMID: 35015092 DOI: 10.1007/s00405-021-07246-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2021] [Accepted: 12/28/2021] [Indexed: 11/03/2022]
Abstract
PURPOSE The hearing outcome of idiopathic sudden sensorineural hearing loss (ISSNHL) is hard to predict. We herein constructed a multiple regression model for hearing outcomes at each frequency separately in an attempt to achieve practical prediction in ISSNHL. METHODS We enrolled 235 consecutive in-patients with ISSNHL who were treated in our department from 2015 to 2020 (average hearing level at 250-4000 Hz ≥ 40 dB; time from onset to treatment ≤ 14 days; 126 males/109 females; age range 17-87 years (average 61.0 years)). All patients received systemic prednisolone administration combined with intratympanic dexamethasone injection. The pure-tone hearing threshold at 125-8000 Hz was measured at every octave before (HLpre) and after (HLpost) treatment. A multiple regression model was constructed for HLpost (dependent variable) using five explanatory variables (age, days from onset to treatment, presence of vertigo, HLpre, and hearing level of the contralateral ear). RESULTS The multiple correlation coefficient increased as the frequency increased. Strong correlations were seen at high frequencies, with multiple correlation coefficients of 0.784/0.830 for 4000/8000 Hz. The width of the 70% prediction interval was narrower for 4000/8000 Hz (± 18.2/16.3 dB) than for low to mid frequencies. Among the five explanatory variables, HLpre showed the largest partial correlation coefficient at every frequency. The partial correlation coefficient for HLpre increased as the frequency increased, which may partially explain the high multiple correlation coefficients at high frequencies. CONCLUSION The present model would be of practical use for predicting hearing outcomes at high frequencies in patients with ISSNHL.
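As a stripped-down illustration of regressing HLpost on predictors, the sketch below fits an ordinary least-squares line with a single explanatory variable (HLpre); the paper's model used five explanatory variables per frequency. All dB values here are invented toy numbers.

```python
def fit_line(x, y):
    """Least-squares slope and intercept for y ≈ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

hl_pre = [40.0, 60.0, 80.0, 100.0]   # toy pre-treatment thresholds (dB) at one frequency
hl_post = [20.0, 30.0, 40.0, 50.0]   # toy post-treatment thresholds (dB)
a, b = fit_line(hl_pre, hl_post)
predicted = a * 70.0 + b             # predicted HLpost for a new HLpre of 70 dB
```

With several predictors, the same idea generalizes to solving the normal equations; the prediction interval widths quoted in the abstract come from the residual spread around such a fit.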
29
George MM, Tolley NS. AIM in Otolaryngology and Head and Neck Surgery. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
30
鲍凤香, 杨成俊, 周国辉. [Construction and evaluation of a model for predicting ischemic stroke risk in patients with sudden sensorineural hearing loss]. Lin Chuang Er Bi Yan Hou Tou Jing Wai Ke Za Zhi (Journal of Clinical Otorhinolaryngology, Head, and Neck Surgery) 2021; 35:1078-1084. [PMID: 34886620 PMCID: PMC10127649 DOI: 10.13201/j.issn.2096-7993.2021.12.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Indexed: 06/13/2023]
Abstract
Objective: To explore the factors related to sudden sensorineural hearing loss complicated with ischemic stroke, construct a risk prediction model, and verify the model's predictive performance. Methods: A retrospective analysis was performed on 901 sudden sensorineural hearing loss patients hospitalized from January 2017 to December 2020. The patients were divided into the ischemic stroke group (100 cases) and the sudden deafness group (801 cases) according to whether they were complicated with ischemic stroke. The independent correlates of sudden deafness complicated with ischemic stroke were screened by univariate analysis and a multivariate logistic regression model, and a risk prediction model was established and internally validated. The original data were randomly divided into a modeling group (631 cases) and a validation group (270 cases) at a 7:3 ratio. The Hosmer-Lemeshow test and receiver operating characteristic (ROC) curves were used to assess the goodness of fit and predictive performance of the model, and the 270 validation patients were included again in the application study of the model to test its prediction effect. Results: Univariate analysis showed that age, NEUR, NC, NLR, PLR, TC, HDL-C, BUN, TC-HDL-C, TG/HDL-C, LDL-C/HDL-C, Hcy, FIB, and cervical vascular plaque were factors related to sudden sensorineural hearing loss complicated with ischemic stroke (P<0.05). Age (OR=2.816), NEUR (OR=2.707), Hcy (OR=88.833), FIB (OR=1.389), TC-HDL-C (OR=1.613), and cervical vascular plaque (OR=2.862) were independent risk factors for SNHL complicated with ischemic stroke; these 6 factors were used to construct the prediction model. By the Hosmer-Lemeshow test, in the modeling group the area under the ROC curve was 0.846 (P=0.555), the Youden index was 0.564, sensitivity was 0.820, and specificity was 0.744. In the validation group, the area under the ROC curve was 0.847 (P=0.288), the Youden index was 0.432, sensitivity was 0.783, and specificity was 0.649.
Conclusion: The risk prediction model constructed in this study shows good predictive performance and can provide a reference for the clinical screening of ischemic stroke risk in patients with sudden sensorineural hearing loss and for early intervention.
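The Youden indices quoted above follow directly from the reported sensitivities and specificities via J = sensitivity + specificity - 1, as a quick check confirms:

```python
def youden(sensitivity, specificity):
    """Youden's J statistic: J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1.0

j_model = youden(0.820, 0.744)  # modeling group -> 0.564, matching the abstract
j_valid = youden(0.783, 0.649)  # validation group -> 0.432, matching the abstract
```

J ranges from 0 (no better than chance) to 1 (perfect test), and the ROC threshold maximizing J is a common choice of operating point.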
Affiliation(s)
- 鲍凤香 (Bao Fengxiang), Department of Otolaryngology Head and Neck Surgery, the First Affiliated Hospital of Kangda College of Nanjing Medical University, the Affiliated Lianyungang Hospital of Xuzhou Medical University, the First People's Hospital of Lianyungang, Lianyungang 222061, China
- 杨成俊 (Yang Chengjun), Department of Basic Medicine, Lianyungang TCM Branch of Jiangsu United Higher Vocational Technical College
- 周国辉 (Zhou Guohui), Department of Statistical Medical Records, the First Affiliated Hospital of Kangda College of Nanjing Medical University, the Affiliated Lianyungang Hospital of Xuzhou Medical University, the First People's Hospital of Lianyungang
31
Gong Q, Liu Y, Xu R, Liang D, Peng Z, Yang H. Objective Assessment System for Hearing Prediction Based on Stimulus-Frequency Otoacoustic Emissions. Trends Hear 2021; 25:23312165211059628. [PMID: 34817273 PMCID: PMC8738859 DOI: 10.1177/23312165211059628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Stimulus-frequency otoacoustic emissions (SFOAEs) can be useful tools for assessing cochlear function noninvasively. However, there is a lack of reports describing their utility in predicting hearing capabilities. Data for model training were collected from 245 and 839 ears with normal hearing and sensorineural hearing loss, respectively. Based on SFOAEs, this study developed an objective assessment system consisting of three mutually independent modules, with the routine test module and the fast test module used for threshold prediction and the hearing screening module for identifying hearing loss. Results evaluated via cross-validation show that the routine test module and the fast test module predict hearing thresholds with similar performance from 0.5 to 8 kHz, with mean absolute errors of 7.06–11.61 dB for the routine module and of 7.40–12.60 dB for the fast module. However, the fast module involves less test time than is needed in the routine module. The hearing screening module identifies hearing status with a large area under the receiver operating characteristic curve (0.912–0.985), high accuracy (88.4–95.9%), and low false negative rate (2.9–7.0%) at 0.5–8 kHz. The three modules are further validated on unknown data, and the results are similar to those obtained through cross-validation, indicating these modules can be well generalized to new data. Both the routine module and fast module are potential tools for predicting hearing thresholds. However, their prediction performance in ears with hearing loss requires further improvement to facilitate their clinical utility. The hearing screening module shows promise as a clinical tool for identifying hearing loss.
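Two of the evaluation metrics quoted in this abstract are easy to state concretely: mean absolute error for threshold prediction and false-negative rate for screening. The sketch below computes both on invented toy values, not the study's data.

```python
def mean_absolute_error(pred, true):
    """Average absolute difference between predicted and true thresholds (dB)."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def false_negative_rate(pred, true):
    """Share of true hearing-loss ears (1) the screen labeled normal (0)."""
    fn = sum(1 for p, t in zip(pred, true) if t == 1 and p == 0)
    return fn / sum(true)

# Toy predicted vs. measured thresholds at one frequency (dB HL)
mae = mean_absolute_error([32.0, 48.0, 61.0], [30.0, 50.0, 70.0])

# Toy screening calls vs. ground truth (1 = hearing loss)
fnr = false_negative_rate([1, 0, 1, 1], [1, 1, 1, 0])
```

A low false-negative rate matters most for a screening module, since a missed hearing-loss ear goes untreated, which is why the abstract reports it alongside accuracy and AUC.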
Affiliation(s)
- Qin Gong, Department of Biomedical Engineering, Tsinghua University, Beijing, China; School of Medicine, Shanghai University, Shanghai, China
- Yin Liu, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Runyi Xu, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Dong Liang, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Zewen Peng, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Honghao Yang, Department of Biomedical Engineering, Tsinghua University, Beijing, China
32
Wang Y, Ye C, Wang D, Li C, Wang S, Li J, Wu J, Wang X, Xu L. Construction and Evaluation of a High-Frequency Hearing Loss Screening Tool for Community Residents. Int J Environ Res Public Health 2021; 18:12311. [PMID: 34886032 PMCID: PMC8657277 DOI: 10.3390/ijerph182312311] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 11/10/2021] [Accepted: 11/20/2021] [Indexed: 11/27/2022]
Abstract
Early screening and detection of individuals at high risk of high-frequency hearing loss, and identification of risk factors, are critical to reduce the prevalence at the community level. However, unlike for individuals facing occupational auditory hazards, few hearing loss screening models have been developed for community residents. Therefore, this study used lasso regression with 10-fold cross-validation for feature selection and model construction on 38 questionnaire-based variables from 4010 subjects and applied the model to training and testing cohorts to obtain a risk score. The model achieved an area under the curve (AUC) of 0.844 in the model validation stage, and individuals' risk scores were subsequently stratified into low-, medium-, and high-risk categories. A total of 92.79% (1094/1179) of subjects in the high-risk category were confirmed to have hearing loss by audiometry, 3.7 times the proportion in the low-risk group (25.18%, 457/1815). Half of the key indicators were related to modifiable contexts and were identified as significantly associated with incident hearing loss. These results demonstrate that the developed model can identify residents at high risk of hearing loss via regular community-level health examinations, detect individualized risk factors, and eventually support precision interventions.
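Two pieces of this pipeline can be sketched compactly: the soft-thresholding operator at the heart of lasso's coefficient shrinkage, and the stratification of a risk score into low/medium/high bands. The cut-offs below are invented for illustration; the paper derived its thresholds from the fitted model's score distribution.

```python
def soft_threshold(z, lam):
    """Lasso's shrinkage operator: sign(z) * max(|z| - lam, 0).
    Coefficients with small partial effects are driven exactly to zero,
    which is how lasso performs feature selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def stratify(score, low_cut=0.3, high_cut=0.7):
    """Map a continuous risk score to a category (hypothetical cut-offs)."""
    if score < low_cut:
        return "low"
    if score < high_cut:
        return "medium"
    return "high"

groups = [stratify(s) for s in [0.1, 0.45, 0.9]]
```

In coordinate-descent lasso, `soft_threshold` is applied to each coefficient's least-squares update in turn until convergence; weakly predictive questionnaire items drop out of the model automatically.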
Affiliation(s)
- Liangwen Xu (corresponding author); Tel./Fax: +86-0571-2886-5510
33
Kwak C, Seo YJ, Yoon C, Lee J, Han W. The value of having an initial word recognition score for a precise prognosis of idiopathic sudden sensorineural hearing loss. Auris Nasus Larynx 2021; 49:554-563. [PMID: 34772562 DOI: 10.1016/j.anl.2021.10.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Revised: 09/27/2021] [Accepted: 10/13/2021] [Indexed: 11/19/2022]
Abstract
OBJECTIVE Although the hearing thresholds of patients with idiopathic sudden sensorineural hearing loss (ISSNHL) closely relate to the prognosis, resulting in progressive floor effects, many studies have used hearing thresholds as the main outcome when measuring prognostic factors. The present study aimed to identify the prognostic factors related to initial hearing tests and to evaluate the effects of the word recognition score (WRS) on the prognosis of patients with ISSNHL. METHODS Between March 2011 and November 2020, we retrospectively reviewed the chart profiles of 2,636 ISSNHL patients. The 180 patients who met the inclusion criteria were asked to participate in the present study. Based on their initial WRS, these patients were divided into good WRS (GW) and poor WRS (PW) groups, with 52% as the cut-off point. Demographic, clinical, and audiological variables, such as age, onset time, duration of treatment, gender, ear side, comorbidities (i.e., hypertension, diabetes mellitus, tinnitus, dizziness), hearing configuration (i.e., ascending, descending, flat, irregular, and profound), treatment options (i.e., systemic corticosteroid therapy per oral, intratympanic steroid injection, and hyperbaric oxygen therapy), and WRS were analyzed as potential prognostic factors. RESULTS The two groups showed significantly different distributions of hearing thresholds and hyperbaric oxygen therapy (HBOT) as general characteristics. A multivariate logistic regression analysis showed that age (OR: 0.96, 95% CI: 0.59 - 24.25), duration of treatment (OR: 0.98, 95% CI: 0.96 - 1.00), ascending configuration (OR: 4.97, 95% CI: 1.64 - 16.62), irregular configuration (OR: 4.58, 95% CI: 1.62 - 13.79), and WRS (OR: 1.01, 95% CI: 1.00 - 1.02) were significant prognostic factors for all the patients.
Further analysis of the patients with WRS under the 52% cut-off point showed that an ascending configuration (OR: 5.87, 95% CI: 1.18 - 35.99), an irregular configuration (OR: 8.03, 95% CI: 1.69 - 46.30), and WRS (OR: 1.05, 95% CI: 1.01 - 1.10) significantly affected the prognosis. As the initial WRS of ISSNHL patients decreased, the OR of the WRS itself increased, suggesting that the importance of WRS as a prognostic factor is greatest for PW patients. CONCLUSION Age, duration of treatment, initial hearing configuration (ascending and irregular types), and WRS were significant prognostic factors for patients with ISSNHL. WRS can be a notable prognostic factor to consider, especially for ISSNHL patients with a poor WRS.
Affiliation(s)
- Chanbeom Kwak, Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Korea
- Young Joon Seo, Department of Otorhinolaryngology, Yonsei University Wonju College of Medicine, Wonju, Korea; Research Institute of Hearing Enhancement, Yonsei University Wonju College of Medicine, Wonju, Korea
- ChulYoung Yoon, Department of Biostatistics, Yonsei University Wonju College of Medicine, Wonju, Korea
- JuHyung Lee, Department of Biostatistics, Yonsei University Wonju College of Medicine, Wonju, Korea
- Woojae Han, Laboratory of Hearing and Technology, Research Institute of Audiology and Speech Pathology, College of Natural Sciences, Hallym University, Chuncheon, Korea; Division of Speech Pathology and Audiology, College of Natural Sciences, Hallym University, Chuncheon, Korea
34
[Artificial intelligence in otorhinolaryngology]. HNO 2021; 70:87-93. [PMID: 34374811 PMCID: PMC8353610 DOI: 10.1007/s00106-021-01095-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/26/2021] [Indexed: 11/24/2022]
Abstract
BACKGROUND Advancing digitalization is increasingly enabling the use of artificial intelligence (AI), which will substantially influence society and medicine in the coming years. OBJECTIVE To present the current spectrum of AI applications in otorhinolaryngology and to outline future developments in the use of this technology. MATERIALS AND METHODS Scientific studies and expert analyses were evaluated and discussed. RESULTS The use of AI can increase the utility of conventional diagnostic tools in otorhinolaryngology. In addition, this technology can further improve surgical precision in head and neck surgery. CONCLUSIONS AI has great potential to further improve diagnostic and therapeutic procedures in otorhinolaryngology. However, the use of this technology also entails challenges, for example in the area of data protection.
|
35
|
Deep Learning in Automated Region Proposal and Diagnosis of Chronic Otitis Media Based on Computed Tomography. Ear Hear 2021; 41:669-677. [PMID: 31567561 DOI: 10.1097/aud.0000000000000794] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Indexed: 12/14/2022]
Abstract
OBJECTIVES The purpose of this study was to develop a deep-learning framework for the diagnosis of chronic otitis media (COM) based on temporal bone computed tomography (CT) scans. DESIGN A total of 562 COM patients with 672 temporal bone CT scans of both ears were included. The final dataset consisted of 1147 ears, and each of them was assigned with a ground truth label from one of the 3 conditions: normal, chronic suppurative otitis media, and cholesteatoma. A random selection of 85% dataset (n = 975) was used for training and validation. The framework contained two deep-learning networks with distinct functions: a region proposal network for extracting regions of interest from 2-dimensional CT slices; and a classification network for diagnosis of COM based on the extracted regions. The performance of this framework was evaluated on the remaining 15% dataset (n = 172) and compared with that of 6 clinical experts who read the same CT images only. The panel included 2 otologists, 3 otolaryngologists, and 1 radiologist. RESULTS The area under the receiver operating characteristic curve of the artificial intelligence model in classifying COM versus normal was 0.92, with sensitivity (83.3%) and specificity (91.4%) exceeding the averages of clinical experts (81.1% and 88.8%, respectively). In a 3-class classification task, this network had higher overall accuracy (76.7% versus 73.8%), higher recall rates in identifying chronic suppurative otitis media (75% versus 70%) and cholesteatoma (76% versus 53%) cases, and superior consistency in duplicated cases (100% versus 81%) compared with clinical experts. CONCLUSIONS This article presented a deep-learning framework that automatically extracted the region of interest from two-dimensional temporal bone CT slices and made diagnosis of COM. The performance of this model was comparable and, in some cases, superior to that of clinical experts. 
These results implied a promising prospect for clinical application of artificial intelligence in the diagnosis of COM based on CT images.
|
36
|
Innovative Artificial Intelligence Approach for Hearing-Loss Symptoms Identification Model Using Machine Learning Techniques. Sustainability 2021. [DOI: 10.3390/su13105406] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 12/20/2022]
Abstract
Physicians depend on their insight and experience and on a fundamentally indicative or symptomatic approach to decide on the possible ailment of a patient. However, numerous phases of problem identification and longer strategies can prompt a longer time for consulting and can subsequently cause other patients that require attention to wait for longer. This can bring about pressure and tension concerning those patients. In this study, we focus on developing a decision-support system for diagnosing the symptoms as a result of hearing loss. The model is implemented by utilizing machine learning techniques. The Frequent Pattern Growth (FP-Growth) algorithm is used as a feature transformation method and the multivariate Bernoulli naïve Bayes classification model as the classifier. To find the correlation that exists between the hearing thresholds and symptoms of hearing loss, the FP-Growth and association rule algorithms were first used to experiment with small sample and large sample datasets. The result of these two experiments showed the existence of this relationship, and that the performance of the hybrid of the FP-Growth and naïve Bayes algorithms in identifying hearing-loss symptoms was found to be efficient, with a very small error rate. The average accuracy rate and average error rate for the multivariate Bernoulli model with FP-Growth feature transformation, using five training sets, are 98.25% and 1.73%, respectively.
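The hybrid pipeline described above pairs frequent-pattern mining over binary hearing-loss indicators with a multivariate Bernoulli naïve Bayes classifier. A minimal sketch with scikit-learn follows; the feature columns, toy records, and minimum-support threshold are hypothetical, and a simple pair-counting loop stands in for the FP-tree construction that FP-Growth actually uses.

```python
from collections import Counter
from itertools import combinations

import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical binarized records: 1 = threshold elevated / symptom present.
# Columns: [low_freq_loss, mid_freq_loss, high_freq_loss, tinnitus]
X = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
])
y = np.array([1, 1, 0, 1, 0, 1])  # 1 = hearing-loss symptom profile present

# Stand-in for the FP-Growth step: count co-occurring feature pairs and keep
# those meeting a minimum support (a real FP-Growth builds an FP-tree instead).
pair_counts = Counter()
for row in X:
    items = tuple(np.flatnonzero(row))
    for pair in combinations(items, 2):
        pair_counts[pair] += 1
frequent_pairs = {p for p, c in pair_counts.items() if c >= 3}  # min support = 3

# Multivariate Bernoulli naive Bayes over the binary features.
clf = BernoulliNB().fit(X, y)
print(frequent_pairs)
print(clf.predict([[1, 1, 0, 1]]))
```

The frequent pairs expose which threshold/symptom combinations co-occur, while the classifier scores a new binary profile; on real data the mined associations would guide which features feed the model.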
|
37
|
Uhm T, Lee JE, Yi S, Choi SW, Oh SJ, Kong SK, Lee IW, Lee HM. Predicting hearing recovery following treatment of idiopathic sudden sensorineural hearing loss with machine learning models. Am J Otolaryngol 2021; 42:102858. [PMID: 33445040 DOI: 10.1016/j.amjoto.2020.102858] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 10/15/2020] [Revised: 12/08/2020] [Accepted: 12/22/2020] [Indexed: 11/19/2022]
Abstract
PURPOSE Idiopathic sudden sensorineural hearing loss (ISSHL) is an emergency otological disease, and its definite prognostic factors remain unclear. This study applied machine learning methods to develop a new ISSHL prognosis prediction model. MATERIALS AND METHODS This retrospective study reviewed the medical data of 244 patients who underwent combined intratympanic and systemic steroid treatment for ISSHL at a tertiary referral center between January 2015 and October 2019. We used 35 variables to predict hearing recovery based on Siegel's criteria. In addition to performing an analysis based on the conventional logistic regression model, we developed prediction models with five machine learning methods: least absolute shrinkage and selection operator, decision tree, random forest (RF), support vector machine, and boosting. To compare the predictive ability of each model, the accuracy, precision, recall, F-score, and the area under the receiver operator characteristic curves (ROC-AUC) were calculated. RESULTS Former otological history, ear fullness, delay between symptom onset and treatment, delay between symptom onset and intratympanic steroid injection (ITSI), and initial hearing thresholds of the affected and unaffected ears differed significantly between the recovery and non-recovery groups. While the RF method (accuracy: 72.22%, ROC-AUC: 0.7445) achieved the highest predictive power, the other methods also featured relatively good predictive power. In the RF model, the following variables were identified to be important for hearing-recovery prediction: delay between symptom onset and ITSI or the initial treatment, initial hearing levels of the affected and non-affected ears, body mass index, and a previous history of hearing loss. 
CONCLUSIONS The machine learning models predictive of hearing recovery following treatment for ISSHL showed superior predictive power relative to the conventional logistic regression method, potentially allowing for better patient treatment outcomes.
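The model comparison described above — fitting several learners to the same clinical dataset and scoring them on accuracy, precision, recall, F-score, and ROC-AUC — can be sketched with scikit-learn. The synthetic data below stands in for the study's 35 clinical variables, and the four estimators only approximate the methods the authors tested.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for 244 patients x 35 variables with a recovery label.
X, y = make_classification(n_samples=244, n_features=35, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(probability=True, random_state=0),
    "boosting": GradientBoostingClassifier(random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name:14s} acc={accuracy_score(y_te, pred):.3f} "
          f"prec={precision_score(y_te, pred):.3f} "
          f"rec={recall_score(y_te, pred):.3f} "
          f"f1={f1_score(y_te, pred):.3f} "
          f"auc={roc_auc_score(y_te, proba):.3f}")
```

Reporting ROC-AUC alongside accuracy, as the study does, matters because the recovery/non-recovery split is rarely balanced and accuracy alone can flatter a majority-class predictor.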
Affiliation(s)
- Taewoong Uhm
- Department of Statistics, Pukyong National University, Busan, Republic of Korea
| | - Jae Eun Lee
- Department of Statistics, Pukyong National University, Busan, Republic of Korea
| | - Seongbaek Yi
- Department of Statistics, Pukyong National University, Busan, Republic of Korea
| | - Sung Won Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
| | - Se Joon Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
| | - Soo Keun Kong
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Hospital, Busan, Republic of Korea
| | - Il Woo Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Yangsan Hospital, Yangsan, Republic of Korea
| | - Hyun Min Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Pusan National University College of Medicine, Pusan National University Yangsan Hospital, Yangsan, Republic of Korea.
| |
|
38
|
Chen F, Cao Z, Grais EM, Zhao F. Contributions and limitations of using machine learning to predict noise-induced hearing loss. Int Arch Occup Environ Health 2021; 94:1097-1111. [PMID: 33491101 PMCID: PMC8238747 DOI: 10.1007/s00420-020-01648-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 07/08/2020] [Accepted: 12/29/2020] [Indexed: 12/20/2022]
Abstract
Purpose Noise-induced hearing loss (NIHL) is a global issue that impacts people’s life and health. The current review aims to clarify the contributions and limitations of applying machine learning (ML) to predict NIHL by analyzing the performance of different ML techniques and the procedure of model construction. Methods The authors searched PubMed, EMBASE and Scopus on November 26, 2020. Results Eight studies were recruited in the current review following defined inclusion and exclusion criteria. Sample size in the selected studies ranged between 150 and 10,567. The most popular models were artificial neural networks (n = 4), random forests (n = 3) and support vector machines (n = 3). Features mostly correlated with NIHL and used in the models were: age (n = 6), duration of noise exposure (n = 5) and noise exposure level (n = 4). Five included studies used either split-sample validation (n = 3) or ten-fold cross-validation (n = 2). Assessment of accuracy ranged in value from 75.3% to 99% with a low prediction error/root-mean-square error in 3 studies. Only 2 studies measured discrimination risk using the receiver operating characteristic (ROC) curve and/or the area under ROC curve. Conclusion In spite of high accuracy and low prediction error of machine learning models, some improvement can be expected from larger sample sizes, multiple algorithm use, completed reports of model construction and the sufficient evaluation of calibration and discrimination risk.
Affiliation(s)
- Feifan Chen
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
| | - Zuwei Cao
- Center for Rehabilitative Auditory Research, Guizhou Provincial People's Hospital, Guiyang, China
| | - Emad M Grais
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK
| | - Fei Zhao
- Centre for Speech and Language Therapy and Hearing Science, Cardiff School of Sport and Health Sciences, Cardiff Metropolitan University, Cardiff, UK; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China.
| |
|
39
|
Uçar M, Akyol K, Atila Ü, Uçar E. Classification of Different Tympanic Membrane Conditions Using Fused Deep Hypercolumn Features and Bidirectional LSTM. Ing Rech Biomed 2021. [DOI: 10.1016/j.irbm.2021.01.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 12/24/2022]
|
40
|
George MM, Tolley NS. AIM in Otolaryngology and Head & Neck Surgery. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_198-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/24/2022]
|
41
|
Tama BA, Kim DH, Kim G, Kim SW, Lee S. Recent Advances in the Application of Artificial Intelligence in Otorhinolaryngology-Head and Neck Surgery. Clin Exp Otorhinolaryngol 2020; 13:326-339. [PMID: 32631041 PMCID: PMC7669308 DOI: 10.21053/ceo.2020.00654] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Received: 04/16/2020] [Revised: 05/24/2020] [Accepted: 06/09/2020] [Indexed: 12/12/2022] Open
Abstract
This study presents an up-to-date survey of the use of artificial intelligence (AI) in the field of otorhinolaryngology, considering opportunities, research challenges, and research directions. We searched PubMed, the Cochrane Central Register of Controlled Trials, Embase, and the Web of Science. We initially retrieved 458 articles. The exclusion of non-English publications and duplicates yielded a total of 90 remaining studies. These 90 studies were divided into those analyzing medical images, voice, medical devices, and clinical diagnoses and treatments. Most studies (42.2%, 38/90) used AI for image-based analysis, followed by clinical diagnoses and treatments (24 studies). Each of the remaining two subcategories included 14 studies. Machine learning and deep learning have been extensively applied in the field of otorhinolaryngology. However, the performance of AI models varies and research challenges remain.
Affiliation(s)
- Bayu Adhi Tama
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
| | - Do Hyun Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
| | - Gyuwon Kim
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
| | - Soo Whan Kim
- Department of Otolaryngology-Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
| | - Seungchul Lee
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang, Korea
- Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, Korea
| |
|
42
|
Lan L, Li J, Chen Y, Chen W, Li W, Zhao F, Chen G, Liu J, Chen Y, Li Y, Wang CD, Zheng Y, Cai Y. Alterations of brain activity and functional connectivity in transition from acute to chronic tinnitus. Hum Brain Mapp 2020; 42:485-494. [PMID: 33090584 PMCID: PMC7776005 DOI: 10.1002/hbm.25238] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 07/19/2020] [Revised: 09/04/2020] [Accepted: 09/29/2020] [Indexed: 02/06/2023] Open
Abstract
The objective of this study was to investigate alterations to brain activity and functional connectivity in patients with tinnitus, exploring neural features in the transition from acute to chronic phantom perception. Twenty‐four patients with acute tinnitus, 23 patients with chronic tinnitus, and 32 healthy controls were recruited. High‐density electroencephalography (EEG) was used to explore changes in brain areas and functional connectivity in different groups. When compared with healthy subjects, acute tinnitus patients had a significant reduction in superior frontal cortex activity across all frequency bands, whereas chronic tinnitus patients had a significant reduction in the superior frontal cortex at beta 3 and gamma frequency bands as well as a significant increase in the inferior frontal cortex at delta‐band and superior temporal cortex at alpha 1 frequency band. When compared to the chronic tinnitus group, the acute tinnitus group activity was significantly increased in the middle frontal and parietal gyrus at the gamma‐band. Functional connectivity analysis showed that the chronic tinnitus group had increased connections between the parahippocampus gyrus, posterior cingulate cortex, and precuneus when compared with the healthy group. Alterations of local brain activity and connections between the parahippocampus gyrus and other nonauditory areas appeared in the transition from acute to chronic tinnitus. This indicates that the appearance and development of tinnitus is a dynamic process involving aberrant local neural activity and abnormal connectivity in multifunctional brain networks.
Affiliation(s)
- Liping Lan
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Jiahong Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Yanhong Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Wan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
| | - Wenrui Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Fei Zhao
- Department of Speech and Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, UK; Department of Hearing and Speech Science, Xinhua College, Sun Yat-Sen University, Guangzhou, China
| | - Guisheng Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Yuchen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China
| | - Yuanqing Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
| | - Chang-Dong Wang
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
| | - Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| | - Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou City, Guangdong Province, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou City, Guangdong Province, China
| |
|
43
|
Artificial Intelligence Applications in Otology: A State of the Art Review. Otolaryngol Head Neck Surg 2020; 163:1123-1133. [DOI: 10.1177/0194599820931804] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Indexed: 02/01/2023]
Abstract
Objective Recent advances in artificial intelligence (AI) are driving innovative new health care solutions. We aim to review the state of the art of AI in otology and provide a discussion of work underway, current limitations, and future directions. Data Sources Two comprehensive databases, MEDLINE and EMBASE, were mined using a directed search strategy to identify all articles that applied AI to otology. Review Methods An initial abstract and title screening was completed. Exclusion criteria included nonavailable abstract and full text, language, and nonrelevance. References of included studies and relevant review articles were cross-checked to identify additional studies. Conclusion The database search identified 1374 articles. Abstract and title screening resulted in full-text retrieval of 96 articles. A total of N = 38 articles were retained. Applications of AI technologies involved the optimization of hearing aid technology (n = 5; 13% of all articles), speech enhancement technologies (n = 4; 11%), diagnosis and management of vestibular disorders (n = 11; 29%), prediction of sensorineural hearing loss outcomes (n = 9; 24%), interpretation of automatic brainstem responses (n = 5; 13%), and imaging modalities and image-processing techniques (n = 4; 10%). Publication counts of the included articles from each decade demonstrated a marked increase in interest in AI in recent years. Implications for Practice This review highlights several applications of AI that otologists and otolaryngologists alike should be aware of given the possibility of implementation in mainstream clinical practice. Although there remain significant ethical and regulatory challenges, AI powered systems offer great potential to shape how healthcare systems of the future operate and clinicians are key stakeholders in this process.
|
44
|
Cha D, Shin SH, Kim SH, Choi JY, Moon IS. Machine learning approach for prediction of hearing preservation in vestibular schwannoma surgery. Sci Rep 2020; 10:7136. [PMID: 32346085 PMCID: PMC7188896 DOI: 10.1038/s41598-020-64175-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 09/03/2019] [Accepted: 04/10/2020] [Indexed: 12/21/2022] Open
Abstract
In vestibular schwannoma patients with functional hearing status, surgical resection while preserving the hearing is feasible. Hearing levels, tumor size, and location of the tumor have been known to be candidate predictors. We used a machine learning approach to predict hearing outcomes in vestibular schwannoma patients who underwent hearing preservation surgery: middle cranial fossa or retrosigmoid approach. After reviewing the medical records of 52 patients with a pathologically confirmed vestibular schwannoma, we included 50 patients’ records in the study. Hearing preservation was regarded as positive if the postoperative hearing was within serviceable hearing (50/50 rule). The categorical variable included the surgical approach, and the continuous variable covered audiometric and vestibular function tests, and the largest diameter of the tumor. Four different algorithms were lined up for comparison of accuracy: support vector machine (SVM), gradient boosting machine (GBM), deep neural network (DNN), and diffuse random forest (DRF). The average accuracy of predicting hearing preservation ranged from 62% (SVM) to 90% (DNN). The current study is the first to incorporate machine learning methodology into a prediction of successful hearing preservation surgery. Although a larger population may be needed for better generalization, this study could aid the surgeon’s decision to perform a hearing preservation approach for vestibular schwannoma surgery.
Affiliation(s)
- Dongchul Cha
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
| | - Seung Ho Shin
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
| | - Sung Huhn Kim
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
| | - Jae Young Choi
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea
| | - In Seok Moon
- Department of Otorhinolaryngology, Yonsei University College of Medicine, Seoul, South Korea.
| |
|
45
|
Viscaino M, Maass JC, Delano PH, Torrente M, Stott C, Auat Cheein F. Computer-aided diagnosis of external and middle ear conditions: A machine learning approach. PLoS One 2020; 15:e0229226. [PMID: 32163427 PMCID: PMC7067442 DOI: 10.1371/journal.pone.0229226] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Received: 11/15/2019] [Accepted: 01/31/2020] [Indexed: 12/27/2022] Open
Abstract
In medicine, a misdiagnosis or the absence of specialists can affect the patient’s health, leading to unnecessary tests and increasing the costs of healthcare. In particular, the lack of specialists in otolaryngology in third world countries forces patients to seek medical attention from general practitioners, who might not have enough training and experience to make a correct diagnosis in this field. To tackle this problem, we propose and test a computer-aided system based on machine learning models and image processing techniques for otoscopic examination, as a support for a more accurate diagnosis of ear conditions at primary care before specialist referral; in particular, for myringosclerosis, earwax plug, and chronic otitis media. To characterize the tympanic membrane and ear canal for each condition, we implemented three different feature extraction methods: color coherence vector, discrete cosine transform, and filter bank. We also considered three machine learning algorithms: support vector machine (SVM), k-nearest neighbor (k-NN) and decision trees to develop the ear condition predictor model. To conduct the research, our database included 160 images as testing set and 720 images as training and validation sets of 180 patients. We repeatedly trained the learning models using the training dataset and evaluated them using the validation dataset to thus obtain the best feature extraction method and learning model that produce the highest validation accuracy. The results showed that the SVM and k-NN presented the best performance, followed by the decision tree model. Finally, we performed a classification stage –i.e., diagnosis– using testing data, where the SVM model achieved an average classification accuracy of 93.9%, average sensitivity of 87.8%, average specificity of 95.9%, and average positive predictive value of 87.7%.
The results show that this system might be used for general practitioners as a reference to make better decisions in the ear pathologies diagnosis.
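One of the feature extraction methods named above, the discrete cosine transform, can be illustrated as follows: keep the low-frequency DCT coefficients of each image patch as a compact descriptor, then classify with an SVM. The synthetic patches and the 8 × 8 coefficient block below are illustrative assumptions, not the study's data or settings.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def dct_features(img, k=8):
    """Keep the top-left k x k block of the 2-D DCT as a low-frequency descriptor."""
    coeffs = dct(dct(img, axis=0, norm="ortho"), axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

def make_patch(label):
    # Synthetic 32x32 grayscale patch; class 1 carries a brighter centre,
    # loosely mimicking a more reflective region of the tympanic membrane.
    img = rng.normal(0.5, 0.1, (32, 32))
    if label:
        img[8:24, 8:24] += 0.5
    return img

labels = np.array([0, 1] * 40)
X = np.array([dct_features(make_patch(l)) for l in labels])

clf = SVC(kernel="linear").fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the synthetic patches
```

Truncating to the low-frequency block compresses each 1,024-pixel patch to 64 numbers while keeping the coarse brightness structure that separates the two synthetic classes.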
Affiliation(s)
- Michelle Viscaino
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
| | - Juan C. Maass
- Interdisciplinary Program of Physiology and Biophysics, Facultad de Medicina, Instituto de Ciencias Biomedicas, Universidad de Chile, Santiago, Chile
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Paul H. Delano
- Department of Neuroscience, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Mariela Torrente
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Carlos Stott
- Department of Otolaryngology, Hospital Clínico de la Universidad de Chile, Santiago, Chile
| | - Fernando Auat Cheein
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso, Chile
| |
|
46
|
Park KV, Oh KH, Jeong YJ, Rhee J, Han MS, Han SW, Choi J. Machine Learning Models for Predicting Hearing Prognosis in Unilateral Idiopathic Sudden Sensorineural Hearing Loss. Clin Exp Otorhinolaryngol 2020; 13:148-156. [PMID: 32156103 PMCID: PMC7248600 DOI: 10.21053/ceo.2019.01858] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 11/13/2019] [Accepted: 02/23/2020] [Indexed: 12/02/2022] Open
Abstract
Objectives. Prognosticating idiopathic sudden sensorineural hearing loss (ISSNHL) is an important challenge. In our study, a dataset was split into training and test sets and cross-validation was implemented on the training set, thereby determining the hyperparameters for machine learning models with high test accuracy and low bias. The effectiveness of the following five machine learning models for predicting the hearing prognosis in patients with ISSNHL after 1 month of treatment was assessed: adaptive boosting, K-nearest neighbor, multilayer perceptron, random forest (RF), and support vector machine (SVM). Methods. The medical records of 523 patients with ISSNHL admitted to Korea University Ansan Hospital between January 2010 and October 2017 were retrospectively reviewed. In this study, we analyzed data from 227 patients (recovery, 106; no recovery, 121) after excluding those with missing data. To determine risk factors, statistical hypothesis tests (e.g., the two-sample t-test for continuous variables and the chi-square test for categorical variables) were conducted to compare patients who did or did not recover. Variables were selected using an RF model depending on two criteria (mean decreases in the Gini index and accuracy). Results. The SVM model using selected predictors achieved both the highest accuracy (75.36%) and the highest F-score (0.74) on the test set. The RF model with selected variables demonstrated the second-highest accuracy (73.91%) and F-score (0.74). The RF model with the original variables showed the same accuracy (73.91%) as that of the RF model with selected variables, but a lower F-score (0.73). All the tested models, except RF, demonstrated better performance after variable selection based on RF. Conclusion. The SVM model with selected predictors was the best-performing of the tested prediction models. The RF model with selected predictors was the second-best model. 
Therefore, machine learning models can be used to predict hearing recovery in patients with ISSNHL.
Affiliation(s)
- Keon Vin Park
- School of Industrial Management Engineering, Korea University, Seoul, Korea
| | - Kyoung Ho Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea
| | - Yong Jun Jeong
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea
| | - Jihye Rhee
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Veterans Health Service Medical Center, Seoul, Korea
| | - Mun Soo Han
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea
| | - Sung Won Han
- School of Industrial Management Engineering, Korea University, Seoul, Korea
| | - June Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Korea
| |
|
47
|
Cai Y, Li J, Chen Y, Chen W, Dang C, Zhao F, Li W, Chen G, Chen S, Liang M, Zheng Y. Inhibition of Brain Area and Functional Connectivity in Idiopathic Sudden Sensorineural Hearing Loss With Tinnitus, Based on Resting-State EEG. Front Neurosci 2019; 13:851. [PMID: 31474821 PMCID: PMC6702325 DOI: 10.3389/fnins.2019.00851] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Received: 05/25/2019] [Accepted: 07/30/2019] [Indexed: 12/18/2022] Open
Abstract
This study aimed to identify the mechanism behind idiopathic sudden sensorineural hearing loss (ISSNHL) in patients with tinnitus by investigating aberrant activity in areas of the brain and functional connectivity. High-density electroencephalography (EEG) was used to investigate central nervous changes in 25 ISSNHL subjects and 27 healthy controls. ISSNHL subjects had significantly reduced activity in the left frontal lobe at the alpha 2 frequency band compared with controls. Linear lagged connectivity and lagged coherence analysis showed significantly reduced functional connectivity between the temporal gyrus and supramarginal gyrus at the gamma 2 frequency band in the ISSNHL group. Additionally, a significantly reduced functional connectivity was found between the central cingulate gyrus and frontal lobe under lagged phase synchronization analysis. These results strongly indicate inhibition of brain area activity and change in functional connectivity in ISSNHL with tinnitus patients.
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Jiahong Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Yanhong Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Wan Chen
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China
- Caiping Dang
- Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China; Department of Psychology, Guangzhou Medical University, Guangzhou, China
- Fei Zhao
- Department of Speech Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, United Kingdom; Department of Hearing and Speech Science, Xinhua College, Sun Yat-sen University, Guangzhou, China
- Wenrui Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Guisheng Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Suijun Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
48
Tobore I, Li J, Yuhang L, Al-Handarish Y, Kandwal A, Nie Z, Wang L. Deep Learning Intervention for Health Care Challenges: Some Biomedical Domain Considerations. JMIR Mhealth Uhealth 2019; 7:e11966. [PMID: 31376272 PMCID: PMC6696854 DOI: 10.2196/11966] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Received: 08/17/2018] [Revised: 04/14/2019] [Accepted: 06/12/2019] [Indexed: 01/10/2023]
Abstract
The use of deep learning (DL) for the analysis and diagnosis of biomedical and health care problems has received unprecedented attention in the last decade. The technique has achieved notable successes in unearthing meaningful features and accomplishing tasks that were hitherto difficult for other methods and human experts to solve. Currently, biological and medical devices, treatments, and applications generate large volumes of data in the form of images, sounds, text, graphs, and signals, giving rise to the concept of big data. DL is a developing trend in the wake of big data for data representation and analysis. It is a type of machine learning algorithm that cascades many hidden layers of similar function into a network, giving it the capability to extract meaning from medical big data. Mobile health (mHealth) is a key driver of the current transformation toward personalized health care delivery, and DL can provide the analysis for the deluge of data generated by mHealth apps. This paper reviews the fundamentals of DL methods and presents a general view of trends in DL by capturing literature from PubMed and the Institute of Electrical and Electronics Engineers database publications that implement different variants of DL. We highlight implementations of DL in health care, which we categorize into biological systems, electronic health records, medical images, and physiological signals. In addition, we discuss some inherent challenges of DL in the biomedical and health domains, as well as prospective research directions that focus on improving health management by promoting the application of physiological signals and modern internet technology.
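The abstract's working definition of DL, a network built by cascading several hidden layers of similar function, can be illustrated with a minimal forward pass. The layer sizes, activation choice, and NumPy implementation below are illustrative assumptions, not taken from the cited review:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def make_layer(n_in, n_out):
    # Small random weights; a real model would learn these by backpropagation.
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# Three cascaded hidden layers of the same functional form, then an output layer.
hidden_layers = [make_layer(16, 32), make_layer(32, 32), make_layer(32, 32)]
w_out, b_out = make_layer(32, 2)

def forward(x):
    h = x
    for w, b in hidden_layers:
        h = relu(h @ w + b)      # the same transformation repeated in depth
    return h @ w_out + b_out     # linear output (e.g., two-class logits)

batch = rng.normal(size=(4, 16))  # 4 examples, 16 input features
logits = forward(batch)
print(logits.shape)               # (4, 2)
```

Depth here simply means more such layers in sequence; each added layer lets the network compose the features extracted by the previous one.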
Affiliation(s)
- Igbe Tobore
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Graduate University, Chinese Academy of Sciences, Beijing, China
- Jingzhen Li
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Liu Yuhang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yousef Al-Handarish
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Abhishek Kandwal
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zedong Nie
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Lei Wang
- Center for Medical Robotics and Minimally Invasive Surgical Devices, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
49
Saeed HS, Stivaros SM, Saeed SR. The potential for machine learning to improve precision medicine in cochlear implantation. Cochlear Implants Int 2019; 20:229-230. [PMID: 31210097 DOI: 10.1080/14670100.2019.1631520] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 10/26/2022]
Affiliation(s)
- H S Saeed
- Department of Paediatric ENT Surgery, Royal Manchester Children's Hospital, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Sciences Centre, Manchester, UK
- S M Stivaros
- Academic Unit of Paediatric Radiology, Royal Manchester Children's Hospital, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Sciences Centre, Manchester, UK; Division of Informatics, Imaging & Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- S R Saeed
- Department of ENT Surgery, Royal National Throat, Nose and Ear Hospital, University College London Hospitals, London, UK
50
Cai Y, Chen S, Chen Y, Li J, Wang CD, Zhao F, Dang CP, Liang J, He N, Liang M, Zheng Y. Altered Resting-State EEG Microstate in Idiopathic Sudden Sensorineural Hearing Loss Patients With Tinnitus. Front Neurosci 2019; 13:443. [PMID: 31133786 PMCID: PMC6514099 DOI: 10.3389/fnins.2019.00443] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Received: 02/15/2019] [Accepted: 04/17/2019] [Indexed: 12/26/2022]
Abstract
To clarify central reorganization in the acute period of hearing loss, this study explored aberrant dynamics of electroencephalogram (EEG) microstates and their correlations with the features of idiopathic sudden sensorineural hearing loss (ISSNHL) and tinnitus. We used high-density EEG with 128 channels to investigate alterations in microstate parameters between 25 ISSNHL patients with tinnitus and 27 healthy subjects, and also explored the associations between microstate characteristics and tinnitus features. Microstates were clustered into four categories. In ISSNHL patients with tinnitus, microstate A showed a reduced presence in amplitude, coverage, lifespan, and frequency, while microstate B showed an increased presence in frequency. Syntax analysis revealed a reduced transition from microstate C to microstate A and an increased transition from microstate C to microstate B in ISSNHL subjects. In addition, significant negative correlations were found between Tinnitus Handicap Inventory (THI) scores and the frequency of microstate A, as well as between THI scores and the probability of transition from microstate D to microstate A, whereas THI was positively correlated with the transition probability from microstate D to microstate B. In summary, significant differences in the characteristics of resting-state EEG microstates were found between ISSNHL subjects with tinnitus and healthy controls. This study suggests that alterations of central neural networks occur in the acute stage of hearing loss and tinnitus, and that EEG microstates may be a useful tool for studying the whole-brain network in ISSNHL patients.
Affiliation(s)
- Yuexin Cai
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Suijun Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Yanhong Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Jiahong Li
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Chang-Dong Wang
- School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China
- Fei Zhao
- Department of Speech Language Therapy and Hearing Science, Cardiff Metropolitan University, Cardiff, United Kingdom; Department of Hearing and Speech Science, Xinhua College, Sun Yat-sen University, Guangzhou, China
- Cai-Ping Dang
- Affiliated Brain Hospital of Guangzhou Medical University, Guangzhou, China; Department of Psychology, Guangzhou Medical University, Guangzhou, China
- Jianheng Liang
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
- Nannan He
- College of Mathematics and Informatics, South China Agricultural University, Guangzhou, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Institute of Hearing and Speech-Language Science, Sun Yat-sen University, Guangzhou, China