1. Andéol G, Paraouty N, Giraudet F, Wallaert N, Isnard V, Moulin A, Suied C. Predictors of Speech-in-Noise Understanding in a Population of Occupationally Noise-Exposed Individuals. Biology 2024;13:416. PMID: 38927296; PMCID: PMC11200776; DOI: 10.3390/biology13060416.
Abstract
Understanding speech in noise is particularly difficult for individuals occupationally exposed to noise, owing to a mix of noise-induced auditory lesions and the energetic masking of speech signals. For years, monitoring conventional audiometric thresholds has been the usual method to check and preserve auditory function. Recently, suprathreshold deficits, notably difficulties in understanding speech in noise, have highlighted the need for new monitoring tools. The present study aims to identify the variables that best predict speech-in-noise understanding, in order to suggest a new method of hearing-status monitoring. Physiological (distortion-product otoacoustic emissions, electrocochleography) and behavioral (amplitude- and frequency-modulation detection thresholds, conventional and extended high-frequency audiometric thresholds) variables were collected in a population with relatively homogeneous occupational noise exposure. These variables were used as predictors in a statistical model (random forest) to predict the scores of three different speech-in-noise tests and a self-report of speech-in-noise ability. The extended high-frequency threshold emerged as the best predictor and is therefore an interesting candidate for a new way of monitoring noise-exposed professionals.
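A minimal scikit-learn sketch of the random-forest approach described above, on synthetic data; the feature names (ehf_threshold, dpoae, am_detection, fm_detection) are illustrative stand-ins for the study's predictors, not its actual variables:

```python
# Rank audiological predictors of a speech-in-noise (SiN) score with a
# random forest. Data are synthetic; the first simulated feature is
# constructed to dominate the outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))  # columns: EHF threshold, DPOAE, AM detection, FM detection
y = 2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=n)  # simulated SiN score

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
names = ["ehf_threshold", "dpoae", "am_detection", "fm_detection"]
ranked = sorted(zip(names, rf.feature_importances_), key=lambda t: -t[1])
best = ranked[0][0]  # the dominant simulated predictor
```

Because the simulated outcome loads most heavily on the first feature, the forest's impurity-based importances rank it first, mirroring how the study singled out the extended high-frequency threshold among its candidate predictors.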
Affiliation(s)
- Guillaume Andéol
- Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny sur Orge, France
- Nihaad Paraouty
- iAudiogram—My Medical Assistant SAS, 51100 Reims, France
- Fabrice Giraudet
- Department of Neurosensory Biophysics, INSERM U1107 NEURO-DOL, School of Medicine, Université Clermont Auvergne, 63000 Clermont-Ferrand, France
- Nicolas Wallaert
- iAudiogram—My Medical Assistant SAS, 51100 Reims, France
- Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d’Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), 75005 Paris, France
- Department of Otorhinolaryngology-Head and Neck Surgery, Rennes University Hospital, 35000 Rennes, France
- Vincent Isnard
- Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny sur Orge, France
- Annie Moulin
- Centre de Recherche en Neurosciences de Lyon, CRNL Inserm U1028—CNRS UMR5292—UCBLyon1, Perception Attention Memory Team, Bâtiment 452 B, 95 Bd Pinel, 69675 Bron Cedex, France
- Clara Suied
- Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny sur Orge, France
2. Sadegh-Zadeh SA, Soleimani Mamalo A, Kavianpour K, Atashbar H, Heidari E, Hajizadeh R, Roshani AS, Habibzadeh S, Saadat S, Behmanesh M, Saadat M, Gargari SS. Artificial intelligence approaches for tinnitus diagnosis: leveraging high-frequency audiometry data for enhanced clinical predictions. Front Artif Intell 2024;7:1381455. PMID: 38774833; PMCID: PMC11106786; DOI: 10.3389/frai.2024.1381455.
Abstract
This research investigates the application of machine learning to improve the diagnosis of tinnitus using high-frequency audiometry data. A Logistic Regression (LR) model was developed alongside an Artificial Neural Network (ANN) and various baseline classifiers to identify the most effective approach for classifying tinnitus presence. The methodology encompassed data preprocessing, feature extraction focused on point detection, and rigorous model evaluation through performance metrics including accuracy, Area Under the ROC Curve (AUC), precision, recall, and F1 scores. The main findings reveal that the LR model, supported by the ANN, significantly outperformed other machine learning models, achieving an accuracy of 94.06%, an AUC of 97.06%, and high precision and recall scores. These results demonstrate the efficacy of the LR model and ANN in accurately diagnosing tinnitus, surpassing traditional diagnostic methods that rely on subjective assessments. The implications of this research are substantial for clinical audiology, suggesting that machine learning, particularly advanced models like ANNs, can provide a more objective and quantifiable tool for tinnitus diagnosis, especially when utilizing high-frequency audiometry data not typically assessed in standard hearing tests. The study underscores the potential for machine learning to facilitate earlier and more accurate tinnitus detection, which could lead to improved patient outcomes. Future work should aim to expand the dataset diversity, explore a broader range of algorithms, and conduct clinical trials to validate the models' practical utility. The research highlights the transformative potential of machine learning, including the LR model and ANN, in audiology, paving the way for advancements in the diagnosis and treatment of tinnitus.
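A scikit-learn sketch of the logistic-regression classifier and the evaluation metrics named above (accuracy, AUC); the "high-frequency audiometry" features here are synthetic stand-ins, not the study's data:

```python
# Logistic regression for tinnitus presence from simulated high-frequency
# audiometry features, scored with accuracy and ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 6))  # thresholds at six high frequencies (standardized, illustrative)
logit = X @ np.array([1.5, 1.0, 0.5, 0.0, 0.0, 0.0])
y = (logit + rng.normal(size=n) > 0).astype(int)  # 1 = tinnitus present (simulated label)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
clf = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```

The same held-out metrics (precision, recall, F1) can be read from `sklearn.metrics.classification_report`; the point of the sketch is only the model-plus-metrics pipeline, not the reported 94.06%/97.06% figures.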
Affiliation(s)
- Seyed-Ali Sadegh-Zadeh
- Department of Computing, School of Digital, Technologies and Arts, Staffordshire University, Stoke-on-Trent, United Kingdom
- Kaveh Kavianpour
- Department of Computer Science and Mathematics, Amirkabir University of Technology, Tehran, Iran
- Hamed Atashbar
- Department of Computer Science and Mathematics, Amirkabir University of Technology, Tehran, Iran
- Elham Heidari
- Department of Computer Science and Mathematics, Amirkabir University of Technology, Tehran, Iran
- Reza Hajizadeh
- Department of Cardiology, School of Medicine, Urmia University of Medical Sciences, Urmia, Iran
- Amir Sam Roshani
- Department of Otorhinolaryngology - Head and Neck Surgery, Imam Khomeini University Hospital, Urmia, Iran
- Shima Habibzadeh
- Department of Audiology, Tabriz University of Medical Sciences, Tabriz, Iran
- Shayan Saadat
- Hull York Medical School, University of York, York, United Kingdom
- Majid Behmanesh
- Student Research Committee, Urmia University of Medical Sciences, Urmia, Iran
- Mozafar Saadat
- Department of Mechanical Engineering, School of Engineering, University of Birmingham, Birmingham, United Kingdom
3. Ghasemzadeh H, Hillman RE, Mehta DD. Toward Generalizable Machine Learning Models in Speech, Language, and Hearing Sciences: Estimating Sample Size and Reducing Overfitting. J Speech Lang Hear Res 2024;67:753-781. PMID: 38386017; PMCID: PMC11005022; DOI: 10.1044/2023_jslhr-23-00273.
Abstract
PURPOSE Many studies using machine learning (ML) in speech, language, and hearing sciences rely on cross-validation with a single data split. This study's first purpose is to provide quantitative evidence that would incentivize researchers to instead use the more robust method of nested k-fold cross-validation. The second purpose is to present methods and MATLAB code to perform power analysis for ML-based analysis during the design of a study. METHOD First, the significant impact of different cross-validation schemes on ML outcomes was demonstrated using real-world clinical data. Then, Monte Carlo simulations were used to quantify the interactions among the cross-validation method, the discriminative power of features, the dimensionality of the feature space, the dimensionality of the model, and the sample size. Four cross-validation methods (single holdout, 10-fold, train-validation-test, and nested 10-fold) were compared based on the statistical power and confidence of the resulting ML models. Distributions of the null and alternative hypotheses were used to determine the minimum sample size required to obtain a statistically significant outcome (5% significance) with 80% power. Statistical confidence of the model was defined as the probability of the correct features being selected for inclusion in the final model. RESULTS ML models generated with the single-holdout method had very low statistical power and confidence, leading to overestimation of classification accuracy. Conversely, nested 10-fold cross-validation yielded the highest statistical confidence and power while also providing an unbiased estimate of accuracy. The required sample size with the single-holdout method could be 50% higher than what would be needed if nested k-fold cross-validation were used. Statistical confidence in the model based on nested k-fold cross-validation was as much as four times higher than the confidence obtained with the single-holdout-based model. A computational model, MATLAB code, and lookup tables are provided to assist researchers in estimating the minimum sample size needed during study design. CONCLUSION The adoption of nested k-fold cross-validation is critical for unbiased and robust ML studies in the speech, language, and hearing sciences. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25237045.
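The authors provide MATLAB code; the nested scheme they recommend can also be sketched with scikit-learn. In this illustration (synthetic data; the SVC hyperparameter grid is an arbitrary choice), inner folds tune the model and outer folds estimate accuracy without reusing the tuning data:

```python
# Nested 10-fold cross-validation: GridSearchCV (inner loop, model
# selection) wrapped in cross_val_score (outer loop, unbiased estimate).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, n_informative=3, random_state=0)

inner = KFold(n_splits=10, shuffle=True, random_state=0)
outer = KFold(n_splits=10, shuffle=True, random_state=1)
tuned = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=inner)  # tuning never sees outer test folds
scores = cross_val_score(tuned, X, y, cv=outer)
nested_acc = scores.mean()
```

A single-holdout comparison would replace the outer loop with one `train_test_split`; the paper's point is that the nested estimate is the one that does not leak selection bias into the reported accuracy.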
Affiliation(s)
- Hamzeh Ghasemzadeh
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston, MA
- Department of Surgery, Harvard Medical School, Boston, MA
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI
- Robert E. Hillman
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston, MA
- Department of Surgery, Harvard Medical School, Boston, MA
- Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA
- MGH Institute of Health Professions, Boston, MA
- Daryush D. Mehta
- Center for Laryngeal Surgery and Voice Rehabilitation, Massachusetts General Hospital, Boston, MA
- Department of Surgery, Harvard Medical School, Boston, MA
- Speech and Hearing Bioscience and Technology, Division of Medical Sciences, Harvard Medical School, Boston, MA
- MGH Institute of Health Professions, Boston, MA
4. Ahmed MAO, Satar YA, Darwish EM, Zanaty EA. Synergistic integration of Multi-View Brain Networks and advanced machine learning techniques for auditory disorders diagnostics. Brain Inform 2024;11:3. PMID: 38219249; PMCID: PMC10788326; DOI: 10.1186/s40708-023-00214-7.
Abstract
In the field of audiology, accurate discrimination of auditory impairments remains a formidable challenge. Conditions such as deafness and tinnitus exert a substantial impact on patients' overall quality of life, underscoring the urgent need for precise and efficient classification methods. This study introduces an approach using Multi-View Brain Network data acquired from three distinct cohorts: 51 deaf patients, 54 tinnitus patients, and 42 normal controls. Electroencephalogram (EEG) recordings were collected from 70 electrodes grouped into 10 regions of interest (ROIs), and these data were integrated with machine learning algorithms. To tackle the inherently high-dimensional nature of brain connectivity data, principal component analysis (PCA) was employed for feature reduction, enhancing interpretability. The approach was evaluated with ensemble learning techniques, including Random Forest, Extra Trees, Gradient Boosting, and CatBoost, and model performance was scrutinized across a comprehensive set of metrics: cross-validation accuracy (CVA), precision, recall, F1-score, Cohen's kappa, and the Matthews correlation coefficient (MCC). The proposed models are statistically significant and effectively diagnose auditory disorders, contributing to early detection and personalized treatment and thereby enhancing patient outcomes and quality of life. Notably, they exhibit reliability and robustness, characterized by high kappa and MCC values. This research represents a significant advance at the intersection of audiology, neuroimaging, and machine learning, with transformative implications for clinical practice and care.
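A sketch of the pipeline named above (PCA for feature reduction, then an ensemble classifier, scored by cross-validation accuracy). The three classes stand in for deaf/tinnitus/control, and the "connectivity features" are synthetic Gaussians, not EEG-derived data:

```python
# PCA + Random Forest pipeline on high-dimensional synthetic
# "connectivity" features, evaluated with 5-fold cross-validation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_per, n_feat = 50, 100
centers = rng.normal(scale=2.0, size=(3, n_feat))  # one center per simulated cohort
X = np.vstack([centers[k] + rng.normal(size=(n_per, n_feat)) for k in range(3)])
y = np.repeat([0, 1, 2], n_per)  # labels illustrative: 0=deaf, 1=tinnitus, 2=control

model = make_pipeline(PCA(n_components=10), RandomForestClassifier(random_state=0))
cva = cross_val_score(model, X, y, cv=5).mean()  # cross-validation accuracy (CVA)
```

Swapping `RandomForestClassifier` for Extra Trees or gradient boosting changes only the final pipeline step, which is how the study's ensemble comparison would be organized.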
Affiliation(s)
- Muhammad Atta Othman Ahmed
- Department of Computer Science, Faculty of Computers and Information, Luxor University, 85951, Luxor, Egypt
- Yasser Abdel Satar
- Mathematics Department, Faculty of Science, Sohag University, 82511, Sohag, Egypt
- Eed M Darwish
- Physics Department, College of Science, Taibah University, Medina, 41411, Saudi Arabia
- Physics Department, Faculty of Science, Sohag University, 82524, Sohag, Egypt
- Elnomery A Zanaty
- Department of Computer Science, Faculty of Computers and Artificial Intelligence, Sohag University, 82511, Sohag, Egypt
5. Balan JR, Rodrigo H, Saxena U, Mishra SK. Explainable machine learning reveals the relationship between hearing thresholds and speech-in-noise recognition in listeners with normal audiograms. J Acoust Soc Am 2023;154:2278-2288. PMID: 37823779; DOI: 10.1121/10.0021303.
Abstract
Some individuals complain of listening-in-noise difficulty despite having a normal audiogram. In this study, machine learning is applied to examine the extent to which hearing thresholds can predict speech-in-noise recognition among normal-hearing individuals. The specific goals were to (1) compare the performance of one standard model (GAM, generalized additive model) and four machine learning models (ANN, artificial neural network; DNN, deep neural network; RF, random forest; XGBoost, eXtreme gradient boosting), and (2) examine the relative contribution of individual audiometric frequencies and demographic variables in predicting speech-in-noise recognition. Archival data included thresholds (0.25-16 kHz) and speech recognition thresholds (SRTs) from listeners with clinically normal audiograms (n = 764 participants, or 1528 ears; ages 4-38 years). Among the machine learning models, XGBoost performed significantly better than the other methods (mean absolute error, MAE = 1.62 dB). ANN and RF yielded similar performance (MAE = 1.68 and 1.67 dB, respectively), whereas, surprisingly, DNN performed relatively poorly (MAE = 1.94 dB). The MAE for GAM was 1.61 dB. SHapley Additive exPlanations revealed that age, followed by the thresholds at 16 kHz and 12.5 kHz, in decreasing order of importance, contributed to the SRT. These results suggest the importance of hearing in the extended high frequencies for predicting speech-in-noise recognition in listeners with normal audiograms.
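A sketch of the gradient-boosted prediction plus feature-attribution step described above. scikit-learn's `GradientBoostingRegressor` and permutation importance stand in here for the study's XGBoost and SHAP (the attribution technique is deliberately swapped to keep the example dependency-free); data and column meanings are synthetic:

```python
# Gradient-boosted regression of a simulated speech recognition threshold
# (SRT), followed by permutation importance to attribute the prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 5))  # columns (illustrative): age, 16 kHz, 12.5 kHz, 8 kHz, 4 kHz
srt = 1.5 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=n)

gbm = GradientBoostingRegressor(random_state=0).fit(X, srt)
imp = permutation_importance(gbm, X, srt, random_state=0).importances_mean
top = int(np.argmax(imp))  # expect the simulated "age" column (index 0) to dominate
```

SHAP adds per-sample attributions on top of this kind of global ranking; the structure (fit a boosted model, then explain it) is the same.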
Affiliation(s)
- Jithin Raj Balan
- Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
- Hansapani Rodrigo
- School of Mathematical and Statistical Sciences, The University of Texas Rio Grande Valley, Edinburg, Texas 78539, USA
- Udit Saxena
- Department of Audiology and Speech-Language Pathology, Gujarat Medical Education and Research Society, Medical College and Hospital, Ahmedabad, 380060, India
- Srikanta K Mishra
- Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas 78712, USA
6. Lenatti M, Paglialonga A, Orani V, Ferretti M, Mongelli M. Characterization of Synthetic Health Data Using Rule-Based Artificial Intelligence Models. IEEE J Biomed Health Inform 2023;27:3760-3769. PMID: 37018683; DOI: 10.1109/jbhi.2023.3236722.
Abstract
The aim of this study is to apply and characterize eXplainable AI (XAI) to assess the quality of synthetic health data generated using a data augmentation algorithm. In this exploratory study, several synthetic datasets are generated using various configurations of a conditional Generative Adversarial Network (GAN) from a set of 156 observations related to adult hearing screening. A rule-based native XAI algorithm, the Logic Learning Machine, is used in combination with conventional utility metrics. The classification performance in different conditions is assessed: models trained and tested on synthetic data, models trained on synthetic data and tested on real data, and models trained on real data and tested on synthetic data. The rules extracted from real and synthetic data are then compared using a rule similarity metric. The results indicate that XAI may be used to assess the quality of synthetic data by (i) the analysis of classification performance and (ii) the analysis of the rules extracted on real and synthetic data (number, covering, structure, cut-off values, and similarity). These results suggest that XAI can be used in an original way to assess synthetic health data and extract knowledge about the mechanisms underlying the generated data.
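A minimal sketch of the "train on synthetic, test on real" check described above, with rule extraction. Two deliberate simplifications, both stand-ins for the paper's components: a class-conditional Gaussian resampler replaces the conditional GAN, and scikit-learn decision-tree rules replace the Logic Learning Machine:

```python
# Train-synthetic/test-real (TSTR) evaluation plus rule extraction from a
# shallow tree. "Real" data are simulated with one informative feature.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
X_real = rng.normal(size=(156, 3))
y_real = (X_real[:, 0] > 0).astype(int)  # ground truth depends on feature 0 only

# Minimal synthetic generator: resample each class from its fitted Gaussian.
X_syn = np.vstack([rng.normal(loc=X_real[y_real == k].mean(0),
                              scale=X_real[y_real == k].std(0),
                              size=(156, 3)) for k in (0, 1)])
y_syn = np.repeat([0, 1], 156)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_syn, y_syn)
tstr_acc = (tree.predict(X_real) == y_real).mean()  # train-synthetic, test-real accuracy
rules = export_text(tree)  # rules to compare against rules learned on real data
```

Comparing `rules` against the rules of a tree trained on `X_real` (split features, cut-off values, covering) is the kind of rule-similarity check the paper performs with its native rule-based XAI model.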
7. Ma T, Wu Q, Jiang L, Zeng X, Wang Y, Yuan Y, Wang B, Zhang T. Artificial Intelligence and Machine (Deep) Learning in Otorhinolaryngology: A Bibliometric Analysis Based on VOSviewer and CiteSpace. Ear Nose Throat J 2023:1455613231185074. PMID: 37515527; DOI: 10.1177/01455613231185074.
Abstract
BACKGROUND Otorhinolaryngology (ENT) diseases are well suited to artificial intelligence (AI)-based interpretation. The use of AI, particularly AI based on deep learning (DL), in the treatment of human diseases is becoming increasingly common. However, few bibliometric analyses have systematically studied this field. OBJECTIVE The objective of this study was to visualize the research hot spots and trends of AI and DL in ENT diseases through bibliometric analysis, to help researchers understand the future development of basic and clinical research. METHODS In all, 232 articles and reviews were retrieved from the Web of Science Core Collection. Using CiteSpace and VOSviewer software, countries, institutions, authors, references, and keywords in the field were visualized and examined. RESULTS The papers came from 44 nations and 498 institutions, with China and the United States leading the way. ENT diseases commonly addressed with AI include otosclerosis, otitis media, nasal polyps, and sinusitis. Early research focused on the analysis of hearing and articulation disorders; recent work has focused mainly on the diagnosis, localization, and grading of diseases. CONCLUSIONS The analysis shows the periodic hot spots and development directions of AI and DL applications in ENT diseases over time. The diagnosis and prognosis of otolaryngology diseases and the analysis of otolaryngology endoscopic images are the focus of current research and the likely direction of future work.
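The keyword maps that tools like VOSviewer draw rest on co-occurrence counting across records. A stdlib toy version (the keyword sets are invented, not drawn from the study's corpus):

```python
# Count keyword co-occurrences across bibliographic records; the most
# frequent pair is what a co-occurrence map renders as the strongest edge.
from collections import Counter
from itertools import combinations

records = [
    {"deep learning", "otitis media", "diagnosis"},
    {"deep learning", "nasal polyps", "diagnosis"},
    {"machine learning", "otosclerosis"},
]
pairs = Counter(frozenset(p) for rec in records for p in combinations(sorted(rec), 2))
strongest = pairs.most_common(1)[0]  # the most frequent keyword pair and its count
```

Clustering this pair-weight graph is then what yields the thematic "hot spot" groupings the study reports.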
Affiliation(s)
- Tianyu Ma
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Qilong Wu
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Li Jiang
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Xiaoyun Zeng
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yuyao Wang
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Yi Yuan
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Bingxuan Wang
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
- Tianhong Zhang
- Department of Otorhinolaryngology Head and Neck Surgery, The First Affiliated Hospital of Harbin Medical University, Harbin, China
8. Guida F, Lenatti M, Keshavjee K, Khatami A, Guergachi A, Paglialonga A. Characterization of Inclination Analysis for Predicting Onset of Heart Failure from Primary Care Electronic Medical Records. Sensors (Basel) 2023;23:4228. PMID: 37177432; PMCID: PMC10181219; DOI: 10.3390/s23094228.
Abstract
The aim of this study is to characterize the performance of an inclination analysis for predicting the onset of heart failure (HF) from routinely collected clinical biomarkers extracted from primary care electronic medical records. A balanced dataset of 698 patients (with/without HF), including a minimum of five longitudinal measures of nine biomarkers (body mass index, diastolic and systolic blood pressure, fasting glucose, glycated hemoglobin, low-density and high-density lipoproteins, total cholesterol, and triglycerides) is used. The proposed algorithm achieves an accuracy of 0.89 (sensitivity of 0.89, specificity of 0.90) to predict the inclination of biomarkers (i.e., their trend towards a 'survival' or 'collapse' as defined by an inclination analysis) on a labeled, balanced dataset of 40 patients. Decision trees trained on the predicted inclination of biomarkers have significantly higher recall (0.69 vs. 0.53) and significantly higher negative predictive value (0.60 vs. 0.55) than those trained on the average values computed from the measures of biomarkers available before the onset of the disease, suggesting that an inclination analysis can help identify the onset of HF in the primary care patient population from routinely available clinical data. This exploratory study provides the basis for further investigations of inclination analyses to identify at-risk patients and generate preventive measures (i.e., personalized recommendations to reverse the trend of biomarkers towards collapse).
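The core idea (labeling a biomarker's trend from its longitudinal measures) can be sketched in plain Python. The least-squares slope, the zero threshold, and the mapping of a rising value to "collapse" are illustrative assumptions, not the paper's exact algorithm:

```python
# Label a biomarker's "inclination" from longitudinal measures via an
# ordinary least-squares slope against the visit index.
def slope(values):
    # OLS slope of values against x = 0, 1, ..., n-1.
    n = len(values)
    mx, my = (n - 1) / 2, sum(values) / n
    num = sum((x - mx) * (v - my) for x, v in enumerate(values))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

def inclination(values, threshold=0.0):
    # Assumption: a rising biomarker trends toward "collapse".
    return "collapse" if slope(values) > threshold else "survival"

hba1c = [5.6, 5.8, 6.1, 6.4, 6.9]  # five longitudinal measures, rising
label = inclination(hba1c)          # → "collapse"
```

In the paper's setup, such per-biomarker labels (rather than averaged raw values) become the features fed to the downstream decision trees.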
Affiliation(s)
- Federica Guida
- Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, 20133 Milan, Italy
- Marta Lenatti
- Cnr-Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni (CNR-IEIIT), 20133 Milan, Italy
- Karim Keshavjee
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
- Alireza Khatami
- Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, ON M5T 3M6, Canada
- Aziz Guergachi
- Ted Rogers School of Management, Toronto Metropolitan University, Toronto, ON M5G 2C3, Canada
- Ted Rogers School of Information Technology Management, Toronto Metropolitan University, Toronto, ON M5G 2C3, Canada
- Department of Mathematics and Statistics, York University, Toronto, ON M3J 1P3, Canada
- Alessia Paglialonga
- Cnr-Istituto di Elettronica e di Ingegneria dell'Informazione e delle Telecomunicazioni (CNR-IEIIT), 20133 Milan, Italy
9. Shafiro V, Coco L, Preminger JE, Saunders GH. Introduction for the 5th International Meeting on Internet and Audiology Special Issue of the American Journal of Audiology. Am J Audiol 2022;31:845-848. PMID: 36108277; PMCID: PMC9886160; DOI: 10.1044/2022_aja-22-00125.
Affiliation(s)
- Valeriy Shafiro
- Department of Communication Disorders & Sciences, Rush University, Chicago, IL
- Laura Coco
- Oregon Health & Science University, Portland, OR
- School of Speech, Language, and Hearing Sciences, San Diego State University, CA
- Jill E. Preminger
- School of Speech, Language, and Hearing Sciences, San Diego State University, CA
- Gabrielle H. Saunders
- Manchester Centre for Audiology and Deafness, University of Manchester, United Kingdom
10. Iliadou E, Su Q, Kikidis D, Bibas T, Kloukinas C. Profiling hearing aid users through big data explainable artificial intelligence techniques. Front Neurol 2022;13:933940. PMID: 36090867; PMCID: PMC9459083; DOI: 10.3389/fneur.2022.933940.
Abstract
Debilitating hearing loss (HL) affects ~6% of the human population, yet only 20% of people in need of a hearing assistive device will eventually seek and acquire one, and the number who are satisfied with their Hearing Aids (HAids) and continue using them in the long term is even lower. Understanding the personal, behavioral, environmental, and other factors that correlate with optimal HAid fitting and with users' experience of HAids is a significant step toward improving patient satisfaction and quality of life while reducing the societal and financial burden. In SMART BEAR we address this need by making use of the capacity of modern HAids to provide dynamic logging of their operation and by combining this information with a large amount of information about the medical, environmental, and social context of each HAid user. We are studying hearing rehabilitation through 12-month continuous monitoring of HL patients, collecting data such as participants' demographics, audiometric and medical data, cognitive and mental status, and habits and preferences, through a set of medical devices and wearables as well as through face-to-face and remote clinical assessments and fitting/fine-tuning sessions. Descriptive, AI-based analysis of the relationships between these heterogeneous data and HL-related parameters will help clinical researchers better understand the overall health profiles of HL patients and identify patterns or relations that may prove essential for future clinical trials. In addition, patients' future state and behavior (e.g., HAid satisfaction and HAid usage) will be predicted with time-dependent machine learning models to help clinical researchers decide on the nature of interventions. Explainable Artificial Intelligence (XAI) techniques will be leveraged to better understand the factors that play a significant role in the success of a hearing rehabilitation program, constructing patient profiles. This is a conceptual paper describing the upcoming data collection process and the proposed framework for building a comprehensive profile of patients with HL in the context of the EU-funded SMART BEAR project. Such patient profiles can be invaluable in HL treatment: they can help identify the characteristics that make patients more prone to dropping out and abandoning their HAids, to using their HAids for too little of the day, or to being less satisfied with their HAid experience. They can also help reduce the number of remote sessions needed with an audiologist for counseling or HAid fine-tuning, and the number of manual changes of the HAid program (an indication of poor sound quality and poor adaptation of the HAid configuration to patients' real needs and daily challenges), leading to reduced healthcare costs.
Affiliation(s)
- Eleftheria Iliadou
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Qiqi Su
- Department of Computer Science, University of London, London, United Kingdom
- Dimitrios Kikidis
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Thanos Bibas
- 1st Department of Otorhinolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens Medical School, Athens, Greece
- Christos Kloukinas
- Department of Computer Science, University of London, London, United Kingdom