1. Dadras AA, Aichinger P. Deep Learning-Based Detection of Glottis Segmentation Failures. Bioengineering (Basel) 2024; 11:443. PMID: 38790311; PMCID: PMC11118004; DOI: 10.3390/bioengineering11050443.
Abstract
Medical image segmentation is crucial for clinical applications, but challenges persist due to noise and variability. In particular, accurate glottis segmentation from high-speed videos is vital for voice research and diagnostics. Manual searching for failed segmentations is labor-intensive, prompting interest in automated methods. This paper proposes the first deep learning approach for detecting faulty glottis segmentations. For this purpose, faulty segmentations are generated by applying both a poorly performing neural network and perturbation procedures to three public datasets. Heavy data augmentations are added to the input until the neural network's performance decreases to the desired mean intersection over union (IoU). Likewise, the perturbation procedure involves a series of image transformations to the original ground truth segmentations in a randomized manner. These data are then used to train a ResNet18 neural network with custom loss functions to predict the IoU scores of faulty segmentations. This value is then thresholded with a fixed IoU of 0.6 for classification, thereby achieving 88.27% classification accuracy with 91.54% specificity. Experimental results demonstrate the effectiveness of the presented approach. Contributions include: (i) a knowledge-driven perturbation procedure, (ii) a deep learning framework for scoring and detecting faulty glottis segmentations, and (iii) an evaluation of custom loss functions.
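The detector described above predicts an IoU score for each segmentation and thresholds it at a fixed IoU of 0.6 to decide whether the segmentation is faulty. As an illustration of that scoring-and-thresholding step only (a minimal sketch, not the authors' code; the toy masks and function names are invented here):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return float(np.logical_and(pred, truth).sum() / union)

def is_faulty(predicted_iou: float, threshold: float = 0.6) -> bool:
    """Classify a segmentation as faulty when its (predicted) IoU falls below the threshold."""
    return predicted_iou < threshold

# Toy 4x4 masks: the prediction covers only half of the true glottal area.
truth = np.zeros((4, 4), dtype=bool); truth[1:3, 1:3] = True
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:2] = True
score = iou(pred, truth)           # intersection 2 / union 4 = 0.5
print(score, is_faulty(score))     # 0.5 True
```

In the paper the IoU is regressed by a ResNet18 from the image and mask; here it is computed directly from a known ground truth purely to make the decision rule concrete.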
Affiliation(s)
- Philipp Aichinger
- Speech and Hearing Science Lab, Division of Phoniatrics-Logopedics, Department of Otorhinolaryngology, Medical University of Vienna, Währinger Gürtel 18-20, 1090 Vienna, Austria
2. Darvish M, Kist AM. A Generative Method for a Laryngeal Biosignal. J Voice 2024:S0892-1997(24)00019-5. PMID: 38395653; DOI: 10.1016/j.jvoice.2024.01.016.
Abstract
The Glottal Area Waveform (GAW) is an important component of quantitative clinical voice assessment, providing valuable insights into vocal fold function. In this study, we introduce a novel method employing Variational Autoencoders (VAEs) to generate synthetic GAWs. Our approach enables the creation of synthetic GAWs that closely replicate real-world data, offering a versatile tool for researchers and clinicians. We elucidate the process of manipulating the VAE latent space using the Glottal Opening Vector (GlOVe), which allows precise control over the synthetic closure and opening of the vocal folds. Using the GlOVe, we generate synthetic laryngeal biosignals that accurately reflect vocal fold behavior, allowing for the emulation of realistic changes in glottal opening. This manipulation extends to the introduction of arbitrary oscillations in the vocal folds, closely resembling real vocal fold oscillations. A range of factor coefficient values enables the generation of diverse biosignals with varying frequencies and amplitudes. Our results demonstrate that this approach yields highly accurate laryngeal biosignals, with Normalized Mean Absolute Error values ranging from 9.6 × 10⁻³ to 1.20 × 10⁻² across the tested frequencies, alongside remarkable training effectiveness, reflected in reductions of up to approximately 89.52% in key loss components. The proposed method may have implications for downstream speech synthesis and phonetics research, offering potential for advanced and natural-sounding speech technologies.
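The latent-space manipulation described here, shifting a latent code along a "glottal opening" direction, can be sketched in a few lines. This is a hedged illustration only: the paper's GlOVe is derived from a trained VAE, whereas below the latent code and direction vector are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

latent_dim = 16
z = rng.normal(size=latent_dim)      # stand-in for the VAE latent code of one GAW
glove = rng.normal(size=latent_dim)
glove /= np.linalg.norm(glove)       # unit "glottal opening" direction in latent space

def traverse(z, direction, alpha):
    """Shift a latent code along the opening direction by factor alpha."""
    return z + alpha * direction

# Sweep alpha sinusoidally to emulate periodic opening/closing of the vocal folds.
alphas = np.sin(2 * np.pi * np.linspace(0, 1, 8, endpoint=False))
codes = np.stack([traverse(z, glove, a) for a in alphas])
print(codes.shape)  # (8, 16)
```

In the actual method, each shifted code would be passed through the trained VAE decoder to synthesize a GAW frame; that decoder is omitted here since its weights are not reproducible from the abstract.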
Affiliation(s)
- Mahdi Darvish
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andreas M Kist
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
3. Malinowski J, Pietruszewska W, Stawiski K, Kowalczyk M, Barańska M, Rycerz A, Niebudek-Bogusz E. High-Speed Videoendoscopy Enhances the Objective Assessment of Glottic Organic Lesions: A Case-Control Study with Multivariable Data-Mining Model Development. Cancers (Basel) 2023; 15:3716. PMID: 37509377; PMCID: PMC10378075; DOI: 10.3390/cancers15143716.
Abstract
The aim of the study was to utilize a quantitative assessment of the vibratory characteristics of the vocal folds in diagnosing benign and malignant lesions of the glottis using high-speed videolaryngoscopy (HSV). METHODS: A case-control study including 100 patients with unilateral vocal fold lesions compared to 38 normophonic subjects. Quantitative assessment with determination of vocal fold oscillation parameters was performed based on HSV kymography, and machine-learning predictive models were developed and validated. RESULTS: All calculated parameters differed significantly between healthy subjects and patients with organic lesions. The first predictive model, distinguishing patients with any organic lesion from healthy subjects, reached an area under the curve (AUC) of 0.983, with 89.3% accuracy, 97.0% sensitivity, and 71.4% specificity on the testing set. The second model, identifying malignancy among organic lesions, reached an AUC of 0.85, with 80.6% accuracy, 100% sensitivity, and 71.1% specificity on the training set. Frequency perturbation measures were important predictive factors for both models. CONCLUSIONS: The standard protocol for distinguishing between benign and malignant lesions continues to be clinical evaluation by an experienced ENT specialist, confirmed by histopathological examination. Our findings suggest that advanced machine-learning models, which consider the complex interactions present in HSV data, could indicate a heightened risk of malignancy. This technology could therefore prove pivotal in aiding early cancer detection, emphasizing the need for further investigation and validation.
Affiliation(s)
- Jakub Malinowski
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-419 Lodz, Poland
- Wioletta Pietruszewska
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-419 Lodz, Poland
- Konrad Stawiski
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA 02115, USA
- Department of Biostatistics and Translational Medicine, Medical University of Lodz, 90-419 Lodz, Poland
- Magdalena Kowalczyk
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-419 Lodz, Poland
- Magda Barańska
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-419 Lodz, Poland
- Aleksander Rycerz
- Department of Biostatistics and Translational Medicine, Medical University of Lodz, 90-419 Lodz, Poland
- Ewa Niebudek-Bogusz
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-419 Lodz, Poland
4. Villani FP, Paderno A, Fiorentino MC, Casella A, Piazza C, Moccia S. Classifying Vocal Folds Fixation from Endoscopic Videos with Machine Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082565; DOI: 10.1109/embc40787.2023.10340017.
Abstract
Vocal fold motility evaluation is paramount both in the assessment of functional deficits and in the accurate staging of neoplastic disease of the glottis. Diagnostic endoscopy, and in particular videoendoscopy, is currently the method through which motility is estimated. The clinical diagnosis, however, relies on the examination of the videoendoscopic frames, which is a subjective and examiner-dependent task. Hence, a more rigorous, objective, reliable, and repeatable method is needed. To support clinicians, this paper proposes a machine learning (ML) approach for vocal cord motility classification. From the endoscopic videos of 186 patients with either preserved vocal cord motility or fixation, a dataset of 558 images covering the two classes was extracted. Subsequently, a number of features were retrieved from the images and used to train and test four well-grounded ML classifiers. On the test set, the best performance was achieved using XGBoost, with precision = 0.82, recall = 0.82, F1 score = 0.82, and accuracy = 0.82. After comparing the most relevant ML models, we believe that this approach could provide precise and reliable support to clinical evaluation. Clinical Relevance: This research represents an important advancement in the state of the art of computer-assisted otolaryngology, toward an effective tool for motility assessment in clinical practice.
5. Kumar S P, B P. Optical Flow Glottovibrogram for the examination of vocal fold pathology. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083520; DOI: 10.1109/embc40787.2023.10340075.
Abstract
Laryngeal high-speed video endoscopy is performed to examine the cycles of vocal fold vibrations in detail and to diagnose voice abnormalities. Recent image processing techniques for visualizing vocal fold vibration include optical flow-based playbacks: optical flow kymograms (OFKG) for local dynamics, and the optical flow glottovibrogram (OFGVG) and glottal optical flow waveforms (GOFW) for global dynamics. Various optical flow algorithms have been developed in recent years. In this paper, we used four well-known algorithms (Horn-Schunck, Lucas-Kanade, Gunnar Farnebäck, and TV-L1) to construct the optical flow playbacks. The reliability of the proposed playbacks is examined by comparing them to traditional representations such as the phonovibrogram (PVG). Since the PVG and OFGVG are interconnected, a comparison study was carried out to better comprehend their interaction. Clinical Relevance: Both the OFGVG and PVG add to the precision of interpreting pathological conditions by offering information complementary to conventional spatiotemporal representations.
6. Kruse E, Döllinger M, Schützenberger A, Kist AM. GlottisNetV2: Temporal Glottal Midline Detection Using Deep Convolutional Neural Networks. IEEE J Transl Eng Health Med 2023; 11:137-144. PMID: 36816097; PMCID: PMC9933989; DOI: 10.1109/jtehm.2023.3237859.
Abstract
High-speed videoendoscopy is a major tool for quantitative laryngology. Glottis segmentation and glottal midline detection are crucial for computing vocal fold-specific, quantitative parameters. However, fully automated solutions show limited clinical applicability; unbiased glottal midline detection in particular remains a challenging problem. We developed a multitask deep neural network for glottis segmentation and glottal midline detection, using techniques from pose estimation to estimate the anterior and posterior points in endoscopy images. Neural networks were set up in TensorFlow/Keras and trained and evaluated on the BAGLS dataset. We found that a dual-decoder deep neural network termed GlottisNetV2 outperforms the previously proposed GlottisNet in terms of MAPE on the test dataset (1.85% vs. 6.3%) while converging faster. Various hyperparameter tunings allow fast and directed training. Using temporally variant data from an additional dataset designed for this task, we improve the median prediction accuracy from 2.1% to 1.76% when using 12 consecutive frames and additional temporal filtering. Temporal glottal midline detection using a dual-decoder architecture together with keypoint estimation thus allows accurate midline prediction. We show that our proposed architecture enables stable and reliable glottal midline predictions ready for clinical use and for the analysis of symmetry measures.
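The networks above are compared via the mean absolute percentage error (MAPE) of their keypoint predictions. A minimal sketch of that metric, with invented illustrative values rather than data from the paper:

```python
import numpy as np

def mape(pred: np.ndarray, truth: np.ndarray) -> float:
    """Mean absolute percentage error between predicted and true values."""
    return float(np.mean(np.abs((pred - truth) / truth)) * 100)

# Hypothetical keypoint coordinates (e.g. anterior/posterior point positions in pixels).
truth = np.array([100.0, 200.0, 50.0])
pred = np.array([98.0, 204.0, 51.0])
print(mape(pred, truth))  # each prediction is off by 2%, so MAPE ≈ 2.0
```

Lower MAPE means the predicted anterior/posterior points lie closer to the annotated ones, which is how the 1.85% vs. 6.3% comparison in the abstract should be read.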
Affiliation(s)
- Elina Kruse
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
- Michael Döllinger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91054 Erlangen, Germany
- Anne Schützenberger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91054 Erlangen, Germany
- Andreas M. Kist
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
7. Fast JF, Oltmann A, Spindeldreier S, Ptok M. Computational Analysis of the Droplet-Stimulated Laryngeal Adductor Reflex in High-Speed Sequences. Laryngoscope 2022; 132:2412-2419. PMID: 35133015; DOI: 10.1002/lary.30041.
Abstract
OBJECTIVES/HYPOTHESIS: The laryngeal adductor reflex (LAR) is an important protective mechanism of the airways. Its physiology is still not completely understood, and the available methods for LAR evaluation offer limited reproducibility and/or rely on subjective interpretation. A new approach, termed Microdroplet Impulse Testing of the LAR (MIT-LAR), was recently introduced, in which the LAR is elicited by a droplet while a laryngoscopic high-speed recording is acquired simultaneously. In the present work, image-processing algorithms for autonomous MIT-LAR sequence analysis were developed, allowing the automated approximation of kinematic LAR parameters in humans. STUDY DESIGN: Development and testing of computational methods. METHODS: Computational image processing enabled the autonomous estimation of the glottal area, the glottal angle, and the vocal fold edge distance in MIT-LAR sequences. A suitable analytical representation of these glottal parameters allowed the extraction of seven relevant LAR parameters. The obtained values were compared to the literature. RESULTS: A generalized logistic function showed the highest average goodness of fit among four different analytical approaches for each of the glottal parameters. Autonomous sequence analysis yielded bilateral LAR response latencies of (229 ± 116) ms and (182 ± 60) ms for cases of complete and incomplete glottal closure, respectively. The initial/average/maximum angular vocal fold adduction velocity was estimated at (157 ± 115)/(891 ± 516)/(929 ± 583) °/s for complete and (88 ± 53)/(421 ± 221)/(520 ± 238) °/s for incomplete glottal closure. CONCLUSION: The automated extraction of LAR parameters from laryngoscopic high-speed sequences can potentially increase the objectivity of optical LAR characterization and reduce the associated workload. The proposed methods may thus be helpful for future research on this vital reflex.
LEVEL OF EVIDENCE: NA.
Affiliation(s)
- Jacob Friedemann Fast
- Department of Phoniatrics and Pediatric Audiology, Hannover Medical School, Hanover, Germany; Institute of Mechatronic Systems, Leibniz Universität Hannover, Hanover, Germany
- Andra Oltmann
- Institute of Mechatronic Systems, Leibniz Universität Hannover, Hanover, Germany; Department of Modeling and Simulation, Fraunhofer Research Institution for Individualized and Cell-Based Medical Engineering, Lübeck, Germany
- Svenja Spindeldreier
- Institute of Mechatronic Systems, Leibniz Universität Hannover, Hanover, Germany
- Martin Ptok
- Department of Phoniatrics and Pediatric Audiology, Hannover Medical School, Hanover, Germany
8. Kaluza J, Niebudek-Bogusz E, Malinowski J, Strumillo P, Pietruszewska W. Assessment of Vocal Fold Stiffness by Means of High-Speed Videolaryngoscopy with Laryngotopography in Prediction of Early Glottic Malignancy: Preliminary Report. Cancers (Basel) 2022; 14:4697. PMID: 36230618; PMCID: PMC9563419; DOI: 10.3390/cancers14194697.
Abstract
Simple Summary: The method described in our manuscript can help to objectively assess the vibration of each vocal fold using laryngotopographic analysis of high-speed videoendoscopy (HSV) recordings. We developed image processing and analysis procedures to detect vocal fold regions in HSV films and to quantitatively analyze their shape and kinematics. We propose the Stiffness Asymmetry Index (SAI), which provides valuable information on the texture and kinematic properties of individual vocal fold tissues and can be important in the diagnosis of early glottis cancer. Our study showed that a low value of SAI indicated large, non-vibrating vocal fold areas, characteristic of infiltrative lesions such as invasive carcinoma. This clinical information can help to assess the depth of vocal fold invasion before direct histologic examination and to discriminate benign from malignant lesions. Abstract: One of the most important challenges in laryngological practice is the early diagnosis of laryngeal cancer. Detection of non-vibrating areas affected by neoplastic lesions of the vocal folds can be crucial in the recognition of early cancerous infiltration. Glottal pathologies associated with abnormal vibration patterns of the vocal folds can be detected and quantified using HSV, also in subjects with severe voice disorders, and analyzed with the aid of computer image processing. We present a method that enables the assessment of vocal fold pathologies with the use of HSV. The calculated laryngotopographic (LTG) maps of the vocal folds based on HSV allowed for a detailed characterization of vibration patterns and abnormalities in different regions of the vocal folds. We verified our methods with HSV recordings from 31 subjects with a normophonic voice and benign and malignant vocal fold lesions. We propose the novel SAI to differentiate between early glottis cancer (SAI = 0.65 ± 0.18) and benign vocal fold masses (SAI = 0.16 ± 0.13). Our results showed that these glottal pathologies might be noninvasively distinguished prior to histopathological examination. However, this needs to be confirmed by further research on larger groups of benign and malignant laryngeal lesions.
Affiliation(s)
- Justyna Kaluza
- Institute of Electronics, Lodz University of Technology, 90-924 Lodz, Poland
- Ewa Niebudek-Bogusz
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-001 Lodz, Poland
- Jakub Malinowski
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-001 Lodz, Poland
- Pawel Strumillo
- Institute of Electronics, Lodz University of Technology, 90-924 Lodz, Poland
- Wioletta Pietruszewska
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-001 Lodz, Poland
9. Long-term performance assessment of fully automatic biomedical glottis segmentation at the point of care. PLoS One 2022; 17:e0266989. PMID: 36129922; PMCID: PMC9491538; DOI: 10.1371/journal.pone.0266989.
Abstract
Deep Learning has a large impact on medical image analysis and lately has been adopted for clinical use at the point of care. However, there is only a small number of reports of long-term studies that show the performance of deep neural networks (DNNs) in such an environment. In this study, we measured the long-term performance of a clinically optimized DNN for laryngeal glottis segmentation. We have collected the video footage for two years from an AI-powered laryngeal high-speed videoendoscopy imaging system and found that the footage image quality is stable across time. Next, we determined the DNN segmentation performance on lossy and lossless compressed data revealing that only 9% of recordings contain segmentation artifacts. We found that lossy and lossless compression is on par for glottis segmentation, however, lossless compression provides significantly superior image quality. Lastly, we employed continual learning strategies to continuously incorporate new data into the DNN to remove the aforementioned segmentation artifacts. With modest manual intervention, we were able to largely alleviate these segmentation artifacts by up to 81%. We believe that our suggested deep learning-enhanced laryngeal imaging platform consistently provides clinically sound results, and together with our proposed continual learning scheme will have a long-lasting impact on the future of laryngeal imaging.
10. Analysis of Laryngeal High-Speed Videoendoscopy recordings – ROI detection. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103854.
11. Zita A, Novozámský A, Zitová B, Šorel M, Herbst CT, Vydrová J, Švec JG. Videokymogram Analyzer Tool: Human–computer comparison. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103878.
12. A single latent channel is sufficient for biomedical glottis segmentation. Sci Rep 2022; 12:14292. PMID: 35995933; PMCID: PMC9395348; DOI: 10.1038/s41598-022-17764-1.
Abstract
Glottis segmentation is a crucial step to quantify endoscopic footage in laryngeal high-speed videoendoscopy. Recent advances in deep neural networks for glottis segmentation allow for a fully automatic workflow. However, the inner workings of integral parts of these deep segmentation networks remain largely unknown, and understanding them is crucial for acceptance in clinical practice. Here, using systematic ablations, we show that a single latent channel as a bottleneck layer is sufficient for glottal area segmentation. We further demonstrate that the latent space is an abstraction of the glottal area segmentation relying on three spatially defined pixel subtypes, allowing for a transparent interpretation. We provide evidence that the latent space is highly correlated with the glottal area waveform, can be encoded with four bits, and can be decoded using lean decoders while maintaining a high reconstruction accuracy. Our findings suggest that glottis segmentation is a task that can be highly optimized to yield very efficient and explainable deep neural networks, important for application in the clinic. In the future, we believe that online deep learning-assisted monitoring will be a game-changer in laryngeal examinations.
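The abstract notes that the latent space can be encoded with four bits while maintaining high reconstruction accuracy. A generic 4-bit uniform quantization round trip (an illustrative sketch of the idea, not the authors' encoder; the sine signal stands in for the single latent channel over time) looks like:

```python
import numpy as np

def quantize(x: np.ndarray, bits: int = 4):
    """Uniformly quantize a signal to 2**bits levels over its observed range."""
    levels = 2 ** bits
    lo, hi = x.min(), x.max()
    codes = np.round((x - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return codes, lo, hi

def dequantize(codes: np.ndarray, lo: float, hi: float, bits: int = 4):
    """Map integer codes back to the original value range."""
    levels = 2 ** bits
    return lo + codes.astype(float) / (levels - 1) * (hi - lo)

t = np.linspace(0, 1, 200)
latent = np.sin(2 * np.pi * 5 * t)       # stand-in latent channel activity
codes, lo, hi = quantize(latent)         # 16 levels -> codes fit in 4 bits
recon = dequantize(codes, lo, hi)
max_err = np.abs(latent - recon).max()   # bounded by half a quantization step
print(codes.max(), max_err)
```

With 16 levels the worst-case reconstruction error is half a quantization step, i.e. (hi − lo)/30, which is the sense in which a 4-bit code can preserve a smooth one-channel signal.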
13. Kopczynski B, Niebudek-Bogusz E, Pietruszewska W, Strumillo P. Segmentation of Glottal Images from High-Speed Videoendoscopy Optimized by Synchronous Acoustic Recordings. Sensors (Basel) 2022; 22:1751. PMID: 35270897; PMCID: PMC8915112; DOI: 10.3390/s22051751.
Abstract
Laryngeal high-speed videoendoscopy (LHSV) is an imaging technique offering novel visualization quality of the vibratory activity of the vocal folds. However, most image analysis methods require interaction by medical personnel and access to ground-truth annotations to achieve accurate detection of the vocal fold edges. In our fully automatic method, we combine video and acoustic data that are synchronously recorded during laryngeal endoscopy. We show that the image segmentation algorithm for the glottal area can be optimized by matching the Fourier spectra of the pre-processed video with the spectra of the acoustic recording during phonation of the sustained vowel /i:/. We verify our method on a set of LHSV recordings taken from subjects with normophonic voice and patients with voice disorders due to glottal insufficiency. We show that the computed geometric indices of the glottal area make it possible to discriminate between normal and pathological voices. The median Open Quotient and Minimal Relative Glottal Area values were 0.69 and 0.06 for healthy subjects, and 1.00 and 0.35 for dysphonic subjects, respectively. We also validated these results with independent phoniatrician experts.
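The Open Quotient and Minimal Relative Glottal Area reported here can be illustrated with a toy glottal area waveform. Note that published definitions of the Open Quotient vary (it is often computed per glottal cycle); the sketch below simply counts open frames and is not the authors' implementation:

```python
import numpy as np

def open_quotient(gaw: np.ndarray, thresh: float = 0.0) -> float:
    """Fraction of frames in which the glottal area exceeds the open threshold."""
    return float(np.mean(gaw > thresh))

def minimal_relative_glottal_area(gaw: np.ndarray) -> float:
    """Minimal glottal area relative to the maximal area within the recording."""
    return float(gaw.min() / gaw.max())

# Synthetic waveform: glottis open (area 1.0) for 70 of 100 frames, fully closed otherwise.
gaw = np.zeros(100)
gaw[:70] = 1.0
print(open_quotient(gaw))                  # 0.7
print(minimal_relative_glottal_area(gaw))  # 0.0 (complete closure occurs)
```

An Open Quotient of 1 with a large minimal relative area, as reported for the dysphonic group, corresponds to a glottis that never fully closes.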
Affiliation(s)
- Bartosz Kopczynski
- Institute of Electronics, Lodz University of Technology, 90-924 Lodz, Poland
- Ewa Niebudek-Bogusz
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-001 Lodz, Poland
- Wioletta Pietruszewska
- Department of Otolaryngology, Head and Neck Oncology, Medical University of Lodz, 90-001 Lodz, Poland
- Pawel Strumillo
- Institute of Electronics, Lodz University of Technology, 90-924 Lodz, Poland
14. Kist AM, Dürr S, Schützenberger A, Döllinger M. OpenHSV: an open platform for laryngeal high-speed videoendoscopy. Sci Rep 2021; 11:13760. PMID: 34215788; PMCID: PMC8253769; DOI: 10.1038/s41598-021-93149-0.
Abstract
High-speed videoendoscopy is an important tool to study laryngeal dynamics, to quantify vocal fold oscillations, to diagnose voice impairments at the laryngeal level, and to monitor treatment progress. However, there is a significant lack of an open-source, expandable research tool that features the latest hardware and data analysis. In this work, we propose an open research platform termed OpenHSV that is based on state-of-the-art, commercially available equipment and features a fully automatic data analysis pipeline. A publicly available, user-friendly graphical user interface implemented in Python is used to interface the hardware. Video and audio data are recorded in synchrony and subsequently fully automatically analyzed. Video segmentation of the glottal area is performed using efficient deep neural networks to derive the glottal area waveform and glottal midline. Established quantitative, clinically relevant video and audio parameters were implemented and computed. In a preliminary clinical study, we recorded video and audio data from 28 healthy subjects. Analyzing these data in terms of image quality and derived quantitative parameters, we show the applicability, performance, and usefulness of OpenHSV. OpenHSV therefore provides valid, standardized access to high-speed videoendoscopy data acquisition and analysis for voice scientists, highlighting its use as a valuable research tool in understanding voice physiology. We envision that OpenHSV will serve as the basis for the next generation of clinical HSV systems.
Affiliation(s)
- Andreas M Kist
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstr. 1, 91054 Erlangen, Germany; Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, Henkestr. 91, 91054 Erlangen, Germany
- Stephan Dürr
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstr. 1, 91054 Erlangen, Germany
- Anne Schützenberger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstr. 1, 91054 Erlangen, Germany
- Michael Döllinger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Erlangen, Friedrich-Alexander-University Erlangen-Nürnberg, Waldstr. 1, 91054 Erlangen, Germany
15. Kist AM, Gómez P, Dubrovskiy D, Schlegel P, Kunduk M, Echternach M, Patel R, Semmler M, Bohr C, Dürr S, Schützenberger A, Döllinger M. A Deep Learning Enhanced Novel Software Tool for Laryngeal Dynamics Analysis. J Speech Lang Hear Res 2021; 64:1889-1903. PMID: 34000199; DOI: 10.1044/2021_jslhr-20-00498.
Abstract
Purpose: High-speed videoendoscopy (HSV) is an emerging, but barely used, endoscopy technique in the clinic for assessing and diagnosing voice disorders, owing to the lack of dedicated software to analyze the data. HSV allows quantification of the vocal fold oscillations by segmenting the glottal area. This challenging task has been tackled by various studies; however, the proposed approaches are mostly limited and not suitable for daily clinical routine. Method: We developed user-friendly software in C# that allows the editing, motion correction, segmentation, and quantitative analysis of HSV data. We further provide pretrained deep neural networks for fully automatic glottis segmentation. Results: We freely provide our software Glottis Analysis Tools (GAT). Using GAT, we provide a general threshold-based region-growing platform that enables the user to analyze data from various sources, such as in vivo recordings, ex vivo recordings, and high-speed footage of artificial vocal folds. Additionally, especially for in vivo recordings, we provide three robust neural networks at various speed and quality settings to allow fully automatic glottis segmentation suitable for application by untrained personnel. GAT further evaluates video and audio data in parallel and is able to extract various features from the video data, among others the glottal area waveform, that is, the changing glottal area over time. In total, GAT provides 79 unique quantitative analysis parameters for video- and audio-based signals. Many of these parameters have already been shown to reflect voice disorders, highlighting the clinical importance and usefulness of the GAT software. Conclusion: GAT is a unique tool to process HSV and audio data to determine quantitative, clinically relevant parameters for research, diagnosis, and treatment of laryngeal disorders. Supplemental Material: https://doi.org/10.23641/asha.14575533.
Affiliation(s)
- Andreas M Kist
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Pablo Gómez
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Denis Dubrovskiy
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Patrick Schlegel
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Melda Kunduk
- Department of Communication Sciences and Disorders, Louisiana State University, Baton Rouge
| | - Matthias Echternach
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology, Munich University Hospital (LMU), Germany
| | - Rita Patel
- Department of Speech, Language and Hearing Sciences, College of Arts and Sciences, Indiana University, Bloomington
| | - Marion Semmler
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Christopher Bohr
- Klinik und Poliklinik für Hals-Nasen-Ohren-Heilkunde, Universitätsklinikum Regensburg, Germany
| | - Stephan Dürr
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Anne Schützenberger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| | - Michael Döllinger
- Division of Phoniatrics and Pediatric Audiology, Department of Otorhinolaryngology-Head & Neck Surgery, University Hospital Erlangen, Germany
| |
|
16
|
Mohd Khairuddin KA, Ahmad K, Mohd Ibrahim H, Yan Y. Description of the Features and Vibratory Behaviors of the Nyquist Plot Analyzed From Laryngeal High-Speed Videoendoscopy Images. J Voice 2020; 36:582.e11-582.e22. [PMID: 32861565 DOI: 10.1016/j.jvoice.2020.07.036] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 07/25/2020] [Accepted: 07/27/2020] [Indexed: 11/17/2022]
Abstract
Facilitative playback-based subjective measures offer a more reliable evaluation of vocal fold vibration than measures derived from direct inspection of video playback. One such measure is the Nyquist plot, which presents the analyzed cycle-to-cycle vibratory information in graphical form. While its potential is evident, knowledge of the Nyquist plot features on which the evaluation is based is still incomplete. The currently identified features and their vibratory behaviors may be inadequate to guarantee accurate interpretation of the findings. The present study addresses this issue by examining the features of the Nyquist plot and their vibratory behaviors. A total of 56 young normophonic speakers (20 males and 36 females) were recruited as participants. Each underwent laryngeal high-speed videoendoscopy to record images of the vocal fold vibration, which were then analyzed to generate Nyquist plots. The features were identified by inspecting the properties of the plot points forming the Nyquist plots, and the vibratory behaviors of each identified feature were examined. The results revealed four features: the rim contour, depicting the longitudinal phase difference; the left edge shape, signifying the glottal configuration, phase closure, and closed-phase duration; and the rim width and rim pattern, visualizing the regularity of the glottal areas and the regularity of the intracycle variations, respectively. The findings present a more complete reference of the features and their vibratory behaviors pertinent to Nyquist plot interpretation.
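A Nyquist plot of this kind graphs cycle-to-cycle vibratory information in the complex plane. One common way such plots are derived, sketched below in Python, maps a vibratory signal to its analytic signal via the Hilbert transform, so that each cycle traces a closed loop; this is a hedged illustration of the general technique, not the authors' exact implementation, and the function names are assumptions:

```python
import numpy as np

def analytic_signal(a):
    """FFT-based analytic signal z(t) = a(t) + i*H[a(t)], where H is the
    Hilbert transform (same construction as scipy.signal.hilbert)."""
    a = np.asarray(a, dtype=float)
    n = a.size
    spectrum = np.fft.fft(a)
    h = np.zeros(n)              # one-sided spectral weighting
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)

def nyquist_points(vibratory_signal):
    """Map a vibratory signal (e.g., a glottal area waveform) to x/y
    coordinates in the complex plane. Overlaying the per-cycle loops
    yields the Nyquist plot, whose rim contour, left edge shape, rim
    width, and rim pattern are the features examined in the study."""
    a = np.asarray(vibratory_signal, dtype=float)
    z = analytic_signal(a - a.mean())  # remove DC offset first
    return z.real, z.imag

# Toy check: a pure sinusoid traces a unit circle in the plane
t = np.linspace(0, 1, 1000, endpoint=False)
x, y = nyquist_points(np.sin(2 * np.pi * 10 * t))
print(round(float(np.hypot(x, y).mean()), 3))  # 1.0
```

A perfectly periodic signal collapses onto a single thin loop; cycle-to-cycle irregularity spreads the overlaid loops apart, which is what the rim width and rim pattern features visualize.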
Affiliation(s)
- Khairy Anuar Mohd Khairuddin
- Speech Sciences Program, Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia; Speech Pathology Program, School of Health Sciences, Universiti Sains Malaysia, Kelantan, Malaysia.
| | - Kartini Ahmad
- Speech Sciences Program, Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
| | - Hasherah Mohd Ibrahim
- Speech Sciences Program, Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
| | - Yuling Yan
- Department of Bioengineering, School of Engineering, Santa Clara University, California, USA
| |
|