1
Dritsas S, Chua KWD, Goh ZH, Simpson RE. Classification, registration and segmentation of ear canal impressions using convolutional neural networks. Med Image Anal 2024; 94:103152. [PMID: 38531210] [DOI: 10.1016/j.media.2024.103152]
Abstract
Today, fitting bespoke hearing aids involves injecting silicone into patients' ears to produce ear canal molds. These are subsequently 3D scanned to create digital ear canal impressions. However, before digital impressions can be used, they require substantial manual 3D editing. In this article, we present computational methods to pre-process ear canal impressions. The aim is to create automation tools to assist the hearing aid design, manufacturing and fitting processes, as well as normalizing anatomical data to assist the study of the outer ear canal's morphology. The methods include classifying the handedness of the impression into left and right ear types, orienting the geometries onto a common coordinate system, and removing extraneous artifacts introduced by the silicone mold. We investigate the use of convolutional neural networks for performing these semantic tasks and evaluate their accuracy using a dataset of 3000 ear canal impressions. The neural networks proved highly effective at performing these tasks, with 95.8% adjusted accuracy in classification, 92.3% within 20° angular error in registration and 93.4% intersection over union in segmentation.
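The segmentation metric quoted above, intersection over union (IoU), can be computed directly from voxel masks. The following NumPy sketch is illustrative only, not the authors' implementation:

```python
import numpy as np

def intersection_over_union(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU (Jaccard index) between two boolean voxel masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Two empty masks are conventionally treated as a perfect match.
    return float(intersection / union) if union > 0 else 1.0

# Example: two overlapping 1-D masks share 2 of 4 occupied voxels.
a = np.array([1, 1, 1, 0, 0], dtype=bool)
b = np.array([0, 1, 1, 1, 0], dtype=bool)
print(intersection_over_union(a, b))  # 2 / 4 = 0.5
```

The same function applies unchanged to 3D mesh-voxelized impressions, since NumPy logical operations are shape-agnostic.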
Affiliation(s)
- Stylianos Dritsas
- Singapore University of Technology and Design, 8 Somapah Road, 487372, Singapore.
- Zhi Hwee Goh
- Singapore University of Technology and Design, 8 Somapah Road, 487372, Singapore
2
Lenham FM, Iball GR. Improving the quality of computed tomography brain images in the presence of cochlear implant induced metal artefacts through the additional use of tissue mimicking materials alongside metal artefact reduction software. Radiography (Lond) 2024; 30:813-820. [PMID: 38513334] [DOI: 10.1016/j.radi.2024.03.004]
Abstract
INTRODUCTION: Metal artefact reduction software (MAR) can be used to improve Computed Tomography (CT) image quality in the presence of implanted metalwork; however, this software is not effective for superficial metallic structures such as cochlear implants (CI). This study aimed to investigate whether the effectiveness of MAR software could be improved for brain scans with CI present through the use of tissue mimicking materials (TMM) placed exteriorly to the implant.
METHODS: In this two-part study, a CI was positioned on the surface of water and anthropomorphic phantoms and imaged using a helical CT brain protocol. Three TMM, Superflab, Sure Thermal heat packs, and Bart's Bolus, were utilised and images were acquired to assess the resulting artefact reduction in terms of CT numbers, noise and artefact index (Aind). Changes in CTDIvol were assessed for the anthropomorphic phantom scans.
RESULTS: In the water phantom, statistically significant reductions in CT number (p = 0.038) and noise (p = 0.033) were observed for Superflab, whilst the heat packs produced similar significant reductions in CT number (p < 0.001) and noise (p = 0.001) for the anthropomorphic phantom images. Aind values were significantly reduced through the use of Superflab (p = 0.009) and the heat packs (p < 0.001). No significant effects were observed for Bart's Bolus. CTDIvol increases of generally less than 5% were observed for scans with TMM in place.
CONCLUSION: The additional use of TMM alongside MAR software yielded statistically significant reductions in CI induced metal artefacts on both water and anthropomorphic phantom scans with minimal dose increases.
IMPLICATIONS FOR PRACTICE: The extent of metal artefacts in clinical head scans with CI in place could be significantly reduced through combined use of TMM and MAR software, consequently providing greater diagnostic confidence in the images.
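The artefact index (Aind) used here is a standard-deviation-based measure; one commonly used CT definition subtracts the baseline noise of an artefact-free reference region in quadrature. A minimal sketch under that assumed definition (the study may use a variant):

```python
import numpy as np

def artifact_index(sd_artifact_roi: float, sd_reference_roi: float) -> float:
    """Artefact index: excess noise in an artefact-affected ROI relative to
    an artefact-free reference ROI, combined in quadrature (assumed
    definition; clamped at zero when the reference ROI is noisier)."""
    excess = sd_artifact_roi**2 - sd_reference_roi**2
    return float(np.sqrt(max(excess, 0.0)))

# An ROI with SD 25 HU next to a reference ROI with SD 7 HU:
print(artifact_index(25.0, 7.0))  # 24.0
```

A reduction in Aind after applying TMM then quantifies how much implant-induced noise was removed beyond the scan's intrinsic noise floor.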
Affiliation(s)
- F M Lenham
- Department of Medical Physics & Engineering, Old Medical School, Leeds General Infirmary, Leeds, LS1 3EX, UK.
- G R Iball
- Department of Medical Physics & Engineering, Old Medical School, Leeds General Infirmary, Leeds, LS1 3EX, UK; Faculty of Health Studies, University of Bradford, Bradford, BD7 1DP, UK.
3
Fukuda M, Kotaki S, Nozawa M, Kuwada C, Kise Y, Ariji E, Ariji Y. A cycle generative adversarial network for generating synthetic contrast-enhanced computed tomographic images from non-contrast images in the internal jugular lymph node-bearing area. Odontology 2024. [PMID: 38607582] [DOI: 10.1007/s10266-024-00933-1]
Abstract
The objectives of this study were to create a mutual conversion system between contrast-enhanced computed tomography (CECT) and non-CECT images using a cycle generative adversarial network (cycleGAN) for the internal jugular region. Image patches were cropped from CT images in 25 patients who underwent both CECT and non-CECT imaging. Using a cycleGAN, synthetic CECT and non-CECT images were generated from original non-CECT and CECT images, respectively. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were calculated. Visual Turing tests were used to determine whether oral and maxillofacial radiologists could tell the difference between synthetic and original images, and receiver operating characteristic (ROC) analyses were used to assess the radiologists' performances in discriminating lymph nodes from blood vessels. The PSNR of non-CECT images was higher than that of CECT images, while the SSIM was higher in CECT images. The Visual Turing test showed a higher perceptual quality in CECT images. The area under the ROC curve showed almost perfect performances in synthetic as well as original CECT images. In conclusion, synthetic CECT images created by cycleGAN appeared to have the potential to provide effective information in patients who could not receive contrast enhancement.
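Of the two image-similarity metrics reported, PSNR follows directly from the mean squared error between a synthetic and a reference image. A generic sketch (not the study's code), with `data_range` set by the image bit depth:

```python
import numpy as np

def psnr(img: np.ndarray, ref: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a test and a reference image."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range**2 / mse))

# A uniform error of 1 gray level on an 8-bit scale:
ref = np.zeros((4, 4))
img = ref + 1.0
print(psnr(img, ref))  # 10*log10(255^2 / 1) ≈ 48.13 dB
```

SSIM is more involved (local means, variances and covariances over a sliding window); in practice it is usually taken from a library such as scikit-image rather than reimplemented.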
Affiliation(s)
- Motoki Fukuda
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan.
- Shinya Kotaki
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Michihito Nozawa
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
- Chiaki Kuwada
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshitaka Kise
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Eiichiro Ariji
- Department of Oral and Maxillofacial Radiology, Aichi-Gakuin University School of Dentistry, Nagoya, Japan
- Yoshiko Ariji
- Department of Oral Radiology, School of Dentistry, Osaka Dental University, 1-5-17 Otemae, Chuo-Ku, Osaka, Japan
4
Quatre R, Schmerber S, Attyé A. Improving rehabilitation of deaf patients by advanced imaging before cochlear implantation. J Neuroradiol 2024; 51:145-154. [PMID: 37806523] [DOI: 10.1016/j.neurad.2023.10.002]
Abstract
INTRODUCTION: Cochlear implants have advanced the management of severe to profound deafness. However, hearing performance after implantation varies strongly from one patient to another. Moreover, several advanced kinds of imaging assessment are available before cochlear implantation. Microstructural white-fiber degeneration can be studied with diffusion weighted MRI (DWI) or tractography of the central auditory pathways. Functional MRI (fMRI) allows us to evaluate brain function, and CT or MRI segmentation to better detect inner ear anomalies.
OBJECTIVE: This literature review aims to evaluate how helpful pre-implantation anatomic imaging can be to predict hearing rehabilitation outcomes in deaf patients. These techniques include DWI and fMRI of the central auditory pathways, and automated labyrinth segmentation by CT scan, cone beam CT and MRI.
DESIGN: This systematic review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies were selected by searching in PubMed and by checking the reference lists of relevant articles. Inclusion criteria were adults over 18, with unilateral or bilateral hearing loss, who had DWI acquisition or fMRI or CT/cone beam CT/MRI image segmentation.
RESULTS: After reviewing 172 articles, we finally included 51. Studies on DWI showed changes in the central auditory pathways affecting the white matter, extending to the primary and non-primary auditory cortices, even in sudden and mild hearing impairment. According to fMRI studies, hearing-loss patients show a reorganization of brain activity in various areas, such as the auditory and visual cortices, as well as regions involved in language and emotions. Deep-learning-based automatic segmentation produces the best CT segmentations in just a few seconds. MRI segmentation is mainly used to evaluate the fluid spaces of the inner ear and determine the presence of endolymphatic hydrops.
CONCLUSION: Before cochlear implantation, DWI with tractography can evaluate the central auditory pathways up to the primary and non-primary auditory cortices. These data are then used to generate predictions on the auditory rehabilitation of patients. CT segmentation with systematic 3D reconstruction allows a better evaluation of cochlear malformations and predictable difficulties during surgery.
Affiliation(s)
- Raphaële Quatre
- Department of Oto-Rhino-Laryngology, Head and Neck Surgery, University Hospital, Grenoble, France; BrainTech Lab INSERM UMR 2015, Grenoble, France; GeodAIsics, Grenoble, France.
- Sébastien Schmerber
- Department of Oto-Rhino-Laryngology, Head and Neck Surgery, University Hospital, Grenoble, France; BrainTech Lab INSERM UMR 2015, Grenoble, France
- Arnaud Attyé
- Department of Neuroradiology, University Hospital, Grenoble, France; GeodAIsics, Grenoble, France
5
Liang D, Zhang S, Zhao Z, Wang G, Sun J, Zhao J, Li W, Xu LX. Two-stage generative adversarial networks for metal artifact reduction and visualization in ablation therapy of liver tumors. Int J Comput Assist Radiol Surg 2023; 18:1991-2000. [PMID: 37391537] [DOI: 10.1007/s11548-023-02986-z]
Abstract
PURPOSE: The strong metal artifacts produced by the electrode needle cause poor image quality, thus preventing physicians from observing the surgical situation during the puncture process. To address this issue, we propose a metal artifact reduction and visualization framework for CT-guided ablation therapy of liver tumors.
METHODS: Our framework contains a metal artifact reduction model and an ablation therapy visualization model. A two-stage generative adversarial network is proposed to reduce the metal artifacts of intraoperative CT images and avoid image blurring. To visualize the puncture process, the axis and tip of the needle are localized, and then the needle is rebuilt in 3D space intraoperatively.
RESULTS: Experiments show that our proposed metal artifact reduction method achieves higher SSIM (0.891) and PSNR (26.920) values than the state-of-the-art methods. The accuracy of ablation needle reconstruction is 2.76 mm on average for needle tip localization and 1.64° on average for needle axis localization.
CONCLUSION: We propose a novel metal artifact reduction and ablation therapy visualization framework for CT-guided ablation therapy of liver cancer. The experiment results indicate that our approach can reduce metal artifacts and improve image quality. Furthermore, our proposed method demonstrates the potential for displaying the relative position of the tumor and the needle intraoperatively.
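The needle-axis error quoted above is an angle between an estimated and a reference direction vector. A generic way to compute it (an illustrative sketch, not the authors' code) treats the axis as unsigned, since a needle axis has no preferred direction:

```python
import numpy as np

def axis_angle_error(u, v) -> float:
    """Angle in degrees between two 3D axis direction vectors,
    sign-insensitive (u and -u describe the same axis)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against tiny floating-point overshoot beyond [0, 1].
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

print(axis_angle_error([0, 0, 1], [0, 1, 1]))  # 45.0
```

The needle-tip error is simply the Euclidean distance between the estimated and reference tip coordinates.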
Affiliation(s)
- Duan Liang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Shunan Zhang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Ziqi Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Guangzhi Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Wentao Li
- Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, 200240, China
- Lisa X Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
6
Petsiou DP, Martinos A, Spinos D. Applications of Artificial Intelligence in Temporal Bone Imaging: Advances and Future Challenges. Cureus 2023; 15:e44591. [PMID: 37795060] [PMCID: PMC10545916] [DOI: 10.7759/cureus.44591]
Abstract
The applications of artificial intelligence (AI) in temporal bone (TB) imaging have gained significant attention in recent years, revolutionizing the field of otolaryngology and radiology. Accurate interpretation of imaging features of TB conditions plays a crucial role in diagnosing and treating a range of ear-related pathologies, including middle and inner ear diseases, otosclerosis, and vestibular schwannomas. According to multiple clinical studies published in the literature, AI-powered algorithms have demonstrated exceptional proficiency in interpreting imaging findings, not only saving time for physicians but also enhancing diagnostic accuracy by reducing human error. Although several challenges remain in routinely relying on AI applications, the collaboration between AI and healthcare professionals holds the key to better patient outcomes and significantly improved patient care. This overview delivers a comprehensive update on the advances of AI in the field of TB imaging, summarizes recent evidence provided by clinical studies, and discusses future insights and challenges in the widespread integration of AI in clinical practice.
Affiliation(s)
- Dioni-Pinelopi Petsiou
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Anastasios Martinos
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Dimitrios Spinos
- Otolaryngology-Head and Neck Surgery, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, GBR
7
Bernstein JGW, Jensen KK, Stakhovskaya OA, Noble JH, Hoa M, Kim HJ, Shih R, Kolberg E, Cleary M, Goupell MJ. Interaural Place-of-Stimulation Mismatch Estimates Using CT Scans and Binaural Perception, But Not Pitch, Are Consistent in Cochlear-Implant Users. J Neurosci 2021; 41:10161-10178. [PMID: 34725189] [PMCID: PMC8660045] [DOI: 10.1523/jneurosci.0359-21.2021]
Abstract
Bilateral cochlear implants (BI-CIs) or a CI for single-sided deafness (SSD-CI; one normally functioning acoustic ear) can partially restore spatial-hearing abilities, including sound localization and speech understanding in noise. For these populations, however, interaural place-of-stimulation mismatch can occur and thus diminish binaural sensitivity that relies on interaurally frequency-matched neurons. This study examined whether plasticity-reorganization of central neural pathways over time-can compensate for peripheral interaural place mismatch. We hypothesized differential plasticity across two systems: none for binaural processing but adaptation for pitch perception toward frequencies delivered by the specific electrodes. Interaural place mismatch was evaluated in 19 BI-CI and 23 SSD-CI human subjects (both sexes) using binaural processing (interaural-time-difference discrimination with simultaneous bilateral stimulation), pitch perception (pitch ranking for single electrodes or acoustic tones with sequential bilateral stimulation), and physical electrode-location estimates from computed-tomography (CT) scans. On average, CT scans revealed relatively little BI-CI interaural place mismatch (26° insertion-angle mismatch) but a relatively large SSD-CI mismatch, particularly at low frequencies (166° for an electrode tuned to 300 Hz, decreasing to 14° at 7000 Hz). For BI-CI subjects, the three metrics were in agreement because there was little mismatch. For SSD-CI subjects, binaural and CT measurements were in agreement, suggesting little binaural-system plasticity induced by mismatch. The pitch measurements disagreed with binaural and CT measurements, suggesting place-pitch plasticity or a procedural bias. 
These results suggest that reducing interaural place mismatch and potentially improving binaural processing by reprogramming the CI frequency allocation would be better done using CT-scan than pitch information.
SIGNIFICANCE STATEMENT: Electrode-array placement for cochlear implants (bionic prostheses that partially restore hearing) does not explicitly align neural representations of frequency information. The resulting interaural place-of-stimulation mismatch can diminish spatial-hearing abilities. In this study, adults with two cochlear implants showed reasonable interaural alignment, whereas those with one cochlear implant but normal hearing in the other ear often showed mismatch. In cases of mismatch, binaural sensitivity was best when the same cochlear locations were stimulated in both ears, suggesting that binaural brainstem pathways do not experience plasticity to compensate for mismatch. In contrast, interaurally pitch-matched electrodes deviated from cochlear-location estimates and did not optimize binaural sensitivity. Clinical correction of interaural place mismatch using binaural or computed-tomography (but not pitch) information may improve spatial-hearing benefits.
Affiliation(s)
- Joshua G W Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Kenneth K Jensen
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Olga A Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
- Jack H Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee 37232
- Michael Hoa
- Department of Otolaryngology Head and Neck Surgery, Georgetown University Medical Center, Washington, DC 20057
- H Jeffery Kim
- Department of Otolaryngology Head and Neck Surgery, Georgetown University Medical Center, Washington, DC 20057
- Robert Shih
- Department of Radiology, Walter Reed National Military Medical Center, Bethesda, Maryland 20889
- Elizabeth Kolberg
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
- Miranda Cleary
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742
8
Fan Y, Zhang D, Banalagay R, Wang J, Noble JH, Dawant BM. Hybrid active shape and deep learning method for the accurate and robust segmentation of the intracochlear anatomy in clinical head CT and CBCT images. J Med Imaging (Bellingham) 2021; 8:064002. [PMID: 34853805] [DOI: 10.1117/1.jmi.8.6.064002]
Abstract
Purpose: Robust and accurate segmentation methods for the intracochlear anatomy (ICA) are a critical step in the image-guided cochlear implant programming process. We have proposed an active shape model (ASM)-based method and a deep learning (DL)-based method for this task, and we have observed that the DL method tends to be more accurate than the ASM method while the ASM method tends to be more robust.
Approach: We propose a DL-based U-Net-like architecture that incorporates ASM segmentation into the network. A quantitative analysis is performed on a dataset that consists of 11 cochlea specimens for which a segmentation ground truth is available. To qualitatively evaluate the robustness of the method, an experienced expert is asked to visually inspect and grade the segmentation results on a clinical dataset made of 138 image volumes acquired with conventional CT scanners and of 39 image volumes acquired with cone beam CT (CBCT) scanners. Finally, we compare training the network (1) first with the ASM results, and then fine-tuning it with the ground truth segmentation and (2) directly with the specimens with ground truth segmentation.
Results: Quantitative and qualitative results show that the proposed method increases substantially the robustness of the DL method while having only a minor detrimental effect (though not significant) on its accuracy. Expert evaluation of the clinical dataset shows that by incorporating the ASM segmentation into the DL network, the proportion of good segmentation cases increases from 60/177 to 119/177 when training only with the specimens and increases from 129/177 to 151/177 when pretraining with the ASM results.
Conclusions: A hybrid ASM and DL-based segmentation method is proposed to segment the ICA in CT and CBCT images. Our results show that combining DL and ASM methods leads to a solution that is both robust and accurate.
Affiliation(s)
- Yubo Fan
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Rueben Banalagay
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Jianing Wang
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Jack H Noble
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Benoit M Dawant
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
9
Leydon P, O'Connell M, Greene D, Curran KM. Bone segmentation in contrast enhanced whole-body computed tomography. Biomed Phys Eng Express 2021; 8. [PMID: 34749353] [DOI: 10.1088/2057-1976/ac37ab]
Abstract
Segmentation of bone regions allows for enhanced diagnostics, disease characterisation and treatment monitoring in CT imaging. In contrast enhanced whole-body scans accurate automatic segmentation is particularly difficult as low dose whole body protocols reduce image quality and make contrast enhanced regions more difficult to separate when relying on differences in pixel intensities. This paper outlines a U-net architecture with novel preprocessing techniques, based on the windowing of training data and the modification of sigmoid activation threshold selection to successfully segment bone-bone marrow regions from low dose contrast enhanced whole-body CT scans. The proposed method achieved mean Dice coefficients of 0.979 ±0.02, 0.965 ±0.03, and 0.934 ±0.06 on two internal datasets and one external test dataset respectively. We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
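The Dice coefficient reported above, and the HU-windowing idea behind the preprocessing, can both be sketched in a few lines of NumPy. The window level/width values below are illustrative placeholders, not the paper's settings:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return float(2.0 * np.logical_and(pred, target).sum() / denom)

def window(img_hu: np.ndarray, level: float = 400.0, width: float = 1800.0) -> np.ndarray:
    """Clip CT intensities to an HU window and rescale to [0, 1].
    The level/width values are illustrative, not the paper's."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((img_hu - lo) / (hi - lo), 0.0, 1.0)

a = np.array([1, 1, 1, 0], dtype=bool)
b = np.array([0, 1, 1, 1], dtype=bool)
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

Windowing before training compresses irrelevant soft-tissue and contrast-dye intensities toward the ends of the scale, which is the kind of separation the paper relies on to keep enhanced vessels from being mistaken for bone.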
Affiliation(s)
- Patrick Leydon
- Applied Science, Limerick Institute of Technology, Moylish, Limerick, Ireland
- Martin O'Connell
- School of Medicine, University College Dublin, Dublin 4, Ireland
- Derek Greene
- School of Computer Science, University College Dublin, Dublin 4, Ireland
- Kathleen M Curran
- School of Medicine, University College Dublin, Dublin 4, Ireland
10
Inner-ear augmented metal artifact reduction with simulation-based 3D generative adversarial networks. Comput Med Imaging Graph 2021; 93:101990. [PMID: 34607275] [DOI: 10.1016/j.compmedimag.2021.101990]
Abstract
Metal artifacts often hamper high-quality visual assessment of post-operative computed tomography (CT) images. A vast body of methods has been proposed to tackle this issue, but these methods were designed for regular CT scans and their performance is usually insufficient when imaging tiny implants. In the context of post-operative high-resolution CT imaging, we propose a 3D metal artifact reduction algorithm based on a generative adversarial neural network. It is based on the simulation of physically realistic CT metal artifacts created by cochlear implant electrodes on preoperative images. The generated images serve to train a 3D generative adversarial network for artifact reduction. The proposed approach was assessed qualitatively and quantitatively on clinical conventional and cone beam CT post-operative cochlear implant images. These experiments show that the proposed method outperforms other general metal artifact reduction approaches.
11
Chawdhary G, Shoman N. Emerging artificial intelligence applications in otological imaging. Curr Opin Otolaryngol Head Neck Surg 2021; 29:357-364. [PMID: 34459798] [DOI: 10.1097/moo.0000000000000754]
Abstract
PURPOSE OF REVIEW To highlight the recent literature on artificial intelligence (AI) pertaining to otological imaging and to discuss future directions, obstacles and opportunities. RECENT FINDINGS The main themes in the recent literature centre around automated otoscopic image diagnosis and automated image segmentation for application in virtual reality surgical simulation and planning. Other applications that have been studied include identification of tinnitus MRI biomarkers, facial palsy analysis, intraoperative augmented reality systems, vertigo diagnosis and endolymphatic hydrops ratio calculation in Meniere's disease. Studies are presently at a preclinical, proof-of-concept stage. SUMMARY The recent literature on AI in otological imaging is promising and demonstrates the future potential of this technology in automating certain imaging tasks in a healthcare environment of ever-increasing demand and workload. Some studies have shown equivalence or superiority of the algorithm over physicians, albeit in narrowly defined realms. Future challenges in developing this technology include the compilation of large high quality annotated datasets, fostering strong collaborations between the health and technology sectors, testing the technology within real-world clinical pathways and bolstering trust among patients and physicians in this new method of delivering healthcare.
Affiliation(s)
- Gaurav Chawdhary
- ENT Department, Royal Hallamshire Hospital, Broomhall, Sheffield, UK
- Nael Shoman
- ENT Department, Queen Elizabeth II Health Sciences Centre, Halifax, Nova Scotia, Canada
12
Gomi T, Sakai R, Hara H, Watanabe Y, Mizukami S. Usefulness of a Metal Artifact Reduction Algorithm in Digital Tomosynthesis Using a Combination of Hybrid Generative Adversarial Networks. Diagnostics (Basel) 2021; 11:1629. [PMID: 34573971] [PMCID: PMC8467368] [DOI: 10.3390/diagnostics11091629]
Abstract
In this study, a novel combination of hybrid generative adversarial networks (GANs), comprising a cycle-consistent GAN, pix2pix, and a mask pyramid network (MPN) (CGpM-metal artifact reduction [MAR]), was developed using projection data to reduce metal artifacts and the radiation dose during digital tomosynthesis. The CGpM-MAR algorithm was compared with conventional filtered back projection (FBP) without MAR, FBP with MAR, and convolutional neural network MAR. The MAR rates were compared using the artifact index (AI) and Gumbel distribution of the largest variation analysis using a prosthesis phantom at various radiation doses. The novel CGpM-MAR yielded adequately effective overall performance in terms of AI. The resulting images yielded good results independently of the type of metal used in the prosthesis phantom (p < 0.05) and good artifact removal at 55% radiation-dose reduction. Furthermore, the CGpM-MAR represented the minimum in the model with the largest variation at 55% radiation-dose reduction. Regarding the AI and Gumbel distribution analysis, the novel CGpM-MAR yielded superior MAR when compared with the conventional reconstruction algorithms with and without MAR at 55% radiation-dose reduction and presented features most similar to the reference FBP. CGpM-MAR presents a promising method for metal artifact and radiation-dose reduction in clinical practice.
13
Wang J, Su D, Fan Y, Chakravorti S, Noble JH, Dawant BM. Atlas-based Segmentation of Intracochlear Anatomy in Metal Artifact Affected CT Images of the Ear with Co-trained Deep Neural Networks. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2021; 12904:14-23. [PMID: 35360271] [PMCID: PMC8964077] [DOI: 10.1007/978-3-030-87202-1_2]
Abstract
We propose an atlas-based method to segment the intracochlear anatomy (ICA) in the post-implantation CT (Post-CT) images of cochlear implant (CI) recipients that preserves the point-to-point correspondence between the meshes in the atlas and the segmented volumes. To solve this problem, which is challenging because of the strong artifacts produced by the implant, we use a pair of co-trained deep networks that generate dense deformation fields (DDFs) in opposite directions. One network is tasked with registering an atlas image to the Post-CT images and the other network is tasked with registering the Post-CT images to the atlas image. The networks are trained using loss functions based on voxel-wise labels, image content, fiducial registration error, and a cycle-consistency constraint. The segmentation of the ICA in the Post-CT images is subsequently obtained by transferring the predefined segmentation meshes of the ICA in the atlas image to the Post-CT images using the corresponding DDFs generated by the trained registration networks. Our model can learn the underlying geometric features of the ICA even though they are obscured by the metal artifacts. We show that our end-to-end network produces results that are comparable to the current state of the art (SOTA), which relies on a two-step approach that first uses conditional generative adversarial networks to synthesize artifact-free images from the Post-CT images and then uses an active shape model-based method to segment the ICA in the synthetic images. Our method requires a fraction of the time needed by the SOTA, which is important for end-user acceptance.
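The cycle-consistency constraint mentioned in this abstract can be sketched with a toy example: composing the atlas-to-image mapping with the image-to-atlas mapping should approximately recover the identity on fiducial points. The affine maps below are hypothetical stand-ins for the paper's learned dense deformation fields, purely for illustration.

```python
import numpy as np

# Toy 1-D illustration of the cycle-consistency constraint used to co-train
# the two registration networks: the atlas->Post-CT deformation composed
# with the Post-CT->atlas deformation should be close to the identity.
# These affine maps are hypothetical stand-ins for the learned networks.
def forward(x):   # atlas -> Post-CT (hypothetical learned deformation)
    return 1.1 * x + 0.5

def backward(x):  # Post-CT -> atlas (its learned inverse)
    return (x - 0.5) / 1.1

pts = np.linspace(0.0, 10.0, 50)  # fiducial points in atlas space
cycle_error = np.mean(np.abs(backward(forward(pts)) - pts))
print(f"mean cycle error: {cycle_error:.2e}")  # ~0 when the maps are inverses
```

In the actual method this constraint is only one of several loss terms (voxel-wise labels, image content, fiducial registration error) against which the two networks are trained jointly.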
Affiliation(s)
- Jianing Wang, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Dingjie Su, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Yubo Fan, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Srijata Chakravorti, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Jack H Noble, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Benoit M Dawant, Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
14
Li J, von Campe G, Pepe A, Gsaxner C, Wang E, Chen X, Zefferer U, Tödtling M, Krall M, Deutschmann H, Schäfer U, Schmalstieg D, Egger J. Automatic skull defect restoration and cranial implant generation for cranioplasty. Med Image Anal 2021; 73:102171. [PMID: 34340106 DOI: 10.1016/j.media.2021.102171] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Revised: 07/09/2021] [Accepted: 07/12/2021] [Indexed: 11/25/2022]
Abstract
A fast and fully automatic design of 3D printed patient-specific cranial implants is highly desired in cranioplasty, the process of restoring a defect in the skull. We formulate skull defect restoration as a 3D volumetric shape completion task, where a partial skull volume is completed automatically. The difference between the completed skull and the partial skull is the restored defect; in other words, the implant that can be used in cranioplasty. To fulfill the task of volumetric shape completion, a fully data-driven approach is proposed. Supervised skull shape learning is performed on a database containing 167 high-resolution healthy skulls. Synthetic defects are injected into these skulls to create training and evaluation data pairs. We propose a patch-based training scheme tailored for dealing with high-resolution and spatially sparse data, which overcomes the disadvantages of conventional patch-based training methods in high-resolution volumetric shape completion tasks. In particular, conventional patch-based training is applied to high-resolution images and proves effective in tasks such as segmentation. However, we demonstrate its limitations for shape completion tasks, where the overall shape distribution of the target has to be learnt, since it cannot be captured efficiently by a sub-volume cropped from the target. Additionally, the standard dense implementation of a convolutional neural network tends to perform poorly on sparse data, such as the skull, which has a low voxel occupancy rate. Our proposed training scheme encourages a convolutional neural network to learn from the high-resolution and spatially sparse data. In our study, we show that our deep learning models, trained on healthy skulls with synthetic defects, can be transferred directly to craniotomy skulls with real defects of greater irregularity, and the results show promise for clinical use. Project page: https://github.com/Jianningli/MIA.
Affiliation(s)
- Jianning Li, Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria; Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Gord von Campe, Department of Neurosurgery, Medical University of Graz, Auenbruggerplatz 29, Graz, Austria
- Antonio Pepe, Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Christina Gsaxner, Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Enpeng Wang, School of Mechanical Engineering, Shanghai Jiao Tong University, Minhang District, Shanghai 200240, China
- Xiaojun Chen, School of Mechanical Engineering, Shanghai Jiao Tong University, Minhang District, Shanghai 200240, China
- Ulrike Zefferer, Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Martin Tödtling, Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Marcell Krall, Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria
- Hannes Deutschmann, Department of Radiology, Medical University of Graz, Auenbruggerplatz 9, Graz 8036, Austria
- Ute Schäfer, Research Unit Experimental Neurotraumatology, Department of Neurosurgery, Medical University Graz, Auenbruggerplatz 2(2), Graz 8036, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Dieter Schmalstieg, Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
- Jan Egger, Institute of Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, Graz 8010, Austria; Department of Oral and Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 2, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz, Austria; BioTechMed-Graz, Mozartgasse 12/II, Graz 8010, Austria
15
Nakamura M, Nakao M, Imanishi K, Hirashima H, Tsuruta Y. Geometric and dosimetric impact of 3D generative adversarial network-based metal artifact reduction algorithm on VMAT and IMPT for the head and neck region. Radiat Oncol 2021; 16:96. [PMID: 34092240 PMCID: PMC8182914 DOI: 10.1186/s13014-021-01827-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Accepted: 05/28/2021] [Indexed: 11/26/2022] Open
Abstract
Background We investigated the geometric and dosimetric impact of a three-dimensional (3D) generative adversarial network (GAN)-based metal artifact reduction (MAR) algorithm on volumetric-modulated arc therapy (VMAT) and intensity-modulated proton therapy (IMPT) for the head and neck region, based on artifact-free computed tomography (CT) volumes with dental fillings. Methods Thirteen metal-free CT volumes of the head and neck region were obtained from The Cancer Imaging Archive. To simulate metal artifacts, we defined 3D regions of the teeth for pseudo-dental fillings in the metal-free CT volumes and assigned a value of 4000 HU to the selected teeth regions of interest. Two different CT volumes, one with four (m4) and the other with eight (m8) pseudo-dental fillings, were generated for each case and used as the Reference. CT volumes with metal artifacts were then generated from the Reference CT volumes (Artifacts). On the Artifacts CT volumes, metal artifacts were manually corrected using the water density override method with a value of 1.0 g/cm3 (Water). In addition, CT volumes with metal artifacts reduced by a 3D GAN extension of CycleGAN were generated (GAN-MAR). The structural similarity (SSIM) index within the planning target volume was calculated as a quantitative error metric between the Reference CT volumes and the other volumes. After creating VMAT and IMPT plans on the Reference CT volumes, the reference plans were recalculated for the remaining CT volumes. Results The time required to generate a single GAN-MAR CT volume was approximately 30 s. The median SSIMs were lower in the m8 group than in the m4 group, and ANOVA showed a significant difference in SSIM for the m8 group (p < 0.05). Although the median differences in D98%, D50% and D2% were larger in the m8 group than in the m4 group, the deviations from the reference plans were within 3% for VMAT and 1% for IMPT.
Conclusions The GAN-MAR CT volumes, generated in a short time, were closer to the Reference CT volumes than the Water and Artifacts CT volumes, and the observed dosimetric differences from the reference plans were clinically acceptable.
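The SSIM-based comparison described in the Methods can be sketched as follows. This uses a simplified single-window (global) SSIM rather than the windowed implementation presumably used in the study, and the "Reference", "Artifacts", and "GAN-MAR" volumes here are synthetic stand-ins.

```python
import numpy as np

def global_ssim(x, y, drange=1.0):
    # Global (single-window) SSIM: a simplified stand-in for the windowed
    # SSIM index used to compare MAR CT volumes against the Reference.
    c1, c2 = (0.01 * drange) ** 2, (0.03 * drange) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32))                    # stand-in Reference volume
artifact = ref + rng.normal(0, 0.2, ref.shape)    # heavily corrupted volume
corrected = ref + rng.normal(0, 0.05, ref.shape)  # GAN-MAR-like volume

# A better correction scores closer to 1 than the uncorrected volume
print(global_ssim(ref, corrected) > global_ssim(ref, artifact))
```

A volume that better matches the Reference scores closer to 1, which is the sense in which the Water, Artifacts, and GAN-MAR volumes were ranked in the study.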
Affiliation(s)
- Mitsuhiro Nakamura, Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan; Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Megumi Nakao, Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Hideaki Hirashima, Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto, Japan
- Yusuke Tsuruta, Division of Medical Physics, Department of Information Technology and Medical Engineering, Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo-ku, Kyoto, 606-8507, Japan; Division of Clinical Radiology Service, Kyoto University Hospital, Kyoto, Japan
16
Funama Y, Oda S, Kidoh M, Nagayama Y, Goto M, Sakabe D, Nakaura T. Conditional generative adversarial networks to generate pseudo low monoenergetic CT image from a single-tube voltage CT scanner. Phys Med 2021; 83:46-51. [DOI: 10.1016/j.ejmp.2021.02.015] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 02/11/2021] [Accepted: 02/21/2021] [Indexed: 01/29/2023] Open
17
Yurt M, Dar SU, Erdem A, Erdem E, Oguz KK, Çukur T. mustGAN: multi-stream Generative Adversarial Networks for MR Image Synthesis. Med Image Anal 2021; 70:101944. [PMID: 33690024 DOI: 10.1016/j.media.2020.101944] [Citation(s) in RCA: 46] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2020] [Revised: 12/11/2020] [Accepted: 12/15/2020] [Indexed: 01/28/2023]
Abstract
Multi-contrast MRI protocols increase the level of morphological information available for diagnosis. Yet, the number and quality of contrasts are limited in practice by various factors including scan time and patient motion. Synthesis of missing or corrupted contrasts from other high-quality ones can alleviate this limitation. When a single target contrast is of interest, common approaches for multi-contrast MRI involve either one-to-one or many-to-one synthesis methods depending on their input. One-to-one methods take as input a single source contrast, and they learn a latent representation sensitive to unique features of the source. Meanwhile, many-to-one methods receive multiple distinct sources, and they learn a shared latent representation more sensitive to common features across sources. For enhanced image synthesis, we propose a multi-stream approach that aggregates information across multiple source images via a mixture of multiple one-to-one streams and a joint many-to-one stream. The complementary feature maps generated in the one-to-one streams and the shared feature maps generated in the many-to-one stream are combined with a fusion block. The location of the fusion block is adaptively modified to maximize task-specific performance. Quantitative and radiological assessments on T1-, T2-, PD-weighted, and FLAIR images clearly demonstrate the superior performance of the proposed method compared to previous state-of-the-art one-to-one and many-to-one methods.
Affiliation(s)
- Mahmut Yurt, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Salman Uh Dar, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey
- Aykut Erdem, Department of Computer Engineering, Koç University, İstanbul, TR-34450, Turkey
- Erkut Erdem, Department of Computer Engineering, Hacettepe University, Ankara, TR-06800, Turkey
- Kader K Oguz, National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Department of Radiology, Hacettepe University, Ankara, TR-06100, Turkey
- Tolga Çukur, Department of Electrical and Electronics Engineering, Bilkent University, Ankara, TR-06800, Turkey; National Magnetic Resonance Research Center, Bilkent University, Ankara, TR-06800, Turkey; Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent, Ankara, TR-06800, Turkey
18
Deep learning-based metal artefact reduction in PET/CT imaging. Eur Radiol 2021; 31:6384-6396. [PMID: 33569626 PMCID: PMC8270868 DOI: 10.1007/s00330-021-07709-z] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 12/31/2020] [Accepted: 01/21/2021] [Indexed: 12/12/2022]
Abstract
Objectives The susceptibility of CT imaging to metallic objects gives rise to strong streak artefacts and skewed information about the attenuation medium around the metallic implants. This metal-induced artefact in CT images leads to inaccurate attenuation correction in PET/CT imaging. This study investigates the potential of deep learning–based metal artefact reduction (MAR) in quantitative PET/CT imaging. Methods Deep learning–based metal artefact reduction approaches were implemented in the image (DLI-MAR) and projection (DLP-MAR) domains. The proposed algorithms were quantitatively compared to the normalized MAR (NMAR) method using simulated and clinical studies. Eighty metal-free CT images were employed for simulation of metal artefacts as well as training and evaluation of the aforementioned MAR approaches. Thirty 18F-FDG PET/CT images affected by the presence of metallic implants were retrospectively employed for clinical assessment of the MAR techniques. Results The evaluation of MAR techniques on the simulation dataset demonstrated the superior performance of the DLI-MAR approach (structural similarity (SSIM) = 0.95 ± 0.2 compared to 0.94 ± 0.2 and 0.93 ± 0.3 obtained using DLP-MAR and NMAR, respectively) in minimizing metal artefacts in CT images. The presence of metallic artefacts in CT images or PET attenuation correction maps led to quantitative bias, image artefacts and under- and overestimation of scatter correction of PET images. The DLI-MAR technique led to a quantitative PET bias of 1.3 ± 3% compared to 10.5 ± 6% without MAR and 3.2 ± 0.5% achieved by NMAR. Conclusion The DLI-MAR technique was able to reduce the adverse effects of metal artefacts on PET images through the generation of accurate attenuation maps from corrupted CT images.
Key Points
• The presence of metallic objects, such as dental implants, gives rise to severe photon starvation, beam hardening and scattering, thus leading to adverse artefacts in reconstructed CT images.
• The aim of this work is to develop and evaluate a deep learning–based MAR to improve CT-based attenuation and scatter correction in PET/CT imaging.
• Deep learning–based MAR in the image (DLI-MAR) domain outperformed its counterpart implemented in the projection (DLP-MAR) domain. The DLI-MAR approach minimized the adverse impact of metal artefacts on whole-body PET images through generating accurate attenuation maps from corrupted CT images.
Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07709-z.
19
Khan MMR, Labadie RF, Noble JH. Preoperative prediction of angular insertion depth of lateral wall cochlear implant electrode arrays. J Med Imaging (Bellingham) 2020; 7:031504. [PMID: 32509912 DOI: 10.1117/1.jmi.7.3.031504] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 05/19/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Cochlear implants (CIs) use an array of electrodes surgically threaded into the cochlea to restore hearing sensation. Techniques for predicting the insertion depth of the array into the cochlea could guide surgeons toward more optimal placement of the array to reduce trauma and preserve residual hearing. In addition to the electrode array geometry, the base insertion depth (BID) and the cochlear size can affect the overall array insertion depth. Approach: We investigated using these measurements to develop a linear regression model that can make preoperative or intraoperative predictions of the insertion depth of lateral wall CI electrodes. Computed tomography (CT) images of 86 CI recipients were analyzed, and the relative electrode position inside the cochlea was measured from the CT images using previously developed automated algorithms. Results: A linear regression model is proposed for insertion-depth prediction based on cochlea size, array geometry, and BID. The model predicts angular insertion depths with a standard deviation of 41 deg and an absolute deviation error of 32 deg. Conclusions: Surgeons may use this model for patient-customized selection of the electrode array and/or to plan a BID for a given array that minimizes the likelihood of causing trauma to regions of the cochlea where residual hearing exists.
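The kind of linear regression model described in this abstract can be sketched as follows. The data, coefficients, and feature names here are synthetic illustrations, not the authors' measurements or fitted values.

```python
import numpy as np

# Hypothetical sketch of a linear model predicting angular insertion depth
# (deg) from cochlear size, electrode array length, and base insertion
# depth (BID). All quantities below are simulated for illustration only.
rng = np.random.default_rng(0)
n = 86  # same cohort size as the study, but the data are simulated
cochlea_size = rng.normal(9.0, 0.5, n)   # e.g. basal turn diameter, mm
array_length = rng.normal(24.0, 1.0, n)  # active array length, mm
bid = rng.normal(3.0, 0.5, n)            # base insertion depth, mm

# Simulated "ground-truth" depth with noise (purely illustrative relation)
depth = (500 - 30 * cochlea_size + 12 * array_length + 25 * bid
         + rng.normal(0, 20, n))

# Ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n), cochlea_size, array_length, bid])
coef, *_ = np.linalg.lstsq(X, depth, rcond=None)
pred = X @ coef
residual_sd = np.std(depth - pred)
print(f"residual SD: {residual_sd:.1f} deg")
```

In practice the predictors would come from automated CT measurements of the cochlea plus the manufacturer's array geometry, and the residual spread plays the role of the reported 41-deg standard deviation.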
Affiliation(s)
- Mohammad M R Khan, Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
- Robert F Labadie, Vanderbilt University Medical Center, Department of Otolaryngology-Head and Neck Surgery, Nashville, Tennessee, United States
- Jack H Noble, Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States