1. Zou K, Lin T, Han Z, Wang M, Yuan X, Chen H, Zhang C, Shen X, Fu H. Confidence-aware multi-modality learning for eye disease screening. Med Image Anal 2024; 96:103214. PMID: 38815358. DOI: 10.1016/j.media.2024.103214.
Abstract
Multi-modal ophthalmic image classification plays a key role in diagnosing eye diseases, as it integrates information from different sources to complement their respective performances. However, recent improvements have mainly focused on accuracy, often neglecting the importance of confidence and robustness in predictions for diverse modalities. In this study, we propose a novel multi-modality evidential fusion pipeline for eye disease screening. It provides a measure of confidence for each modality and elegantly integrates the multi-modality information using a multi-distribution fusion perspective. Specifically, our method first utilizes normal inverse gamma prior distributions over pre-trained models to learn both aleatoric and epistemic uncertainty for uni-modality. Then, the normal inverse gamma distribution is analyzed as the Student's t distribution. Furthermore, within a confidence-aware fusion framework, we propose a mixture of Student's t distributions to effectively integrate different modalities, imparting the model with heavy-tailed properties and enhancing its robustness and reliability. More importantly, the confidence-aware multi-modality ranking regularization term induces the model to more reasonably rank the noisy single-modal and fused-modal confidence, leading to improved reliability and accuracy. Experimental results on both public and internal datasets demonstrate that our model excels in robustness, particularly in challenging scenarios involving Gaussian noise and modality missing conditions. Moreover, our model exhibits strong generalization capabilities to out-of-distribution data, underscoring its potential as a promising solution for multimodal eye disease screening.
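The Normal-Inverse-Gamma (NIG) to Student's t step described in the abstract can be sketched as follows. This is a minimal illustration based on the standard deep-evidential-regression parameterization NIG(γ, ν, α, β), not the authors' implementation; the function names are ours.

```python
# NIG(gamma, nu, alpha, beta) -> Student's t predictive parameters.
def nig_to_student_t(gamma, nu, alpha, beta):
    loc = gamma
    scale2 = beta * (1.0 + nu) / (nu * alpha)  # squared scale
    dof = 2.0 * alpha                          # degrees of freedom
    return loc, scale2, dof

# Aleatoric and epistemic uncertainty from the same NIG parameters.
def uncertainties(nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)           # E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))    # Var[mu]
    return aleatoric, epistemic
```

The heavy tails of the resulting Student's t (low degrees of freedom when the evidence α is small) are what the abstract credits for robustness when noisy modalities are fused.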
Affiliation(s)
- Ke Zou
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu, 610065, China; College of Computer Science, Sichuan University, Chengdu, 610065, China
- Tian Lin
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou 515041, China; Medical College, Shantou University, Shantou 515041, China
- Zongbo Han
- College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
- Meng Wang
- Institute of High Performance Computing, Agency for Science, Technology and Research, 138632, Singapore
- Xuedong Yuan
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu, 610065, China; College of Computer Science, Sichuan University, Chengdu, 610065, China
- Haoyu Chen
- Joint Shantou International Eye Center, Shantou University and the Chinese University of Hong Kong, Shantou 515041, China; Medical College, Shantou University, Shantou 515041, China
- Changqing Zhang
- College of Intelligence and Computing, Tianjin University, Tianjin 300350, China
- Xiaojing Shen
- National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu, 610065, China; College of Mathematics, Sichuan University, Chengdu, 610065, China
- Huazhu Fu
- Institute of High Performance Computing, Agency for Science, Technology and Research, 138632, Singapore
2. Santos da Silva G, Casanova D, Oliva JT, Rodrigues EO. Cardiac fat segmentation using computed tomography and an image-to-image conditional generative adversarial neural network. Med Eng Phys 2024; 124:104104. PMID: 38418017. DOI: 10.1016/j.medengphy.2024.104104.
Abstract
In recent years, research has highlighted the association between increased adipose tissue surrounding the human heart and elevated susceptibility to cardiovascular diseases such as atrial fibrillation and coronary heart disease. However, the manual segmentation of these fat deposits has not been widely implemented in clinical practice due to the substantial workload it entails for medical professionals and the associated costs. Consequently, the demand for more precise and time-efficient quantitative analysis has driven the emergence of novel computational methods for fat segmentation. This study presents a novel deep learning-based methodology that offers autonomous segmentation and quantification of two distinct types of cardiac fat deposits. The proposed approach leverages the pix2pix network, a conditional generative adversarial network primarily designed for image-to-image translation tasks. By applying this network architecture, we aim to investigate its efficacy in tackling the specific challenge of cardiac fat segmentation, despite its not being originally tailored for this purpose. The two types of fat deposits of interest in this study are referred to as epicardial and mediastinal fat, which are spatially separated by the pericardium. The experimental results demonstrated an average accuracy of 99.08% and an f1-score of 98.73% for the segmentation of the epicardial fat, and an accuracy of 97.90% and an f1-score of 98.40% for the mediastinal fat. These findings reflect the high precision and overlap agreement achieved by the proposed methodology. In comparison to existing studies, our approach exhibited superior performance in terms of f1-score and run time, enabling the images to be segmented in real time.
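For reference, the pixel-wise accuracy and f1-score figures quoted above can be computed from binary masks as below. This is a generic metric sketch, not the paper's evaluation code.

```python
def pixel_metrics(pred, truth):
    """Pixel-wise accuracy and f1-score for binary masks (flat 0/1 sequences)."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    accuracy = (tp + tn) / len(pred)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, f1
```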
Affiliation(s)
- Guilherme Santos da Silva
- Academic Department of Informatics, Universidade Tecnológica Federal do Paraná (UTFPR), Pato Branco, 85503-390, Brazil
- Dalcimar Casanova
- Academic Department of Informatics, Universidade Tecnológica Federal do Paraná (UTFPR), Pato Branco, 85503-390, Brazil
- Jefferson Tales Oliva
- Academic Department of Informatics, Universidade Tecnológica Federal do Paraná (UTFPR), Pato Branco, 85503-390, Brazil
- Erick Oliveira Rodrigues
- Academic Department of Informatics, Universidade Tecnológica Federal do Paraná (UTFPR), Pato Branco, 85503-390, Brazil; Graduate Program of Production and Systems Engineering, Universidade Tecnológica Federal do Paraná (UTFPR), Pato Branco, 85503-390, Brazil
3. Taylor TRP, Menten MJ, Rueckert D, Sivaprasad S, Lotery AJ. The role of the retinal vasculature in age-related macular degeneration: a spotlight on OCTA. Eye (Lond) 2024; 38:442-449. PMID: 37673970. PMCID: PMC10858204. DOI: 10.1038/s41433-023-02721-7.
Abstract
Age-related macular degeneration (AMD) remains a disease with high morbidity and an incompletely understood pathophysiological mechanism. The ocular blood supply has been implicated in the development of the disease process, of which most research has focused on the role of the choroid and choriocapillaris. Recently, interest has developed into the role of the retinal vasculature in AMD, particularly with the advent of optical coherence tomography angiography (OCTA), which enables non-invasive imaging of the eye's blood vessels. This review summarises the up-to-date body of work in this field including the proposed links between observed changes in the retinal vessels and the development of AMD and potential future directions for research in this area. The review highlights that the strongest evidence supports the observation that patients with early to intermediate AMD have reduced vessel density in the superficial vascular complex of the retina, but also emphasises the need for caution when interpreting such studies due to their variable methodologies and nomenclature.
Affiliation(s)
- Thomas R P Taylor
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Martin J Menten
- BioMedIA, Imperial College London, London, UK
- Institute for AI and Informatics in Medicine, Klinikum Rechts der Isar, Technical University Munich, Munich, Germany
- Daniel Rueckert
- BioMedIA, Imperial College London, London, UK
- Institute for AI and Informatics in Medicine, Klinikum Rechts der Isar, Technical University Munich, Munich, Germany
- Sobha Sivaprasad
- NIHR Moorfields Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, London, UK
- Andrew J Lotery
- Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
4. Shi T, Ding X, Zhou W, Pan F, Yan Z, Bai X, Yang X. Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation. IEEE J Biomed Health Inform 2023; 27:4006-4017. PMID: 37163397. DOI: 10.1109/jbhi.2023.3274789.
Abstract
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms. However, achieving high pixel-wise accuracy, complete topology structure and robustness to various contrast variations are critical and challenging, and most existing methods focus only on achieving one or two of these aspects. In this paper, we present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach. Specifically, we compute a multiscale affinity field for each pixel, capturing its semantic relationships with neighboring pixels in the predicted mask image. This field represents the local geometry of vessel segments of different sizes, allowing us to learn spatial- and scale-aware adaptive weights to strengthen vessel features. We evaluate our AFN on four different types of vascular datasets: X-ray angiography coronary vessel dataset (XCAD), portal vein dataset (PV), digital subtraction angiography cerebrovascular vessel dataset (DSA) and retinal vessel dataset (DRIVE). Extensive experimental results demonstrate that our AFN outperforms the state-of-the-art methods in terms of both higher accuracy and topological metrics, while also being more robust to various contrast changes.
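A minimal sketch of the multiscale affinity idea: for each pixel, record whether the neighbour at each of the 8 offsets, sampled at several dilation scales, shares its predicted label. The function name and output layout are illustrative assumptions, not the AFN implementation.

```python
import numpy as np

def affinity_field(mask, dilations=(1, 2, 4)):
    """1 where the neighbour at each 8-connected offset, sampled at each
    dilation scale, shares the pixel's label. Output: (scales, 8, H, W)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    H, W = mask.shape
    out = np.zeros((len(dilations), len(offsets), H, W), dtype=np.uint8)
    for s, d in enumerate(dilations):
        pad = np.pad(mask, d, mode="edge")
        for k, (dy, dx) in enumerate(offsets):
            shifted = pad[d + dy * d: d + dy * d + H, d + dx * d: d + dx * d + W]
            out[s, k] = (shifted == mask).astype(np.uint8)
    return out
```

Larger dilations capture the local geometry of thicker vessel segments, which is the multiscale aspect the abstract describes.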
5. Tan Y, Zhao SX, Yang KF, Li YJ. A lightweight network guided with differential matched filtering for retinal vessel segmentation. Comput Biol Med 2023; 160:106924. PMID: 37146492. DOI: 10.1016/j.compbiomed.2023.106924.
Abstract
The geometric morphology of retinal vessels reflects the state of cardiovascular health, and fundus images are important reference materials for ophthalmologists. Great progress has been made in automated vessel segmentation, but few studies have focused on thin vessel breakage and false-positives in areas with lesions or low contrast. In this work, we propose a new network, differential matched filtering guided attention UNet (DMF-AU), to address these issues, incorporating a differential matched filtering layer, feature anisotropic attention, and a multiscale consistency constrained backbone to perform thin vessel segmentation. The differential matched filtering is used for the early identification of locally linear vessels, and the resulting rough vessel map guides the backbone to learn vascular details. Feature anisotropic attention reinforces the vessel features of spatial linearity at each stage of the model. Multiscale constraints reduce the loss of vessel information while pooling within large receptive fields. In tests on multiple classical datasets, the proposed model performed well compared with other algorithms on several specially designed criteria for vessel segmentation. DMF-AU is a high-performance, lightweight vessel segmentation model. The source code is at https://github.com/tyb311/DMF-AU.
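The classical matched-filtering idea that the differential matched filtering layer builds on can be sketched as a bank of zero-mean Gaussian-profile kernels over orientations, taking the maximum response per pixel. This is a naive fixed-kernel illustration, not the paper's learnable differential layer.

```python
import numpy as np

def matched_filter_bank(size=9, sigma=1.5, n_angles=8):
    """Kernels whose cross-section is a zero-mean Gaussian profile
    perpendicular to the assumed vessel direction."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    kernels = []
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        d = -xs * np.sin(theta) + ys * np.cos(theta)  # distance to vessel axis
        k = np.exp(-d ** 2 / (2 * sigma ** 2))
        k -= k.mean()                                 # zero-mean, as in matched filtering
        kernels.append(k)
    return kernels

def filter_response(img, kernels):
    """Max response over orientations (naive valid-mode correlation)."""
    size = kernels[0].shape[0]
    half = size // 2
    H, W = img.shape
    out = np.zeros((H - 2 * half, W - 2 * half))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + size, j:j + size]
            out[i, j] = max(float((patch * k).sum()) for k in kernels)
    return out
```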
Affiliation(s)
- Yubo Tan
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Shi-Xuan Zhao
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Kai-Fu Yang
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
- Yong-Jie Li
- The MOE Key Laboratory for Neuroinformation, Radiation Oncology Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, China
6. Zhang H, Ni W, Luo Y, Feng Y, Song R, Wang X. TUnet-LBF: Retinal fundus image fine segmentation model based on transformer Unet network and LBF. Comput Biol Med 2023; 159:106937. PMID: 37084640. DOI: 10.1016/j.compbiomed.2023.106937.
Abstract
Segmentation of retinal fundus images is a crucial part of medical diagnosis. Automatic extraction of blood vessels in low-quality retinal images remains a challenging problem. In this paper, we propose a novel two-stage model, TUnet-LBF, combining Transformer Unet (TUnet) and a local binary energy function (LBF) model for coarse-to-fine segmentation of retinal vessels. In the coarse segmentation stage, the global topological information of blood vessels is obtained by TUnet. The neural network outputs the initial contour and the probability maps, which are input to the fine segmentation stage as prior information. In the fine segmentation stage, an energy-modulated LBF model is proposed to obtain the local detail information of blood vessels. The proposed model reaches accuracies (Acc) of 0.9650, 0.9681 and 0.9708 on the public datasets DRIVE, STARE and CHASE_DB1, respectively. The experimental results demonstrate the effectiveness of each component in the proposed model.
Affiliation(s)
- Hanyu Zhang
- School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China; College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China
- Weihan Ni
- School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China
- Yi Luo
- College of Information Science and Engineering, Northeastern University, Shenyang, 110167, China
- Yining Feng
- School of Geography, Liaoning Normal University, Dalian City, 116029, China
- Ruoxi Song
- Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, 100101, China
- Xianghai Wang
- School of Geography, Liaoning Normal University, Dalian City, 116029, China; School of Computer and Information Technology, Liaoning Normal University, Dalian City, 116029, China
7. Bhimavarapu U, Battineni G. Deep Learning for the Detection and Classification of Diabetic Retinopathy with an Improved Activation Function. Healthcare (Basel) 2022; 11:97. PMID: 36611557. PMCID: PMC9819317. DOI: 10.3390/healthcare11010097.
Abstract
Diabetic retinopathy (DR) is an eye disease triggered by diabetes, which may lead to blindness. To prevent diabetic patients from becoming blind, early diagnosis and accurate detection of DR are vital. Deep learning models, such as convolutional neural networks (CNNs), are widely used in DR detection, typically by classifying blood-vessel pixels against the remaining pixels. In this paper, an improved activation function was proposed for diagnosing DR from fundus images that automatically reduces loss and processing time. The DIARETDB0, DRIVE, CHASE, and Kaggle datasets were used to train and test the enhanced activation function in the different CNN models. The ResNet-152 model achieved the highest accuracy of 99.41% with the Kaggle dataset. This enhanced activation function is suitable for DR diagnosis from retinal fundus images.
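Since the abstract does not specify the improved activation function itself, the sketch below only illustrates the general pattern of plugging a custom activation into a layer; `improved_act` is a hypothetical ELU-style stand-in, not the paper's function.

```python
import numpy as np

def improved_act(x, alpha=0.1):
    """Hypothetical smooth leaky activation used as a stand-in; the paper's
    exact improved function is not specified in the abstract."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def conv_layer(x, w, act=improved_act):
    """1-D valid convolution followed by a pluggable activation."""
    n, k = len(x), len(w)
    z = np.array([float(np.dot(x[i:i + k], w)) for i in range(n - k + 1)])
    return act(z)
```

Swapping `act` is the only change needed to test a new activation across architectures, which mirrors how the authors evaluate one function in several CNN models.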
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaramm 522302, Andhra Pradesh, India
- Gopi Battineni
- Medical Informatics Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Correspondence: Tel.: +39-333-172-8206
8. Rodrigues EO, Rodrigues LO, Machado JHP, Casanova D, Teixeira M, Oliva JT, Bernardes G, Liatsis P. Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation. J Imaging 2022; 8:291. PMID: 36286385. PMCID: PMC9604711. DOI: 10.3390/jimaging8100291.
Abstract
A retinal vessel analysis is a procedure that can be used as an assessment of risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against the baseline thresholded Frangi filter response, a naive connectivity filter, the naive connectivity filter combined with morphological closing, and current approaches in the literature. The proposal achieved competitive results on a variety of multimodal datasets. It was robust enough to outperform all the state-of-the-art approaches in the literature for the OSIRIX angiographic dataset in terms of accuracy, as well as 4 out of 5 works in the case of the IOSTAR dataset, while also outperforming several works in the case of the DRIVE and STARE datasets and 6 out of 10 in the CHASE-DB dataset. For CHASE-DB, it also outperformed all the state-of-the-art unsupervised methods.
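The thresholding-plus-connectivity post-processing that LS-CF refines can be sketched as follows. The component labelling, minimum-size pruning, and local-tolerance gap filling below are simplified assumptions, not the published filter.

```python
import numpy as np
from collections import deque

def connected_components(mask):
    """4-connected component labelling via BFS (no SciPy dependency)."""
    H, W = mask.shape
    labels = np.zeros((H, W), dtype=int)
    cur = 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy, sx] and not labels[sy, sx]:
                cur += 1
                labels[sy, sx] = cur
                q = deque([(sy, sx)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < H and 0 <= nx < W and mask[ny, nx] and not labels[ny, nx]:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
    return labels, cur

def connectivity_filter(response, thresh=0.5, min_size=3, tol=1):
    mask = response >= thresh
    labels, n = connected_components(mask)
    for c in range(1, n + 1):
        if (labels == c).sum() < min_size:
            mask[labels == c] = False     # drop spurious specks
    # local tolerance: turn on background pixels bridging >= 2 vessel pixels
    H, W = mask.shape
    out = mask.copy()
    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                win = mask[max(0, y - tol):y + tol + 1, max(0, x - tol):x + tol + 1]
                if win.sum() >= 2:
                    out[y, x] = True
    return out
```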
Affiliation(s)
- Erick O. Rodrigues
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Lucas O. Rodrigues
- Graduate Program of Sciences Applied to Health Products, Universidade Federal Fluminense (UFF), Niteroi 24241-000, RJ, Brazil
- João H. P. Machado
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Dalcimar Casanova
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Marcelo Teixeira
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Jeferson T. Oliva
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco 85503-390, PR, Brazil
- Giovani Bernardes
- Institute of Technological Sciences (ICT), Universidade Federal de Itajuba (UNIFEI), Itabira 35903-087, MG, Brazil
- Panos Liatsis
- Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
9. Hofer D, Schmidt-Erfurth U, Orlando JI, Goldbach F, Gerendas BS, Seeböck P. Improving foveal avascular zone segmentation in fluorescein angiograms by leveraging manual vessel labels from public color fundus pictures. Biomed Opt Express 2022; 13:2566-2580. PMID: 35774310. PMCID: PMC9203117. DOI: 10.1364/boe.452873.
Abstract
In clinical routine, ophthalmologists frequently analyze the shape and size of the foveal avascular zone (FAZ) to detect and monitor retinal diseases. In order to extract those parameters, the contours of the FAZ need to be segmented, which is normally achieved by analyzing the retinal vasculature (RV) around the macula in fluorescein angiograms (FA). Computer-aided segmentation methods based on deep learning (DL) can automate this task. However, current approaches for segmenting the FAZ are often tailored to a specific dataset or require manual initialization. Furthermore, they do not take the variability and challenges of clinical FA into account, which are often of low quality and difficult to analyze. In this paper we propose a DL-based framework to automatically segment the FAZ in challenging FA scans from clinical routine. Our approach mimics the workflow of retinal experts by using additional RV labels as a guidance during training. Hence, our model is able to produce RV segmentations simultaneously. We minimize the annotation work by using a multi-modal approach that leverages already available public datasets of color fundus pictures (CFPs) and their respective manual RV labels. Our experimental evaluation on two datasets with FA from 1) clinical routine and 2) large multicenter clinical trials shows that the addition of weak RV labels as a guidance during training improves the FAZ segmentation significantly with respect to using only manual FAZ annotations.
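The weak-label guidance can be pictured as a two-term training loss, with the retinal-vessel (RV) term weighted against the FAZ term. The binary cross-entropy choice and the weight `lam` below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def bce(p, t, eps=1e-7):
    """Mean binary cross-entropy between predictions p and targets t."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())

def joint_loss(faz_pred, faz_true, rv_pred, rv_weak, lam=0.5):
    """Total loss = FAZ term + lam * weak retinal-vessel guidance term."""
    return bce(faz_pred, faz_true) + lam * bce(rv_pred, rv_weak)
```

Because the RV term only guides training, the weak labels transferred from color fundus pictures never need to be pixel-perfect for the FAZ branch to benefit.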
Affiliation(s)
- Dominik Hofer
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Ursula Schmidt-Erfurth
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- José Ignacio Orlando
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Yatiris Group, PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Felix Goldbach
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Bianca S. Gerendas
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
- Philipp Seeböck
- Vienna Reading Center, Department of Ophthalmology and Optometry, Medical University of Vienna, Spitalgasse 23, 1090 Vienna, Austria
10. Robust Detection and Modeling of the Major Temporal Arcade in Retinal Fundus Images. Mathematics 2022. DOI: 10.3390/math10081334.
Abstract
The Major Temporal Arcade (MTA) is a critical component of the retinal structure that facilitates clinical diagnosis and monitoring of various ocular pathologies. Although recent works have addressed the quantitative analysis of the MTA through parametric modeling, their efforts are strongly based on an assumption of symmetry in the MTA shape. This work presents a robust method for the detection and piecewise parametric modeling of the MTA in fundus images. The model consists of a piecewise parametric curve with the ability to consider both symmetric and asymmetric scenarios. In an initial stage, multiple models are built from random blood vessel points taken from the blood-vessel segmented retinal image, following a weighted-RANSAC strategy. To choose the final model, the algorithm extracts blood-vessel width and grayscale-intensity features and merges them to obtain a coarse MTA probability function, which is used to weight the percentage of inlier points for each model. This procedure promotes selecting a model based on points with high MTA probability. Experimental results in the public benchmark dataset Digital Retinal Images for Vessel Extraction (DRIVE), for which manual MTA delineations have been prepared, indicate that the proposed method outperforms existing approaches with a balanced Accuracy of 0.7067, Mean Distance to Closest Point of 7.40 pixels, and Hausdorff Distance of 27.96 pixels, while demonstrating competitive results in terms of execution time (9.93 s per image).
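The weighted-RANSAC model selection can be sketched for a single parabolic arcade model as below; the sampling counts, tolerance, and scoring details are assumptions, not the paper's exact procedure, and the real method fits piecewise curves rather than one parabola.

```python
import numpy as np

def weighted_ransac_parabola(x, y, w, n_iter=200, tol=2.0, seed=0):
    """Fit y = a*x^2 + b*x + c from random minimal samples; each candidate
    is scored by the weighted fraction of inliers (weights play the role
    of the coarse MTA probability)."""
    rng = np.random.default_rng(seed)
    best, best_score = None, -1.0
    for _ in range(n_iter):
        idx = rng.choice(len(x), 3, replace=False)
        coef = np.polyfit(x[idx], y[idx], 2)
        resid = np.abs(np.polyval(coef, x) - y)
        inliers = resid < tol
        score = w[inliers].sum() / w.sum()
        if score > best_score:
            best, best_score = coef, score
    return best, best_score
```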
11. Billardello R, Ntolkeras G, Chericoni A, Madsen JR, Papadelis C, Pearl PL, Grant PE, Taffoni F, Tamilia E. Novel User-Friendly Application for MRI Segmentation of Brain Resection following Epilepsy Surgery. Diagnostics (Basel) 2022; 12:1017. PMID: 35454065. PMCID: PMC9032020. DOI: 10.3390/diagnostics12041017.
Abstract
Delineation of resected brain cavities on magnetic resonance images (MRIs) of epilepsy surgery patients is essential for neuroimaging/neurophysiology studies investigating biomarkers of the epileptogenic zone. The gold standard to delineate the resection on MRI remains manual slice-by-slice tracing by experts. Here, we proposed and validated a semiautomated MRI segmentation pipeline, generating an accurate model of the resection and its anatomical labeling, and developed a graphical user interface (GUI) for user-friendly usage. We retrieved pre- and postoperative MRIs from 35 patients who had focal epilepsy surgery, implemented a region-growing algorithm to delineate the resection on postoperative MRIs and tested its performance while varying different tuning parameters. Similarity between our output and hand-drawn gold standards was evaluated via dice similarity coefficient (DSC; range: 0-1). Additionally, the best segmentation pipeline was trained to provide an automated anatomical report of the resection (based on presurgical brain atlas). We found that the best-performing set of parameters presented DSC of 0.83 (0.72-0.85), high robustness to seed-selection variability and anatomical accuracy of 90% to the clinical postoperative MRI report. We presented a novel user-friendly open-source GUI that implements a semiautomated segmentation pipeline specifically optimized to generate resection models and their anatomical reports from epilepsy surgery patients, while minimizing user interaction.
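A minimal sketch of the two ingredients named above: intensity-based region growing from a seed, and the dice similarity coefficient (DSC) used for evaluation. The mean-based acceptance rule is a simplification of whatever criterion the published pipeline actually tunes.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow from seed, accepting 4-neighbours whose intensity is within
    tol of the running region mean."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < H and 0 <= nx < W and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    q.append((ny, nx))
    return mask

def dice(a, b):
    """Dice similarity coefficient between two boolean masks (range 0-1)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

Robustness to seed selection, which the authors report, means the grown mask (and hence the DSC) should change little as the seed moves within the resection cavity.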
Affiliation(s)
- Roberto Billardello
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Correspondence: R.B.; E.T.
- Georgios Ntolkeras
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Baystate Children’s Hospital, Springfield, MA 01199, USA
- Assia Chericoni
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Joseph R. Madsen
- Epilepsy Surgery Program, Department of Neurosurgery, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Christos Papadelis
- Jane and John Justin Neurosciences Center, Cook Children’s Health Care System, Fort Worth, TX 76104, USA
- Phillip L. Pearl
- Division of Epilepsy and Clinical Neurophysiology, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115, USA
- Patricia Ellen Grant
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Fabrizio Taffoni
- Advanced Robotics and Human-Centered Technologies-CREO Lab, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Eleonora Tamilia
- Fetal Neonatal Neuroimaging and Developmental Science Center (FNNDSC), Newborn Medicine Division, Department of Pediatrics, Boston Children’s Hospital, Boston, MA 02115, USA
- Correspondence: R.B.; E.T.
12. Sun M, Wang Y, Fu Z, Li L, Liu Y, Zhao X. A Machine Learning Method for Automated In Vivo Transparent Vessel Segmentation and Identification Based on Blood Flow Characteristics. Microsc Microanal 2022; 28:1-14. PMID: 35387704. DOI: 10.1017/s1431927622000514.
Abstract
In vivo transparent vessel segmentation is important to life science research. However, this task remains very challenging because of the fuzzy edges and the barely noticeable tubular characteristics of vessels under a light microscope. In this paper, we present a new machine learning method based on blood flow characteristics to segment the global vascular structure in vivo. Specifically, videos of blood flow in transparent vessels are used as input. We use a machine learning classifier to classify the vessel pixels through motion features extracted from moving red blood cells and achieve vessel segmentation based on a region-growing algorithm. Moreover, we utilize the moving characteristics of blood flow to distinguish between the types of vessels, including arteries, veins, and capillaries. In the experiments, we evaluate the performance of our method on videos of zebrafish embryos. The experimental results indicate the high accuracy of vessel segmentation, with an average accuracy of 97.98%, far superior to other segmentation or motion-detection algorithms. Our method also shows good robustness when applied to input videos with various time resolutions, down to a minimum of 3.125 fps.
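The motion cue underlying the classification can be approximated by the per-pixel temporal variation across video frames. Using the temporal standard deviation with a plain threshold, as below, is our simplification of the paper's classifier-plus-region-growing pipeline.

```python
import numpy as np

def motion_feature(frames):
    """Per-pixel temporal standard deviation across video frames; moving
    red blood cells make vessel pixels vary over time while the transparent
    background stays nearly constant."""
    stack = np.stack(frames).astype(float)
    return stack.std(axis=0)

def vessel_mask(frames, thresh):
    """Crude vessel map: pixels whose temporal variation exceeds thresh."""
    return motion_feature(frames) > thresh
```

In the actual method the motion features feed a trained classifier and the mask seeds region growing; flow speed and pulsatility then separate arteries, veins, and capillaries.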
Affiliation(s)
- Mingzhu Sun
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Yiwen Wang
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Zhenhua Fu
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Lu Li
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Yaowei Liu
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
- Xin Zhao
- Institute of Robotics and Automatic Information System (IRAIS) and the Tianjin Key Laboratory of Intelligent Robotic (tjKLIR), Nankai University, Tianjin 300350, China
13
Xu J, Shen J, Wan C, Jiang Q, Yan Z, Yang W. A Few-Shot Learning-Based Retinal Vessel Segmentation Method for Assisting in the Central Serous Chorioretinopathy Laser Surgery. Front Med (Lausanne) 2022; 9:821565. [PMID: 35308538] [PMCID: PMC8927682] [DOI: 10.3389/fmed.2022.821565]
Abstract
Background The location of retinal vessels is an important prerequisite for Central Serous Chorioretinopathy (CSC) laser surgery: it not only assists the ophthalmologist in marking the location of the leakage point (LP) on the fundus color image, but also helps avoid damage to vessel tissue from the laser spot and the loss of surgical efficiency caused by retinal vessels absorbing laser energy. To achieve good intra- and cross-domain adaptability, existing deep learning (DL)-based vessel segmentation schemes must be driven by large datasets, which makes dense annotation tedious and costly. Methods This paper explores a new vessel segmentation method that needs only a few samples and annotations to alleviate the above problems. Firstly, the vessel segmentation problem is recast as a few-shot learning task, which lays the foundation for segmentation with few samples and annotations. Then, we adapt an existing few-shot learning framework as our baseline model for the vessel segmentation scenario. Next, the baseline model is upgraded in three ways: (1) a multi-scale class-prototype extraction technique is designed to obtain richer vessel features and better exploit the information in the support images; (2) the multi-scale vessel features of the query images, inferred from the support-image class prototypes, are gradually fused to provide more effective guidance for vessel extraction; and (3) a multi-scale attention module is proposed to incorporate global information into the upgraded model and assist vessel localization. In addition, an integrated framework is conceived to mitigate the weak performance of any single model in the cross-domain vessel segmentation setting, boosting the domain adaptability of both the baseline and the upgraded models.
Results Extensive experiments showed that the upgrades significantly improved vessel segmentation performance. Compared with the listed methods, both the baseline and the upgraded models achieved competitive results on three public retinal image datasets (CHASE_DB, DRIVE, and STARE). In a practical application to private CSC datasets, the integrated scheme partially enhanced the domain adaptability of the two proposed models.
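The class-prototype idea underlying the method can be illustrated at a single scale: masked average pooling over support-image features yields a class prototype, and each query pixel is labeled by its cosine similarity to the foreground versus background prototype. This is a generic few-shot segmentation sketch, not the authors' multi-scale implementation; all function names are illustrative:

```python
import math

def masked_average_prototype(features, mask):
    """Masked average pooling: mean feature vector over pixels where mask == 1."""
    dim = len(features[0][0])
    total, count = [0.0] * dim, 0
    for row_f, row_m in zip(features, mask):
        for vec, m in zip(row_f, row_m):
            if m:
                total = [t + v for t, v in zip(total, vec)]
                count += 1
    return [t / count for t in total]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def segment_query(query_features, fg_proto, bg_proto):
    """Label a query pixel 1 (vessel) if it is closer to the foreground prototype."""
    return [[1 if cosine(v, fg_proto) > cosine(v, bg_proto) else 0
             for v in row] for row in query_features]
```

The multi-scale variant in the paper would repeat this at several feature resolutions and fuse the resulting predictions.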
Affiliation(s)
- Jianguo Xu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Jianxin Shen
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Cheng Wan
- College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Qin Jiang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Zhipeng Yan
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Weihua Yang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
14
Li X, Ding J, Tang J, Guo F. Res2Unet: A multi-scale channel attention network for retinal vessel segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07086-8]
15
Pena FB, Crabi D, Izidoro SC, Rodrigues ÉO, Bernardes G. Machine learning applied to emerald gemstone grading: framework proposal and creation of a public dataset. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-021-01041-4]
16
Rodrigues EO, Rodrigues LO, Lima JJ, Casanova D, Favarim F, Dosciatti ER, Pegorini V, Oliveira LSN, Morais FFC. X-Ray cardiac angiographic vessel segmentation based on pixel classification using machine learning and region growing. Biomed Phys Eng Express 2021; 7. [PMID: 34256366] [DOI: 10.1088/2057-1976/ac13ba]
Abstract
This work proposes a pixel-classification approach for vessel segmentation in x-ray angiograms. The proposal uses textural features such as anisotropic diffusion, features based on the Hessian matrix, mathematical morphology, and statistics, extracted from the neighborhood of each pixel. The approach also uses the ELEMENT methodology, in which pixel classification is controlled by region growing so that earlier classification results influence the classification of subsequent pixels. A Random Forests classifier predicts whether each pixel belongs to the vessel structure. The approach achieved the best accuracy in the literature (95.48%), outperforming unsupervised state-of-the-art approaches.
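A minimal sketch of the per-pixel classification idea: simple neighborhood statistics stand in for the paper's textural and Hessian-based features, and a threshold rule stands in for the Random Forests classifier (the ELEMENT region-growing control loop is omitted). All names and the threshold are illustrative, not from the paper:

```python
import statistics

def patch_features(image, r, c):
    """Mean and standard deviation of the 3x3 neighborhood around (r, c)."""
    vals = [image[rr][cc]
            for rr in range(max(r - 1, 0), min(r + 2, len(image)))
            for cc in range(max(c - 1, 0), min(c + 2, len(image[0])))]
    return statistics.mean(vals), statistics.pstdev(vals)

def classify_pixels(image, mean_thr):
    """Stand-in for the trained classifier: label a pixel as vessel (1)
    when its neighborhood mean intensity exceeds mean_thr."""
    h, w = len(image), len(image[0])
    return [[1 if patch_features(image, r, c)[0] > mean_thr else 0
             for c in range(w)] for r in range(h)]
```

In the actual pipeline the feature vector per pixel would be much richer and fed to a trained Random Forests model, with region growing deciding which pixels are classified next.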
Affiliation(s)
- E O Rodrigues
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- L O Rodrigues
- Graduate Program of Applied Sciences to Health Products, Universidade Federal Fluminense (UFF), Niteroi, Rio de Janeiro, Brazil
- J J Lima
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- D Casanova
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- F Favarim
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- E R Dosciatti
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- V Pegorini
- Department of Academic Informatics (DAINF), Universidade Tecnologica Federal do Parana (UTFPR), Pato Branco, Parana, Brazil
- L S N Oliveira
- Primary Health Care, Pato Branco Prefecture, Parana, Brazil
- F F C Morais
- Innovation Office, Mass General Brigham Hospital, Cambridge, Massachusetts, United States of America
17
An Extended Approach to Predict Retinopathy in Diabetic Patients Using the Genetic Algorithm and Fuzzy C-Means. Biomed Res Int 2021; 2021:5597222. [PMID: 34258269] [PMCID: PMC8257333] [DOI: 10.1155/2021/5597222]
Abstract
The present study develops a new computer-aided diagnostic approach for diabetic eye disease using fluorescein angiography images. To this end, a region-growing algorithm is applied to the angiography images of the patients' eyes for the purpose of diagnosing diabetes. In addition, the study integrates two methods, fuzzy C-means (FCM) and a genetic algorithm (GA), to predict retinopathy in diabetic patients from angiography images. The developed algorithm was applied to a total of 224 images of patients' retinopathic eyes. The obtained results confirm that the GA-FCM method outperformed manual selection of initial points, and the proposed method showed a sensitivity of 0.78. Comparison of the fuzzy fitness function in the GA with other techniques revealed that the introduced approach performs best under the Jaccard index, offering the lowest Jaccard distance and, at the same time, the highest Jaccard values. The analysis demonstrates that the proposed method is efficient and effective for predicting retinopathy in diabetic patients from angiography images.
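The FCM half of such a GA-FCM pipeline follows the standard alternating updates: memberships u_ik = 1 / Σ_j (d_ik/d_jk)^(2/(m-1)), then centers as u^m-weighted means. Below is a minimal 1-D sketch with linear initialization standing in for the paper's GA-based seeding; names and parameters are illustrative:

```python
def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=100, eps=1e-9):
    """Minimal fuzzy C-means on 1-D data (illustrative, not the GA-seeded version).

    Alternates the standard membership and center updates for n_iter rounds.
    Returns the final centers and the last membership matrix u[k][i]
    (point k, cluster i).
    """
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * k / (n_clusters - 1) for k in range(n_clusters)]
    u = []
    for _ in range(n_iter):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - ck) + eps for ck in centers]  # eps avoids zero division
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(n_clusters))
                      for i in range(n_clusters)])
        # Center update: u^m-weighted mean of the data
        centers = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(n_clusters)]
    return centers, u
```

In the paper's setting, the GA would replace the linear initialization by searching for initial cluster centers that maximize a fuzzy fitness function.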
18
Cheng J, Fu H, Cabrera DeBuc D, Tian J. Guest Editorial Ophthalmic Image Analysis and Informatics. IEEE J Biomed Health Inform 2020. [DOI: 10.1109/jbhi.2020.3037388]