1. Freire TP, Braz Júnior G, de Almeida JDS, Rodrigues Junior JRD. Cup and Disc Segmentation in Smartphone Handheld Ophthalmoscope Images with a Composite Backbone and Double Decoder Architecture. Vision (Basel) 2025; 9:32. [PMID: 40265400] [DOI: 10.3390/vision9020032]
Abstract
Glaucoma is an eye disease that affects millions of people, and early diagnosis can prevent total blindness. One way to diagnose the disease is through fundus image examination, which analyzes the optic disc and cup structures. However, screening programs in primary care are costly and often unfeasible. Neural network models have been used to segment optic nerve structures, assisting physicians in this task and reducing fatigue. This work presents a methodology to enhance morphological biomarkers of the optic disc and cup in images obtained by a smartphone coupled to an ophthalmoscope through a deep neural network, which combines two backbones and a dual decoder approach to improve the segmentation of these structures, as well as a new way to combine the loss weights in the training process. The models obtained were numerically evaluated through Dice and IoU measures. On the BrG dataset, the experiments reached Dice scores of 95.92% and 85.30% and IoU scores of 92.22% and 75.68% for the optic disc and cup, respectively. These findings indicate that the architecture is promising for the fundus image segmentation task.
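For reference, the Dice and IoU overlap scores reported above can be computed directly from binary masks. The sketch below is illustrative NumPy, not the authors' evaluation code; the helper name and the random demo masks are invented for the example.

```python
import numpy as np

def dice_and_iou(pred, truth, eps=1e-7):
    """Dice and IoU (Jaccard) for two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)

# Toy demo with random masks; in practice, pred/truth would be the
# predicted and ground-truth disc (or cup) masks.
rng = np.random.default_rng(0)
pred = rng.random((256, 256)) > 0.5
truth = rng.random((256, 256)) > 0.5
print(dice_and_iou(pred, truth))
```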
Affiliation(s)
- Thiago Paiva Freire
- UFMA/Computer Science Department, Universidade Federal do Maranhão, Campus do Bacanga, São Luís 65085-580, Brazil
- Geraldo Braz Júnior
- UFMA/Computer Science Department, Universidade Federal do Maranhão, Campus do Bacanga, São Luís 65085-580, Brazil
2. Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D. A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma. Neuroscience 2025; 574:124-137. [PMID: 40210197] [DOI: 10.1016/j.neuroscience.2025.04.008]
Abstract
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one of the options for the treatment of VS, with the preservation of facial nerve (FN) function being the primary objective. Therefore, postoperative FN function prediction is essential. However, achieving automation for such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combined Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network based on the integration of a 1D Convolutional Neural Network (1DCNN) and a Gated Recurrent Unit (GRU) to predict postoperative FN function using the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results using 2.5D Trans-UNet. The selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64%, demonstrating its effectiveness in predicting postoperative FN function in VS patients. Our proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.
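The 1DCNN-GRU combination named above can be sketched in PyTorch as follows. This is a generic toy model under assumed layer sizes, treating the selected radiomic feature vector as a 1D sequence; it is not the authors' published architecture.

```python
import torch
import torch.nn as nn

class CNNGRUClassifier(nn.Module):
    """Toy 1D-CNN + GRU classifier over a radiomic feature vector."""
    def __init__(self, n_features, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # local feature patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.gru = nn.GRU(input_size=16, hidden_size=32, batch_first=True)
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):           # x: (batch, n_features)
        x = x.unsqueeze(1)          # (batch, 1, n_features) for Conv1d
        x = self.conv(x)            # (batch, 16, n_features // 2)
        x = x.transpose(1, 2)       # (batch, seq, channels) for the GRU
        _, h = self.gru(x)          # h: (1, batch, 32), final hidden state
        return self.fc(h.squeeze(0))

model = CNNGRUClassifier(n_features=64)
logits = model(torch.randn(8, 64))  # 8 patients, 64 selected features
```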
Affiliation(s)
- Gang Song
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Keyuan Li
- School of Information Science and Technology, Beijing University of Technology, Beijing, China
- Zhuozheng Wang
- School of Information Science and Technology, Beijing University of Technology, Beijing, China
- Wei Liu
- School of Information Science and Technology, Beijing University of Technology, Beijing, China
- Qi Xue
- School of Information Science and Technology, Beijing University of Technology, Beijing, China
- Jiantao Liang
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Yiqiang Zhou
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Haoming Geng
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
- Dong Liu
- Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China
3. Bachanek S, Wuerzberg P, Biggemann L, Janssen TY, Nietert M, Lotz J, Zeuschner P, Maßmann A, Uhlig A, Uhlig J. Renal tumor segmentation, visualization, and segmentation confidence using ensembles of neural networks in patients undergoing surgical resection. Eur Radiol 2025; 35:2147-2156. [PMID: 39177855] [PMCID: PMC11913914] [DOI: 10.1007/s00330-024-11026-6]
Abstract
OBJECTIVES To develop an automatic segmentation model for solid renal tumors on contrast-enhanced CTs and to visualize segmentation with associated confidence to promote clinical applicability. MATERIALS AND METHODS The training dataset included solid renal tumor patients from two tertiary centers undergoing surgical resection and receiving CT in the corticomedullary or nephrogenic contrast media (CM) phase. Manual tumor segmentation was performed on all axial CT slices serving as reference standard for automatic segmentations. Independent testing was performed on the publicly available KiTS 2019 dataset. Ensembles of neural networks (ENN, DeepLabV3) were used for automatic renal tumor segmentation, and their performance was quantified with DICE score. ENN average foreground entropy measured segmentation confidence (binary: successful segmentation with DICE score > 0.8 versus inadequate segmentation ≤ 0.8). RESULTS N = 639 and n = 210 patients were included in the training and independent test datasets, respectively. Datasets were comparable regarding age and sex (p > 0.05), while renal tumors in the training dataset were larger and more frequently benign (p < 0.01). In the internal test dataset, the ENN model yielded a median DICE score = 0.84 (IQR: 0.62-0.97, corticomedullary) and 0.86 (IQR: 0.77-0.96, nephrogenic CM phase), and the segmentation confidence an AUC = 0.89 (sensitivity = 0.86; specificity = 0.77). In the independent test dataset, the ENN model achieved a median DICE score = 0.84 (IQR: 0.71-0.97, corticomedullary CM phase); and segmentation confidence an accuracy = 0.84 (sensitivity = 0.86 and specificity = 0.81). ENN segmentations were visualized with color-coded voxelwise tumor probabilities and thresholds superimposed on clinical CT images. CONCLUSIONS ENN-based renal tumor segmentation robustly performs in external test data and might aid in renal tumor classification and treatment planning. CLINICAL RELEVANCE STATEMENT Ensembles of neural networks (ENN) models could automatically segment renal tumors on routine CTs, enabling and standardizing downstream image analyses and treatment planning. Providing confidence measures and segmentation overlays on images can lower the threshold for clinical ENN implementation. KEY POINTS Ensembles of neural networks (ENN) segmentation is visualized by color-coded voxelwise tumor probabilities and thresholds. ENN provided a high segmentation accuracy in internal testing and in an independent external test dataset. ENN models provide measures of segmentation confidence which can robustly discriminate between successful and inadequate segmentations.
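One plausible reading of the "average foreground entropy" confidence measure is the mean voxelwise binary entropy of the ensemble probability map over the predicted tumor region, sketched below in NumPy. The exact definition and the review-flagging cutoff used in the study may differ; both are assumptions here.

```python
import numpy as np

def avg_foreground_entropy(member_probs, thr=0.5):
    """Mean binary entropy of the ensemble-mean probability over predicted
    foreground voxels. member_probs: (n_models, *spatial) array in [0, 1]."""
    p = member_probs.mean(axis=0)   # ensemble mean tumor probability
    eps = 1e-7
    ent = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    fg = p > thr                    # predicted tumor voxels
    return float(ent[fg].mean()) if fg.any() else float("inf")

# Demo with a random 3-member ensemble; lower entropy -> higher confidence.
rng = np.random.default_rng(0)
probs = rng.random((3, 64, 64, 32))
print(avg_foreground_entropy(probs))
# A cutoff tuned on validation data could flag cases likely to fall at
# DICE <= 0.8 for manual review.
```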
Affiliation(s)
- Sophie Bachanek
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Paul Wuerzberg
- Department of Medical Bioinformatics, University Medical Center Goettingen, Goettingen, Germany
- Lorenz Biggemann
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Tanja Yani Janssen
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Manuel Nietert
- Department of Medical Bioinformatics, University Medical Center Goettingen, Goettingen, Germany
- Joachim Lotz
- Department of Cardiac Radiology, University Medical Center Goettingen, Goettingen, Germany
- Philip Zeuschner
- Department of Urology and Pediatric Urology, Saarland University, Homburg, Germany
- Alexander Maßmann
- Department of Radiology & Nuclear Medicine, Robert-Bosch-Krankenhaus, Bosch Health Campus, Stuttgart, Germany
- Annemarie Uhlig
- Department of Urology, University Medical Center Goettingen, Goettingen, Germany
- Johannes Uhlig
- Department of Clinical and Interventional Radiology, University Medical Center Goettingen, Goettingen, Germany
- Campus Institute for Data Science (CIDAS), Section of Medical Data Science (MeDaS), University of Goettingen, Goettingen, Germany
4. Laudon A, Wang Z, Zou A, Sharma R, Ji J, Tan W, Kim C, Qian Y, Ye Q, Chen H, Henderson JM, Zhang C, Kolachalama VB, Lu W. Digital pathology assessment of kidney glomerular filtration barrier ultrastructure in an animal model of podocytopathy. Biol Methods Protoc 2025; 10:bpaf024. [PMID: 40223818] [PMCID: PMC11992336] [DOI: 10.1093/biomethods/bpaf024]
Abstract
Transmission electron microscopy (TEM) images can visualize kidney glomerular filtration barrier ultrastructure, including the glomerular basement membrane (GBM) and podocyte foot processes (PFP). Podocytopathy is associated with glomerular filtration barrier morphological changes observed experimentally and clinically by measuring GBM or PFP width. However, these measurements are currently performed manually. This limits research on podocytopathy disease mechanisms and therapeutics due to labor intensiveness and inter-operator variability. We developed a deep learning-based digital pathology computational method to measure GBM and PFP width in TEM images from the kidneys of Integrin-Linked Kinase (ILK) podocyte-specific conditional knockout (cKO) mice, an animal model of podocytopathy, compared to wild-type (WT) control mice. We obtained TEM images from WT and ILK cKO littermate mice at 4 weeks of age. Our automated method was composed of two stages: a U-Net model for GBM segmentation, followed by an image processing algorithm for GBM and PFP width measurement. We evaluated its performance with a 4-fold cross-validation study on WT and ILK cKO mouse kidney pairs. Mean [95% confidence interval (CI)] GBM segmentation accuracy, calculated as Jaccard index, was 0.73 (0.70-0.76) for WT and 0.85 (0.83-0.87) for ILK cKO TEM images. Automated and manual GBM width measurements were similar for both WT (P = .49) and ILK cKO (P = .06) specimens. While automated and manual PFP width measurements were similar for WT (P = .89), they differed for ILK cKO (P < .05) specimens. WT and ILK cKO specimens were morphologically distinguishable by manual GBM (P < .05) and PFP (P < .05) width measurements. This phenotypic difference was reflected in the automated GBM (P < .05) more than PFP (P = .06) widths. Our deep learning-based digital pathology tool automated measurements in a mouse model of podocytopathy. This proposed method provides high-throughput, objective morphological analysis and could facilitate podocytopathy research.
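A common way to turn a ribbon-like segmentation (such as a GBM mask) into a width measurement is to sample twice the Euclidean distance transform along the mask's skeleton. The sketch below illustrates that generic idea only and is not the authors' published algorithm; the function name and nanometers-per-pixel calibration are assumptions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def mean_width_nm(mask, nm_per_px):
    """Mean width of a ribbon-like binary structure: twice the distance to
    background, sampled along the skeleton (medial axis), scaled to nm."""
    mask = mask.astype(bool)
    dist = distance_transform_edt(mask)   # px to nearest background pixel
    skel = skeletonize(mask)              # 1-px-wide centerline
    return float(2.0 * dist[skel].mean() * nm_per_px)

# Demo on a synthetic 10-px-thick horizontal band (expected width ~10 px).
band = np.zeros((64, 64), dtype=bool)
band[27:37, :] = True
print(mean_width_nm(band, nm_per_px=4.0))  # ~40 nm with this calibration
```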
Affiliation(s)
- Aksel Laudon
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, United States
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Zhaoze Wang
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, United States
- Anqi Zou
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Richa Sharma
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Jiayi Ji
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Winston Tan
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Connor Kim
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, United States
- Yingzhe Qian
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, United States
- Qin Ye
- Department of Biomedical Engineering, Boston University, Boston, MA 02215, United States
- Hui Chen
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Joel M Henderson
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Chao Zhang
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Vijaya B Kolachalama
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Department of Computer Science and Faculty of Computing & Data Sciences, Boston University, Boston, MA 02215, United States
- Weining Lu
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA 02118, United States
5. Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2025; 201:236-254. [PMID: 39105745] [PMCID: PMC11839850] [DOI: 10.1007/s00066-024-02262-2]
Abstract
Artificial intelligence (AI) has developed rapidly and gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools as well as state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation, such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
6. Jeong J, Ham S, Seo BK, Lee JT, Wang S, Bae MS, Cho KR, Woo OH, Song SE, Choi H. Superior performance in classification of breast cancer molecular subtype and histological factors by radiomics based on ultrafast MRI over standard MRI: evidence from a prospective study. La Radiologia Medica 2025; 130:368-380. [PMID: 39862364] [PMCID: PMC11903601] [DOI: 10.1007/s11547-025-01956-6]
Abstract
PURPOSE To compare the performance of ultrafast MRI with standard MRI in classifying histological factors and subtypes of invasive breast cancer among radiologists with varying experience. METHODS From October 2021 to November 2022, this prospective study enrolled 225 participants with 233 breast cancers before treatment (NCT06104189 at clinicaltrials.gov). Tumor segmentation on MRI was performed independently by two readers (R1, dedicated breast radiologist; R2, radiology resident). We extracted 1618 radiomic features and four kinetic features from ultrafast and standard images, respectively. Logistic regression algorithms were adopted for prediction modeling, following feature selection by the least absolute shrinkage and selection operator. The performance of predicting histological factors and subtypes was evaluated using the area under the receiver-operating characteristic curve (AUC). Performance differences between MRI methods and radiologists were assessed using the DeLong test. RESULTS Ultrafast MRI outperformed standard MRI in predicting HER2 status (AUCs [95% CI] of ultrafast MRI vs standard MRI; 0.87 [0.83-0.91] vs 0.77 [0.64-0.90] for R1 and 0.88 [0.83-0.91] vs 0.77 [0.69-0.84] for R2) (all P < 0.05). Both ultrafast MRI and standard MRI showed comparable performance in predicting hormone receptors. Ultrafast MRI exhibited superior performance to standard MRI in classifying subtypes. The classification of the luminal subtype for both readers, the HER2-overexpressed subtype for R2, and the triple-negative subtype for R1 was significantly better with ultrafast MRI (P < 0.05). CONCLUSION Ultrafast MRI-based radiomics holds promise as a noninvasive imaging biomarker for classifying hormone receptors, HER2 status, and molecular subtypes compared to standard MRI, regardless of radiologist experience.
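The modeling pipeline described here, LASSO-based feature selection followed by logistic regression, can be outlined with scikit-learn as below. The feature matrix, labels, and regularization strength are placeholders, not the study's data or tuned settings.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder radiomics matrix: rows = lesions, columns = 1622 features
# (1618 radiomic + 4 kinetic); y = binary HER2 status. Random stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(233, 1622))
y = rng.integers(0, 2, size=233)

clf = make_pipeline(
    StandardScaler(),
    # L1-penalized logistic model acts as the LASSO-style feature selector
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.1)),
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(clf, X, y, scoring="roc_auc", cv=5).mean())
```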
Affiliation(s)
- Juhyun Jeong
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Republic of Korea
- Bo Kyoung Seo
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Jeong Taek Lee
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Shuncong Wang
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, Amsterdam, The Netherlands
- Min Sun Bae
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-Ro, Danwon-Gu, Ansan City, 15355, Gyeonggi-Do, Korea
- Kyu Ran Cho
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Ok Hee Woo
- Department of Radiology, Korea University Guro Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Sung Eun Song
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
- Hangseok Choi
- Medical Science Research Center, Korea University College of Medicine, Seoul, Republic of Korea
7. Jester N, Singh M, Lorr S, Tommasini SM, Wiznia DH, Buono FD. The development of an artificial intelligence auto-segmentation tool for 3D volumetric analysis of vestibular schwannomas. Sci Rep 2025; 15:5918. [PMID: 39966622] [PMCID: PMC11836447] [DOI: 10.1038/s41598-025-88589-x]
Abstract
Linear and volumetric analyses are the typical methods to measure tumor size. 3D volumetric analysis has risen in popularity; however, it is time- and labor-intensive, limiting its implementation in clinical practice. This study aims to show that an AI-led approach can shorten the time required to conduct 3D volumetric analysis of VS tumors and improve image processing accuracy. A total of 143 MRIs, from Yale New Haven Hospital and public patient recruitment, were included in the ground truth dataset. To create the tumor models for the ground truth dataset, an image processing software package (Simpleware ScanIP, Synopsys) was used. The helper (DPP V1.0) was trained using proprietary AI- and ML-based algorithms and information. A proof-of-concept AI model achieved a mean DICE score of 0.76 (standard deviation 0.21). After the final testing stage, the model improved to a final mean DICE score of 0.88 (range 0.74-0.93, standard deviation 0.04). Our study has demonstrated an efficient, accurate AI for 3D volumetric analysis of vestibular schwannomas. The use of this AI will enable faster 3D volumetric analysis compared to manual segmentation, and the overlay function will allow visualization of growth patterns. The tool provides a method of assessing tumor growth and will allow clinicians to make more informed decisions.
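Downstream of any segmentation, manual or AI-generated, the 3D tumor volume follows from the voxel count and the voxel spacing. A minimal sketch, with an illustrative spacing rather than the study's acquisition parameters:

```python
import numpy as np

def tumor_volume_mm3(mask, spacing_mm):
    """Volume of a binary 3D segmentation in mm^3: voxel count x voxel volume."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.astype(bool).sum()) * voxel_mm3

# e.g., a 1.0 x 1.0 x 1.2 mm MRI voxel grid (illustrative spacing)
mask = np.zeros((256, 256, 60), dtype=bool)
mask[100:120, 100:120, 20:30] = True           # toy "tumor" block
print(tumor_volume_mm3(mask, (1.0, 1.0, 1.2)))  # 20*20*10 voxels * 1.2 mm^3
```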
Affiliation(s)
- Noemi Jester
- School of Medicine, Sheffield University, Sheffield, UK
- Department of Orthopaedics and Rehabilitation, Yale School of Medicine, New Haven, CT, USA
- Manwi Singh
- School of Medicine, Sheffield University, Sheffield, UK
- Department of Orthopaedics and Rehabilitation, Yale School of Medicine, New Haven, CT, USA
- Samantha Lorr
- School of Engineering and Applied Science, Yale University, New Haven, CT, USA
- Steven M Tommasini
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Orthopaedics and Rehabilitation, Yale School of Medicine, New Haven, CT, USA
- Daniel H Wiznia
- Department of Mechanical Engineering, Yale University, New Haven, CT, USA
- Department of Orthopaedics and Rehabilitation, Yale School of Medicine, New Haven, CT, USA
- Frank D Buono
- Department of Psychiatry, Yale School of Medicine, 300 George Street, New Haven, CT, 06510, USA
8. Azadivash A. Lost circulation intensity characterization in drilling operations: Leveraging machine learning and well log data. Heliyon 2025; 11:e41059. [PMID: 39758384] [PMCID: PMC11699355] [DOI: 10.1016/j.heliyon.2024.e41059]
Abstract
Lost circulation is one of the major challenges in drilling operations, carrying financial losses and operational risks. Its prime causes are related to several geological parameters, especially in problem-prone formations. This work presents an approach that applies machine learning models to forecast the intensity of lost circulation from well-log data. The study concerns a gas field in northern Iran and uses nine well logs, with lost circulation incidents categorized into six intensity classes. After rigorous exploratory analysis and preprocessing of the data, seven machine learning methods are applied: Random Forest, Extra Trees, Decision Tree, XGBoost, k-Nearest Neighbors, Support Vector Machine, and Hard Voting. Random Forest, Extra Trees, and Hard Voting are the best-performing methods: they attained the most robust results on both key performance metrics and can therefore predict the intensity of lost circulation well. The Extra Trees and Hard Voting models show very high predictive performance, although their limitations in some intensity classes suggest further refinement. The ensemble methods are highly effective for managing the multivariate nature of the task; Hard Voting aggregates multiple classifiers, making it superior to individual models such as support vector machines. This paper offers new insight into integrating machine learning with well-log data to enhance lost circulation prediction, providing a dependable foundation for real-time drilling decision-making. These results show that the models have the potential to lower operational risks, improve drilling safety, and minimize nonproductive time, representing a substantial advance in lost circulation control.
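The hard-voting ensemble described above can be assembled with scikit-learn roughly as follows. XGBoost is omitted to keep the sketch within scikit-learn, and all hyperparameters are assumptions rather than the study's tuned values.

```python
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Majority vote over the classifier families named in the abstract.
voter = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=200, random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("svm", make_pipeline(StandardScaler(), SVC())),
    ],
    voting="hard",  # each model casts one vote per sample
)
# voter.fit(X_logs, y_intensity)  # hypothetical well-log features / 6 classes
```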
Affiliation(s)
- Ahmad Azadivash
- Department of Petroleum Engineering, Amirkabir University of Technology, Tehran, Iran
9. Mita K, Kobayashi N, Takahashi K, Sakai T, Shimaguchi M, Kouno M, Toyota N, Hatano M, Toyota T, Sasaki J. Anatomical recognition of dissection layers, nerves, vas deferens, and microvessels using artificial intelligence during transabdominal preperitoneal inguinal hernia repair. Hernia 2024; 29:52. [PMID: 39724499] [DOI: 10.1007/s10029-024-03223-5]
Abstract
PURPOSE In laparoscopic inguinal hernia surgery, proper recognition of loose connective tissue, nerves, vas deferens, and microvessels is important to prevent postoperative complications, such as recurrence, pain, sexual dysfunction, and bleeding. EUREKA (Anaut Inc., Tokyo, Japan) is a system that uses artificial intelligence (AI) for anatomical recognition. This system can intraoperatively confirm the aforementioned anatomical landmarks. In this study, we validated the accuracy of EUREKA in recognizing dissection layers, nerves, vas deferens, and microvessels during transabdominal preperitoneal inguinal hernia repair (TAPP). METHODS We used TAPP videos to compare EUREKA's recognition of loose connective tissue, nerves, vas deferens, and microvessels with the original surgical video and examined whether EUREKA accurately identified these structures. Intersection over Union (IoU) and F1/Dice scores were calculated to quantitatively evaluate the AI-predicted images. RESULTS The mean IoU and F1/Dice scores were 0.33 and 0.50 for connective tissue, 0.24 and 0.38 for nerves, 0.50 and 0.66 for the vas deferens, and 0.30 and 0.45 for microvessels, respectively. Compared with the images without EUREKA visualization, dissection layers were very clearly recognized and displayed when appropriate tension was applied.
Affiliation(s)
- Kazuhito Mita
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Nao Kobayashi
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Anaut Inc, Tokyo, Japan
- Kunihiko Takahashi
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Takashi Sakai
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Mayu Shimaguchi
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Michitaka Kouno
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Naoyuki Toyota
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Minoru Hatano
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Tsuyoshi Toyota
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
- Junichi Sasaki
- Department of Surgery, Tsudanuma Central General Hospital, 1-9-17 Yatsu, Narashino, Japan
10. Jiang P, Wu S, Qin W, Xie Y. Complex Large-Deformation Multimodality Image Registration Network for Image-Guided Radiotherapy of Cervical Cancer. Bioengineering (Basel) 2024; 11:1304. [PMID: 39768121] [PMCID: PMC11726759] [DOI: 10.3390/bioengineering11121304]
Abstract
In recent years, image-guided brachytherapy for cervical cancer has become an important treatment method for patients with locally advanced cervical cancer, and multi-modality image registration technology is a key step in this system. However, due to the patient's own movement and other factors, the deformation between the different modalities of images is discontinuous, which brings great difficulties to the registration of pelvic computed tomography (CT) and magnetic resonance (MR) images. In this paper, we propose a multimodality image registration network based on multistage transformation enhancement features (MTEF) to maintain the continuity of the deformation field. The model uses the wavelet transform to extract different components of the image, which are fused and enhanced as the input to the model. The model performs multiple registrations from local to global regions. We then propose a novel shared pyramid registration network that can accurately extract features from different modalities, optimizing the predicted deformation field through progressive refinement. In order to improve the registration performance, we also propose a deep learning similarity measurement method combined with bistructural morphology. On the basis of deep learning, bistructural morphology is added to the model to train the pelvic-area registration evaluator, and the model can obtain loss-function parameters that cover large deformations. The model was verified on actual clinical data from cervical cancer patients. After a large number of experiments, our proposed model achieved the highest Dice similarity coefficient (DSC) compared with state-of-the-art registration methods; the DSC of the MTEF algorithm is 5.64% higher than that of the TransMorph algorithm. The method will effectively integrate multi-modal image information, improve the accuracy of tumor localization, and benefit more cervical cancer patients.
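The wavelet-decomposition front end described above, extracting image components before feeding them to the network, might look like the following single-level sketch. The choice of the Haar wavelet, a single decomposition level, and channel stacking are assumptions; the paper's fusion and enhancement steps are not reproduced.

```python
import numpy as np
import pywt

def wavelet_channels(img):
    """Single-level 2D Haar DWT of an image, stacked as a 4-channel tensor:
    approximation (cA) plus horizontal/vertical/diagonal details."""
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float32), "haar")
    return np.stack([cA, cH, cV, cD], axis=0)  # shape (4, H/2, W/2)

channels = wavelet_channels(np.random.rand(256, 256))
print(channels.shape)  # (4, 128, 128), ready to feed a CNN as input channels
```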
Affiliation(s)
- Ping Jiang
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Sijia Wu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Wenjian Qin
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yaoqin Xie
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- University of Chinese Academy of Sciences, Beijing 100049, China
11. Ghorpade H, Kolhar S, Jagtap J, Chakraborty J. An optimized two stage U-Net approach for segmentation of pancreas and pancreatic tumor. MethodsX 2024; 13:102995. [PMID: 39435045] [PMCID: PMC11491966] [DOI: 10.1016/j.mex.2024.102995]
Abstract
The segmentation of the pancreas and pancreatic tumors remains a persistent challenge for radiologists. Consequently, it is essential to develop automated segmentation methods to address this task. U-Net based models are the most often used among the various deep learning-based techniques in tumor segmentation. This paper introduces an innovative hybrid two-stage U-Net model for segmenting both the pancreas and pancreatic tumors. The optimization technique used in this approach is the meta-heuristic Grey Wolf Border Collie Optimization (GWBCO) technique, combining the Grey Wolf Optimization algorithm and the Border Collie Optimization algorithm. Our approach is evaluated using key parameters, such as Dice Similarity Coefficient (DSC), Jaccard Index (JI), sensitivity, specificity and precision, and achieves a DSC of 93.33% for pancreas segmentation. Additionally, the model also achieves a high DSC of 91.46% for pancreatic tumor segmentation. This method helps to improve diagnostic accuracy and assists medical professionals in providing treatment at an early stage with precise intervention. The method offers:
- A two-stage U-Net model that addresses both pancreas and tumor segmentation.
- A combination of two metaheuristic optimization algorithms, Grey Wolf and Border Collie, for enhanced performance.
- High Dice similarity coefficients for pancreas and tumor segmentation.
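For orientation, plain Grey Wolf Optimization, one half of the GWBCO hybrid, can be written as a short loop in which the pack moves toward its three best members. The sketch below is generic GWO over a continuous hyperparameter vector; the Border Collie component and the paper's hybrid scheme are not reproduced, and the toy objective is invented.

```python
import numpy as np

def gwo_minimize(f, bounds, n_wolves=12, n_iters=50, seed=0):
    """Minimal Grey Wolf Optimizer over a box-bounded parameter vector."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_wolves, lo.size))
    for t in range(n_iters):
        fitness = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fitness)[:3]]  # three best wolves
        a = 2.0 * (1 - t / n_iters)  # exploration coefficient decays 2 -> 0
        for i in range(n_wolves):
            cand = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(lo.size), rng.random(lo.size)
                A, C = 2 * a * r1 - a, 2 * r2
                cand.append(leader - A * np.abs(C * leader - X[i]))
            X[i] = np.clip(np.mean(cand, axis=0), lo, hi)
    return X[np.argmin([f(x) for x in X])]

# Toy objective standing in for a validation-loss call; optimum at (1e-3, 0.3).
best = gwo_minimize(lambda p: (p[0] - 1e-3) ** 2 + (p[1] - 0.3) ** 2,
                    bounds=[(1e-5, 1e-1), (0.0, 0.6)])
print(best)
```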
Affiliation(s)
- Himali Ghorpade
- Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Shrikrishna Kolhar
- Symbiosis Institute of Technology, Symbiosis International (Deemed University), Pune, Maharashtra, India
- Jayant Jagtap
- Marik Institute of Computing, Artificial Intelligence, Robotics and Cybernetics, NIMS University Rajasthan, Jaipur, India
12. Boyd C, Brown GC, Kleinig TJ, Mayer W, Dawson J, Jenkinson M, Bezak E. Hyperparameter selection for dataset-constrained semantic segmentation: Practical machine learning optimization. J Appl Clin Med Phys 2024; 25:e14542. [PMID: 39387832] [PMCID: PMC11633816] [DOI: 10.1002/acm2.14542]
Abstract
PURPOSE/AIM This paper provides a pedagogical example of systematic machine learning optimization for small-dataset image segmentation, emphasizing hyperparameter selection. A simple process is presented for medical physicists to examine hyperparameter optimization, and it is applied to a case study demonstrating the benefit of the method. MATERIALS AND METHODS An unrestricted public Computed Tomography (CT) dataset, with binary organ segmentation, was used to develop a multiclass segmentation model. To start the optimization process, a preliminary manual search of hyperparameters was conducted, and from there a grid search identified the most influential result metrics. A total of 658 different models were trained in 2100 h, using 13 160 effective patients. The quantity of results was analyzed using random forest regression, identifying relative hyperparameter impact. RESULTS Metric-implied segmentation quality (accuracy 96.8%, precision 95.1%) and visual inspection were found to be mismatched. In this work batch normalization was most important, but performance varied with the hyperparameters and metrics selected. Targeted grid-search optimization with random forest analysis of relative hyperparameter importance was an easily implementable sensitivity analysis approach. CONCLUSION The proposed optimization method gives a systematic and quantitative approach to something intuitively understood: that hyperparameters change model performance. Even the grid search optimization with random forest analysis presented here can be informative within hardware and data quality/availability limitations, adding confidence to model validity and minimizing decision-making risks. By providing a guided methodology, this work helps medical physicists to improve their model optimization, irrespective of the specific challenges posed by datasets and model design.
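The core of the described analysis, regressing the achieved metric on the hyperparameter settings and reading off relative importances, can be reproduced in a few lines of scikit-learn. The rows below are invented placeholders standing in for the study's 658 runs.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# One row per trained model: its hyperparameters and the resulting metric.
runs = pd.DataFrame({
    "learning_rate": [1e-3, 1e-4, 1e-3, 1e-2],
    "batch_norm":    [1, 0, 1, 0],      # 1 = enabled
    "batch_size":    [8, 16, 8, 32],
    "dice":          [0.82, 0.71, 0.84, 0.55],
})
X, y = runs.drop(columns="dice"), runs["dice"]

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")  # relative hyperparameter impact on Dice
```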
Affiliation(s)
- Chris Boyd
- Allied Health and Human Performance, University of South Australia, Adelaide, Australia
- Medical Physics and Radiation Safety, South Australia Medical Imaging, Adelaide, Australia
- Gregory C. Brown
- Allied Health and Human Performance, University of South Australia, Adelaide, Australia
- Timothy J. Kleinig
- Department of Neurology, Royal Adelaide Hospital, Adelaide, Australia
- Adelaide Medical School, The University of Adelaide, Adelaide, Australia
- Wolfgang Mayer
- Discipline of Surgery, University of Adelaide, Adelaide, Australia
- Joseph Dawson
- Department of Vascular and Endovascular Surgery, Royal Adelaide Hospital, Adelaide, Australia
- Industrial AI Research Centre, UniSA STEM, University of South Australia, Adelaide, Australia
- Mark Jenkinson
- Australian Institute for Machine Learning (AIML), School of Computer and Mathematical Sciences, University of Adelaide, Adelaide, Australia
- South Australian Health and Medical Research Institute (SAHMRI), Adelaide, Australia
- Wellcome Trust Centre for Integrative Neuroimaging (WIN), Nuffield Department of Clinical Neurosciences (FMRIB), University of Oxford, Oxford, UK
- Eva Bezak
- Allied Health and Human Performance, University of South Australia, Adelaide, Australia
- Department of Physics, University of Adelaide, Adelaide, Australia
13. Luo X, Yang Y, Yin S, Li H, Shao Y, Zheng D, Li X, Li J, Fan W, Li J, Ban X, Lian S, Zhang Y, Yang Q, Zhang W, Zhang C, Ma L, Luo Y, Zhou F, Wang S, Lin C, Li J, Luo M, He J, Xu G, Gao Y, Shen D, Sun Y, Mou Y, Zhang R, Xie C. Automated segmentation of brain metastases with deep learning: A multi-center, randomized crossover, multi-reader evaluation study. Neuro Oncol 2024; 26:2140-2151. [PMID: 38991556] [PMCID: PMC11639187] [DOI: 10.1093/neuonc/noae113]
Abstract
BACKGROUND Artificial intelligence has been proposed for brain metastasis (BM) segmentation but it has not been fully clinically validated. The aim of this study was to develop and evaluate a system for BM segmentation. METHODS A deep-learning-based BM segmentation system (BMSS) was developed using contrast-enhanced MR images from 488 patients with 10338 brain metastases. A randomized crossover, multi-reader study was then conducted to evaluate the performance of the BMSS for BM segmentation using data prospectively collected from 50 patients with 203 metastases at 5 centers. Five radiology residents and 5 attending radiologists were randomly assigned to contour the same prospective set in assisted and unassisted modes. Aided and unaided Dice similarity coefficients (DSCs) and contouring times per lesion were compared. RESULTS The BMSS alone yielded a median DSC of 0.91 (95% confidence interval, 0.90-0.92) in the multi-center set and showed comparable performance between the internal and external sets (P = .67). With BMSS assistance, the readers increased the median DSC from 0.87 (0.87-0.88) to 0.92 (0.92-0.92) (P < .001) with a median time saving of 42% (40-45%) per lesion. Resident readers showed a greater improvement than attending readers in contouring accuracy (improved median DSC, 0.05 [0.05-0.05] vs 0.03 [0.03-0.03]; P < .001), but a similar time reduction (reduced median time, 44% [40-47%] vs 40% [37-44%]; P = .92) with BMSS assistance. CONCLUSIONS The BMSS can be optimally applied to improve the efficiency of brain metastasis delineation in clinical practice.
Affiliation(s)
- Xiao Luo
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Yadi Yang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Shaohan Yin
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Hui Li
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Ying Shao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Dechun Zheng
- Department of Radiology, Fujian Cancer Hospital, Fujian Medical University Cancer Hospital, Fuzhou, Fujian Province, China
- Xinchun Li
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong Province, China
- Jianpeng Li
- Department of Radiology, Affiliated Dongguan Hospital, Southern Medical University, Dongguan, Guangdong Province, China
- Weixiong Fan
- Department of Magnetic Resonance, Guangdong Provincial Key Laboratory of Precision Medicine and Clinical Translational Research of Hakka Population, Meizhou People’s Hospital, Meizhou, Guangdong Province, China
- Jing Li
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Xiaohua Ban
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Shanshan Lian
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Yun Zhang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Qiuxia Yang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Weijing Zhang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Cheng Zhang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Lidi Ma
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Yingwei Luo
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Fan Zhou
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Shiyuan Wang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Cuiping Lin
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Jiao Li
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Ma Luo
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Jianxun He
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, Guangdong Province, China
- Guixiao Xu
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Yaozong Gao
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Dinggang Shen
- R&D Department, Shanghai United Imaging Intelligence Co., Ltd, Shanghai, China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong Province, China
- Yonggao Mou
- Department of Neurosurgery, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong Province, China
- Rong Zhang
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Chuanmiao Xie
- State Key Laboratory of Oncology in South China, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
- Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, China
14. Conrad AM, Zimmermann J, Mohr D, Froelich MF, Hertel A, Rathmann N, Boesing C, Thiel M, Schoenberg SO, Krebs J, Luecke T, Rocco PRM, Otto M. Quantification of pulmonary edema using automated lung segmentation on computed tomography in mechanically ventilated patients with acute respiratory distress syndrome. Intensive Care Med Exp 2024; 12:95. [PMID: 39487874] [PMCID: PMC11531458] [DOI: 10.1186/s40635-024-00685-w]
Abstract
BACKGROUND Quantification of pulmonary edema in patients with acute respiratory distress syndrome (ARDS) by chest computed tomography (CT) scan has not been validated in routine diagnostics due to its complexity and time-consuming nature. Therefore, the single-indicator transpulmonary thermodilution (TPTD) technique to measure extravascular lung water (EVLW) has been used in the clinical setting. Advances in artificial intelligence (AI) now enable CT images of inhomogeneous lungs to be segmented automatically, even by an intensive care physician with no prior radiology training, within a relatively short time. Nevertheless, there is a paucity of data validating the quantification of pulmonary edema using automated lung segmentation on CT compared with TPTD. METHODS A retrospective study (January 2016 to December 2021) analyzed patients with ARDS, admitted to the intensive care unit of the Department of Anesthesiology and Critical Care Medicine, University Hospital Mannheim, who underwent a chest CT scan and hemodynamic monitoring using TPTD at the same time. Pulmonary edema was estimated using manual and automated lung segmentation on CT and then compared to the pulmonary edema calculated from the EVLW determined using TPTD. RESULTS 145 comparative measurements of pulmonary edema with TPTD and CT were included in the study. Estimating pulmonary edema using either automated lung segmentation on CT or TPTD showed a low bias overall (-104 ml) but wide limits of agreement (upper: 936 ml, lower: -1144 ml). In 13% of the analyzed CT scans, the agreement between the segmentation of the AI algorithm and a dedicated investigator was poor. Manual segmentation and automated segmentation adjusted for contrast agent did not improve the limits of agreement. CONCLUSIONS Automated lung segmentation on CT can be considered an unbiased but imprecise measurement of pulmonary edema in mechanically ventilated patients with ARDS.
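The reported bias and limits of agreement correspond to a Bland-Altman-style comparison, which can be computed as below. Whether the study used exactly this formula is an assumption based on the reported statistics, and the paired values are placeholders.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement
    methods (e.g., CT-derived vs TPTD-derived edema volume, in ml)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits
    return bias, bias - loa, bias + loa

ct = np.array([900.0, 1200.0, 650.0])    # placeholder paired measurements
tptd = np.array([1000.0, 1150.0, 800.0])
print(bland_altman(ct, tptd))            # (bias, lower LoA, upper LoA)
```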
Collapse
Affiliation(s)
- Alice Marguerite Conrad
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Julia Zimmermann
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - David Mohr
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Matthias F Froelich
- Department of Clinical Radiology and Nuclear Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Alexander Hertel
- Department of Clinical Radiology and Nuclear Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Nils Rathmann
- Department of Clinical Radiology and Nuclear Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Christoph Boesing
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Manfred Thiel
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Stefan O Schoenberg
- Department of Clinical Radiology and Nuclear Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Joerg Krebs
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Thomas Luecke
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany
| | - Patricia R M Rocco
- Laboratory of Pulmonary Investigation, Centro de Ciências da Saúde, Carlos Chagas Filho Institute of Biophysics, Federal University of Rio de Janeiro, Avenida Carlos Chagas Filho, 373, Bloco G-014, Ilha Do Fundão, Rio de Janeiro, Brazil
| | - Matthias Otto
- Department of Anesthesiology and Critical Care Medicine, Faculty of Medicine, University Hospital Mannheim, University of Heidelberg, Theodor-Kutzer Ufer 1-3, 68165, Mannheim, Germany.
| |
Collapse
|
15
|
Laddi A, Goyal S, Himani, Savlania A. Vein segmentation and visualization of upper and lower extremities using convolution neural network. BIOMED ENG-BIOMED TE 2024; 69:455-464. [PMID: 38651783 DOI: 10.1515/bmt-2023-0331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/19/2023] [Accepted: 04/03/2024] [Indexed: 04/25/2024]
Abstract
OBJECTIVES The study focused on developing a reliable real-time venous localization, identification, and visualization framework based upon a deep learning (DL) self-parameterized Convolutional Neural Network (CNN) algorithm for segmentation of the venous map for both lower and upper limb datasets acquired under unconstrained conditions using a near-infrared (NIR) imaging setup, specifically to assist vascular surgeons during venipuncture, vascular surgeries, or Chronic Venous Disease (CVD) treatments. METHODS A portable image acquisition setup was designed to collect venous data (upper and lower extremities) from 72 subjects. A manually annotated image dataset was used to train and compare the performance of existing well-known CNN-based architectures such as ResNet and VGGNet with a self-parameterized U-Net, improving automated vein segmentation and visualization. RESULTS Experimental results indicated that the self-parameterized U-Net performs better at segmenting the unconstrained dataset in comparison with conventional CNN feature-based learning models, with a Dice score of 0.58 and 96.7% accuracy for real-time vein visualization, making it appropriate for locating veins in real time under unconstrained conditions. CONCLUSIONS Self-parameterized U-Net for vein segmentation and visualization has the potential to reduce risks associated with traditional venipuncture or CVD treatments by outperforming conventional CNN architectures, providing vascular assistance, and improving patient care and treatment outcomes.
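For context, the reported Dice score of 0.58 measures the overlap between predicted and manually annotated vein maps. A minimal sketch of the metric on binary masks (a generic helper, not the authors' code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# e.g. dice_score(predicted_vein_map > 0.5, annotated_vein_map) -> float in [0, 1]
```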
Collapse
Affiliation(s)
- Amit Laddi
- Biomedical Applications Group, CSIR-Central Scientific Instruments Organisation (CSIO), Chandigarh-160030, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh 201002, India
| | - Shivalika Goyal
- Biomedical Applications Group, CSIR-Central Scientific Instruments Organisation (CSIO), Chandigarh-160030, India
- Academy of Scientific and Innovative Research (AcSIR), Ghaziabad, Uttar Pradesh 201002, India
| | | | - Ajay Savlania
- Department of General Surgery, PGIMER, Chandigarh, India
| |
Collapse
|
16
|
Hu M, Zhang Y, Xue H, Lv H, Han S. Mamba- and ResNet-Based Dual-Branch Network for Ultrasound Thyroid Nodule Segmentation. Bioengineering (Basel) 2024; 11:1047. [PMID: 39451422 PMCID: PMC11504408 DOI: 10.3390/bioengineering11101047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2024] [Revised: 10/15/2024] [Accepted: 10/18/2024] [Indexed: 10/26/2024] Open
Abstract
Accurate segmentation of thyroid nodules in ultrasound images is crucial for the diagnosis of thyroid cancer and preoperative planning. However, the segmentation of thyroid nodules is challenging due to their irregular shape, blurred boundary, and uneven echo texture. To address these challenges, a novel Mamba- and ResNet-based dual-branch network (MRDB) is proposed. Specifically, the visual state space block (VSSB) from Mamba and ResNet-34 are utilized to construct a dual encoder for extracting global semantics and local details, and establishing multi-dimensional feature connections. Meanwhile, an upsampling-convolution strategy is employed in the left decoder focusing on image size and detail reconstruction. A convolution-upsampling strategy is used in the right decoder to emphasize gradual feature refinement and recovery. To facilitate the interaction between local details and global context within the encoder and decoder, cross-skip connection is introduced. Additionally, a novel hybrid loss function is proposed to improve the boundary segmentation performance of thyroid nodules. Experimental results show that MRDB outperforms the state-of-the-art approaches with DSC of 90.02% and 80.6% on two public thyroid nodule datasets, TN3K and TNUI-2021, respectively. Furthermore, experiments on a third external dataset, DDTI, demonstrate that our method improves the DSC by 10.8% compared to baseline and exhibits good generalization to clinical small-scale thyroid nodule datasets. The proposed MRDB can effectively improve thyroid nodule segmentation accuracy and has great potential for clinical applications.
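The paper's hybrid loss for sharper nodule boundaries is not spelled out in the abstract; losses of this kind typically build on a weighted Dice-plus-BCE combination, sketched below under that assumption (the weighting scheme and names are ours, not the published formulation):

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, dice_weight=0.5, eps=1e-7):
    """Weighted sum of binary cross-entropy and soft Dice loss.
    `target` is a float tensor of 0/1 ground-truth labels."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum()
    soft_dice = (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)
    return (1.0 - dice_weight) * bce + dice_weight * (1.0 - soft_dice)
```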
Collapse
Affiliation(s)
- Min Hu
- Department of Medical Electronics, School of Biomedical Engineering, Air Force Medical University, Xi’an 710032, China; (M.H.); (Y.Z.); (H.X.); (H.L.)
| | - Yaorong Zhang
- Department of Medical Electronics, School of Biomedical Engineering, Air Force Medical University, Xi’an 710032, China; (M.H.); (Y.Z.); (H.X.); (H.L.)
- School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China
| | - Huijun Xue
- Department of Medical Electronics, School of Biomedical Engineering, Air Force Medical University, Xi’an 710032, China; (M.H.); (Y.Z.); (H.X.); (H.L.)
| | - Hao Lv
- Department of Medical Electronics, School of Biomedical Engineering, Air Force Medical University, Xi’an 710032, China; (M.H.); (Y.Z.); (H.X.); (H.L.)
| | - Shipeng Han
- Department of Medical Electronics, School of Biomedical Engineering, Air Force Medical University, Xi’an 710032, China; (M.H.); (Y.Z.); (H.X.); (H.L.)
| |
Collapse
|
17
|
Li L, Lu Z, Jiang A, Sha G, Luo Z, Xie X, Ding X. Swin Transformer-based automatic delineation of the hippocampus by MRI in hippocampus-sparing whole-brain radiotherapy. Front Neurosci 2024; 18:1441791. [PMID: 39464425 PMCID: PMC11502472 DOI: 10.3389/fnins.2024.1441791] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2024] [Accepted: 09/26/2024] [Indexed: 10/29/2024] Open
Abstract
Objective This study aims to develop and validate SwinHS, a deep learning-based automatic segmentation model designed for precise hippocampus delineation in patients receiving hippocampus-protected whole-brain radiotherapy. By streamlining this process, we seek to significantly improve workflow efficiency for clinicians. Methods A total of 100 three-dimensional T1-weighted MR images were collected, with 70 patients allocated for training and 30 for testing. Manual delineation of the hippocampus was performed according to RTOG 0933 guidelines. The SwinHS model, which incorporates a 3D ELSA Transformer module and an sSE CNN decoder, was trained and tested on these datasets. To demonstrate the effectiveness of SwinHS, this study compared its segmentation performance with that of V-Net, U-Net, ResNet, and ViT. Evaluation metrics included the Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), and Hausdorff distance (HD). Dosimetric evaluation compared radiotherapy plans generated using automatic segmentation (plan AD) versus manual hippocampus segmentation (plan MD). Results SwinHS outperformed four advanced deep learning-based models, achieving an average DSC of 0.894, a JSC of 0.817, and an HD of 3.430 mm. Dosimetric evaluation revealed that both plan AD and plan MD met treatment plan constraints for the planning target volume (PTV). However, the hippocampal Dmax in plan AD was significantly greater than that in plan MD, approaching the 17 Gy constraint limit. Nonetheless, there were no significant differences in D100% or maximum doses to other critical structures between the two plans. Conclusion Compared with manual delineation, SwinHS demonstrated superior segmentation performance and a significantly shorter delineation time. While plan AD met clinical requirements, caution should be exercised regarding hippocampal Dmax. SwinHS offers a promising tool to enhance workflow efficiency and facilitate hippocampal protection in radiotherapy planning for patients with brain metastases.
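Of the three metrics above, DSC and JSC are simple overlap ratios, while the Hausdorff distance (HD) compares boundary point sets. A minimal HD sketch using SciPy, assuming the hippocampus surfaces have already been converted to N x 3 coordinate arrays in millimetres:

```python
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two surface point sets."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)

# points_a, points_b: (N, 3) arrays of hippocampus surface coordinates in mm
```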
Collapse
Affiliation(s)
- Liang Li
- Department of Radiotherapy, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
| | - Zhennan Lu
- Department of Equipment, Affiliated Hospital of Nanjing University of Chinese Medicine (Jiangsu Province Hospital of Chinese Medicine), Nanjing, China
| | - Aijun Jiang
- Department of Radiotherapy, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
| | - Guanchen Sha
- Department of Radiation Oncology, Xuzhou Central Hospital, Xuzhou, China
| | - Zhaoyang Luo
- HaiChuang Future Medical Technology Co., Ltd., Zhejiang, China
| | - Xin Xie
- Department of Radiotherapy, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
| | - Xin Ding
- Department of Radiotherapy, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
| |
Collapse
|
18
|
Chen J, Zeng H, Cheng Y, Yang B. Identifying radiogenomic associations of breast cancer based on DCE-MRI by using Siamese Neural Network with manufacturer bias normalization. Med Phys 2024; 51:7269-7281. [PMID: 38922986 DOI: 10.1002/mp.17266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2024] [Revised: 06/08/2024] [Accepted: 06/08/2024] [Indexed: 06/28/2024] Open
Abstract
BACKGROUND AND PURPOSE The immunohistochemical test (IHC) for Human Epidermal Growth Factor Receptor 2 (HER2) and hormone receptors (HR) provides prognostic information and guides treatment for patients with invasive breast cancer. The objective of this paper is to establish a non-invasive system for identifying HER2 and HR in breast cancer using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). METHODS In light of the absence of high-performance algorithms and external validation in previously published methods, this study utilizes 3D deep features and radiomics features to represent the information in the Region of Interest (ROI). A Siamese Neural Network was employed as the classifier, with the 3D deep features and radiomics features serving as the network input. To neutralize manufacturer bias, a batch effect normalization method, ComBat, was introduced. To enhance the reliability of the study, two datasets, Predict Your Therapeutic Response with Imaging and moLecular Analysis (I-SPY 1) and I-SPY 2, were incorporated. I-SPY 2 was utilized for model training and validation, while I-SPY 1 was exclusively employed for external validation. Additionally, a breast tumor segmentation network was trained to improve radiomic feature extraction. RESULTS The results indicate that our approach achieved an average Area Under the Curve (AUC) of 0.632, with a Standard Error of the Mean (SEM) of 0.042, for HER2 prediction in the I-SPY 2 dataset. For HR prediction, our method attained an AUC of 0.635 (SEM 0.041), surpassing other published methods in the AUC metric. Moreover, the proposed method yielded competitive results in other metrics. In external validation using the I-SPY 1 dataset, our approach achieved an AUC of 0.567 (SEM 0.032) for HR prediction and 0.563 (SEM 0.033) for HER2 prediction. CONCLUSION This study proposes a non-invasive system for identifying HER2 and HR in breast cancer. Although the results do not conclusively demonstrate superiority in both tasks, they indicate that the proposed method achieved good performance and is a competitive classifier compared to other reference methods. Ablation studies demonstrate that both the radiomics features and the deep features are beneficial for the Siamese Neural Network. The introduced manufacturer bias normalization method has been shown to enhance the method's performance. Furthermore, the external validation of the method enhances the reliability of this research. Source code, the pre-trained segmentation network, radiomics and deep features, data for statistical analysis, and Supporting Information for this article are available online at: https://github.com/FORRESTHUACHEN/Siamese_Neural_Network_based_Brest_cancer_Radiogenomic.
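The abstract names the classifier topology (a Siamese network over radiomics and 3D deep feature vectors) without detailing it, so the sketch below shows only the generic pattern: twin encoders with shared weights and a small head on the embedding difference. Layer sizes and the difference-based head are our assumptions:

```python
import torch
import torch.nn as nn

class SiameseClassifier(nn.Module):
    """Twin encoders with shared weights over feature vectors; a small head
    scores the pair from the absolute difference of the two embeddings."""
    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, emb_dim),
        )
        self.head = nn.Linear(emb_dim, 1)

    def forward(self, x1, x2):
        z1, z2 = self.encoder(x1), self.encoder(x2)  # same weights for both branches
        return self.head(torch.abs(z1 - z2))         # pairwise logit
```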
Collapse
Affiliation(s)
- Junhua Chen
- School of Medicine, Shanghai University, Shanghai, China
| | - Haiyan Zeng
- Department of Radiation Oncology, Division of Thoracic Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
| | - Yanyan Cheng
- Medical Engineering Department, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Shandong, China
| | - Banghua Yang
- School of Medicine, Shanghai University, Shanghai, China
- School of Mechatronic Engineering and Automation, Research Center of Brain Computer Engineering, Shanghai University, Shanghai, China
| |
Collapse
|
19
|
Saavedra JP, Droppelmann G, Jorquera C, Feijoo F. Automated segmentation and classification of supraspinatus fatty infiltration in shoulder magnetic resonance image using a convolutional neural network. Front Med (Lausanne) 2024; 11:1416169. [PMID: 39290391 PMCID: PMC11405335 DOI: 10.3389/fmed.2024.1416169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2024] [Accepted: 08/20/2024] [Indexed: 09/19/2024] Open
Abstract
Background Goutallier's fatty infiltration of the supraspinatus muscle is a critical condition in degenerative shoulder disorders. Deep learning research primarily uses manual segmentation and labeling to detect this condition. Employing unsupervised training with a hybrid framework of segmentation and classification could offer an efficient solution. Aim To develop and assess a two-step deep learning model for detecting the region of interest and categorizing supraspinatus muscle fatty infiltration on magnetic resonance images (MRI) according to Goutallier's scale. Materials and methods A retrospective study was performed from January 1, 2019 to September 20, 2020, using 900 T2-weighted MRI images with supraspinatus muscle fatty infiltration diagnoses. A model with two sequential neural networks was implemented and trained. The first sub-model automatically detects the region of interest using a U-Net model. The second sub-model performs binary classification using the VGG-19 architecture. The model's performance was computed as the average over five-fold cross-validation. Loss, accuracy, Dice coefficient (95% CI), AUROC, sensitivity, and specificity (95% CI) were reported. Results Six hundred and six shoulder MRIs were analyzed. The Goutallier distribution was as follows: 0 (66.50%); 1 (18.81%); 2 (8.42%); 3 (3.96%); 4 (2.31%). Segmentation results demonstrate high levels of accuracy (0.9977 ± 0.0002) and Dice score (0.9441 ± 0.0031), while the classification model also achieves high levels of accuracy (0.9731 ± 0.0230), sensitivity (0.9000 ± 0.0980), specificity (0.9788 ± 0.0257), and AUROC (0.9903 ± 0.0092). Conclusion The proposed two-step deep learning model demonstrated strong performance in both segmentation and classification tasks.
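The two-step design chains a segmentation sub-model into a classification sub-model. A rough sketch of that wiring (the ROI masking strategy and the one-output VGG-19 head are our assumptions; `unet` stands in for the trained first-stage network):

```python
import torch
import torchvision.models as models

# VGG-19 with a single-output head for the binary Goutallier grouping
# (our assumption for the second-stage classifier head).
vgg = models.vgg19(weights=None)
vgg.classifier[-1] = torch.nn.Linear(4096, 1)

def two_step_predict(unet, classifier, image):
    """Step 1: segment the ROI with a U-Net; step 2: classify the masked image."""
    with torch.no_grad():
        mask = (torch.sigmoid(unet(image)) > 0.5).float()  # ROI mask
        roi = image * mask                                 # keep muscle region only
        return torch.sigmoid(classifier(roi))              # probability of class 1
```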
Collapse
Affiliation(s)
- Juan Pablo Saavedra
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
| | - Guillermo Droppelmann
- Clínica MEDS, Santiago, Chile
- Harvard T.H. Chan School of Public Health, Boston, MA, United States
| | - Carlos Jorquera
- Facultad de Ciencias, Escuela de Nutrición y Dietética, Universidad Mayor, Santiago, Chile
| | - Felipe Feijoo
- School of Industrial Engineering, Pontificia Universidad Católica de Valparaíso, Valparaíso, Chile
| |
Collapse
|
20
|
Zwijnen AW, Watzema L, Ridwan Y, van Der Pluijm I, Smal I, Essers J. Self-adaptive deep learning-based segmentation for universal and functional clinical and preclinical CT image analysis. Comput Biol Med 2024; 179:108853. [PMID: 39013341 DOI: 10.1016/j.compbiomed.2024.108853] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Revised: 07/04/2024] [Accepted: 07/04/2024] [Indexed: 07/18/2024]
Abstract
BACKGROUND Methods to monitor cardiac functioning non-invasively can accelerate preclinical and clinical research into novel treatment options for heart failure. However, manual image analysis of cardiac substructures is resource-intensive and error-prone. While automated methods exist for clinical CT images, translating these to preclinical μCT data is challenging. We employed deep learning to automate the extraction of quantitative data from both CT and μCT images. METHODS We collected a public dataset of cardiac CT images of human patients, as well as acquired μCT images of wild-type and accelerated aging mice. The left ventricle, myocardium, and right ventricle were manually segmented in the μCT training set. After template-based heart detection, two separate segmentation neural networks were trained using the nnU-Net framework. RESULTS The mean Dice score of the CT segmentation results (0.925 ± 0.019, n = 40) was superior to those achieved by state-of-the-art algorithms. Automated and manual segmentations of the μCT training set were nearly identical. The estimated median Dice score (0.940) of the test set results was comparable to existing methods. The automated volume metrics were similar to manual expert observations. In aging mice, ejection fractions had significantly decreased, and myocardial volume increased by age 24 weeks. CONCLUSIONS With further optimization, automated data extraction expands the application of (μ)CT imaging, while reducing subjectivity and workload. The proposed method efficiently measures the left and right ventricular ejection fraction and myocardial mass. With uniform translation between image types, cardiac functioning in diastolic and systolic phases can be monitored in both animals and humans.
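Once the left ventricle is segmented in the end-diastolic and end-systolic phases, the ejection fraction follows directly from voxel counts. A minimal sketch assuming binary NumPy masks and a known voxel volume:

```python
import numpy as np

def ejection_fraction(lv_mask_ed, lv_mask_es, voxel_volume_mm3):
    """Ejection fraction (%) from end-diastolic and end-systolic LV masks."""
    edv = np.count_nonzero(lv_mask_ed) * voxel_volume_mm3  # end-diastolic volume
    esv = np.count_nonzero(lv_mask_es) * voxel_volume_mm3  # end-systolic volume
    return 100.0 * (edv - esv) / edv
```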
Collapse
Affiliation(s)
- Anne-Wietje Zwijnen
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
| | | | - Yanto Ridwan
- AMIE Core Facility, Erasmus Medical Center, Rotterdam, the Netherlands
| | - Ingrid van Der Pluijm
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
| | - Ihor Smal
- Department of Cell Biology, Erasmus University Medical Center, Rotterdam, the Netherlands
| | - Jeroen Essers
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Radiotherapy, Erasmus University Medical Center, Rotterdam, the Netherlands.
| |
Collapse
|
21
|
Jain R, Lee F, Luo N, Hyare H, Pandit AS. A Practical Guide to Manual and Semi-Automated Neurosurgical Brain Lesion Segmentation. NEUROSCI 2024; 5:265-275. [PMID: 39483281 PMCID: PMC11468002 DOI: 10.3390/neurosci5030021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2024] [Revised: 07/30/2024] [Accepted: 07/31/2024] [Indexed: 11/03/2024] Open
Abstract
The purpose of this article is to provide a practical guide for manual and semi-automated image segmentation of common neurosurgical cranial lesions, namely meningioma, glioblastoma multiforme (GBM) and subarachnoid haemorrhage (SAH), for neurosurgical trainees and researchers. MATERIALS AND METHODS The medical images used were sourced from the Medical Image Computing and Computer Assisted Interventions Society (MICCAI) Multimodal Brain Tumour Segmentation Challenge (BRATS) image database and from the local Picture Archival and Communication System (PACS) record with consent. Image pre-processing was carried out using MRIcron software (v1.0.20190902). ITK-SNAP (v3.8.0) was used in this guideline due to its availability and powerful built-in segmentation tools, although others (Seg3D, Freesurfer and 3D Slicer) are available. Quality control was achieved by having expert segmenters review the outputs. RESULTS A pipeline was developed to demonstrate the pre-processing and manual and semi-automated segmentation of patient images for each cranial lesion, accompanied by image guidance and video recordings. Three sample segmentations were generated to illustrate potential challenges. Advice and solutions were provided within both text and video. CONCLUSIONS Semi-automated segmentation methods enhance efficiency, increase reproducibility, and are suitable for incorporation into future clinical practice. However, manual segmentation remains a highly effective technique in specific circumstances and provides initial training sets for the development of more advanced semi- and fully automated segmentation algorithms.
Collapse
Affiliation(s)
- Raunak Jain
- UCL Medical School, University College London, London WC1E 6DE, UK; (R.J.); (F.L.); (N.L.)
| | - Faith Lee
- UCL Medical School, University College London, London WC1E 6DE, UK; (R.J.); (F.L.); (N.L.)
| | - Nianhe Luo
- UCL Medical School, University College London, London WC1E 6DE, UK; (R.J.); (F.L.); (N.L.)
| | - Harpreet Hyare
- Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK;
| | - Anand S. Pandit
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London WC1N 3BG, UK
- High-Dimensional Neurology, Institute of Neurology, University College London, London WC1N 3BG, UK
| |
Collapse
|
22
|
Kinoshita K, Maruyama T, Kobayashi N, Imanishi S, Maruyama M, Ohira G, Endo S, Tochigi T, Kinoshita M, Fukui Y, Kumazu Y, Kita J, Shinohara H, Matsubara H. An artificial intelligence-based nerve recognition model is useful as surgical support technology and as an educational tool in laparoscopic and robot-assisted rectal cancer surgery. Surg Endosc 2024; 38:5394-5404. [PMID: 39073558 PMCID: PMC11362368 DOI: 10.1007/s00464-024-10939-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2024] [Accepted: 05/17/2024] [Indexed: 07/30/2024]
Abstract
BACKGROUND Artificial intelligence (AI) has the potential to enhance surgical practice by predicting anatomical structures within the surgical field, thereby supporting surgeons' experience and cognitive skills. Preserving and utilising nerves as critical guiding structures is paramount in rectal cancer surgery. Hence, we developed a deep learning model based on U-Net to automatically segment nerves. METHODS The model performance was evaluated using 60 randomly selected frames, and the Dice and Intersection over Union (IoU) scores were quantitatively assessed by comparing them with ground truth data. Additionally, a questionnaire was administered to five colorectal surgeons to gauge the extent of underdetection, overdetection, and the practical utility of the model in rectal cancer surgery. Furthermore, we conducted an educational assessment of non-colorectal surgeons, trainees, physicians, and medical students. We evaluated their ability to recognise nerves in mesorectal dissection scenes, scored them on a 12-point scale, and examined the score changes before and after exposure to the AI analysis videos. RESULTS The mean Dice and IoU scores for the 60 test frames were 0.442 (range 0.0465-0.639) and 0.292 (range 0.0238-0.469), respectively. The colorectal surgeons reported an underdetection score of 0.80 (±0.47), an overdetection score of 0.58 (±0.41), and a usefulness evaluation score of 3.38 (±0.43). The nerve recognition scores of non-colorectal surgeons, rotating residents, and medical students significantly improved by simply watching the AI nerve recognition videos for 1 min. Notably, medical students showed a more substantial increase in nerve recognition scores when exposed to AI nerve analysis videos than when exposed to traditional lectures on nerves. CONCLUSIONS In laparoscopic and robot-assisted rectal cancer surgeries, the AI-based nerve recognition model achieved satisfactory recognition levels for expert surgeons and demonstrated effectiveness in educating junior surgeons and medical students on nerve recognition.
Collapse
Affiliation(s)
- Kazuya Kinoshita
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
- Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
| | - Tetsuro Maruyama
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan.
| | | | - Shunsuke Imanishi
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Michihiro Maruyama
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Gaku Ohira
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Satoshi Endo
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Toru Tochigi
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Mayuko Kinoshita
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| | - Yudai Fukui
- Department of Gastroenterological Surgery, Toranomon Hospital, Tokyo, Japan
| | - Yuta Kumazu
- Anaut Inc, Tokyo, Japan
- Department of Surgery, Yokohama City University, Kanagawa, Japan
| | - Junji Kita
- Department of General Surgery, Kumagaya General Hospital, Saitama, Japan
| | - Hisashi Shinohara
- Department of Gastroenterological Surgery, Hyogo College of Medicine, Hyogo, Japan
| | - Hisahiro Matsubara
- Department of Frontier Surgery, Graduate School of Medicine, Chiba University, Chiba, Japan
| |
Collapse
|
23
|
Ye J, Zhao Z, Ghafourian E, Tajally A, Alkhazaleh HA, Lee S. Optimizing the topology of convolutional neural network (CNN) and artificial neural network (ANN) for brain tumor diagnosis (BTD) through MRIs. Heliyon 2024; 10:e35083. [PMID: 39687857 PMCID: PMC11647943 DOI: 10.1016/j.heliyon.2024.e35083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2023] [Revised: 07/22/2024] [Accepted: 07/22/2024] [Indexed: 12/18/2024] Open
Abstract
The use of MRI analysis for BTD and tumor type detection has considerable importance within the domain of machine vision. Numerous methodologies have been proposed to address this issue, and significant progress has been achieved in this domain via the use of deep learning (DL) approaches. While the majority of offered approaches using artificial neural networks (ANNs) and deep neural networks (DNNs) demonstrate satisfactory performance in BTD, none of these research studies can ensure the optimality of the employed learning model structure. Put simply, there is room for improvement in the efficiency of these learning models in BTD. This research introduces a novel approach for optimizing the configuration of Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs) to address the BTD issue. The suggested approach employs a CNN for segmenting brain MRIs, with the model's configurable hyper-parameters tuned using a genetic algorithm (GA). Multi-Linear Principal Component Analysis (MPCA) is then used to reduce the dimensionality of the features extracted from the segmented images. Ultimately, the classification procedure is executed using an Artificial Neural Network (ANN), in which the GA sets the ideal number of neurons in the hidden layer and the appropriate weight vector. The effectiveness of the suggested approach was assessed using the BRATS2014 and BTD20 databases. The results indicate that the proposed method can classify samples from these two databases with an average accuracy of 98.6% and 99.1%, respectively, which represents an accuracy improvement of at least 1.1% over the preceding methods.
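As a toy illustration of GA-based tuning of a single hyper-parameter (e.g., the hidden-layer size), here is a deliberately small sketch; real GA setups encode several hyper-parameters and weight vectors at once, and the selection, crossover, and mutation operators below are simplified assumptions:

```python
import random

def evolve(fitness, init_pop, generations=20, mut_rate=0.3):
    """Tiny genetic algorithm over one integer hyper-parameter
    (e.g., the number of hidden-layer neurons)."""
    pop = list(init_pop)
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: max(2, len(pop) // 2)]          # selection: keep the fittest
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                           # crossover: midpoint
            if random.random() < mut_rate:                 # mutation: random jitter
                child = max(1, child + random.randint(-8, 8))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# fitness(n_hidden) would train/validate the ANN and return validation accuracy.
```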
Collapse
Affiliation(s)
- Jianhong Ye
- Head and Neck Surgery, The First Hospital of Jiaxing, Jiaxing, 314500, Zhejiang, China
| | - Zhiyong Zhao
- School of Engineering, Cardiff University, Cardiff, CF24 3TF, UK
| | - Ehsan Ghafourian
- Department of Computer Science, Iowa State University, Ames, IA, USA
| | - AmirReza Tajally
- School of Industrial Engineering, College of Engineering, University of Tehran, Tehran, Iran
| | - Hamzah Ali Alkhazaleh
- College of Engineering and IT, University of Dubai, Academic City, 14143, Dubai, United Arab Emirates
| | - Sangkeum Lee
- Department of Computer Engineering, Hanbat National University, Daejeon, 34158, South Korea
| |
Collapse
|
24
|
ALOM SHAHIN, DANESHKHAH ALI, ACOSTA NICOLAS, ANTHONY NICK, LIWAG EMILYPUJADAS, BACKMAN VADIM, GAIRE SUNILKUMAR. Deep Learning-driven Automatic Nuclei Segmentation of Label-free Live Cell Chromatin-sensitive Partial Wave Spectroscopic Microscopy Imaging. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.08.20.608885. [PMID: 39229026 PMCID: PMC11370422 DOI: 10.1101/2024.08.20.608885] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 09/05/2024]
Abstract
Chromatin-sensitive Partial Wave Spectroscopic (csPWS) microscopy offers a non-invasive glimpse into the mass density distribution of cellular structures at the nanoscale, leveraging spectroscopic information. This capability allows us to analyze chromatin structure and organization and the global transcriptional state of cell nuclei for the study of their role in carcinogenesis. Accurate segmentation of the nuclei in csPWS microscopy images is an essential step in isolating them for further analysis. However, manual segmentation is error-prone, biased, time-consuming, and laborious, resulting in disrupted nuclear boundaries with partial or over-segmentation. Here, we present an innovative deep-learning-driven approach to automate the accurate nuclei segmentation of label-free live cell csPWS microscopy imaging data. Our approach, csPWS-seg, harnesses a Convolutional Neural Network-based U-Net model with an attention mechanism to automate the accurate cell nuclei segmentation of csPWS microscopy images. We leveraged the structural, physical, and biological differences between the cytoplasm, nucleus, and nuclear periphery to construct three distinct csPWS feature images for nucleus segmentation. Using these images of HCT116 cells, csPWS-seg achieved superior performance with a median Intersection over Union (IoU) of 0.80 and a Dice Similarity Coefficient (DSC) score of 0.88. csPWS-seg surpassed the segmentation performance of the baseline U-Net model and another attention-based model, SE-U-Net, marking a significant improvement in segmentation accuracy. Further, we analyzed the performance of our proposed model with four loss functions: binary cross-entropy loss, focal loss, Dice loss, and Jaccard loss. csPWS-seg with focal loss provided the best results compared to the other loss functions. The automatic and accurate nuclei segmentation offered by csPWS-seg not only automates, accelerates, and streamlines csPWS data analysis but also enhances the reliability of subsequent chromatin analysis research, paving the way for more accurate diagnostics, treatment, and understanding of cellular mechanisms for carcinogenesis.
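Of the four losses compared, focal loss performed best. Its standard binary form down-weights easy pixels so that rare foreground (nuclei) dominates the gradient; the sketch below uses the common alpha/gamma defaults, which are not necessarily the paper's settings:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, target, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy pixels so rare nuclei dominate."""
    bce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p_t = torch.exp(-bce)                     # probability assigned to the true class
    alpha_t = alpha * target + (1 - alpha) * (1 - target)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```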
Collapse
Affiliation(s)
- SHAHIN ALOM
- Department of Electrical and Computer Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC 27411, USA
| | - ALI DANESHKHAH
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - NICOLAS ACOSTA
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - NICK ANTHONY
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - EMILY PUJADAS LIWAG
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - VADIM BACKMAN
- Department of Biomedical Engineering, Northwestern University, Evanston, IL 60208, USA
| | - SUNIL KUMAR GAIRE
- Department of Electrical and Computer Engineering, North Carolina Agricultural and Technical State University, Greensboro, NC 27411, USA
| |
Collapse
|
25
|
Laudon A, Wang Z, Zou A, Sharma R, Ji J, Kim C, Qian Y, Ye Q, Chen H, Henderson JM, Zhang C, Kolachalama VB, Lu W. Digital pathology assessment of kidney glomerular filtration barrier ultrastructure in an animal model of podocytopathy. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.06.14.599097. [PMID: 38948787 PMCID: PMC11212870 DOI: 10.1101/2024.06.14.599097] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/02/2024]
Abstract
Background Transmission electron microscopy (TEM) images can visualize kidney glomerular filtration barrier ultrastructure, including the glomerular basement membrane (GBM) and podocyte foot processes (PFP). Podocytopathy is associated with glomerular filtration barrier morphological changes observed experimentally and clinically by measuring GBM or PFP width. However, these measurements are currently performed manually. This limits research on podocytopathy disease mechanisms and therapeutics due to labor intensiveness and inter-operator variability. Methods We developed a deep learning-based digital pathology computational method to measure GBM and PFP width in TEM images from the kidneys of Integrin-Linked Kinase (ILK) podocyte-specific conditional knockout (cKO) mice, an animal model of podocytopathy, compared to wild-type (WT) control mice. We obtained TEM images from WT and ILK cKO littermate mice at 4 weeks of age. Our automated method was composed of two stages: a U-Net model for GBM segmentation, followed by an image processing algorithm for GBM and PFP width measurement. We evaluated its performance with a 4-fold cross-validation study on WT and ILK cKO mouse kidney pairs. Results Mean (95% confidence interval) GBM segmentation accuracy, calculated as the Jaccard index, was 0.73 (0.70-0.76) for WT and 0.85 (0.83-0.87) for ILK cKO TEM images. Automated and manual GBM width measurements were similar for both WT (p=0.49) and ILK cKO (p=0.06) specimens. While automated and manual PFP width measurements were similar for WT (p=0.89), they differed for ILK cKO (p<0.05) specimens. WT and ILK cKO specimens were morphologically distinguishable by manual GBM (p<0.05) and PFP (p<0.05) width measurements. This phenotypic difference was reflected in the automated GBM (p<0.05) more than the PFP (p=0.06) widths. Conclusions These results suggest that certain automated measurements enabled via deep learning-based digital pathology tools could distinguish healthy kidneys from those with podocytopathy. Our proposed method provides high-throughput, objective morphological analysis and could facilitate podocytopathy research and translate into clinical diagnosis.
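The paper's exact width-measurement algorithm is not given in the abstract; one common way to estimate the mean width of a ribbon-like structure such as the GBM is twice the distance-to-edge sampled along the mask's centerline, sketched here under that assumption:

```python
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def mean_width(mask, pixel_size_nm):
    """Approximate mean width of a ribbon-like structure (e.g., the GBM):
    twice the distance-to-background sampled along the mask's centerline."""
    edt = distance_transform_edt(mask)          # distance to nearest background pixel
    centerline = skeletonize(mask.astype(bool))  # one-pixel-wide skeleton
    return 2.0 * edt[centerline].mean() * pixel_size_nm
```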
Collapse
Affiliation(s)
- Aksel Laudon
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Zhaoze Wang
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Anqi Zou
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Richa Sharma
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Jiayi Ji
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Connor Kim
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Yingzhe Qian
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Qin Ye
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Hui Chen
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Joel M Henderson
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Chao Zhang
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| | - Vijaya B Kolachalama
- Computational Biomedicine Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
- Department of Computer Science and Faculty of Computing & Data Sciences, Boston University, Boston, MA, USA
| | - Weining Lu
- Nephrology Section, Department of Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
- Department of Pathology and Laboratory Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston Medical Center, Boston, MA, USA
| |
Collapse
|
26
|
Wilke M. A three-step, "brute-force" approach toward optimized affine spatial normalization. Front Comput Neurosci 2024; 18:1367148. [PMID: 39040884 PMCID: PMC11260722 DOI: 10.3389/fncom.2024.1367148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2024] [Accepted: 06/18/2024] [Indexed: 07/24/2024] Open
Abstract
The first step in spatial normalization of magnetic resonance (MR) images is commonly an affine transformation, which may be vulnerable to image imperfections (such as inhomogeneities or "unusual" heads). Additionally, common software solutions use internal starting estimates to allow for a more efficient computation, which may pose a problem in datasets not conforming to these assumptions (such as those from children). In this technical note, three main questions were addressed: One, does the affine spatial normalization step implemented in SPM12 benefit from an initial inhomogeneity correction? Two, does using a complexity-reduced image version improve robustness when matching "unusual" images? And three, can a blind "brute-force" application of a wide range of parameter combinations improve the affine fit for unusual datasets in particular? A large database of 2081 image datasets was used, covering the full age range from birth to old age. All analyses were performed in Matlab. Results demonstrate that an initial removal of image inhomogeneities improved the affine fit, particularly when more inhomogeneity was present. Further, using a complexity-reduced input image also improved the affine fit and was beneficial in younger children in particular. Finally, blindly exploring a very wide parameter space resulted in a better fit for the vast majority of subjects, but again particularly so in infants and young children. In summary, the suggested modifications were shown to improve the affine transformation in the large majority of datasets in general, and in children in particular. The changes can easily be implemented in SPM12.
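The "brute-force" idea amounts to scoring a wide grid of affine starting estimates instead of trusting a single internal default. A schematic sketch (the parameter ranges and cost function are placeholders, not the SPM12 internals):

```python
import itertools
import numpy as np

def brute_force_affine(cost, rotations, translations, zooms):
    """Score every starting-estimate combination and keep the cheapest.
    `cost` maps a (rotation, translation, zoom) tuple to a registration cost."""
    best_params, best_cost = None, np.inf
    for params in itertools.product(rotations, translations, zooms):
        c = cost(params)
        if c < best_cost:
            best_params, best_cost = params, c
    return best_params, best_cost

# e.g. rotations = np.linspace(-0.3, 0.3, 7)   # pitch starting estimates (rad)
```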
Collapse
Affiliation(s)
- Marko Wilke
- Department of Pediatric Neurology and Developmental Medicine, Children’s Hospital, University of Tübingen, Tübingen, Germany
- Experimental Pediatric Neuroimaging, Children’s Hospital and Department of Neuroradiology, University of Tübingen, Tübingen, Germany
| |
Collapse
|
27
|
Harangi B, Bogacsovics G, Toth J, Kovacs I, Dani E, Hajdu A. Pixel-wise segmentation of cells in digitized Pap smear images. Sci Data 2024; 11:733. [PMID: 38971865 PMCID: PMC11227563 DOI: 10.1038/s41597-024-03566-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2023] [Accepted: 06/24/2024] [Indexed: 07/08/2024] Open
Abstract
A simple and cheap way to recognize cervical cancer is light microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation to screen negative smears to reduce false negative cases. The first step in such a process is segmenting the cells. A large, manually segmented dataset is required for this task, which can be used to train deep learning-based solutions. We describe a corresponding dataset with accurate manual segmentations for the enclosed cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37,000 manually segmented cells and is separated into dedicated training and test parts, which could be used for official benchmarking of scientific investigations or for a grand challenge.
Collapse
Affiliation(s)
- Balazs Harangi
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary.
| | - Gergo Bogacsovics
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
| | - Janos Toth
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
| | - Ilona Kovacs
- Department of Pathology, Kenezy Gyula Hospital and Clinic, University of Debrecen, Debrecen, Hungary
| | - Erzsebet Dani
- Department of Library and Information Science, Faculty of Humanities, University of Debrecen, Debrecen, Hungary
| | - Andras Hajdu
- Department of Data Science and Visualization, Faculty of Informatics, University of Debrecen, Debrecen, Hungary
| |
Collapse
|
28
|
Agarwal S, Saxena S, Carriero A, Chabert GL, Ravindran G, Paul S, Laird JR, Garg D, Fatemi M, Mohanty L, Dubey AK, Singh R, Fouda MM, Singh N, Naidu S, Viskovic K, Kukuljan M, Kalra MK, Saba L, Suri JS. COVLIAS 3.0: cloud-based quantized hybrid UNet3+ deep learning for COVID-19 lesion detection in lung computed tomography. Front Artif Intell 2024; 7:1304483. [PMID: 39006802 PMCID: PMC11240867 DOI: 10.3389/frai.2024.1304483] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Accepted: 06/10/2024] [Indexed: 07/16/2024] Open
Abstract
Background and novelty When RT-PCR is ineffective in early diagnosis and understanding of COVID-19 severity, Computed Tomography (CT) scans are needed for COVID diagnosis, especially in patients having high ground-glass opacities, consolidations, and crazy paving. Radiologists find the manual method for lesion detection in CT very challenging and tedious. Previously, solo deep learning (SDL) models were tried, but they had low- to moderate-level performance. This study presents two new cloud-based quantized hybrid deep learning (HDL) UNet3+ models, which incorporate full-scale skip connections to enhance and improve the detections. Methodology Annotations from expert radiologists were used to train one SDL model (UNet3+) and two HDL models, namely VGG-UNet3+ and ResNet-UNet3+. For accuracy, 5-fold cross-validation protocols, training on 3,500 CT scans, and testing on unseen 500 CT scans were adopted in the cloud framework. Two kinds of loss functions were used: Dice Similarity (DS) and binary cross-entropy (BCE). Performance was evaluated using (i) area error, (ii) DS, (iii) Jaccard Index, (iv) Bland-Altman plots, and (v) correlation plots. Results Among the two HDL models, ResNet-UNet3+ was superior to UNet3+ by 17% and 10% for the Dice and BCE losses, respectively. The models were further compressed using quantization, showing a percentage size reduction of 66.76%, 36.64%, and 46.23%, respectively, for UNet3+, VGG-UNet3+, and ResNet-UNet3+. Stability and reliability were demonstrated by statistical tests such as the Mann-Whitney test, paired t-test, Wilcoxon test, and Friedman test, all of which had p < 0.001. Conclusion Full-scale skip connections of UNet3+ with VGG and ResNet in the HDL framework proved the hypothesis, showing powerful results and improving the detection accuracy of COVID-19.
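The abstract does not state which quantization scheme produced the reported size reductions; for orientation, post-training dynamic quantization in PyTorch illustrates the general idea (the file names are hypothetical, and convolutional layers would in practice need the static quantization workflow rather than the dynamic one shown):

```python
import torch

# Post-training dynamic quantization: weights of supported layers (here the
# linear layers) are stored as int8, shrinking the checkpoint. Convolutions
# require the static quantization workflow (prepare/convert) instead.
model = torch.load("resnet_unet3plus.pt", map_location="cpu")  # hypothetical file
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
torch.save(quantized.state_dict(), "resnet_unet3plus_int8.pt")
```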
Collapse
Affiliation(s)
- Sushant Agarwal
- Advanced Knowledge Engineering Center, GBTI, Roseville, CA, United States
- Department of CSE, PSIT, Kanpur, India
| | | | - Alessandro Carriero
- Department of Radiology, “Maggiore della Carità” Hospital, University of Piemonte Orientale (UPO), Novara, Italy
| | | | - Gobinath Ravindran
- Department of Civil Engineering, SR University, Warangal, Telangana, India
| | - Sudip Paul
- Department of Biomedical Engineering, NEHU, Shillong, India
| | - John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA, United States
| | - Deepak Garg
- School of CS and AI, SR University, Warangal, Telangana, India
| | - Mostafa Fatemi
- Department of Physiology and Biomedical Engineering, Mayo Clinic College of Medicine and Science, Rochester, MN, United States
| | - Lopamudra Mohanty
- Department of Computer Science, ABES Engineering College, Ghaziabad, UP, India
- Department of Computer science, Bennett University, Greater Noida, UP, India
| | - Arun K. Dubey
- Bharati Vidyapeeth’s College of Engineering, New Delhi, India
| | - Rajesh Singh
- Division of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun, India
| | - Mostafa M. Fouda
- Department of ECE, Idaho State University, Pocatello, ID, United States
| | - Narpinder Singh
- Department of Food Science and Technology, Graphic Era Deemed to be University, Dehradun, India
| | - Subbaram Naidu
- Department of EE, University of Minnesota, Duluth, MN, United States
| | | | - Melita Kukuljan
- Department of Interventional and Diagnostic Radiology, Clinical Hospital Center Rijeka, Rijeka, Croatia
| | - Manudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
| | - Luca Saba
- Department of Radiology, A.O.U., Cagliari, Italy
| | - Jasjit S. Suri
- Department of ECE, Idaho State University, Pocatello, ID, United States
- Department of Computer Science, Graphic Era Deemed to Be University, Dehradun, Uttarakhand, India
- Symbiosis Institute of Technology, Nagpur Campus, Symbiosis International (Deemed University), Pune, India
- Stroke and Monitoring Division, AtheroPoint LLC, Roseville, CA, United States
| |
Collapse
|
29
|
Mao K, Li R, Cheng J, Huang D, Song Z, Liu Z. PL-Net: progressive learning network for medical image segmentation. Front Bioeng Biotechnol 2024; 12:1414605. [PMID: 38994123 PMCID: PMC11236745 DOI: 10.3389/fbioe.2024.1414605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2024] [Accepted: 05/30/2024] [Indexed: 07/13/2024] Open
Abstract
In recent years, deep convolutional neural network-based segmentation methods have achieved state-of-the-art performance for many medical analysis tasks. However, most of these approaches rely on optimizing the U-Net structure or adding new functional modules, which overlooks the complementarity and fusion of coarse-grained and fine-grained semantic information. To address these issues, we propose a 2D medical image segmentation framework called Progressive Learning Network (PL-Net), which comprises Internal Progressive Learning (IPL) and External Progressive Learning (EPL). PL-Net offers the following advantages: 1) IPL divides feature extraction into two steps, allowing for the mixing of different-sized receptive fields and capturing semantic information from coarse to fine granularity without introducing additional parameters; 2) EPL divides the training process into two stages to optimize parameters and facilitate the fusion of coarse-grained information in the first stage and fine-grained information in the second stage. We conducted comprehensive evaluations of our proposed method on five medical image segmentation datasets, and the experimental results demonstrate that PL-Net achieves competitive segmentation performance. It is worth noting that PL-Net does not introduce any additional learnable parameters compared to other U-Net variants.
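EPL's two-stage schedule can be pictured as one training loop run twice with a stage-dependent objective. The sketch below is schematic only: the `stage` keyword on the loss is our invented interface, standing in for the paper's coarse-then-fine optimization:

```python
def train_progressively(model, loader, optimizer, loss_fn, epochs_per_stage=50):
    """Two-stage schedule: stage 1 targets coarse structure, stage 2 continues
    from the same weights with a finer-grained objective."""
    for stage in (1, 2):
        for _ in range(epochs_per_stage):
            for image, mask in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(image), mask, stage=stage)  # stage-aware loss
                loss.backward()
                optimizer.step()
    return model
```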
Collapse
Affiliation(s)
- Kunpeng Mao
- Chongqing City Management College, Chongqing, China
| | - Ruoyu Li
- College of Computer Science, Sichuan University, Chengdu, China
| | - Junlong Cheng
- College of Computer Science, Sichuan University, Chengdu, China
| | - Danmei Huang
- Chongqing City Management College, Chongqing, China
| | - Zhiping Song
- Chongqing University of Engineering, Chongqing, China
| | - ZeKui Liu
- Chongqing University of Engineering, Chongqing, China
| |
Collapse
|
30
|
Jiang W, Gao Z, Brahim W. Wound Semantic Segmentation Framework with Encoder-Decoder Architecture. 2024 IEEE 4TH INTERNATIONAL CONFERENCE ON SOFTWARE ENGINEERING AND ARTIFICIAL INTELLIGENCE (SEAI) 2024:6-10. [DOI: 10.1109/seai62072.2024.10674494] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/06/2025]
Affiliation(s)
- Wenda Jiang
- W Booth School of Engineering Practice and Technology, McMaster University, Faculty of Engineering, Hamilton, ON, Canada
| | - Zhen Gao
- W Booth School of Engineering Practice and Technology, McMaster University, Faculty of Engineering, Hamilton, ON, Canada
| | - Wael Brahim
- W Booth School of Engineering Practice and Technology, McMaster University, Faculty of Engineering, Hamilton, ON, Canada
| |
Collapse
|
31
|
dos Santos PV, Scoczynski Ribeiro Martins M, Amorim Nogueira S, Gonçalves C, Maffei Loureiro R, Pacheco Calixto W. Unsupervised model for structure segmentation applied to brain computed tomography. PLoS One 2024; 19:e0304017. [PMID: 38870119 PMCID: PMC11175403 DOI: 10.1371/journal.pone.0304017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2024] [Accepted: 05/03/2024] [Indexed: 06/15/2024] Open
Abstract
This article presents an unsupervised method for segmenting brain computed tomography scans. The proposed methodology involves image feature extraction and application of similarity and continuity constraints to generate segmentation maps of the anatomical head structures. Specifically designed for real-world datasets, this approach applies a spatial continuity scoring function tailored to the desired number of structures. The primary objective is to assist medical experts in diagnosis by identifying regions with specific abnormalities. Results indicate a simplified and accessible solution, reducing computational effort, training time, and financial costs. Moreover, the method presents potential for expediting the interpretation of abnormal scans, thereby impacting clinical practice. This proposed approach might serve as a practical tool for segmenting brain computed tomography scans, and make a significant contribution to the analysis of medical images in both research and clinical settings.
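The abstract's spatial continuity scoring function is not specified; one plausible reading is to reward clusterings whose structures form few connected components, as in this sketch (entirely our construction, applied to a labeled 2D slice):

```python
from scipy.ndimage import label

def continuity_score(seg_map, n_structures):
    """Reward segmentations whose clusters form few connected components:
    a cluster that is one contiguous blob contributes 1.0."""
    score = 0.0
    for k in range(n_structures):
        _, n_components = label(seg_map == k)  # count blobs of cluster k
        score += 1.0 / max(n_components, 1)
    return score / n_structures
```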
Collapse
Affiliation(s)
- Paulo Victor dos Santos
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
- Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
| | - Marcella Scoczynski Ribeiro Martins
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Federal University of Technology - Parana, Ponta Grossa, Parana, Brazil
| | - Solange Amorim Nogueira
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
| | | | - Rafael Maffei Loureiro
- Department of Radiology, Hospital Israelita Albert Einstein, Sao Paulo, Sao Paulo, Brazil
| | - Wesley Pacheco Calixto
- Electrical, Mechanical & Computer Engineering School, Federal University of Goias, Goiania, Brazil
- Technology Research and Development Center (GCITE), Federal Institute of Goias, Goiania, Brazil
| |
Collapse
|
32
|
Carl M, Lall K, Pai D, Chang E, Statum S, Brau A, Chung CB, Fung M, Bae WC. Shoulder Bone Segmentation with DeepLab and U-Net. OSTEOLOGY (BASEL, SWITZERLAND) 2024; 4:98-110. [PMID: 39474235 PMCID: PMC11520815 DOI: 10.3390/osteology4020008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/02/2024]
Abstract
Evaluation of 3D bone morphology of the glenohumeral joint is necessary for pre-surgical planning. Zero echo time (ZTE) magnetic resonance imaging (MRI) provides excellent bone contrast and can potentially be used in place of computed tomography. Segmentation of shoulder anatomy, particularly the humeral head and glenoid, is needed for detailed assessment of each structure and for pre-surgical preparation. In this study, we compared the performance of two popular deep learning models, based on Google's DeepLab and U-Net, in performing automated segmentation on ZTE MRI of human shoulders. Axial ZTE images of normal shoulders (n=31) acquired at 3-Tesla were annotated for training with DeepLab and 2D U-Net models, and the trained models were validated with testing data (n=13). While both models showed visually satisfactory results for segmenting the humeral bone, U-Net slightly over-estimated while DeepLab under-estimated the segmented area compared to the ground truth. Testing accuracy quantified by Dice score was significantly higher (p<0.05) for U-Net (88%) than DeepLab (81%) for the humeral segmentation. We have also implemented the U-Net model on an MRI console for push-button DL segmentation processing. Although this is early work with limitations, our approach has the potential to improve shoulder MR evaluation hindered by manual post-processing and may provide clinical benefit for quickly visualizing the bones of the glenohumeral joint.
Affiliation(s)
- Kaustubh Lall
  - Dept. of Electrical and Computer Engineering, University of California-San Diego, CA
- Eric Chang
  - Dept. of Radiology, VA San Diego Healthcare System, San Diego, CA
  - Dept. of Radiology, University of California-San Diego, La Jolla, CA
- Sheronda Statum
  - Canyon Crest Academy, San Diego, CA
  - Dept. of Radiology, VA San Diego Healthcare System, San Diego, CA
- Anja Brau
  - General Electric Healthcare, Menlo Park, CA
- Christine B. Chung
  - Dept. of Radiology, VA San Diego Healthcare System, San Diego, CA
  - Dept. of Radiology, University of California-San Diego, La Jolla, CA
- Won C. Bae
  - Dept. of Radiology, VA San Diego Healthcare System, San Diego, CA
  - Dept. of Radiology, University of California-San Diego, La Jolla, CA

33
Matsumoto S, Kawahira H, Fukata K, Doi Y, Kobayashi N, Hosoya Y, Sata N. Laparoscopic distal gastrectomy skill evaluation from video: a new artificial intelligence-based instrument identification system. Sci Rep 2024; 14:12432. [PMID: 38816459 PMCID: PMC11139867 DOI: 10.1038/s41598-024-63388-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2023] [Accepted: 05/28/2024] [Indexed: 06/01/2024] Open
Abstract
The advent of Artificial Intelligence (AI)-based object detection technology has made it possible to identify the position coordinates of surgical instruments from video. This study aimed to find kinematic differences by surgical skill level. An AI algorithm was developed to accurately identify the X and Y coordinates of surgical instrument tips from video. Kinematic analysis, including fluctuation analysis, was performed on 18 laparoscopic distal gastrectomy videos from three expert and three novice surgeons (3 videos/surgeon, 11.6 h, 1,254,010 frames). The analysis showed that the expert surgeon cohort moved more efficiently and regularly, with significantly shorter operation time and total travel distance. Instrument tip movement did not differ in velocity, acceleration, or jerk between skill levels. The fluctuation evaluation index β was significantly higher in experts. A ROC curve cutoff value of 1.4 yielded a sensitivity and specificity of 77.8% for distinguishing experts from novices. Despite the small sample, this study suggests that AI-based object detection with fluctuation analysis is promising, because skill evaluation can be calculated in real time with potential for peri-operational evaluation.
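An illustrative computation of the kinematic quantities compared above (velocity, acceleration, jerk, total travel distance) from per-frame tip coordinates; the frame rate and pixel units are assumptions, not values from the paper.

```python
import numpy as np

def kinematics(xy: np.ndarray, fps: float = 30.0) -> dict:
    """xy: (n_frames, 2) instrument-tip coordinates in pixels."""
    dt = 1.0 / fps
    vel = np.gradient(xy, dt, axis=0)    # px/s
    acc = np.gradient(vel, dt, axis=0)   # px/s^2
    jerk = np.gradient(acc, dt, axis=0)  # px/s^3
    return {
        "mean_speed": np.linalg.norm(vel, axis=1).mean(),
        "mean_accel": np.linalg.norm(acc, axis=1).mean(),
        "mean_jerk": np.linalg.norm(jerk, axis=1).mean(),
        "total_travel": np.linalg.norm(np.diff(xy, axis=0), axis=1).sum(),
    }
```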
Affiliation(s)
- Shiro Matsumoto
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Hiroshi Kawahira
  - Medical Simulation Center, Jichi Medical University, Tochigi, Japan
- Yoshinori Hosoya
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan
- Naohiro Sata
  - Department of Surgery, Division of Gastroenterological, General and Transplant Surgery, Jichi Medical University, Tochigi, Japan

34
Porter VA, Hobson BA, Foster B, Lein PJ, Chaudhari AJ. Fully automated whole brain segmentation from rat MRI scans with a convolutional neural network. J Neurosci Methods 2024; 405:110078. [PMID: 38340902 PMCID: PMC11000587 DOI: 10.1016/j.jneumeth.2024.110078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 01/27/2024] [Accepted: 02/05/2024] [Indexed: 02/12/2024]
Abstract
BACKGROUND: Whole brain delineation (WBD) is utilized in neuroimaging analysis for data preprocessing and deriving whole brain image metrics. Current automated WBD techniques for analysis of preclinical brain MRI data show limited accuracy when images present with significant neuropathology and anatomical deformations, such as that resulting from organophosphate intoxication (OPI) and Alzheimer's Disease (AD), and inadequate generalizability.
METHODS: A modified 2D U-Net framework was employed for WBD of MRI rodent brains, consisting of 27 convolutional layers, batch normalization, two dropout layers and data augmentation, after training parameter optimization. A total of 265 T2-weighted 7.0 T MRI scans were utilized for the study, including 125 scans of an OPI rat model for neural network training. For testing and validation, 20 OPI rat scans and 120 scans of an AD rat model were utilized. U-Net performance was evaluated using Dice coefficients (DC) and Hausdorff distances (HD) between the U-Net-generated and manually segmented WBDs.
RESULTS: The U-Net achieved a DC (median[range]) of 0.984[0.936-0.990] and HD of 1.69[1.01-6.78] mm for OPI rat model scans, and a DC (mean[range]) of 0.975[0.898-0.991] and HD of 1.49[0.86-3.89] mm for the AD rat model scans.
COMPARISON WITH EXISTING METHODS: The proposed approach is fully automated and robust across two rat strains and longitudinal brain changes, with a computational speed of 8 seconds/scan, overcoming limitations of manual segmentation.
CONCLUSIONS: The modified 2D U-Net provided a fully automated, efficient, and generalizable segmentation approach that achieved high accuracy across two disparate rat models of neurological diseases.
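The Hausdorff distance reported alongside Dice can be computed from mask voxel coordinates, for example with SciPy as below; taking the max of the two directed distances gives the symmetric distance, and isotropic voxel spacing is an assumption here.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(mask_a: np.ndarray, mask_b: np.ndarray, spacing_mm: float = 1.0) -> float:
    pts_a, pts_b = np.argwhere(mask_a), np.argwhere(mask_b)  # voxel coordinates
    d = max(directed_hausdorff(pts_a, pts_b)[0], directed_hausdorff(pts_b, pts_a)[0])
    return d * spacing_mm
```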
Affiliation(s)
- Valerie A Porter
  - Department of Biomedical Engineering, University of California, Davis, CA 95616, USA; Department of Radiology, University of California, Davis, CA 95817, USA
- Brad A Hobson
  - Department of Biomedical Engineering, University of California, Davis, CA 95616, USA; Center for Molecular and Genomic Imaging, University of California, Davis, CA 95616, USA
- Brent Foster
  - TechMah Medical LLC, 2099 Thunderhead Rd, Knoxville, TN 37922, USA
- Pamela J Lein
  - Department of Molecular Biosciences, University of California, Davis, CA 95616, USA
- Abhijit J Chaudhari
  - Department of Radiology, University of California, Davis, CA 95817, USA; Center for Molecular and Genomic Imaging, University of California, Davis, CA 95616, USA

35
Zennadi MM, Ptito M, Redouté J, Costes N, Boutet C, Germain N, Galusca B, Schneider FC. MRI atlas of the pituitary gland in young female adults. Brain Struct Funct 2024; 229:1001-1010. [PMID: 38502330 DOI: 10.1007/s00429-024-02779-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 02/20/2024] [Indexed: 03/21/2024]
Abstract
The probabilistic topography and inter-individual variability of the pituitary gland (PG) remain undetermined. The absence of a standardized reference atlas hinders research on PG volumetrics. In this study, we aimed at creating maximum probability maps for the anterior and posterior PG in young female adults. We manually delineated the anterior and posterior parts of the pituitary gland in 26 healthy subjects using high-resolution MRI T1 images. A three-step procedure and a cost function-masking approach were employed to optimize spatial normalization for the PG. We generated probabilistic atlases and maximum probability maps, which were subsequently coregistered back to the subjects' space and compared to manual delineations. Manual measurements led to a total pituitary volume of 705 ± 88 mm³, with the anterior and posterior volumes measuring 614 ± 82 mm³ and 91 ± 20 mm³, respectively. The mean relative volume difference between manual and atlas-based estimations was 1.3%. The global pituitary atlas exhibited an 80% (± 9%) overlap for the Dice index and 67% (± 11%) for the Jaccard index. Similarly, these values were 77% (± 13%) and 64% (± 14%) for the anterior pituitary atlas and 62% (± 21%) and 47% (± 17%) for the posterior PG atlas, respectively. We observed a substantial concordance and a significant correlation between the volume estimations of the manual and atlas-based methods for the global pituitary and anterior volumes. The maximum probability maps of the anterior and posterior PG lay the groundwork for automatic atlas-based segmentation methods and the standardized analysis of large PG datasets.
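A sketch of how a maximum probability map can be built from co-registered binary delineations, one per subject and structure; the 0.5 background threshold is an assumption, not the paper's value.

```python
import numpy as np

def max_probability_map(masks_by_label: dict) -> np.ndarray:
    """masks_by_label: label name -> (n_subjects, X, Y, Z) binary masks in
    standard space. Returns 0 for background, i+1 for the i-th label."""
    labels = list(masks_by_label)
    # voxel-wise probability atlas per label, stacked to shape (L, X, Y, Z)
    prob = np.stack([masks_by_label[l].mean(axis=0) for l in labels])
    winner = prob.argmax(axis=0)
    return np.where(prob.max(axis=0) >= 0.5, winner + 1, 0)
```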
Affiliation(s)
- Manel Merabet Zennadi
  - Université Jean Monnet Saint Etienne, CHU de Saint Etienne, TAPE Research Unit EA 7423, F-42023, Saint Etienne, France
- Maurice Ptito
  - École d'Optométrie, Université de Montréal, Montréal, Québec, Canada
  - Department of Neuroscience, Copenhagen University, Copenhagen, Denmark
- Jérôme Redouté
  - CERMEP, Claude Bernard University Lyon 1, Villeurbanne, France
- Nicolas Costes
  - CERMEP, Claude Bernard University Lyon 1, Villeurbanne, France
- Claire Boutet
  - Université Jean Monnet Saint Etienne, CHU de Saint Etienne, TAPE Research Unit EA 7423, F-42023, Saint Etienne, France
- Natacha Germain
  - Université Jean Monnet Saint Etienne, CHU de Saint Etienne, TAPE Research Unit EA 7423, F-42023, Saint Etienne, France
- Bogdan Galusca
  - Université Jean Monnet Saint Etienne, CHU de Saint Etienne, TAPE Research Unit EA 7423, F-42023, Saint Etienne, France
- Fabien C Schneider
  - Université Jean Monnet Saint Etienne, CHU de Saint Etienne, TAPE Research Unit EA 7423, F-42023, Saint Etienne, France

36
Egebjerg JM, Szomek M, Thaysen K, Juhl AD, Kozakijevic S, Werner S, Pratsch C, Schneider G, Kapishnikov S, Ekman A, Röttger R, Wüstner D. Automated quantification of vacuole fusion and lipophagy in Saccharomyces cerevisiae from fluorescence and cryo-soft X-ray microscopy data using deep learning. Autophagy 2024; 20:902-922. [PMID: 37908116 PMCID: PMC11062380 DOI: 10.1080/15548627.2023.2270378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 09/12/2023] [Accepted: 10/02/2023] [Indexed: 11/02/2023] Open
Abstract
During starvation in the yeast Saccharomyces cerevisiae, vacuolar vesicles fuse and lipid droplets (LDs) can become internalized into the vacuole in an autophagic process named lipophagy. There is a lack of tools to quantitatively assess starvation-induced vacuole fusion and lipophagy in intact cells with high resolution and throughput. Here, we combine soft X-ray tomography (SXT) with fluorescence microscopy and use a deep-learning computational approach to visualize and quantify these processes in yeast. We focus on yeast homologs of mammalian NPC1 (NPC intracellular cholesterol transporter 1; Ncr1 in yeast) and NPC2 proteins, whose dysfunction leads to Niemann Pick type C (NPC) disease in humans. We developed a convolutional neural network (CNN) model which classifies fully fused versus partially fused vacuoles based on fluorescence images of stained cells. This CNN, named Deep Yeast Fusion Network (DYFNet), revealed that cells lacking Ncr1 (ncr1∆ cells) or Npc2 (npc2∆ cells) have a reduced capacity for vacuole fusion. Using a second CNN model, we implemented a pipeline named LipoSeg to perform automated instance segmentation of LDs and vacuoles from high-resolution reconstructions of X-ray tomograms. From that, we obtained 3D renderings of LDs inside and outside of the vacuole in a fully automated manner and additionally measured droplet volume, number, and distribution. We find that ncr1∆ and npc2∆ cells could ingest LDs into vacuoles normally but showed compromised degradation of LDs and accumulation of lipid vesicles inside vacuoles. Our new method is versatile and allows for analysis of vacuole fusion, droplet size and lipophagy in intact cells. Abbreviations: BODIPY493/503: 4,4-difluoro-1,3,5,7,8-pentamethyl-4-bora-3a,4a-diaza-s-Indacene; BPS: bathophenanthrolinedisulfonic acid disodium salt hydrate; CNN: convolutional neural network; DHE: dehydroergosterol; npc2∆: yeast deficient in Npc2; DSC: Dice similarity coefficient; EM: electron microscopy; EVs: extracellular vesicles; FIB-SEM: focused ion beam milling-scanning electron microscopy; FM 4-64: N-(3-triethylammoniumpropyl)-4-(6-[4-{diethylamino} phenyl] hexatrienyl)-pyridinium dibromide; LDs: lipid droplets; Ncr1: yeast homolog of human NPC1 protein; ncr1∆: yeast deficient in Ncr1; NPC: Niemann Pick type C; NPC2: Niemann Pick type C homolog; OD600: optical density at 600 nm; ReLU: rectifier linear unit; PPV: positive predictive value; NPV: negative predictive value; MCC: Matthews correlation coefficient; SXT: soft X-ray tomography; UV: ultraviolet; YPD: yeast extract peptone dextrose.
Affiliation(s)
- Jacob Marcus Egebjerg
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark
  - Department of Mathematics and Computer Science, University of Southern Denmark, Odense M, Denmark
- Maria Szomek
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark
- Katja Thaysen
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark
- Alice Dupont Juhl
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark
- Suzana Kozakijevic
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark
- Stephan Werner
  - Department of X-Ray Microscopy, Helmholtz-Zentrum Berlin, Germany and Humboldt-Universität zu Berlin, Institut für Physik, Berlin, Germany
- Christoph Pratsch
  - Department of X-Ray Microscopy, Helmholtz-Zentrum Berlin, Germany and Humboldt-Universität zu Berlin, Institut für Physik, Berlin, Germany
- Gerd Schneider
  - Department of X-Ray Microscopy, Helmholtz-Zentrum Berlin, Germany and Humboldt-Universität zu Berlin, Institut für Physik, Berlin, Germany
- Sergey Kapishnikov
  - SiriusXT, 9A Holly Ave., Stillorgan Industrial Park, Blackrock, Co. Dublin, Ireland
- Axel Ekman
  - Department of Biological and Environmental Science and Nanoscience Centre, University of Jyväskylä, Jyväskylä, Finland
- Richard Röttger
  - Department of Mathematics and Computer Science, University of Southern Denmark, Odense M, Denmark
- Daniel Wüstner
  - Department of Biochemistry and Molecular Biology, University of Southern Denmark, Odense M, Denmark

37
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024; 11:214. [PMID: 38534488 DOI: 10.3390/bioengineering11030214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 02/19/2024] [Accepted: 02/22/2024] [Indexed: 03/28/2024] Open
Abstract
The delineation of parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation processes ensure precise target positioning and treatment precision, facilitate monitoring of anatomical changes, enable plan adaptation, and enhance overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective in precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. Extensive evaluation of the model is performed on two public and one private datasets, while segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess replanning necessity during treatment, an additional registration method is implemented on the segmentation output, aligning images of different modalities (Computed Tomography (CT) and Cone Beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice Similarity Coefficient: 82.65% ± 1.03, Hausdorff Distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure displays increased similarity, providing insights into the effects of RT procedures for treatment planning adaptations. The implementation of the proposed methods indicates the effectiveness of DL not only for automatic delineation of the anatomical structures, but also for the provision of information for adaptive RT support.
Affiliation(s)
- Ioannis Kakkos
  - Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Theodoros P Vagenas
  - Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Anna Zygogianni
  - Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
- George K Matsopoulos
  - Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece

38
Gouzou D, Taimori A, Haloubi T, Finlayson N, Wang Q, Hopgood JR, Vallejo M. Applications of machine learning in time-domain fluorescence lifetime imaging: a review. Methods Appl Fluoresc 2024; 12:022001. [PMID: 38055998 PMCID: PMC10851337 DOI: 10.1088/2050-6120/ad12f7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 09/25/2023] [Accepted: 12/06/2023] [Indexed: 12/08/2023]
Abstract
Many medical imaging modalities have benefited from recent advances in Machine Learning (ML), specifically in deep learning, such as neural networks. Computers can be trained to investigate and enhance medical imaging methods without using valuable human resources. In recent years, Fluorescence Lifetime Imaging (FLIm) has received increasing attention from the ML community. FLIm goes beyond conventional spectral imaging, providing additional lifetime information, and could lead to optical histopathology supporting real-time diagnostics. However, most current studies do not use the full potential of machine/deep learning models. As a developing image modality, FLIm data are not easily obtainable, which, coupled with an absence of standardisation, holds back the development of models that could advance automated diagnosis and help promote FLIm. In this paper, we describe recent developments that improve FLIm image quality, specifically time-domain systems, and we summarise sensing, signal-to-noise analysis and the advances in registration and low-level tracking. We review the two main applications of ML for FLIm: lifetime estimation and image analysis through classification and segmentation. We suggest a course of action to improve the quality of ML studies applied to FLIm. Our final goal is to promote FLIm and attract more ML practitioners to explore the potential of lifetime imaging.
Affiliation(s)
- Dorian Gouzou
  - Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom
- Ali Taimori
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Tarek Haloubi
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Neil Finlayson
  - Institute for Integrated Micro and Nano Systems, School of Engineering, University of Edinburgh, Edinburgh EH9 3FF, United Kingdom
- Qiang Wang
  - Centre for Inflammation Research, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom
- James R Hopgood
  - Institute for Imaging, Data and Communication, School of Engineering, University of Edinburgh, Edinburgh, EH9 3FG, United Kingdom
- Marta Vallejo
  - Institute of Signals, Sensors and Systems, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, United Kingdom

39
Young F, Aquilina K, Seunarine KK, Mancini L, Clark CA, Clayden JD. Fibre orientation atlas guided rapid segmentation of white matter tracts. Hum Brain Mapp 2024; 45:e26578. [PMID: 38339907 PMCID: PMC10826637 DOI: 10.1002/hbm.26578] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 12/14/2023] [Accepted: 12/19/2023] [Indexed: 02/12/2024] Open
Abstract
Fibre tract delineation from diffusion magnetic resonance imaging (MRI) is a valuable clinical tool for neurosurgical planning and navigation, as well as in research neuroimaging pipelines. Several popular methods are used for this task, each with different strengths and weaknesses making them more or less suited to different contexts. For neurosurgical imaging, priorities include ease of use, computational efficiency, robustness to pathology and ability to generalise to new tracts of interest. Many existing methods use streamline tractography, which may require expert neuroimaging operators for setting parameters and delineating anatomical regions of interest, or suffer from a lack of generalisability to clinical scans involving deforming tumours and other pathologies. More recently, data-driven approaches including deep-learning segmentation models and streamline clustering methods have improved reproducibility and automation, although they can require large amounts of training data and/or computationally intensive image processing at the point of application. We describe an atlas-based direct tract mapping technique called 'tractfinder', utilising tract-specific location and orientation priors. Our aim was to develop a clinically practical method avoiding streamline tractography at the point of application while utilising prior anatomical knowledge derived from only 10-20 training samples. Requiring few training samples allows emphasis to be placed on producing high quality, neuro-anatomically accurate training data, and enables rapid adaptation to new tracts of interest. Avoiding streamline tractography at the point of application reduces computational time, false positives and vulnerabilities to pathology such as tumour deformations or oedema. Carefully filtered training streamlines and track orientation distribution mapping are used to construct tract specific orientation and spatial probability atlases in standard space. Atlases are then transformed to target subject space using affine registration and compared with the subject's voxel-wise fibre orientation distribution data using a mathematical measure of distribution overlap, resulting in a map of the tract's likely spatial distribution. This work includes extensive performance evaluation and comparison with benchmark techniques, including streamline tractography and the deep-learning method TractSeg, in two publicly available healthy diffusion MRI datasets (from TractoInferno and the Human Connectome Project) in addition to a clinical dataset comprising paediatric and adult brain tumour scans. Tract segmentation results display high agreement with established techniques while requiring less than 3 min on average when applied to a new subject. Results also display higher robustness than compared methods when faced with clinical scans featuring brain tumours and resections. As well as describing and evaluating a novel proposed tract delineation technique, this work continues the discussion on the challenges surrounding the white matter segmentation task, including issues of anatomical definitions and the use of quantitative segmentation comparison metrics.
Affiliation(s)
- Fiona Young
  - Developmental Neurosciences Research and Teaching Department, UCL Great Ormond Street Institute of Child Health, University College London, London, UK
  - Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Kristian Aquilina
  - Department of Neurosurgery, Great Ormond Street Hospital for Children, London, UK
- Kiran K. Seunarine
  - Developmental Neurosciences Research and Teaching Department, UCL Great Ormond Street Institute of Child Health, University College London, London, UK
  - Department of Radiology, Great Ormond Street Hospital for Children, London, UK
- Laura Mancini
  - Lysholm Department of Neuroradiology, The National Hospital for Neurology and Neurosurgery, University College London Hospitals NHS Foundation Trust, London, UK
  - Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, University College London, London, UK
- Chris A. Clark
  - Developmental Neurosciences Research and Teaching Department, UCL Great Ormond Street Institute of Child Health, University College London, London, UK
- Jonathan D. Clayden
  - Developmental Neurosciences Research and Teaching Department, UCL Great Ormond Street Institute of Child Health, University College London, London, UK

40
Xinsen L, Yang K, Bingzhi C, Xiuhong C, Xinling L, Xinyao X, Jinlin C, Ming T, Pengtao L, Zheng X, Linying C. Vague-Segment Technique: Automatic Computation of Tumor Stroma Ratio for Breast Cancer on Whole Slides. IEEE J Biomed Health Inform 2024; 28:905-916. [PMID: 38079367 DOI: 10.1109/jbhi.2023.3341101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2024]
Abstract
The calculation of the Tumor Stroma Ratio (TSR) is a challenging medical issue whose solution could improve predictions of neoadjuvant chemotherapy benefit and patient prognosis. Although several studies on breast cancer and deep learning methods have achieved promising results, pixel-level semantic segmentation cannot extract core tumor regions containing both tumor pixels and stroma pixels, which makes it difficult to accurately calculate the TSR. In this paper, we propose a Vague-Segment Technique (VST) consisting of a designed SwinV2UNet module and a modified Suzuki algorithm. Specifically, SwinV2UNet identifies tumor pixels and generates pixel-level classification results, based on which the modified Suzuki algorithm extracts the contour of core tumor regions in terms of cosine angle. In this way, VST obtains vague segmentation results for core tumor regions containing both tumor pixels and stroma pixels, from which the TSR can be calculated using the Intersection over Union (IoU) formula. For training and evaluation, we utilize the well-known The Cancer Genome Atlas (TCGA) database to create an annotated dataset, while 150 images with TSR annotations from real cases are also collected. The experimental results illustrate that the proposed VST generates better tumor identification results than state-of-the-art methods, and the extracted core tumor regions lead to TSR values more consistent with those of senior experts than junior pathologists achieve. The experimental results demonstrate the superiority of our proposed pipeline, which holds promise for future clinical application.
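A hedged sketch of the overlap computations involved: the abstract states the TSR is obtained via the Intersection-over-Union formula applied to the predicted tumor pixels and the vaguely segmented core region; the exact formula in the paper may differ, so both an IoU and a stroma-fraction variant are shown.

```python
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def stroma_fraction(tumor_mask: np.ndarray, core_region: np.ndarray) -> float:
    """Fraction of the core tumor region occupied by non-tumor (stroma) pixels."""
    tumor_in_core = np.logical_and(tumor_mask, core_region).sum()
    return (core_region.sum() - tumor_in_core) / core_region.sum()
```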
41
Yang L, Lei Y, Huang Z, Geng M, Liu Z, Wang B, Luo D, Huang W, Liang D, Pang Z, Hu Z. An interactive nuclei segmentation framework with Voronoi diagrams and weighted convex difference for cervical cancer pathology images. Phys Med Biol 2024; 69:025021. [PMID: 37972412 DOI: 10.1088/1361-6560/ad0d44] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2023] [Accepted: 11/16/2023] [Indexed: 11/19/2023]
Abstract
Objective: Nuclei segmentation is crucial for pathologists to accurately classify and grade cancer. However, this process faces significant challenges, such as the complex background structures in pathological images, the high-density distribution of nuclei, and cell adhesion.
Approach: In this paper, we present an interactive nuclei segmentation framework that increases the precision of nuclei segmentation. Our framework incorporates expert monitoring to gather as much prior information as possible and accurately segment complex nucleus images through limited pathologist interaction, where only a small portion of the nucleus locations in each image are labeled. The initial contour is determined by the Voronoi diagram generated from the labeled points, which is then input into an optimized weighted convex difference model to regularize partition boundaries in an image. Specifically, we provide theoretical proof of the mathematical model, stating that the objective function monotonically decreases. Furthermore, we explore a postprocessing stage that incorporates histograms, which are simple and easy to handle and prevent arbitrariness and subjectivity in individual choices.
Main results: To evaluate our approach, we conduct experiments on both a cervical cancer dataset and a nasopharyngeal cancer dataset. The experimental results demonstrate that our approach achieves competitive performance compared to other methods.
Significance: The Voronoi diagram in the paper serves as prior information for the active contour, providing positional information for individual cells. Moreover, the active contour model achieves precise segmentation results while offering mathematical interpretability.
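A sketch of the Voronoi prior: assigning each pixel to its nearest expert-labeled nucleus location partitions the image exactly as the Voronoi diagram of those points, giving one initial region per nucleus. A k-d tree is one efficient way to realize this; the function name and interface are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_labels(points: np.ndarray, shape: tuple) -> np.ndarray:
    """points: (n, 2) array of (row, col) nucleus clicks; shape: image shape.
    Returns an (H, W) map whose value i marks the Voronoi cell of points[i]."""
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    grid = np.stack([rr.ravel(), cc.ravel()], axis=1)
    _, idx = cKDTree(points).query(grid)  # nearest labeled point per pixel
    return idx.reshape(shape)
```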
Affiliation(s)
- Lin Yang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
  - College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
- Yuanyuan Lei
  - Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Zhenxing Huang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Mengxiao Geng
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
  - College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
- Zhou Liu
  - Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Baijie Wang
  - Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Dehong Luo
  - Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Wenting Huang
  - Department of Pathology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen 518116, People's Republic of China
- Dong Liang
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Zhifeng Pang
  - College of Mathematics and Statistics, Henan University, Kaifeng 475004, People's Republic of China
  - Hubei Key Laboratory of Computational Science, Wuhan University, Wuhan 430072, People's Republic of China
- Zhanli Hu
  - Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China

42
Lin SY, Lin CL. Brain tumor segmentation using U-Net in conjunction with EfficientNet. PeerJ Comput Sci 2024; 10:e1754. [PMID: 38196955 PMCID: PMC10773611 DOI: 10.7717/peerj-cs.1754] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2023] [Accepted: 11/23/2023] [Indexed: 01/11/2024]
Abstract
According to the Ten Leading Causes of Death Statistics Report by the Ministry of Health and Welfare in 2021, cancer ranks as the leading cause of mortality. Among cancers, pleomorphic glioblastoma is a common type of brain cancer. Brain tumors often have unclear boundaries with normal brain tissue, necessitating assistance from experienced doctors to distinguish them before surgical resection so as to avoid damaging critical neural structures. In recent years, with the advancement of deep learning (DL) technology, artificial intelligence (AI) plays a vital role in disease diagnosis, especially in the field of image segmentation. This technology can aid doctors in locating and measuring brain tumors, while significantly reducing manpower and time costs. Currently, U-Net is one of the primary image segmentation techniques. It utilizes skip connections to combine high-level and low-level feature information, leading to significant improvements in segmentation accuracy. To further enhance the model's performance, this study explores the feasibility of using EfficientNetV2 as an encoder in combination with U-Net. Experimental results indicate that employing EfficientNetV2 as an encoder together with U-Net improves the segmentation model's Dice score (loss = 0.0866, accuracy = 0.9977, and Dice similarity coefficient (DSC) = 0.9133).
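One convenient way to pair an EfficientNet encoder with a U-Net decoder is the segmentation_models_pytorch package; the sketch below shows the general construction only. The encoder variant, channel counts, and loss are assumptions rather than the authors' configuration, and EfficientNetV2 encoder availability depends on the installed package version.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net decoder on top of a pretrained EfficientNet encoder (illustrative).
model = smp.Unet(
    encoder_name="timm-efficientnet-b0",  # an EfficientNet-family encoder
    encoder_weights="imagenet",
    in_channels=1,                        # e.g. single-channel MRI slices
    classes=1,                            # binary tumor mask
)
loss_fn = smp.losses.DiceLoss(mode="binary")
logits = model(torch.randn(2, 1, 256, 256))  # (batch, classes, H, W)
```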
Affiliation(s)
- Shu-You Lin
  - Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City, Taiwan
- Chun-Ling Lin
  - Department of Electrical Engineering, Ming Chi University of Technology, New Taipei City, Taiwan

43
Pal S, Singh RP, Kumar A. Analysis of Hybrid Feature Optimization Techniques Based on the Classification Accuracy of Brain Tumor Regions Using Machine Learning and Further Evaluation Based on the Institute Test Data. J Med Phys 2024; 49:22-32. [PMID: 38828069 PMCID: PMC11141750 DOI: 10.4103/jmp.jmp_77_23] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 02/23/2024] [Accepted: 02/23/2024] [Indexed: 06/05/2024] Open
Abstract
Aim: The goal of this study was to obtain optimal brain tumor features from magnetic resonance imaging (MRI) images and classify them based on the three groups of the tumor region: peritumoral edema, enhancing core, and necrotic tumor core, using machine learning classification models.
Materials and Methods: This study's dataset was obtained from the multimodal brain tumor segmentation challenge. A total of 599 brain MRI studies were employed, all in Neuroimaging Informatics Technology Initiative format. The dataset was divided into training, validation, and testing subsets; the testing subset is referred to as the online test dataset (OTD). The dataset includes four types of MRI series, which were combined together and processed for intensity normalization using the contrast limited adaptive histogram equalization methodology. To extract radiomics features, a Python-based library called pyRadiomics was employed. Particle-swarm optimization (PSO) with varying inertia weights was used for feature optimization. Inertia weight with a linearly decreasing strategy (W1), inertia weight with a nonlinear coefficient decreasing strategy (W2), and inertia weight with a logarithmic strategy (W3) were the different strategies used to vary the inertia weight for feature optimization in PSO. The selected features were further optimized using the principal component analysis (PCA) method to further reduce the dimensionality, remove noise, and improve the performance and efficiency of subsequent algorithms. Support vector machine (SVM), light gradient boosting (LGB), and extreme gradient boosting (XGB) machine learning classification algorithms were utilized for the classification of images into different tumor regions using the optimized features. The proposed method was also tested on institute test data (ITD) for a total of 30 patient images.
Results: For the OTD, the classification accuracy was 0.989 for SVM, 0.992 for the LGB model (LGBM), and 0.994 for the XGB model (XGBM) using the varying inertia weight-PSO optimization method, and 0.996 for SVM, 0.998 for the LGBM, and 0.994 for the XGBM using PSO and PCA, a hybrid optimization technique. For the ITD, the classification accuracy was 0.994 for SVM, 0.993 for the LGBM, and 0.997 for the XGBM using the hybrid optimization technique.
Conclusion: The results suggest that the proposed method can be used to classify a brain tumor, as in this study, into three regions: peritumoral edema, enhancing core, and necrotic tumor core. This was done by extracting different features of the tumor, such as its shape, gray level, and gray-level co-occurrence matrix, and then choosing the best features using hybrid optimal feature selection techniques, without much human expertise and in much less time than it would take a person.
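Sketches of the three inertia-weight schedules named above; the bounds w_max/w_min and the nonlinearity exponent are assumptions, since the paper's constants are not given here.

```python
import numpy as np

W_MAX, W_MIN = 0.9, 0.4  # commonly used PSO inertia bounds (assumed)

def w_linear(t: int, t_max: int) -> float:
    # W1: linearly decreasing strategy
    return W_MAX - (W_MAX - W_MIN) * t / t_max

def w_nonlinear(t: int, t_max: int, p: float = 2.0) -> float:
    # W2: nonlinear coefficient decreasing strategy (exponent p assumed)
    return W_MIN + (W_MAX - W_MIN) * (1.0 - t / t_max) ** p

def w_logarithmic(t: int, t_max: int) -> float:
    # W3: logarithmic strategy
    return W_MAX - (W_MAX - W_MIN) * np.log1p(t) / np.log1p(t_max)
```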
Affiliation(s)
- Soniya Pal
  - Department of Physics, GLA University, Mathura, Uttar Pradesh, India
  - Batra Hospital and Medical Research Center, New Delhi, India
- Raj Pal Singh
  - Department of Physics, GLA University, Mathura, Uttar Pradesh, India
- Anuj Kumar
  - Department of Radiotherapy, S. N. Medical College, Agra, Uttar Pradesh, India

44
Liu B, Dolz J, Galdran A, Kobbi R, Ben Ayed I. Do we really need dice? The hidden region-size biases of segmentation losses. Med Image Anal 2024; 91:103015. [PMID: 37918314 DOI: 10.1016/j.media.2023.103015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 09/24/2023] [Accepted: 10/24/2023] [Indexed: 11/04/2023]
Abstract
Most segmentation losses are arguably variants of the Cross-Entropy (CE) or Dice losses. On the surface, these two categories of losses (i.e., distribution based vs. geometry based) seem unrelated, and there is no clear consensus as to which category is a better choice, with varying performances for each across different benchmarks and applications. Furthermore, it is widely argued within the medical-imaging community that Dice and CE are complementary, which has motivated the use of compound CE-Dice losses. In this work, we provide a theoretical analysis, which shows that CE and Dice share a much deeper connection than previously thought. First, we show that, from a constrained-optimization perspective, they both decompose into two components, i.e., a similar ground-truth matching term, which pushes the predicted foreground regions towards the ground-truth, and a region-size penalty term imposing different biases on the size (or proportion) of the predicted regions. Then, we provide bound relationships and an information-theoretic analysis, which uncover hidden region-size biases: Dice has an intrinsic bias towards specific extremely imbalanced solutions, whereas CE implicitly encourages the ground-truth region proportions. Our theoretical results explain the wide experimental evidence in the medical-imaging literature, whereby Dice losses bring improvements for imbalanced segmentation. It also explains why CE dominates natural-image problems with diverse class proportions, in which case Dice might have difficulty adapting to different region-size distributions. Based on our theoretical analysis, we propose a principled and simple solution, which enables explicit control of the region-size bias. The proposed method integrates CE with explicit terms based on L1 or the KL divergence, which encourage segmenting region proportions to match target class proportions, thereby mitigating class imbalance but without losing generality. Comprehensive experiments and ablation studies over different losses and applications validate our theoretical analysis, as well as the effectiveness of explicit and simple region-size terms. The code is available at https://github.com/by-liu/SegLossBias .
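A minimal PyTorch sketch of the idea described above: cross-entropy plus an explicit L1 term that pushes predicted region proportions toward target class proportions (the L1 variant; the paper also considers a KL-based term). The weight lam is an assumption, and the exact formulation in the released code may differ.

```python
import torch
import torch.nn.functional as F

def ce_with_region_size_term(logits: torch.Tensor, target: torch.Tensor,
                             lam: float = 0.1) -> torch.Tensor:
    """logits: (B, C, H, W) raw scores; target: (B, H, W) integer labels."""
    ce = F.cross_entropy(logits, target)
    # predicted vs. ground-truth class proportions, both of shape (C,)
    pred_prop = logits.softmax(dim=1).mean(dim=(0, 2, 3))
    tgt_prop = F.one_hot(target, logits.shape[1]).float().mean(dim=(0, 1, 2))
    return ce + lam * (pred_prop - tgt_prop).abs().sum()
```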
Affiliation(s)
- Bingyuan Liu
  - LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada
- Jose Dolz
  - LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Canada
- Ismail Ben Ayed
  - LIVIA, ÉTS Montréal, Canada; International Laboratory on Learning Systems (ILLS), McGill - ETS - MILA - CNRS - Université Paris-Saclay - CentraleSupélec, Canada; Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Canada

45
Tong L, Wan Y. A Hybrid Intelligence Approach for Circulating Tumor Cell Enumeration in Digital Pathology by Using CNN and Weak Annotations. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2023; 11:142992-143003. [PMID: 38957613 PMCID: PMC11218908 DOI: 10.1109/access.2023.3343701] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 07/04/2024]
Abstract
Counting the number of Circulating Tumor Cells (CTCs) for cancer screening is currently done by cytopathologists at a heavy cost in time and energy. AI, especially deep learning, has shown great potential in medical imaging domains. The aim of this paper is to develop a novel hybrid intelligence approach that automatically enumerates CTCs by combining cytopathologist expertise with the efficiency of deep learning convolutional neural networks (CNNs). This hybrid intelligence approach includes three major components: CNN-based CTC detection/localization using weak annotations, CNN-based CTC segmentation, and a classifier to ultimately determine CTCs. A support vector machine (SVM) was investigated for classification efficiency. The B-scale transform was also introduced to find the maximum sphericality of a given region. The SVM classifier takes a three-element vector as its input, comprising the B-scale (size), texture, and area values from the detection and segmentation results. We collected 466 fluoroscopic images for CTC detection/localization, 473 images for CTC segmentation, and another 198 images with 323 CTCs as an independent data set for CTC enumeration. Precision and recall for CTC detection are 0.98 and 0.92, comparable with state-of-the-art results that required much larger and stricter training data sets. The counting error on an independent testing set was 2-3% with the B-scale and 9% without it, much better than previous thresholding approaches with counting error rates of 30%. Recent publications suggest that facilitation of other types of research in object localization and segmentation is also needed.
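A sketch of the final classification stage: an SVM over the three-element feature vector (B-scale, texture, area) per candidate region, with enumeration as the count of positive predictions. The feature values below are placeholders, not data from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.array([[12.0, 0.35, 410.0],    # [b_scale, texture, area] per region
              [ 3.0, 0.80,  55.0],
              [10.5, 0.40, 380.0],
              [ 2.5, 0.75,  60.0]])
y = np.array([1, 0, 1, 0])            # 1 = CTC, 0 = non-CTC

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
ctc_count = int(clf.predict(X).sum())  # enumeration = count of predicted CTCs
```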
Affiliation(s)
- Leihui Tong
  - Conestoga High School, Berwyn, PA 19087, USA
- Yuan Wan
  - Department of Biomedical Engineering, Binghamton University, Binghamton, NY 13902, USA

46
Deininger L, Jung-Klawitter S, Mikut R, Richter P, Fischer M, Karimian-Jazi K, Breckwoldt MO, Bendszus M, Heiland S, Kleesiek J, Opladen T, Kuseyri Hübschmann O, Hübschmann D, Schwarz D. An AI-based segmentation and analysis pipeline for high-field MR monitoring of cerebral organoids. Sci Rep 2023; 13:21231. [PMID: 38040865 PMCID: PMC10692072 DOI: 10.1038/s41598-023-48343-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 11/25/2023] [Indexed: 12/03/2023] Open
Abstract
Cerebral organoids recapitulate the structure and function of the developing human brain in vitro, offering great potential for personalized therapeutic strategies. The enormous growth of this research area over the past decade, with its capability for clinical translation, makes a non-invasive, automated analysis pipeline for organoids highly desirable. This work presents a novel non-invasive approach to monitor and analyze cerebral organoids over time using high-field magnetic resonance imaging and state-of-the-art tools for automated image analysis. Three specific objectives are addressed, (I) organoid segmentation to investigate organoid development over time, (II) global cysticity classification and (III) local cyst segmentation for organoid quality assessment. We show that organoid growth can be monitored reliably over time and that cystic and non-cystic organoids can be separated with high accuracy, with performance on par with or better than state-of-the-art tools applied to brightfield imaging. Local cyst segmentation is feasible but could be further improved in the future. Overall, these results highlight the potential of the pipeline for clinical application to larger-scale comparative organoid analysis.
Affiliation(s)
- Luca Deininger
  - Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
  - Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Sabine Jung-Klawitter
  - Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Ralf Mikut
  - Group for Automated Image and Data Analysis, Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Petra Richter
  - Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Manuel Fischer
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
- Kianush Karimian-Jazi
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
- Michael O Breckwoldt
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
- Martin Bendszus
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
- Sabine Heiland
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany
- Jens Kleesiek
  - Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen, Essen, Germany
  - German Cancer Consortium (DKTK), Heidelberg, Germany
  - Cancer Research Center Cologne Essen (CCCE), Essen, Germany
- Thomas Opladen
  - Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Oya Kuseyri Hübschmann
  - Division of Pediatric Neurology and Metabolic Medicine, Department I, Center for Pediatric and Adolescent Medicine, Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
- Daniel Hübschmann
  - German Cancer Consortium (DKTK), Heidelberg, Germany
  - Computational Oncology Group, Molecular Precision Oncology Program, National Center for Tumor Diseases (NCT) Heidelberg, DKFZ, Heidelberg, Germany
  - Pattern Recognition and Digital Medicine, Heidelberg Institute for Stem Cell Technology and Experimental Medicine (HI-STEM), Heidelberg, Germany
- Daniel Schwarz
  - Department of Neuroradiology, Heidelberg University Hospital, INF 400, Heidelberg, Germany

47
Karlas A, Katsouli N, Fasoula NA, Bariotakis M, Chlis NK, Omar M, He H, Iakovakis D, Schäffer C, Kallmayer M, Füchtenbusch M, Ziegler A, Eckstein HH, Hadjileontiadis L, Ntziachristos V. Dermal features derived from optoacoustic tomograms via machine learning correlate microangiopathy phenotypes with diabetes stage. Nat Biomed Eng 2023; 7:1667-1682. [PMID: 38049470 PMCID: PMC10727986 DOI: 10.1038/s41551-023-01151-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 10/24/2023] [Indexed: 12/06/2023]
Abstract
Skin microangiopathy has been associated with diabetes. Here we show that skin-microangiopathy phenotypes in humans can be correlated with diabetes stage via morphophysiological cutaneous features extracted from raster-scan optoacoustic mesoscopy (RSOM) images of skin on the leg. We obtained 199 RSOM images from 115 participants (40 healthy and 75 with diabetes), and used machine learning to segment skin layers and microvasculature to identify clinically explainable features pertaining to different depths and scales of detail that provided the highest predictive power. Features in the dermal layer at the scale of detail of 0.1-1 mm (such as the number of junction-to-junction branches) were highly sensitive to diabetes stage. A 'microangiopathy score' compiling the 32 most-relevant features predicted the presence of diabetes with an area under the receiver operating characteristic curve of 0.84. The analysis of morphophysiological cutaneous features via RSOM may allow for the discovery of diabetes biomarkers in the skin and for the monitoring of diabetes status.
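An AUC like the reported 0.84 is typically computed from a per-subject score against binary labels; the labels and scores below are placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 1, 1, 1])                           # 0 = healthy, 1 = diabetes
microangiopathy_score = np.array([0.2, 0.4, 0.5, 0.7, 0.9])  # compiled from the 32 features
print(roc_auc_score(labels, microangiopathy_score))
```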
Affiliation(s)
- Angelos Karlas
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
  - Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
- Nikoletta Katsouli
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Nikolina-Alexia Fasoula
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Michail Bariotakis
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Nikolaos-Kosmas Chlis
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
  - Institute of Computational Biology, Helmholtz Zentrum München, Neuherberg, Germany
- Murad Omar
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Hailong He
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
- Dimitrios Iakovakis
  - Department of Biomedical Engineering, Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi, United Arab Emirates
  - Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Christoph Schäffer
  - Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Michael Kallmayer
  - Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Annette Ziegler
  - Forschergruppe Diabetes e.V., Helmholtz Zentrum München, Neuherberg, Germany
  - Institute of Diabetes Research, Helmholtz Zentrum München, Neuherberg, Germany
  - Forschergruppe Diabetes, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Hans-Henning Eckstein
  - Department for Vascular and Endovascular Surgery, Klinikum rechts der Isar, Technical University of Munich (TUM), Munich, Germany
  - DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
- Leontios Hadjileontiadis
  - Department of Biomedical Engineering, Healthcare Engineering Innovation Center (HEIC), Khalifa University, Abu Dhabi, United Arab Emirates
  - Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Vasilis Ntziachristos
  - Institute of Biological and Medical Imaging, Helmholtz Zentrum München, Neuherberg, Germany
  - Chair of Biological Imaging at the Central Institute for Translational Cancer Research (TranslaTUM), School of Medicine, Technical University of Munich, Munich, Germany
  - DZHK (German Centre for Cardiovascular Research), partner site Munich Heart Alliance, Munich, Germany
  - Munich Institute of Robotics and Machine Intelligence (MIRMI), Technical University of Munich, Munich, Germany

48
Jawad MT, Yeafi A, Halder KK. GSNet: a multi-class 3D attention-based hybrid glioma segmentation network. OPTICS EXPRESS 2023; 31:40881-40906. [PMID: 38041378 DOI: 10.1364/oe.499054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Accepted: 10/21/2023] [Indexed: 12/03/2023]
Abstract
In modern neuro-oncology, computer-aided biomedical image retrieval (CBIR) tools have recently gained significant popularity due to their quick and easy usage and high performance. However, designing such an automated tool remains challenging because of the lack of balanced resources and inconsistent spatial texture. As in many other fields of diagnosis, brain tumor (glioma) extraction has posed a challenge to the research community. In this article, we propose a robust segmentation network called GSNet for glioma segmentation. Unlike conventional 2-dimensional structures, GSNet directly deals with 3-dimensional (3D) data while utilizing attention-based skip links. The network is trained and validated using the BraTS 2020 dataset and further trained with the BraTS 2019 and BraTS 2018 datasets for comparison. Utilizing the BraTS 2020 dataset, our 3D network achieved overall dice similarity coefficients of 0.9239, 0.9103, and 0.8139 for the whole tumor, tumor core, and enhancing tumor classes, respectively. Our model produces consistently high scores and is capable of dealing with new data, despite training with imbalanced datasets. In comparison to other articles, our model outperforms some state-of-the-art models, indicating that it is suitable as a reliable CBIR tool for medical use.
Collapse
|
49
|
Marupudi S, Cao Q, Samala R, Petrick N. Characterization of mechanical stiffness using additive manufacturing and finite element analysis: potential tool for bone health assessment. 3D Print Med 2023; 9:32. [PMID: 37978094 PMCID: PMC10656885 DOI: 10.1186/s41205-023-00197-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Accepted: 11/07/2023] [Indexed: 11/19/2023] Open
Abstract
BACKGROUND Bone health and fracture risk are known to be correlated with stiffness. Both micro-finite element analysis (μFEA) and mechanical testing of additively manufactured phantoms are useful approaches for estimating the mechanical properties of trabecular bone-like structures. However, it is unclear whether measurements from the two approaches are consistent. The purpose of this work is to evaluate the agreement between stiffness measurements obtained from mechanical testing of additively manufactured trabecular bone phantoms and from μFEA modeling. Agreement between the two methods would suggest that 3D printing is a viable method for validating μFEA modeling. METHODS A set of 20 lumbar vertebral regions of interest was segmented, and the corresponding trabecular bone phantoms were produced using selective laser sintering. The phantoms were mechanically tested in uniaxial compression to derive their stiffness values. Stiffness values were also derived from in silico simulation, in which linear elastic μFEA was applied under the same compression and boundary conditions. Bland-Altman analysis was used to evaluate agreement between the mechanical testing and μFEA simulation values. Additionally, we evaluated the fidelity of the 3D printed phantoms as well as the repeatability of the 3D printing and mechanical testing process. RESULTS We observed good agreement between the mechanically tested stiffness and the μFEA stiffness, with an R2 of 0.84 and a normalized root mean square deviation of 8.1%. We demonstrate that the overall trabecular bone structures were printed with high fidelity (Dice score of 0.97; 95% CI, 0.96-0.98) and that mechanical testing is repeatable (coefficient of variation less than 5% for stiffness values from testing of duplicate phantoms). However, we noticed some defects in the resin microstructure of the 3D printed phantoms, which may account for the discrepancy between the stiffness values from simulation and mechanical testing. CONCLUSION Overall, the level of agreement achieved between the mechanical stiffness and μFEA indicates that our μFEA methods may be acceptable for assessing the bone mechanics of complex trabecular structures as part of an analysis of overall bone health.
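The agreement statistics reported above (Bland-Altman bias and limits of agreement, plus a normalized root mean square deviation) are straightforward to compute. The Python sketch below illustrates the calculation on synthetic stiffness values; the sample size, stiffness range, and noise level are assumptions for demonstration, not data from the study.
```python
# Sketch (synthetic data): Bland-Altman agreement and normalized RMSD between
# mechanically tested and simulated stiffness, mirroring the analysis above.
import numpy as np

rng = np.random.default_rng(0)
k_test = rng.uniform(500, 3000, 20)            # measured stiffness (N/mm), assumed range
k_fea = k_test * rng.normal(1.0, 0.05, 20)     # simulated stiffness with assumed 5% noise

# Bland-Altman statistics: mean difference (bias) and 95% limits of agreement
diff = k_fea - k_test
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.1f} N/mm, limits of agreement = +/-{loa:.1f} N/mm")

# Normalized root mean square deviation as a percentage of the measured range
rmsd = np.sqrt(np.mean(diff**2))
nrmsd = 100 * rmsd / (k_test.max() - k_test.min())
print(f"NRMSD = {nrmsd:.1f}%")
```
Normalizing the RMSD by the measured range is one common convention; normalizing by the mean is another, so the 8.1% figure above depends on which denominator the authors chose.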
Collapse
Affiliation(s)
- Sriharsha Marupudi
- Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Labs, U.S. Food and Drug Administration, Silver Spring, MD, USA
| | - Qian Cao
- Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Labs, U.S. Food and Drug Administration, Silver Spring, MD, USA
| | - Ravi Samala
- Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Labs, U.S. Food and Drug Administration, Silver Spring, MD, USA
| | - Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Labs, U.S. Food and Drug Administration, Silver Spring, MD, USA
| |
Collapse
|
50
|
Borowska M, Jasiński T, Gierasimiuk S, Pauk J, Turek B, Górski K, Domino M. Three-Dimensional Segmentation Assisted with Clustering Analysis for Surface and Volume Measurements of Equine Incisor in Multidetector Computed Tomography Data Sets. Sensors (Basel) 2023; 23:8940. [PMID: 37960639 PMCID: PMC10650163 DOI: 10.3390/s23218940] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Revised: 10/22/2023] [Accepted: 10/31/2023] [Indexed: 11/15/2023]
Abstract
Dental diagnostic imaging has progressed towards the use of advanced technologies such as 3D image processing. Since multidetector computed tomography (CT) is widely available in equine clinics, CT-based anatomical 3D models, segmentations, and measurements have become clinically applicable. This study aimed to use 3D segmentation of CT images and volumetric measurements to investigate differences in the surface area and volume of equine incisors. 3D Slicer was used to segment single incisors from the heads of 50 horses and to extract volumetric features. The incisors showed axial symmetry in the vertical but not the horizontal plane. Surface area and volume differed significantly between temporary and permanent incisors, allowing straightforward eruption-related clustering of the CT-based 3D images with an accuracy of >0.75. The volumetric features differed partially among central, intermediate, and corner incisors, allowing moderate location-related clustering with an accuracy of >0.69. The volumetric features separated equine odontoclastic tooth resorption and hypercementosis (EOTRH) degrees more clearly for mandibular than for maxillary incisors; accordingly, the accuracy of EOTRH degree-related clustering was >0.72 for the mandible and >0.33 for the maxilla. CT-based 3D images of equine incisors can be successfully segmented using routinely acquired multidetector CT data sets and the proposed data-processing approaches.
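To make the clustering-accuracy figures above concrete, the following Python sketch clusters per-tooth surface area and volume into two groups and scores agreement with known eruption labels. All feature values are synthetic illustrative assumptions, not measurements from the study, and the study's own pipeline (3D Slicer segmentation and feature extraction) is not reproduced here.
```python
# Sketch (synthetic features): eruption-related clustering of per-incisor
# surface area and volume, in the spirit of the analysis described above.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Two assumed groups: temporary (smaller) vs permanent (larger) incisors;
# columns are surface area (mm^2) and volume (mm^3), values illustrative.
temp = rng.normal([400.0, 900.0], [40.0, 90.0], size=(25, 2))
perm = rng.normal([650.0, 1600.0], [40.0, 90.0], size=(25, 2))
X = np.vstack([temp, perm])
y = np.array([0] * 25 + [1] * 25)   # ground-truth eruption labels

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
# Cluster labels are arbitrary, so score the better of the two assignments
acc = max((labels == y).mean(), (labels != y).mean())
print(f"eruption-related clustering accuracy: {acc:.2f}")
```
Standardizing the features before clustering matters here, since surface area and volume live on different scales and unscaled k-means would be dominated by the larger-magnitude feature.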
Collapse
Affiliation(s)
- Marta Borowska
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland; (M.B.); (S.G.); (J.P.)
| | - Tomasz Jasiński
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland; (T.J.); (B.T.); (K.G.)
| | - Sylwia Gierasimiuk
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland; (M.B.); (S.G.); (J.P.)
| | - Jolanta Pauk
- Institute of Biomedical Engineering, Faculty of Mechanical Engineering, Białystok University of Technology, 15-351 Bialystok, Poland; (M.B.); (S.G.); (J.P.)
| | - Bernard Turek
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland; (T.J.); (B.T.); (K.G.)
| | - Kamil Górski
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland; (T.J.); (B.T.); (K.G.)
| | - Małgorzata Domino
- Department of Large Animal Diseases and Clinic, Institute of Veterinary Medicine, Warsaw University of Life Sciences, 02-787 Warsaw, Poland; (T.J.); (B.T.); (K.G.)
| |
Collapse
|