1
Huang Y, Zheng G, Li X, Xiao J, Xu Z, Tian P. Habitat quality evaluation and pattern simulation of coastal salt marsh wetlands. Sci Total Environ 2024; 945:174003. [PMID: 38879037] [DOI: 10.1016/j.scitotenv.2024.174003]
Abstract
Coastal salt marsh wetlands not only sequester large amounts of organic carbon, mitigating climate change, but also nurture rich wetland resources and diverse ecological environments. In this study, the habitat pattern and quality of the Jiangsu Yancheng Wetland Rare Birds National Nature Reserve were studied. The evolution of habitat patterns was analyzed using the U-Net model and Sentinel-2 data. Habitat quality was evaluated using the InVEST model, and future habitat patterns in 2027 under different scenarios were simulated using the PLUS model. Our results showed that, during 2017-2022, the Suaeda salsa habitat decreased by a net 2077.61 ha, while the Spartina alterniflora and Phragmites australis habitats showed net increases of varying degrees. The overall habitat pattern was characterized by declining fragmentation and increasing regularity. Habitat quality decreased from 0.75 to 0.72, mainly due to the loss of S. salsa habitat and the expansion of P. australis habitat. The simulations indicated that habitat quality is expected to decline further to 0.71 under the natural development scenario, with 390.27 ha of S. salsa habitat converting to P. australis. Under the government control scenario, habitat quality is expected to improve to 0.78, 0.07 higher than under the natural development scenario, and the S. salsa habitat can be largely restored. This study provides a scientific basis for protecting suitable waterfowl habitats and is crucial for the ecological conservation and management planning of nature reserves and coastal salt marsh wetlands.
Affiliation(s)
- Yuting Huang
- State Key Laboratory of Remote Sensing Science, College of Global Change and Earth System Science, Faculty of Geographical Science, Beijing Normal University, Beijing, China
- Guanghui Zheng
- School of Geographical Sciences, Nanjing University of Information Science & Technology, Nanjing, China
- Xianglan Li
- State Key Laboratory of Remote Sensing Science, College of Global Change and Earth System Science, Faculty of Geographical Science, Beijing Normal University, Beijing, China
- Jingfeng Xiao
- Earth Systems Research Center, Institute for the Study of Earth, Oceans, and Space, University of New Hampshire, Durham, NH, USA
- Zhe Xu
- State Key Laboratory of Remote Sensing Science, College of Global Change and Earth System Science, Faculty of Geographical Science, Beijing Normal University, Beijing, China
- Pengpeng Tian
- State Key Laboratory of Remote Sensing Science, College of Global Change and Earth System Science, Faculty of Geographical Science, Beijing Normal University, Beijing, China
2
Jeng PH, Yang CY, Huang TR, Kuo CF, Liu SC. Harnessing AI for precision tonsillitis diagnosis: a revolutionary approach in endoscopic analysis. Eur Arch Otorhinolaryngol 2024. [PMID: 39230610] [DOI: 10.1007/s00405-024-08938-w]
Abstract
BACKGROUND Diagnosing and treating tonsillitis pose no significant challenge for otolaryngologists; however, examination can increase the infection risk for healthcare professionals amid the coronavirus pandemic. In recent years, with the advancement of artificial intelligence (AI), its application in medical imaging has thrived. This research aims to identify the optimal convolutional neural network (CNN) algorithm for accurate diagnosis of tonsillitis and early precision treatment. METHODS Semi-supervised learning with pseudo-labels for self-training was adopted to train our CNNs, with architectures including UNet, PSPNet, and FPN. A total of 485 pharyngoscopic images from 485 participants were included: healthy individuals (133 cases), patients with the common cold (295 cases), and patients with tonsillitis (57 cases). Both color and texture features were extracted from the 485 images for analysis. RESULTS UNet outperformed PSPNet and FPN in automatically segmenting oropharyngeal anatomy, with an average Dice coefficient of 97.74% and a pixel accuracy of 98.12%, making it suitable for enhancing the diagnosis of tonsillitis. Normal tonsils generally have more uniform, smooth textures and a pinkish color similar to the surrounding mucosal tissue, whereas tonsillitis, particularly the antibiotic-requiring type, shows white or yellowish pus-filled spots or patches and a more granular or lumpy texture, indicating inflammation and changes in tissue structure. After training on 485 cases, our UNet-based algorithm achieved accuracy rates of 93.75%, 97.1%, and 91.67% in differentiating the three tonsil groups, demonstrating excellent results. CONCLUSION Our research highlights the potential of UNet for fully automated semantic segmentation of oropharyngeal structures, which aids subsequent feature extraction and machine learning and enables accurate AI diagnosis of tonsillitis. This innovation shows promise for enhancing both the accuracy and speed of tonsillitis assessments.
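The pseudo-label self-training loop described in the Methods can be sketched in miniature. In the toy below the CNN is replaced by a 1-D threshold classifier; the data, the confidence margin, and the function names are invented for illustration, not taken from the paper's pipeline.

```python
# Toy pseudo-label self-training: fit on labeled data, pseudo-label
# confident unlabeled points, refit on the enlarged set.

def fit_threshold(xs, ys):
    """Pick the threshold (from observed values) best separating the classes."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(xs):
        acc = sum((x >= t) == y for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def self_train(labeled_x, labeled_y, unlabeled_x, margin=0.5):
    """One self-training round: points near the decision boundary are
    skipped; confident ones receive the model's own prediction as label."""
    t = fit_threshold(labeled_x, labeled_y)
    xs, ys = list(labeled_x), list(labeled_y)
    for x in unlabeled_x:
        if abs(x - t) >= margin:      # confidence filter
            xs.append(x)
            ys.append(x >= t)         # pseudo-label
    return fit_threshold(xs, ys)

labeled_x = [0.1, 0.2, 0.9, 1.0]
labeled_y = [False, False, True, True]
unlabeled_x = [0.05, 0.15, 0.85, 0.95, 0.5]
t = self_train(labeled_x, labeled_y, unlabeled_x)
```

Here only the clearly negative points (0.05, 0.15) pass the confidence filter; the refit leaves the boundary unchanged, which is the desired behavior when pseudo-labels agree with the initial model.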
Affiliation(s)
- Po-Hsuan Jeng
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Cheng-Gong Road, Neihu District, Taipei, Taiwan 114, Republic of China
- Graduate Institute of Medical Science, National Defense Medical Center, Taipei, Taiwan
- Chien-Yi Yang
- Division of General Surgery, Department of Surgery, Tri-Service General Hospital Songshan Branch, National Defense Medical Center, Taipei, Taiwan, Republic of China
- Tien-Ru Huang
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Cheng-Gong Road, Neihu District, Taipei, Taiwan 114, Republic of China
- Chung-Feng Kuo
- Department of Material Science & Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan, Republic of China
- Shao-Cheng Liu
- Department of Otolaryngology-Head and Neck Surgery, Tri-Service General Hospital, National Defense Medical Center, No. 325, Sec. 2, Cheng-Gong Road, Neihu District, Taipei, Taiwan 114, Republic of China
3
Zwijnen AW, Watzema L, Ridwan Y, van der Pluijm I, Smal I, Essers J. Self-adaptive deep learning-based segmentation for universal and functional clinical and preclinical CT image analysis. Comput Biol Med 2024; 179:108853. [PMID: 39013341] [DOI: 10.1016/j.compbiomed.2024.108853]
Abstract
BACKGROUND Methods to monitor cardiac functioning non-invasively can accelerate preclinical and clinical research into novel treatment options for heart failure. However, manual image analysis of cardiac substructures is resource-intensive and error-prone. While automated methods exist for clinical CT images, translating these to preclinical μCT data is challenging. We employed deep learning to automate the extraction of quantitative data from both CT and μCT images. METHODS We collected a public dataset of cardiac CT images of human patients, as well as acquired μCT images of wild-type and accelerated aging mice. The left ventricle, myocardium, and right ventricle were manually segmented in the μCT training set. After template-based heart detection, two separate segmentation neural networks were trained using the nnU-Net framework. RESULTS The mean Dice score of the CT segmentation results (0.925 ± 0.019, n = 40) was superior to those achieved by state-of-the-art algorithms. Automated and manual segmentations of the μCT training set were nearly identical. The estimated median Dice score (0.940) of the test set results was comparable to existing methods. The automated volume metrics were similar to manual expert observations. In aging mice, ejection fractions had significantly decreased, and myocardial volume increased by age 24 weeks. CONCLUSIONS With further optimization, automated data extraction expands the application of (μ)CT imaging, while reducing subjectivity and workload. The proposed method efficiently measures the left and right ventricular ejection fraction and myocardial mass. With uniform translation between image types, cardiac functioning in diastolic and systolic phases can be monitored in both animals and humans.
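Several entries in this list report segmentation quality as a Dice score. For reference, the standard definition (twice the overlap divided by the summed mask sizes) can be computed for flat binary masks as below; this is the generic metric, not code from the nnU-Net pipeline itself.

```python
# Dice similarity coefficient for flat binary masks: 2*|A & B| / (|A| + |B|).

def dice(pred, truth):
    """pred, truth: equal-length sequences of 0/1 voxel labels."""
    assert len(pred) == len(truth)
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0        # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
score = dice(pred, truth)   # 2*2 / (3+3)
```

A 3D volume is handled identically after flattening; frameworks differ only in how they treat the empty-mask corner case.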
Affiliation(s)
- Anne-Wietje Zwijnen
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands
- Yanto Ridwan
- AMIE Core Facility, Erasmus Medical Center, Rotterdam, the Netherlands
- Ingrid van der Pluijm
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- Ihor Smal
- Department of Cell Biology, Erasmus University Medical Center, Rotterdam, the Netherlands
- Jeroen Essers
- Department of Molecular Genetics, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Vascular Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands; Department of Radiotherapy, Erasmus University Medical Center, Rotterdam, the Netherlands
4
Shao J, Cao J, Wang C, Xu P, Lou L, Ye J. Automatic Measurement and Comparison of Normal Eyelid Contour by Age and Gender Using Image-Based Deep Learning. Ophthalmol Sci 2024; 4:100518. [PMID: 38881605] [PMCID: PMC11179404] [DOI: 10.1016/j.xops.2024.100518]
Abstract
Purpose This study aimed to propose a fully automatic eyelid measurement system and to compare the upper and lower eyelid contours of normal individuals by age and gender. Design Prospective study. Participants Five hundred forty healthy Chinese individuals aged 0 to 79 years at a tertiary hospital were included. Methods Facial images in the primary gaze position were used to train and test the proposed automatic system for eye recognition and eye segmentation. Using a 10-mm-diameter circular marker as a reference, measurements were converted from pixel sizes into physical distances. Main Outcome Measures Midpupil lid distances (MPLDs) at every 15° were automatically measured for all participants in both genders (30 males and 30 females in each age group) by the proposed deep learning (DL)-based system. Intraclass correlation coefficients (ICCs) were used to assess the agreement between automatic and manual margin reflex distances (MRDs). The eyelid contour, eyelid asymmetry, and palpebral fissure obliquity were analyzed using the MPLD, the temporal-versus-nasal MPLD ratio, and the angle between the inner and outer canthi, respectively. Results The automatic system's MRD measurements agreed excellently with those of the expert, with ICCs ranging from 0.863 to 0.886. With increasing age, MPLD values peaked in participants in their 20s or 30s and then gradually decreased at all angles. The temporal sector showed greater changes in MPLDs than the nasal sector, and the changes were more pronounced in females than in males. The maximum palpebral fissure obliquity appeared before 10 years of age in both genders and remained relatively stable after the 20s (P > 0.05). Conclusions The proposed DL-based eyelid analysis system allowed automatic, accurate, and comprehensive measurement of the eyelid contour. The refinement of eyelid shape quantification could benefit future objective assessment before and after ocular plastic surgery. Financial Disclosures The authors have no proprietary or commercial interest in any materials discussed in this article.
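The marker-based calibration step can be sketched as follows: a circular marker of known 10-mm diameter fixes the millimeters-per-pixel scale, after which pixel measurements such as margin reflex distances convert to millimeters. The marker size in pixels and the MRD value below are made-up inputs for illustration.

```python
# Pixel-to-millimeter calibration from a reference marker of known size.

def mm_per_pixel(marker_diameter_px, marker_diameter_mm=10.0):
    """Scale factor from the marker's known physical diameter."""
    return marker_diameter_mm / marker_diameter_px

def to_mm(distance_px, scale):
    """Convert an image-space distance to physical units."""
    return distance_px * scale

scale = mm_per_pixel(marker_diameter_px=250.0)  # 10 mm spans 250 px
mrd1_px = 100.0                                 # MRD measured in pixels
mrd1_mm = to_mm(mrd1_px, scale)                 # physical distance in mm
```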
Affiliation(s)
- Ji Shao, Jing Cao, Changjun Wang, Peifang Xu, Lixia Lou, and Juan Ye
- Eye Center, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Zhejiang Provincial Key Laboratory of Ophthalmology, Zhejiang Provincial Clinical Research Center for Eye Diseases, Zhejiang Provincial Engineering Institute on Eye Diseases, Hangzhou, Zhejiang, China
5
Zeng Y, Liu H, Hu J, Zhao Z, She Q. Pretrained subtraction and segmentation model for coronary angiograms. Sci Rep 2024; 14:19888. [PMID: 39191858] [DOI: 10.1038/s41598-024-71063-5]
Abstract
This study introduces a novel self-supervised learning method for single-frame subtraction and vessel segmentation in coronary angiography, addressing the scarcity of annotated medical samples in AI applications. We pretrain a U-Net model on a large dataset of unannotated coronary angiograms using an image-to-image translation framework, then fine-tune it on a limited set of manually annotated samples. The pretrained model excels at comprehensive single-frame subtraction, outperforming existing DSA methods. Fine-tuning with just 40 samples yields a Dice coefficient of 0.828 for vessel segmentation. On the public XCAD dataset, our model sets a new state-of-the-art benchmark with a Dice coefficient of 0.755, surpassing both unsupervised and supervised learning approaches. This method achieves robust single-frame subtraction and demonstrates that combining pretraining with minimal fine-tuning enables accurate coronary vessel segmentation with limited manual annotations. We successfully apply this approach to assist physicians in visualizing potential vascular stenosis sites during coronary angiography. Code, dataset, and a live demo are available at: https://github.com/newfyu/DeepSA
Affiliation(s)
- Yunjie Zeng
- Department of Cardiology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, 400010, China
- Department of Cardiology, The Affiliated Dazu's Hospital of Chongqing Medical University, Chongqing, 402360, China
- Han Liu
- Department of Neurology, Jiulongpo District People's Hospital, Chongqing, 400050, China
- Juan Hu
- The First Affiliated Hospital of Chongqing Medical and Pharmaceutical College, Chongqing, 400060, China
- Zhengbo Zhao
- Department of Cardiology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, 400010, China
- Qiang She
- Department of Cardiology, The Second Affiliated Hospital of Chongqing Medical University, Chongqing, 400010, China
6
Alom MS, Daneshkhah A, Acosta N, Anthony N, Liwag EP, Backman V, Gaire SK. Deep Learning-driven Automatic Nuclei Segmentation of Label-free Live Cell Chromatin-sensitive Partial Wave Spectroscopic Microscopy Imaging. bioRxiv [Preprint] 2024:2024.08.20.608885. [PMID: 39229026] [PMCID: PMC11370422] [DOI: 10.1101/2024.08.20.608885]
Abstract
Chromatin-sensitive Partial Wave Spectroscopic (csPWS) microscopy offers a non-invasive glimpse into the mass density distribution of cellular structures at the nanoscale by leveraging spectroscopic information. This capability allows analysis of chromatin structure and organization and of the global transcriptional state of cell nuclei, for the study of their role in carcinogenesis. Accurate segmentation of nuclei in csPWS microscopy images is an essential step in isolating them for further analysis. However, manual segmentation is error-prone, biased, time-consuming, and laborious, resulting in disrupted nuclear boundaries with partial or over-segmentation. Here, we present an innovative deep-learning-driven approach to automate accurate nuclei segmentation of label-free live cell csPWS microscopy imaging data. Our approach, csPWS-seg, harnesses a convolutional neural network-based U-Net model with an attention mechanism to automate cell nuclei segmentation of csPWS microscopy images. We leveraged the structural, physical, and biological differences between the cytoplasm, nucleus, and nuclear periphery to construct three distinct csPWS feature images for nucleus segmentation. Using these images of HCT116 cells, csPWS-seg achieved superior performance, with a median Intersection over Union (IoU) of 0.80 and a Dice Similarity Coefficient (DSC) of 0.88. csPWS-seg surpassed the segmentation performance of both the baseline U-Net model and another attention-based model, SE-U-Net, marking a significant improvement in segmentation accuracy. We further analyzed the performance of our proposed model with four loss functions: binary cross-entropy loss, focal loss, Dice loss, and Jaccard loss. csPWS-seg with focal loss provided the best results. The automatic and accurate nuclei segmentation offered by csPWS-seg not only automates, accelerates, and streamlines csPWS data analysis but also enhances the reliability of subsequent chromatin analysis, paving the way for more accurate diagnostics, treatment, and understanding of cellular mechanisms of carcinogenesis.
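The focal loss compared above has the common binary form FL(p_t) = -(1 - p_t)^gamma * log(p_t). The sketch below contrasts it with plain binary cross-entropy for a single pixel; gamma and the probabilities are illustrative, since the abstract does not state the settings used.

```python
# Binary cross-entropy vs. focal loss for one pixel. Focal loss
# down-weights easy, well-classified pixels so training focuses on
# hard ones (e.g. ambiguous nuclear boundaries).

import math

def bce(p, y):
    """Cross-entropy with predicted foreground probability p, label y."""
    p_t = p if y == 1 else 1.0 - p
    return -math.log(p_t)

def focal(p, y, gamma=2.0):
    """Focal loss: the (1 - p_t)^gamma factor shrinks easy examples."""
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

easy = focal(0.9, 1)   # confidently correct: loss shrinks sharply vs BCE
hard = focal(0.1, 1)   # badly wrong: nearly the full BCE weight is kept
```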
7
Cai F, Wen J, He F, Xia Y, Xu W, Zhang Y, Jiang L, Li J. SC-Unext: A Lightweight Image Segmentation Model with Cellular Mechanism for Breast Ultrasound Tumor Diagnosis. J Imaging Inform Med 2024; 37:1505-1515. [PMID: 38424276] [PMCID: PMC11300774] [DOI: 10.1007/s10278-024-01042-9]
Abstract
Automatic breast ultrasound image segmentation plays an important role in medical image processing. However, current methods for breast ultrasound segmentation suffer from high computational complexity and large model parameter counts, particularly when dealing with complex images. In this paper, we take the Unext network as a basis, utilizing its encoder-decoder features, and, taking inspiration from the cellular mechanisms of apoptosis and division, design apoptosis and division algorithms to improve model performance. We propose a novel segmentation model that integrates the division and apoptosis algorithms and introduces spatial and channel convolution blocks. Our proposed model not only improves the segmentation performance for breast ultrasound tumors but also reduces model parameters and computation time. The model was evaluated on a public breast ultrasound image dataset and on our collected dataset. The experiments show that the SC-Unext model achieved a Dice score of 75.29% and an accuracy of 97.09% on the BUSI dataset, and a Dice score of 90.62% and an accuracy of 98.37% on the collected dataset. Meanwhile, we compared the model's inference speed on CPUs to verify its efficiency in resource-constrained environments. The results indicate that the SC-Unext model achieves an inference speed of 92.72 ms per instance on devices equipped only with CPUs. The model's parameter count and computational cost are 1.46 M and 2.13 GFlops, respectively, which are lower than those of other network models. Due to its lightweight nature, the model holds significant value for various practical applications in the medical field.
Affiliation(s)
- Fenglin Cai
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Jiaying Wen
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Fangzhou He
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
- Yulong Xia
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Weijun Xu
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Yong Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Li Jiang
- Department of Neurosurgery, The First Affiliated Hospital of Chongqing Medical University, Chongqing, 400016, People's Republic of China
- Jie Li
- Department of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, People's Republic of China
8
Islam F, Das S, Ashaduzzaman M, Sillman B, Yeapuri P, Nayan MU, Oupický D, Gendelman HE, Kevadiya BD. Development of an extended action fostemsavir lipid nanoparticle. Commun Biol 2024; 7:917. [PMID: 39080401] [PMCID: PMC11289258] [DOI: 10.1038/s42003-024-06589-5]
Abstract
An extended action fostemsavir (FTR) lipid nanoparticle (LNP) formulation prevents human immunodeficiency virus type one (HIV-1) infection. This FTR formulation establishes a drug depot in monocyte-derived macrophages that extends the drug's plasma residence time. The LNP's physicochemical properties improve FTR's antiretroviral activities, which are linked to the drug's ability to withstand fluid flow forces and to its level of cellular internalization. Each depends, in part, on the PEGylated lipid composition and flow rate ratios, which affect the size, polydispersity, shape, zeta potential, stability, biodistribution, and antiretroviral efficacy. These physicochemical properties enable the drug particle's extended actions.
Affiliation(s)
- Farhana Islam
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- Department of Biochemistry and Molecular Biology, University of Nebraska Medical Center, Omaha, NE, USA
- Srijanee Das
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, USA
- Md Ashaduzzaman
- Department of Computer Science, University of Nebraska Omaha, Omaha, NE, 68182, USA
- Brady Sillman
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- Pravin Yeapuri
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- Mohammad Ullah Nayan
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- David Oupický
- Center for Drug Delivery and Nanomedicine, Department of Pharmaceutical Sciences, College of Pharmacy, University of Nebraska Medical Center, Omaha, NE, USA
- Howard E Gendelman
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE, USA
- Bhavesh D Kevadiya
- Department of Pharmacology and Experimental Neuroscience, University of Nebraska Medical Center, Omaha, NE, USA
9
Pan X, Wang D. GC Snakes: An Efficient and Robust Segmentation Model for Hot Forging Images. Sensors (Basel) 2024; 24:4821. [PMID: 39123869] [PMCID: PMC11314881] [DOI: 10.3390/s24154821]
Abstract
Machine vision is a desirable non-contact measurement method for hot forgings, but image segmentation remains challenging in both performance and robustness owing to the diversity of working conditions for hot forgings. This paper therefore proposes an efficient and robust active contour model and a corresponding image segmentation approach for forging images; verification experiments measuring geometric parameters of forging parts were conducted to demonstrate the method's performance. Specifically, three types of continuity parameters are defined based on the geometric continuity of equivalent grayscale surfaces of forging images; from these, a new image force and external energy functional are proposed to form a new active contour model, Geometric Continuity Snakes (GC Snakes), which is more responsive to the grayscale distribution characteristics of forging images and robustly improves convergence of the active contour. Additionally, a strategy for generating initial control points for GC Snakes is proposed, composing an efficient and robust image segmentation approach. The experimental results show that GC Snakes achieves better segmentation performance than existing active contour models on forging images of different temperatures and sizes, providing better performance and efficiency in geometric parameter measurement for hot forgings. The maximum positioning and dimension errors with GC Snakes are 0.5525 mm and 0.3868 mm, respectively, compared with errors of 0.7873 mm and 0.6868 mm for the Snakes model.
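The general active-contour mechanism behind such models can be illustrated with a greedy snake iteration on a toy image: each control point moves to the neighboring pixel that minimizes internal (continuity) plus external (image) energy. The energy terms below are generic stand-ins, not the geometric-continuity image force defined by GC Snakes.

```python
# Greedy active-contour (snake) iteration on a toy image.

import math

def external_energy(img, y, x):
    # Negative squared gradient magnitude: low energy on strong edges.
    h, w = len(img), len(img[0])
    gx = img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]
    gy = img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]
    return -(gx * gx + gy * gy)

def greedy_step(img, contour, alpha=0.5):
    # Move each point to the lowest-energy pixel in its 3x3 neighborhood;
    # the internal term penalizes stretching away from the previous point.
    h, w = len(img), len(img[0])
    new = []
    for i, (y, x) in enumerate(contour):
        py, px = contour[i - 1]  # previous control point (wraps around)
        best, best_e = (y, x), math.inf
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                internal = (ny - py) ** 2 + (nx - px) ** 2
                e = alpha * internal + external_energy(img, ny, nx)
                if e < best_e:
                    best_e, best = e, (ny, nx)
        new.append(best)
    return new

# Toy image: a bright square on a dark background. A contour started
# away from the square is pulled onto its edges over a few iterations.
img = [[1.0 if 3 <= r <= 6 and 3 <= c <= 6 else 0.0 for c in range(10)]
       for r in range(10)]
contour = [(1, 1), (1, 8), (8, 8), (8, 1)]
for _ in range(5):
    contour = greedy_step(img, contour)
```

GC Snakes replaces the gradient-based external energy here with an image force built from the paper's geometric-continuity parameters, which is what makes it better suited to the grayscale distributions of forging images.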
Affiliation(s)
- Delun Wang
- School of Mechanical Engineering, Dalian University of Technology, Dalian 116024, China
10
Luo H, Li J, Huang H, Jiao L, Zheng S, Ying Y, Li Q. AI-based segmentation of renal enhanced CT images for quantitative evaluate of chronic kidney disease. Sci Rep 2024; 14:16890. [PMID: 39043766] [PMCID: PMC11266695] [DOI: 10.1038/s41598-024-67658-7]
Abstract
To quantitatively evaluate chronic kidney disease (CKD), a deep convolutional neural network-based segmentation model was applied to renal enhanced computed tomography (CT) images. A retrospective analysis was conducted on a cohort of 100 individuals diagnosed with CKD and 90 individuals with healthy kidneys who underwent contrast-enhanced CT scans of the kidneys or abdomen. Demographic and clinical data were collected from all participants. The study consisted of two stages: first, the development and validation of a three-dimensional (3D) nnU-Net model for segmenting the arterial phase of renal enhanced CT scans; second, the use of the 3D nnU-Net model for quantitative evaluation of CKD. The 3D nnU-Net model achieved a mean Dice Similarity Coefficient (DSC) of 93.53% for renal parenchyma and 81.48% for renal cortex. Statistically significant differences were observed among different stages of renal function for renal parenchyma volume (VRP), renal cortex volume (VRC), renal medulla volume (VRM), and the CT values of renal parenchyma (HuRP), renal cortex (HuRC), and renal medulla (HuRM) (F = 93.476, 144.918, 9.637, 170.533, 216.616, and 94.283; all p < 0.001). Pearson correlation analysis revealed significant positive associations between estimated glomerular filtration rate (eGFR) and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = 0.749, 0.818, 0.321, 0.819, 0.820, and 0.747, respectively; all p < 0.001). Similarly, negative correlations were observed between serum creatinine (Scr) levels and VRP, VRC, VRM, HuRP, HuRC, and HuRM (r = -0.759, -0.777, -0.420, -0.762, -0.771, and -0.726, respectively; all p < 0.001). For predicting CKD in males, VRP had an area under the curve (AUC) of 0.726, p < 0.001; VRC, AUC 0.765, p < 0.001; VRM, AUC 0.578, p = 0.018; HuRP, AUC 0.912, p < 0.001; HuRC, AUC 0.952, p < 0.001; and HuRM, AUC 0.772, p < 0.001. In females, VRP had an AUC of 0.813, p < 0.001; VRC, AUC 0.851, p < 0.001; VRM, AUC 0.623, p = 0.060; HuRP, AUC 0.904, p < 0.001; HuRC, AUC 0.934, p < 0.001; and HuRM, AUC 0.840, p < 0.001. The optimal cutoff values for predicting CKD with HuRP are 99.9 Hu for males and 98.4 Hu for females, and with HuRC, 120.1 Hu for males and 111.8 Hu for females. The kidney was effectively segmented by our AI-based 3D nnU-Net model for enhanced renal CT images. For mild kidney injury, the CT values exhibited higher sensitivity than kidney volume. The correlation analysis revealed stronger associations of VRC, HuRP, and HuRC with renal function, weaker associations for VRP and HuRM, and the weakest for VRM. In particular, HuRP and HuRC demonstrated significant potential for predicting renal function. For diagnosing CKD, the recommended thresholds are HuRP < 99.9 Hu and HuRC < 120.1 Hu in males, and HuRP < 98.4 Hu and HuRC < 111.8 Hu in females.
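A diagnostic cutoff such as "HuRP < 99.9 Hu" is typically derived from the ROC analysis. One common rule, sketched below with invented CT values, keeps the threshold that maximizes Youden's J (sensitivity + specificity - 1); whether this study used Youden's J is not stated in the abstract.

```python
# Choosing a diagnostic cutoff by maximizing Youden's J.
# Subjects test positive for CKD when their value falls BELOW the cutoff,
# matching the direction of the thresholds reported above.

def youden_cutoff(values, has_ckd):
    """values: one measurement per subject; has_ckd: matching bools."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(values)):
        tp = sum(v < cut and d for v, d in zip(values, has_ckd))
        fn = sum(v >= cut and d for v, d in zip(values, has_ckd))
        tn = sum(v >= cut and not d for v, d in zip(values, has_ckd))
        fp = sum(v < cut and not d for v, d in zip(values, has_ckd))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut

hu = [120, 115, 110, 105, 96, 94, 92, 88]   # invented HuRP-like values
ckd = [False, False, False, False, True, True, True, True]
cut = youden_cutoff(hu, ckd)
```

In this toy data the classes separate perfectly, so the chosen cutoff attains sensitivity and specificity of 1; on real data the maximized J trades the two off.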
Affiliation(s)
- Hui Luo
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Jingzhen Li
- Department of Nephrology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Haiyang Huang
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Lianghong Jiao
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Siyuan Zheng
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Yibo Ying
- Department of Radiology, Ningbo Yinzhou Second Hospital, Ningbo, China
- Qiang Li
- Department of Radiology, The Affiliated People's Hospital of Ningbo University, Ningbo, 315000, China.
|
11
|
D S CS, Christopher Clement J. Enhancing brain tumor segmentation in MRI images using the IC-net algorithm framework. Sci Rep 2024; 14:15660. [PMID: 38977779 PMCID: PMC11231217 DOI: 10.1038/s41598-024-66314-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2024] [Accepted: 07/01/2024] [Indexed: 07/10/2024] Open
Abstract
Brain tumors, often referred to as intracranial tumors, are abnormal tissue masses that arise from rapidly multiplying cells. During medical imaging, it is essential to separate brain tumors from healthy tissue. The goal of this paper is to improve the accuracy of separating tumorous regions from healthy tissue in medical imaging, specifically for brain tumors in MRI images, which remains a difficult problem in medical image analysis. In our research work, we propose IC-Net (Inverted-C), a novel semantic segmentation architecture that combines elements from various models to provide effective and precise results. The architecture includes Multi-Attention (MA) blocks, Feature Concatenation Networks (FCN), and Attention blocks, which perform crucial tasks in improving brain tumor segmentation. The MA block aggregates multi-attention features to adapt to different tumor sizes and shapes. The Attention block focuses on key regions, resulting in more effective segmentation of complex images. The FCN block captures diverse features, making the model more robust to the varied characteristics of brain tumor images. The proposed architecture accelerates the training process and addresses the challenges posed by the diverse nature of brain tumor images, ultimately leading to improved segmentation performance. IC-Net significantly outperforms the typical U-Net architecture and other contemporary segmentation techniques. On the BraTS 2020 dataset, IC-Net achieved an accuracy of 99.65, a loss of 0.0159, a specificity of 99.44, and a sensitivity of 99.86, with DSC scores of 0.998717, 0.888930, and 0.866183 for the core, whole, and enhancing tumor regions, respectively.
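The DSC values quoted here, and in several neighboring entries, follow the standard Dice definition. A minimal NumPy sketch for a pair of binary masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```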
Affiliation(s)
- Chandra Sekaran D S
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India
- J Christopher Clement
- School of Electronics Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014, India.
|
12
|
Doğru D, Özdemir GD, Özdemir MA, Ercan UK, Topaloğlu Avşar N, Güren O. An automated in vitro wound healing microscopy image analysis approach utilizing U-net-based deep learning methodology. BMC Med Imaging 2024; 24:158. [PMID: 38914942 PMCID: PMC11197287 DOI: 10.1186/s12880-024-01332-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2024] [Accepted: 06/13/2024] [Indexed: 06/26/2024] Open
Abstract
BACKGROUND The assessment of in vitro wound healing images is critical for determining the efficacy of a therapy-of-interest that may influence the wound healing process. Existing methods suffer from significant limitations, such as user dependency, time-consuming operation, and lack of sensitivity, paving the way for automated analysis approaches. METHODS Three structurally different variations of the U-net architecture, based on convolutional neural networks (CNN), were implemented for the segmentation of in vitro wound healing microscopy images. The models were trained on two independent datasets after preprocessing and a novel augmentation method aimed at more sensitive analysis of edges. The predicted masks were then used for accurate calculation of wound areas. Finally, the wound areas, which indicate therapy efficacy, were thoroughly compared with those from well-known tools such as ImageJ and TScratch. RESULTS The average Dice similarity coefficient (DSC) scores of the U-net-based deep learning models ranged from 0.958 to 0.968. The average absolute percentage errors (PE) of predicted wound areas relative to ground truth were 6.41%, 3.70%, and 3.73% for U-net, U-net++, and Attention U-net, respectively, while ImageJ and TScratch had considerably higher average error rates of 22.59% and 33.88%, respectively. CONCLUSIONS Comparative analyses revealed that the developed models outperformed the conventional approaches in terms of analysis time and segmentation sensitivity. The developed models also hold great promise for prediction of the in vitro wound area, regardless of the therapy-of-interest, cell line, microscope magnification, or other application-dependent parameters.
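The percentage errors reported above compare a predicted wound area against its ground truth; the per-image computation is simply the absolute relative error, as in this minimal sketch (function name illustrative):

```python
def area_percentage_error(predicted_area, ground_truth_area):
    """Absolute percentage error of a predicted wound area
    relative to the ground-truth area."""
    return abs(predicted_area - ground_truth_area) / ground_truth_area * 100.0
```

The study-level figures are then averages of this quantity over all evaluated images.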
Affiliation(s)
- Dilan Doğru
- Department of Biomedical Engineering, Graduate School of Natural and Applied Sciences, Izmir Katip Celebi University, Izmir, Turkey
- Gizem D Özdemir
- Department of Biomedical Engineering, Graduate School of Natural and Applied Sciences, Izmir Katip Celebi University, Izmir, Turkey
- Department of Biomedical Engineering, Faculty of Engineering and Architecture, Izmir Katip Celebi University, Izmir, Turkey
- Mehmet A Özdemir
- Department of Biomedical Engineering, Graduate School of Natural and Applied Sciences, Izmir Katip Celebi University, Izmir, Turkey.
- Department of Biomedical Engineering, Faculty of Engineering and Architecture, Izmir Katip Celebi University, Izmir, Turkey.
- Utku K Ercan
- Department of Biomedical Engineering, Faculty of Engineering and Architecture, Izmir Katip Celebi University, Izmir, Turkey
- Nermin Topaloğlu Avşar
- Department of Biomedical Engineering, Faculty of Engineering and Architecture, Izmir Katip Celebi University, Izmir, Turkey
- Onan Güren
- Department of Biomedical Engineering, Faculty of Engineering and Architecture, Izmir Katip Celebi University, Izmir, Turkey.
|
13
|
Watanabe H, Fukuda H, Ezawa Y, Matsuyama E, Kondo Y, Hayashi N, Ogura T, Shimosegawa M. Automated angular measurement for puncture angle using a computer-aided method in ultrasound-guided peripheral insertion. Phys Eng Sci Med 2024; 47:679-689. [PMID: 38358620 DOI: 10.1007/s13246-024-01397-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Accepted: 01/28/2024] [Indexed: 02/16/2024]
Abstract
Ultrasound guidance has become the gold standard for obtaining vascular access. Angle information, which indicates the entry angle of the needle into the vein, is required to ensure puncture success. Although various image-processing-based methods, such as deep learning, have recently been applied to improve needle visibility, these methods have a limitation in that the puncture angle relative to the target organ is not measured. We aim to detect the target vessel and the puncture needle and to derive the puncture angle by combining deep learning with conventional image processing methods such as the Hough transform. Median cubital vein US images were obtained from 20 healthy volunteers, and images of simulated blood vessels and needles were obtained during the puncture of a simulated blood vessel in four phantoms. The U-Net architecture was used to segment images of blood vessels and needles, and various image processing methods were employed to automatically measure angles. The experimental results indicated that the mean Dice coefficients for median cubital veins, simulated blood vessels, and needles were 0.826, 0.931, and 0.773, respectively. The quantitative results of the angular measurement showed good agreement between expert and automatic measurements of the puncture angle, with a correlation of 0.847. Our findings indicate that the proposed method achieves very high segmentation accuracy and automated angular measurement. The proposed method reduces the variability and time required for manual angle measurement and may allow the operator to concentrate on the delicate techniques related to the direction of the needle.
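The angle-derivation step can be sketched as follows. Note that this illustrative version estimates each structure's axis with PCA on the segmented mask rather than the Hough transform the authors used; the final angle computation between the two axes is the same idea:

```python
import numpy as np

def line_direction(mask):
    """Principal direction of the foreground pixels in a binary mask,
    estimated with PCA via SVD (an illustrative stand-in for the
    Hough-transform line fit used in the paper)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    return vt[0]  # unit vector along the dominant axis

def puncture_angle_deg(needle_mask, vessel_mask):
    """Acute angle between the needle axis and the vessel axis, in degrees."""
    a = line_direction(needle_mask)
    b = line_direction(vessel_mask)
    cos_theta = abs(float(np.dot(a, b)))  # abs() folds the result into [0°, 90°]
    return float(np.degrees(np.arccos(np.clip(cos_theta, 0.0, 1.0))))
```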
Affiliation(s)
- Haruyuki Watanabe
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan.
- Hironori Fukuda
- Department of Radiology, Cardiovascular Hospital of Central Japan, Shibukawa, Japan
- Yuina Ezawa
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Eri Matsuyama
- Faculty of Informatics, The University of Fukuchiyama, Fukuchiyama, Japan
- Yohan Kondo
- Graduate School of Health Sciences, Niigata University, Niigata, Japan
- Norio Hayashi
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Toshihiro Ogura
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
- Masayuki Shimosegawa
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
|
14
|
Chen J, Chen R, Chen L, Zhang L, Wang W, Zeng X. Kidney medicine meets computer vision: a bibliometric analysis. Int Urol Nephrol 2024:10.1007/s11255-024-04082-w. [PMID: 38814370 DOI: 10.1007/s11255-024-04082-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Accepted: 05/16/2024] [Indexed: 05/31/2024]
Abstract
BACKGROUND AND OBJECTIVE Rapid advances in computer vision (CV) have the potential to facilitate the examination, diagnosis, and treatment of kidney diseases. This bibliometric study aims to explore the research landscape and evolving focus of the application of CV in kidney medicine research. METHODS The Web of Science Core Collection was used to identify publications related to research on, or applications of, CV technology in kidney medicine from January 1, 1900, to December 31, 2022. We analyzed emerging research trends; highly influential publications and journals; prolific researchers, countries/regions, and research institutions; co-authorship networks; and co-occurrence networks. Bibliographic information was analyzed and visualized using Python, Matplotlib, Seaborn, HistCite, and VOSviewer. RESULTS The number of publications on CV-based kidney medicine research is increasing. These publications mainly focus on medical image processing, surgical procedures, medical image analysis and diagnosis, and the application and innovation of CV technology in medical imaging. The United States is currently the leading country in terms of the number of published articles and international collaborations, followed by China. Deep learning-based segmentation and machine learning-based texture analysis are the most commonly used techniques in this field. Regarding research hotspot trends, CV algorithms are shifting toward artificial intelligence, research objects are expanding to a wider range of kidney-related structures, and the data used are transitioning from 2D to 3D while incorporating more diverse modalities. CONCLUSION This study provides a scientometric overview of current progress in the research and application of CV technology in kidney medicine.
Through bibliometric analysis and network visualization, we elucidate emerging trends, key sources, leading institutions, and popular topics. Our findings are expected to provide valuable insights for future research on the use of CV in kidney medicine.
Affiliation(s)
- Junren Chen
- Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Med-X Center for Informatics, Sichuan University, Chengdu, 610041, Sichuan, China
- Rui Chen
- Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Liangyin Chen
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Lei Zhang
- School of Computer Science, Sichuan University, Chengdu, 610065, Sichuan, China
- Wei Wang
- School of Automation, Chengdu University of Information Technology, Chengdu, 610225, Sichuan, China
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, Sichuan, China
- Xiaoxi Zeng
- Department of Nephrology and West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, 610041, Sichuan, China.
- Med-X Center for Informatics, Sichuan University, Chengdu, 610041, Sichuan, China.
|
15
|
van Lohuizen Q, Roest C, Simonis FFJ, Fransen SJ, Kwee TC, Yakar D, Huisman H. Assessing deep learning reconstruction for faster prostate MRI: visual vs. diagnostic performance metrics. Eur Radiol 2024:10.1007/s00330-024-10771-y. [PMID: 38724765 DOI: 10.1007/s00330-024-10771-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2024] [Revised: 02/16/2024] [Accepted: 03/09/2024] [Indexed: 05/31/2024]
Abstract
OBJECTIVE Deep learning (DL) MRI reconstruction enables fast scan acquisition with good visual quality, but its diagnostic impact is often not assessed because of the large reader studies required. This study used an existing diagnostic DL model to assess the diagnostic quality of reconstructed images. MATERIALS AND METHODS A retrospective multisite study of 1535 patients assessed biparametric prostate MRI acquired between 2016 and 2020. Likely clinically significant prostate cancer (csPCa) lesions (PI-RADS ≥ 4) were delineated by expert radiologists. T2-weighted scans were retrospectively undersampled, simulating accelerated protocols. A DL reconstruction model (DLRecon) and a diagnostic DL detection model (DLDetect) were developed. The partial area under the Free-Response Operating Characteristic curve (pAUC-FROC) and the structural similarity index (SSIM) were compared as metrics of diagnostic and visual quality, respectively. DLDetect was validated with a reader concordance analysis. Statistical analysis included Wilcoxon, permutation, and Cohen's kappa tests for visual quality, diagnostic performance, and reader concordance. RESULTS DLRecon improved visual quality at 4- and 8-fold (R4, R8) subsampling rates, with SSIM (range: -1 to 1) improving to 0.78 ± 0.02 (p < 0.001) and 0.67 ± 0.03 (p < 0.001) from 0.68 ± 0.03 and 0.51 ± 0.03, respectively. However, diagnostic performance at R4 showed a pAUC-FROC of 1.33 (CI 1.28-1.39) for DL and 1.29 (CI 1.23-1.35) for naïve reconstructions, both significantly lower than the fully sampled pAUC of 1.58 (DL: p = 0.024; naïve: p = 0.02). Similar trends were noted for R8. CONCLUSION DL reconstruction produces visually appealing images but may reduce diagnostic accuracy. Incorporating diagnostic AI into the assessment framework offers a clinically relevant metric essential for adopting reconstruction models into clinical practice. CLINICAL RELEVANCE STATEMENT In clinical settings, caution is warranted when using DL reconstruction for MRI scans: although it recovered visual quality, it failed to match the prostate cancer detection rates observed in scans not subjected to acceleration and DL reconstruction.
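SSIM, the visual-quality metric used in this entry and elsewhere in the list, compares luminance, contrast, and structure between two images. A simplified sketch of the formula's structure, computed over the whole image with global statistics rather than the sliding Gaussian window used by standard implementations:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM over the whole image (global means/variances instead
    of the usual sliding window); it keeps the structure of the standard
    formula, including the stabilizing constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1; structural disagreement pulls the score down toward (and below) 0.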
Affiliation(s)
- Quintin van Lohuizen
- University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands.
- Christian Roest
- University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Frank F J Simonis
- University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Stefan J Fransen
- University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Thomas C Kwee
- University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Derya Yakar
- University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Henkjan Huisman
- Radboud University Medical Centre, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands
- Norwegian University of Science and Technology, Høgskoleringen 1, 7034, Trondheim, Norway
|
16
|
Wang S, Liang S, Chang Q, Zhang L, Gong B, Bai Y, Zuo F, Wang Y, Xie X, Gu Y. STSN-Net: Simultaneous Tooth Segmentation and Numbering Method in Crowded Environments with Deep Learning. Diagnostics (Basel) 2024; 14:497. [PMID: 38472969 DOI: 10.3390/diagnostics14050497] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2023] [Revised: 01/25/2024] [Accepted: 02/01/2024] [Indexed: 03/14/2024] Open
Abstract
Accurate tooth segmentation and numbering are the cornerstones of efficient automatic dental diagnosis and treatment. In this paper, a multitask learning architecture was proposed for accurate tooth segmentation and numbering in panoramic X-ray images. A graph convolutional network was applied for automatic annotation of the target region, a modified convolutional neural network-based detection subnetwork (DSN) was used for tooth recognition and boundary regression, and an effective region segmentation subnetwork (RSSN) was used for region segmentation. The features extracted by the RSSN and DSN were fused to optimize the quality of the boundary regression, yielding strong results across multiple evaluation metrics: the proposed framework achieved a top F1 score of 0.9849, a top Dice score of 0.9629, and an mAP (IoU = 0.5) of 0.9810. This framework holds great promise for enhancing the clinical efficiency of dentists in tooth segmentation and numbering tasks.
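The mAP figure above is evaluated at an intersection-over-union (IoU) threshold of 0.5: a detection counts as correct only if its box overlaps the ground-truth box by at least that much. A minimal sketch of IoU for two axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # zero if the boxes are disjoint
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area(box_a) + area(box_b) - inter)
```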
Affiliation(s)
- Shaofeng Wang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Shuang Liang
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Qiao Chang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Li Zhang
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Beiwen Gong
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Yuxing Bai
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Feifei Zuo
- LargeV Instrument Corp., Ltd., Beijing 100084, China
- Yajie Wang
- LargeV Instrument Corp., Ltd., Beijing 100084, China
- Xianju Xie
- Department of Orthodontics, Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
- Yu Gu
- School of Biomedical Engineering, Capital Medical University, Beijing 100069, China
- Laboratory for Clinical Medicine, Capital Medical University, Beijing 100069, China
|
17
|
Kakkos I, Vagenas TP, Zygogianni A, Matsopoulos GK. Towards Automation in Radiotherapy Planning: A Deep Learning Approach for the Delineation of Parotid Glands in Head and Neck Cancer. Bioengineering (Basel) 2024; 11:214. [PMID: 38534488 DOI: 10.3390/bioengineering11030214] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2023] [Revised: 02/19/2024] [Accepted: 02/22/2024] [Indexed: 03/28/2024] Open
Abstract
The delineation of the parotid glands in head and neck (HN) carcinoma is critical for radiotherapy (RT) planning. Segmentation ensures precise target positioning and treatment precision, facilitates monitoring of anatomical changes, enables plan adaptation, and enhances overall patient safety. In this context, artificial intelligence (AI) and deep learning (DL) have proven exceedingly effective at precisely outlining tumor tissues and, by extension, the organs at risk. This paper introduces a DL framework using the AttentionUNet neural network for automatic parotid gland segmentation in HN cancer. The model is extensively evaluated on two public datasets and one private dataset, and its segmentation accuracy is compared with other state-of-the-art DL segmentation schemes. To assess the necessity of replanning during treatment, an additional registration method is applied to the segmentation output, aligning images of different modalities (computed tomography (CT) and cone-beam CT (CBCT)). AttentionUNet outperforms similar DL methods (Dice similarity coefficient: 82.65% ± 1.03, Hausdorff distance: 6.24 mm ± 2.47), confirming its effectiveness. Moreover, the subsequent registration procedure shows increased similarity, providing insight into the effects of RT procedures on treatment planning adaptations. These results indicate the effectiveness of DL not only for automatic delineation of anatomical structures but also for providing information to support adaptive RT.
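The Hausdorff distance quoted above measures the worst-case boundary disagreement between a predicted and a reference contour: the largest distance from any point in one set to its nearest point in the other. A minimal brute-force sketch for small 2D point sets (real pipelines typically use an optimized routine such as SciPy's `directed_hausdorff`):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets of shape (N, 2) and (M, 2)."""
    # Pairwise Euclidean distances between every point in a and every point in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Worst nearest-neighbor distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```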
Affiliation(s)
- Ioannis Kakkos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Theodoros P Vagenas
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
- Anna Zygogianni
- Radiation Oncology Unit, 1st Department of Radiology, ARETAIEION University Hospital, 11528 Athens, Greece
- George K Matsopoulos
- Biomedical Engineering Laboratory, National Technical University of Athens, 15773 Athens, Greece
|
18
|
Alabdulhafith M, Ba Mahel AS, Samee NA, Mahmoud NF, Talaat R, Muthanna MSA, Nassef TM. Automated wound care by employing a reliable U-Net architecture combined with ResNet feature encoders for monitoring chronic wounds. Front Med (Lausanne) 2024; 11:1310137. [PMID: 38357646 PMCID: PMC10865496 DOI: 10.3389/fmed.2024.1310137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2023] [Accepted: 01/02/2024] [Indexed: 02/16/2024] Open
Abstract
Quality of life is greatly affected by chronic wounds, which require more intensive care than acute wounds, including scheduled follow-up appointments to track healing. Good wound treatment promotes healing and reduces complications. Wound care requires precise and reliable wound measurement to optimize patient treatment and outcomes according to evidence-based best practices. Images are used to objectively assess wound state by quantifying key healing parameters. Nevertheless, robust segmentation of wound images is complex because of the high diversity of wound types and imaging conditions. This study proposes and evaluates a novel hybrid model developed for wound segmentation in medical images. The model combines advanced deep learning techniques with traditional image processing methods to improve the accuracy and reliability of wound segmentation. The main objective is to overcome the limitations of existing segmentation methods (UNet) by leveraging the combined advantages of both paradigms. We introduce a hybrid architecture in which a ResNet34 is used as the encoder and a UNet as the decoder. The combination of ResNet34's deep representation learning and UNet's efficient feature extraction yields notable benefits: the design successfully integrates high-level and low-level features, enabling segmentation maps of high precision and accuracy. Applying our model to the data, we obtained an Intersection over Union (IoU) of 0.973, a Dice score of 0.986, and an accuracy of 0.9736. These results indicate that the proposed method is more precise and accurate than the current state of the art.
Affiliation(s)
- Maali Alabdulhafith
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Abduljabbar S. Ba Mahel
- School of Life Science, University of Electronic Science and Technology of China, Chengdu, China
- Nagwan Abdel Samee
- Department of Information Technology, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Noha F. Mahmoud
- Rehabilitation Sciences Department, Health and Rehabilitation Sciences College, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Rawan Talaat
- Biotechnology and Genetics Department, Agriculture Engineering, Ain Shams University, Cairo, Egypt
- Tamer M. Nassef
- Computer and Software Engineering Department, Engineering College, Misr University for Science and Technology, 6th of October, Egypt
|
19
|
Ueda T, Yamashita K, Kawazoe R, Sayawaki Y, Morisawa Y, Kamezaki R, Ikeda R, Shiraishi S, Uchiyama Y, Ito S. Feasibility of direct brain 18F-fluorodeoxyglucose-positron emission tomography attenuation and high-resolution correction methods using deep learning. ASIA OCEANIA JOURNAL OF NUCLEAR MEDICINE & BIOLOGY 2024; 12:108-119. [PMID: 39050241 PMCID: PMC11263769 DOI: 10.22038/aojnmb.2024.74875.1522] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Subscribe] [Scholar Register] [Received: 09/12/2023] [Revised: 11/24/2023] [Accepted: 01/11/2024] [Indexed: 07/27/2024]
Abstract
Objectives To develop three attenuation correction (AC) methods for brain 18F-fluorodeoxyglucose-positron emission tomography (PET) using deep learning, and to ascertain their precision: (i) an indirect method; (ii) a direct method; and (iii) a direct and high-resolution correction (direct+HRC) method. Methods We included 53 patients who underwent cranial magnetic resonance imaging (MRI) and computed tomography (CT) and 27 patients who underwent cranial MRI, CT, and PET. After fusion of the MR, CT, and PET images, resampling was performed to standardize the field of view and matrix size and to prepare the dataset. In the indirect method, a U-net generated synthetic CT (SCT) images from the MRI data, and attenuation correction was performed using these SCT images in place of CT images. In the direct and direct+HRC methods, a U-net generated AC images directly from non-AC images, followed by image evaluation. The precision of the AC images generated by each method was compared using the normalized mean squared error (NMSE) and structural similarity (SSIM). Results Visual inspection revealed no difference between the AC images prepared using CT-based attenuation correction and those prepared using the three methods. The NMSE increased in the order of the indirect, direct, and direct+HRC methods, with values of 0.281×10⁻³, 4.62×10⁻³, and 12.7×10⁻³, respectively. The SSIM of the direct+HRC method was 0.975. Conclusion The direct+HRC method enables accurate attenuation correction without CT exposure, and high-resolution correction without dedicated correction programs.
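NMSE, used here to rank the three methods, can be computed as in the minimal sketch below. NMSE definitions vary across papers; this sketch normalizes by the reference image's energy, which may differ from the authors' exact formulation:

```python
import numpy as np

def nmse(estimate, reference):
    """Normalized mean squared error: ||estimate - reference||² / ||reference||²."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sum((estimate - reference) ** 2) / np.sum(reference ** 2))
```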
Affiliation(s)
- Tomohiro Ueda
- Graduate School of Health Sciences, Kumamoto University, Japan
- Retsu Kawazoe
- Graduate School of Health Sciences, Kumamoto University, Japan
- Yuta Sayawaki
- Graduate School of Health Sciences, Kumamoto University, Japan
- Ryosuke Kamezaki
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Ryuji Ikeda
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Shinya Shiraishi
- Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, Japan
- Yoshikazu Uchiyama
- Department of Information and Communication Technology, Faculty of Engineering, University of Miyazaki, Japan
- Shigeki Ito
- Department of Medical Radiation Sciences, Faculty of Life Sciences, Kumamoto University, Japan
|
20
|
Chen Y, Yang W, Lu J, Sun J, Rao L, Zhao H, Peng X, Ni D. A modified U-net with graph representation for dose prediction in esophageal cancer radiotherapy plans. Comput Med Imaging Graph 2024; 111:102318. [PMID: 38088017 DOI: 10.1016/j.compmedimag.2023.102318] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Revised: 11/29/2023] [Accepted: 11/30/2023] [Indexed: 01/08/2024]
Abstract
The manual design of esophageal cancer radiotherapy plans is time-consuming and labor-intensive. Automatic planning (AP) is now prevalent as a way to increase physicists' work efficiency. Because dose distribution is an intuitive criterion in AP evaluation, a reasonable dose prediction provides an effective guarantee of a satisfactory AP. Existing fully convolutional network-based methods for predicting dose distribution in esophageal cancer radiotherapy plans often capture features within a limited receptive field, and the correlations between voxel pairs are often ignored. This work modifies the U-net architecture and exploits graph convolution to capture long-range information for dose prediction in esophageal cancer plans. Meanwhile, an attention mechanism captures correlations between the planning target volume (PTV) and organs at risk and adaptively learns their feature weights. Finally, a novel loss function that considers features between voxel pairs is used to highlight the predictions. A total of 152 subjects with prescription doses of 50 Gy or 60 Gy were collected for this study. The mean absolute error and standard deviation of the conformity index, homogeneity index, and max dose for the PTV achieved by the proposed method are 0.036 ± 0.030, 0.036 ± 0.027, and 0.930 ± 1.162, respectively, outperforming other state-of-the-art models. This superior performance demonstrates that the proposed method has great potential for AP generation.
Affiliation(s)
- Yanlin Chen
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Jiayang Lu
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China
- Jinyan Sun
- School of Medicine, Foshan University, Foshan, China
- Linshang Rao
- Shenzhen Mindray Bio-Medical Electronics Co., Ltd., Shenzhen, China
- Huanmiao Zhao
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Xun Peng
- Department of Radiation Oncology, Cancer Hospital of Shantou University Medical College, Shantou, China.
- Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China.
|
21
|
Tomimatsu T, Yamashita K, Sakata T, Kamezaki R, Ikeda R, Shiraishi S, Uchiyama Y, Ito S. Development of an automated region-of-interest-setting method based on a deep neural network for brain perfusion single photon emission computed tomography quantification methods. Asia Ocean J Nucl Med Biol 2024; 12:120-130. [PMID: 39050240] [PMCID: PMC11263778] [DOI: 10.22038/aojnmb.2024.75375.1528] [Received: 10/11/2023] [Revised: 11/29/2023] [Accepted: 01/20/2024] [Indexed: 07/27/2024]
Abstract
Objectives A simple noninvasive microsphere (SIMS) method using 123I-IMP and an improved brain uptake ratio (IBUR) method using 99mTc-ECD for the quantitative measurement of regional cerebral blood flow have recently been reported. The input functions of these methods are determined from the administered dose, obtained by analyzing the time-activity curve of the pulmonary artery (PA) for the SIMS method and of the ascending aorta (AAo) for the IBUR method on dynamic chest images. If the PA and AAo regions of interest (ROIs) can be determined using a deep convolutional neural network (DCNN) for segmentation, the accuracy of these ROI-setting methods can be improved through simple analytical operations that ensure repeatability and reproducibility. The purpose of this study was to develop new PA- and AAo-ROI setting methods using a DCNN (DCNN-ROI method). Methods A U-Net architecture based on convolutional neural networks was used to determine the PA and AAo candidate regions. Images of 290 patients who underwent 123I-IMP RI-angiography and 108 patients who underwent 99mTc-ECD RI-angiography were used. The PA- and AAo-ROI results for the DCNN-ROI method were compared with those obtained using manual methods. The counts for the input function on the PA- and AAo-ROIs were determined by integrating the area under the curve (AUC) counts of the time-activity curves of the PA- and AAo-ROIs, respectively. The effectiveness of the DCNN-ROI method was elucidated by comparing the integrated AUC counts of the DCNN-ROI and the manual ROI. Results The coincidence ratio between the locations of the PA- and AAo-ROIs obtained using the DCNN method and those of the manual method was 100%. Strong correlations were observed between the AUC counts obtained using the DCNN and manual methods. Conclusion New ROI-setting programs were developed using a DCNN to determine the input functions for the SIMS and IBUR methods. The accuracy of these methods is comparable to that of the manual method.
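The input-function counts described above are integrals under a first-pass time-activity curve. A minimal numerical sketch of that AUC step, using a synthetic bolus curve rather than the study's measurements:

```python
import numpy as np

# Hypothetical first-pass time-activity curve sampled every 0.5 s;
# the Gaussian bolus shape and all values are illustrative only.
t = np.arange(0.0, 10.0, 0.5)
tac = 1000.0 * np.exp(-0.5 * (t - 4.0) ** 2)  # counts, peak at t = 4 s

# Composite trapezoidal rule over the sampled curve
auc = float(np.sum((tac[1:] + tac[:-1]) * np.diff(t)) / 2.0)
print(f"integrated counts: {auc:.1f}")
```

For fully sampled dynamic data, NumPy's built-in trapezoid helper gives the same result; the explicit sum above simply avoids depending on a particular NumPy version.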
Affiliation(s)
- Taeko Tomimatsu: Graduate School of Health Sciences, Kumamoto University, Japan
- Takumi Sakata: Graduate School of Health Sciences, Kumamoto University, Japan
- Ryosuke Kamezaki: Department of Central Radiology, Kumamoto University Hospital, Japan
- Ryuji Ikeda: Department of Central Radiology, Kumamoto University Hospital, Japan
- Shinya Shiraishi: Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, Japan
- Yoshikazu Uchiyama: Department of Information and Communication Technology, Faculty of Engineering, University of Miyazaki, Japan
- Shigeki Ito: Department of Medical Radiation Sciences, Faculty of Life Science, Kumamoto University, Japan

22
|
Li X, Zhao D, Xie J, Wen H, Liu C, Li Y, Li W, Wang S. Deep learning for classifying the stages of periodontitis on dental images: a systematic review and meta-analysis. BMC Oral Health 2023; 23:1017. [PMID: 38114946] [PMCID: PMC10729340] [DOI: 10.1186/s12903-023-03751-z] [Received: 09/07/2023] [Accepted: 12/08/2023] [Indexed: 12/21/2023] Open
Abstract
BACKGROUND The development of deep learning (DL) algorithms for use in dentistry is an emerging trend. Periodontitis is one of the most prevalent oral diseases and has a notable impact on patients' quality of life. Therefore, it is crucial to classify periodontitis accurately and efficiently. This systematic review aimed to identify applications of DL for the classification of periodontitis and assess the accuracy of this approach. METHODS A literature search up to November 2023 was conducted in the EMBASE, PubMed, Web of Science, Scopus, and Google Scholar databases. Inclusion and exclusion criteria were used to screen eligible studies, and study quality was evaluated by the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology with the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool. A random-effects inverse-variance model was used for the meta-analysis of diagnostic accuracy, from which pooled sensitivity, specificity, positive likelihood ratio (LR), negative LR, and diagnostic odds ratio (DOR) were calculated, and a summary receiver operating characteristic (SROC) plot was constructed. RESULTS Thirteen studies were included in the meta-analysis. After excluding an outlier, the pooled sensitivity, specificity, positive LR, negative LR, and DOR were 0.88 (95% CI 0.82-0.92), 0.82 (95% CI 0.72-0.89), 4.9 (95% CI 3.2-7.5), 0.15 (95% CI 0.10-0.22), and 33 (95% CI 19-59), respectively. The area under the SROC curve was 0.92 (95% CI 0.89-0.94). CONCLUSIONS The accuracy of DL-based classification of periodontitis is high, and this approach could be employed in the future to reduce the workload of dental professionals and enhance the consistency of classification.
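The pooled likelihood ratios and DOR above follow arithmetically from the pooled sensitivity and specificity. A quick sketch of that relationship (not the authors' code; their confidence intervals come from the random-effects model):

```python
def likelihood_ratios(sensitivity: float, specificity: float):
    """Return (LR+, LR-, DOR) for a diagnostic test."""
    lr_pos = sensitivity / (1.0 - specificity)  # odds multiplier after a positive result
    lr_neg = (1.0 - sensitivity) / specificity  # odds multiplier after a negative result
    dor = lr_pos / lr_neg                       # diagnostic odds ratio
    return lr_pos, lr_neg, dor

# Plugging in the pooled point estimates from the review
lr_pos, lr_neg, dor = likelihood_ratios(0.88, 0.82)
print(round(lr_pos, 1), round(lr_neg, 2), round(dor))  # 4.9 0.15 33
```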
Affiliation(s)
- Xin Li: School of Public Health, National Institute for Data Science in Health and Medicine, Capital Medical University, Beijing, China
- Dan Zhao: Department of Implant Dentistry, Beijing Stomatological Hospital, Capital Medical University, Beijing, China
- Jinxuan Xie: School of Public Health, National Institute for Data Science in Health and Medicine, Capital Medical University, Beijing, China
- Hao Wen: City University of Hong Kong, Hong Kong SAR, China
- Chunhua Liu: City University of Hong Kong, Hong Kong SAR, China
- Yajie Li: School of Public Health, National Institute for Data Science in Health and Medicine, Capital Medical University, Beijing, China
- Wenbin Li: Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Songlin Wang: Salivary Gland Disease Center and Beijing Key Laboratory of Tooth Regeneration and Function Reconstruction, Beijing Laboratory of Oral Health and Beijing Stomatological Hospital, Capital Medical University, Beijing 100050, China

23
|
Zhang J, Zhang Y, Jin Y, Xu J, Xu X. MDU-Net: multi-scale densely connected U-Net for biomedical image segmentation. Health Inf Sci Syst 2023; 11:13. [PMID: 36925619] [PMCID: PMC10011258] [DOI: 10.1007/s13755-022-00204-9] [Received: 06/13/2022] [Accepted: 11/02/2022] [Indexed: 03/18/2023] Open
Abstract
Biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Following fully convolutional networks (FCNs) and U-Net, deep convolutional neural networks have made significant contributions to biomedical image segmentation applications. In this paper, we propose three different multi-scale dense connections (MDC): for the encoder, for the decoder of U-shaped architectures, and across them. Based on these three dense connections, we propose a multi-scale densely connected U-Net (MDU-Net) for biomedical image segmentation. MDU-Net directly fuses neighboring feature maps at different scales from both higher and lower layers to strengthen feature propagation in the current layer. Multi-scale dense connections, which contain shorter paths between layers close to the input and output, also make a much deeper U-Net feasible. In addition, we introduce quantization to alleviate potential overfitting in the dense connections and further improve segmentation performance. We evaluate the proposed model on the MICCAI 2015 Gland Segmentation (GlaS) dataset. The three MDC variants improve U-Net performance by up to 1.8% on test A and 3.5% on test B of the MICCAI Gland dataset, and MDU-Net with quantization clearly improves on the segmentation performance of the original U-Net.
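The gland-segmentation gains above are reported as Dice improvements. For reference, Dice and the Jaccard index (IoU) on binary masks, and their fixed monotonic relation, can be sketched as follows (empty-mask smoothing terms omitted for brevity):

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice coefficient for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard index (intersection over union) for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    return inter / np.logical_or(pred, target).sum()

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
d, j = dice(pred, gt), iou(pred, gt)
# The two scores always satisfy Dice = 2 * IoU / (1 + IoU)
assert abs(d - 2 * j / (1 + j)) < 1e-12
```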
Affiliation(s)
- Jiawei Zhang: Department of New Networks, Peng Cheng Laboratory, Shenzhen, Guangdong, China; Department of Cardiovascular Surgery, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China; Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, Guangdong, China; Institute for Sustainable Industries & Livable Cities, Victoria University, Melbourne, VIC, Australia
- Yanchun Zhang: Department of New Networks, Peng Cheng Laboratory, Shenzhen, Guangdong, China; Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, Guangdong, China; Institute for Sustainable Industries & Livable Cities, Victoria University, Melbourne, VIC, Australia
- Yuzhen Jin: Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Jilan Xu: Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University, Shanghai, China
- Xiaowei Xu: Department of Cardiovascular Surgery, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, Guangdong, China

24
|
Maaliw RR. SCOLIONET: An Automated Scoliosis Cobb Angle Quantification Using Enhanced X-ray Images and Deep Learning Models. J Imaging 2023; 9:265. [PMID: 38132683] [PMCID: PMC10743962] [DOI: 10.3390/jimaging9120265] [Received: 10/18/2023] [Revised: 11/20/2023] [Accepted: 11/27/2023] [Indexed: 12/23/2023] Open
Abstract
The advancement of medical prognosis hinges on the delivery of timely and reliable assessments. Conventional assessment and diagnosis methods, often reliant on human expertise, lead to inconsistencies due to professionals' subjectivity, knowledge, and experience. To address these problems head-on, we harnessed the power of artificial intelligence to introduce a transformative solution. We leveraged convolutional neural networks to engineer our SCOLIONET architecture, which can accurately identify Cobb angle measurements. Empirical testing of our pipeline demonstrated a mean segmentation accuracy of 97.50% (Sorensen-Dice coefficient) and 96.30% (Intersection over Union), indicating the model's proficiency in outlining vertebrae. This level of quantification accuracy was attributed to the state-of-the-art design of the atrous spatial pyramid pooling, which better segments images. We also compared physicians' manual evaluations against our machine-driven measurements to further validate our approach's practicality and reliability. The results were remarkable, with a p-value (t-test) of 0.1713 and an average acceptable deviation of 2.86 degrees, suggesting no significant difference between the two methods. Our work holds the promise of enabling medical practitioners to expedite scoliosis examination swiftly and consistently, improving and advancing the quality of patient care.
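Once the two end-vertebra endplates are segmented, the Cobb angle reduces to the angle between two fitted endplate lines. A geometric sketch of that final step (a generic formulation, not the SCOLIONET pipeline itself):

```python
import math

def cobb_angle_deg(slope_upper: float, slope_lower: float) -> float:
    """Angle in degrees between two endplate lines given their slopes (dy/dx)."""
    return math.degrees(abs(math.atan(slope_upper) - math.atan(slope_lower)))

# Example: endplates tilted +10 and -15 degrees from horizontal
upper = math.tan(math.radians(10.0))
lower = math.tan(math.radians(-15.0))
print(round(cobb_angle_deg(upper, lower), 1))  # 25.0
```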
Affiliation(s)
- Renato R Maaliw: College of Engineering, Southern Luzon State University, Lucban 4328, Quezon, Philippines

25
|
Stoichita A, Ghita M, Mahler B, Vlasceanu S, Ghinet A, Mosteanu M, Cioacata A, Udrea A, Marcu A, Mitra GD, Ionescu CM, Iliesiu A. Imagistic Findings Using Artificial Intelligence in Vaccinated versus Unvaccinated SARS-CoV-2-Positive Patients Receiving In-Care Treatment at a Tertiary Lung Hospital. J Clin Med 2023; 12:7115. [PMID: 38002725] [PMCID: PMC10672398] [DOI: 10.3390/jcm12227115] [Received: 09/19/2023] [Revised: 10/27/2023] [Accepted: 11/04/2023] [Indexed: 11/26/2023] Open
Abstract
BACKGROUND In March 2020, the World Health Organization announced that the widespread severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection had become a global pandemic. The organ most affected by the novel virus is the lung, and imaging exploration of the thorax using computed tomography (CT) scanning and X-ray has had an important impact. MATERIALS AND METHODS We assessed the prevalence of lung lesions in vaccinated versus unvaccinated SARS-CoV-2 patients using an artificial intelligence (AI) platform provided by Medicai. The software analyzes the CT scans, performing lung and lesion segmentation using a variant of the U-net convolutional network. RESULTS We conducted a cohort study at a tertiary lung hospital that included 186 patients: 107 (57.52%) males and 79 (42.47%) females, of whom 157 (84.40%) were not vaccinated against SARS-CoV-2. Over five times more unvaccinated patients than vaccinated ones were admitted to the hospital and required imaging investigations. More than twice as many unvaccinated patients had more than 75% of the lungs affected. Patients in the age group 30-39 had the most lung lesions, with almost 69% of both lungs affected. Compared to vaccinated patients with comorbidities, unvaccinated patients with comorbidities had developed 5% more lung lesions. CONCLUSION The study revealed a higher percentage of lung lesions among unvaccinated SARS-CoV-2-positive patients admitted to the National Institute of Pulmonology "Marius Nasta" in Bucharest, Romania, underlining the importance of vaccination and also the usefulness of artificial intelligence in CT interpretation.
Affiliation(s)
- Alexandru Stoichita: Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Maria Ghita: Research Group of Dynamical Systems and Control, Ghent University, 9052 Ghent, Belgium; Faculty of Medicine and Health Sciences, Antwerp University, 2610 Wilrijk, Belgium
- Beatrice Mahler: Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Silviu Vlasceanu: Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Andreea Ghinet: “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Madalina Mosteanu: “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania; Faculty of Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Andreea Cioacata: “Marius Nasta” Institute of Pneumology, 050159 Bucharest, Romania
- Andreea Udrea: Medicai, 020961 Bucharest, Romania
- Alina Marcu: Medicai, 020961 Bucharest, Romania
- Clara Mihaela Ionescu: Research Group of Dynamical Systems and Control, Ghent University, 9052 Ghent, Belgium; Automation Department, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
- Adriana Iliesiu: Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; Clinical Hospital “Prof. Dr. Th. Burghele”, 061344 Bucharest, Romania

26
26
|
Neininger-Castro AC, Hayes JB, Sanchez ZC, Taneja N, Fenix AM, Moparthi S, Vassilopoulos S, Burnette DT. Independent regulation of Z-lines and M-lines during sarcomere assembly in cardiac myocytes revealed by the automatic image analysis software sarcApp. eLife 2023; 12:RP87065. [PMID: 37921850] [PMCID: PMC10624428] [DOI: 10.7554/elife.87065] [Indexed: 11/04/2023] Open
Abstract
Sarcomeres are the basic contractile units within cardiac myocytes, and the collective shortening of sarcomeres aligned along myofibrils generates the force driving the heartbeat. The alignment of the individual sarcomeres is important for proper force generation, and misaligned sarcomeres are associated with diseases, including cardiomyopathies and COVID-19. The actin bundling protein, α-actinin-2, localizes to the 'Z-Bodies' of sarcomere precursors and the 'Z-Lines' of sarcomeres, and has been used previously to assess sarcomere assembly and maintenance. Previous measurements of α-actinin-2 organization have been largely accomplished manually, which is time-consuming and has hampered research progress. Here, we introduce sarcApp, an image analysis tool that quantifies several components of the cardiac sarcomere and their alignment in muscle cells and tissue. We first developed sarcApp to utilize deep learning-based segmentation and real space quantification to measure α-actinin-2 structures and determine the organization of both precursors and sarcomeres/myofibrils. We then expanded sarcApp to analyze 'M-Lines' using the localization of myomesin and a protein that connects the Z-Lines to the M-Line (titin). sarcApp produces 33 distinct measurements per cell and 24 per myofibril that allow for precise quantification of changes in sarcomeres, myofibrils, and their precursors. We validated this system with perturbations to sarcomere assembly. We found perturbations that affected Z-Lines and M-Lines differently, suggesting that they may be regulated independently during sarcomere assembly.
Affiliation(s)
- Abigail C Neininger-Castro: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States
- James B Hayes: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States
- Zachary C Sanchez: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States
- Nilay Taneja: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States
- Aidan M Fenix: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States
- Satish Moparthi: Sorbonne Université, INSERM, Institut de Myologie, Centre de Recherche en Myologie, Paris, France
- Stéphane Vassilopoulos: Sorbonne Université, INSERM, Institut de Myologie, Centre de Recherche en Myologie, Paris, France
- Dylan Tyler Burnette: Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, United States

27
|
Zhao Q, Jia Q, Chi T. U-Net deep learning model for endoscopic diagnosis of chronic atrophic gastritis and operative link for gastritis assessment staging: a prospective nested case-control study. Therap Adv Gastroenterol 2023; 16:17562848231208669. [PMID: 37928896] [PMCID: PMC10624012] [DOI: 10.1177/17562848231208669] [Received: 01/15/2023] [Accepted: 10/02/2023] [Indexed: 11/07/2023] Open
Abstract
Background The operative link for the gastritis assessment (OLGA) system can objectively reflect the stratification of gastric cancer risk in patients with chronic atrophic gastritis (CAG). Objectives We developed a real-time video monitoring model for the endoscopic diagnosis of CAG and OLGA staging based on U-Net deep learning (DL). To further validate and improve its performance, we designed a study to evaluate the diagnostic evaluation indices. Design A prospective nested case-control study. Methods Our cohort consisted of 1306 patients from 31 July 2021 to 31 January 2022. According to the pathological results, patients in the cohort were divided into the CAG group and the chronic non-atrophic gastritis group to evaluate the diagnostic evaluation indices. Each atrophy lesion was automatically labeled and the atrophy severity was assessed by the model. Propensity score matching was used to minimize selection bias. Results The diagnostic evaluation indices and the consistency between OLGA staging and pathological diagnosis of the model were superior to those of endoscopists [sensitivity (89.31% versus 67.56%), specificity (90.46% versus 70.23%), positive predictive value (90.35% versus 69.41%), negative predictive value (89.43% versus 68.40%), accuracy rate (89.89% versus 68.89%), Youden index (79.77% versus 37.79%), odds product (79.23 versus 4.91), positive likelihood ratio (9.36 versus 2.27), negative likelihood ratio (0.12 versus 0.46), area under the curve (AUC, 95% CI) (0.919 (0.893-0.945) versus 0.749 (0.707-0.792), p < 0.001), and kappa (0.816 versus 0.291)]. Conclusion Our study demonstrated that the DL model can assist endoscopists in real-time diagnosis of CAG during gastroscopy and synchronous identification of high-risk OLGA stage (OLGA stages III and IV) patients. Trial registration ChiCTR2100044458.
Affiliation(s)
- Quchuan Zhao: Department of Gastroenterology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Qing Jia: Department of Anesthesiology, Guang’anmen Hospital, China Academy of Chinese Medical Sciences, 5 North Court Street, Beijing 100053, China
- Tianyu Chi: Department of Gastroenterology, Xuanwu Hospital of Capital Medical University, 45 Chang-Chun Street, Beijing 100053, China

28
|
Murata T, Hirano T, Mizobe H, Toba S. OCT-angiography based artificial intelligence-inferred fluorescein angiography for leakage detection in retina [Invited]. Biomed Opt Express 2023; 14:5851-5860. [PMID: 38021144] [PMCID: PMC10659810] [DOI: 10.1364/boe.506467] [Received: 09/19/2023] [Revised: 10/12/2023] [Accepted: 10/12/2023] [Indexed: 12/01/2023]
Abstract
Optical coherence tomography angiography (OCTA) covers most functions of fluorescein angiography (FA) when imaging the retina but lacks the ability to depict vascular leakage. Based on OCTA, we developed artificial-intelligence-inferred FA (AI-FA) to delineate leakage in eyes with diabetic retinopathy (DR). Training data of 19,648 still FA images were prepared from FA photographs and videos of 43 DR eyes. AI-FA images were generated using a convolutional neural network and achieved a structural similarity index of 0.91 with the corresponding real FA images in DR. The AI-FA generated from OCTA correctly depicted vascular occlusion and the associated leakage with sufficient quality for precise DR diagnosis and treatment planning. A combination of OCT, OCTA, and AI-FA yields more information than real FA, with reduced acquisition time and without the risk of allergic reactions.
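The structural similarity score used above to compare AI-FA against real FA is usually computed over local sliding windows; a simplified single-window version shows the core formula (illustrative, with the standard SSIM constants, not the authors' implementation):

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, data_range: float = 255.0) -> float:
    """Single-window SSIM (Wang et al. formula without local windowing)."""
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
print(ssim_global(img, img))  # identical images score 1.0
```

Production pipelines typically use a windowed SSIM (e.g. scikit-image's `structural_similarity`), which averages this quantity over local patches.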
Affiliation(s)
- Toshinori Murata: Department of Ophthalmology, School of Medicine, Shinshu University, 3-1-1 Asahi, Matsumoto, Nagano 390-8621, Japan
- Takao Hirano: Department of Ophthalmology, School of Medicine, Shinshu University, 3-1-1 Asahi, Matsumoto, Nagano 390-8621, Japan
- Hideaki Mizobe: Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan
- Shuhei Toba: Canon Inc., 30-2, Shimomaruko 3-chome, Ohta-ku, Tokyo 146-8501, Japan

29
|
Alsayat A, Elmezain M, Alanazi S, Alruily M, Mostafa AM, Said W. Multi-Layer Preprocessing and U-Net with Residual Attention Block for Retinal Blood Vessel Segmentation. Diagnostics (Basel) 2023; 13:3364. [PMID: 37958260] [PMCID: PMC10648654] [DOI: 10.3390/diagnostics13213364] [Received: 09/15/2023] [Revised: 10/21/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023] Open
Abstract
Retinal blood vessel segmentation is a valuable tool for clinicians to diagnose conditions such as atherosclerosis, glaucoma, and age-related macular degeneration. This paper presents a new framework for segmenting blood vessels in retinal images. The framework has two stages: a multi-layer preprocessing stage and a subsequent segmentation stage employing a U-Net with a multi-residual attention block. The multi-layer preprocessing stage has three steps. The first is noise reduction, employing a U-shaped convolutional neural network with matrix factorization (CNN with MF) and a detailed U-shaped U-Net (D_U-Net) to minimize image noise, culminating in the selection of the most suitable image based on PSNR and SSIM values. The second is dynamic data imputation, utilizing multiple models to fill in missing data. The third is data augmentation, using a latent diffusion model (LDM) to expand the training dataset. In the second, segmentation stage, the U-Nets with a multi-residual attention block are used to segment the retinal images after preprocessing and noise removal. The experiments show that the framework is effective at segmenting retinal blood vessels, achieving a Dice score of 95.32%, accuracy of 93.56%, precision of 95.68%, and recall of 95.45%. It also removed noise efficiently with the CNN with MF and D_U-Net, according to PSNR and SSIM values at noise levels of 0.1, 0.25, 0.5, and 0.75. The LDM achieved an inception score of 13.6 and an FID of 46.2 in the augmentation step.
Affiliation(s)
- Ahmed Alsayat: Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Mahmoud Elmezain: Computer Science Division, Faculty of Science, Tanta University, Tanta 31527, Egypt; Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
- Saad Alanazi: Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Meshrif Alruily: Department of Computer Science, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Ayman Mohamed Mostafa: Information Systems Department, College of Computer and Information Sciences, Jouf University, Sakaka 72341, Saudi Arabia
- Wael Said: Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt; Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia

30
30
|
Yao T, Wang C, Wang X, Li X, Jiang Z, Qi P. Enhancing percutaneous coronary intervention with heuristic path planning and deep-learning-based vascular segmentation. Comput Biol Med 2023; 166:107540. [PMID: 37806060] [DOI: 10.1016/j.compbiomed.2023.107540] [Received: 07/22/2023] [Revised: 09/21/2023] [Accepted: 09/28/2023] [Indexed: 10/10/2023]
Abstract
Percutaneous coronary intervention (PCI) is a minimally invasive technique for treating vascular diseases. PCI requires precise and real-time visualization and guidance to ensure surgical safety and efficiency. Existing mainstream guiding methods rely on hemodynamic parameters. However, these methods are less intuitive than images and pose some challenges to the decision-making of cardiologists. This paper proposes a novel PCI guiding assistance system by combining a novel vascular segmentation network and a heuristic intervention path planning algorithm, providing cardiologists with clear and visualized information. A dataset of 1077 DSA images from 288 patients is also collected in clinical practice. A Likert Scale is also designed to evaluate system performance in user experiments. Results of user experiments demonstrate that the system can generate satisfactory and reasonable paths for PCI. Our proposed method outperformed the state-of-the-art baselines based on three metrics (Jaccard: 0.4091, F1: 0.5626, Accuracy: 0.9583). The proposed system can effectively assist cardiologists in PCI by providing a clear segmentation of vascular structures and optimal real-time intervention paths, thus demonstrating great potential for robotic PCI autonomy.
Affiliation(s)
- Tianliang Yao: College of Electronics and Information Engineering, Tongji University, Shanghai 200092, China
- Chengjia Wang: School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, EH14 4AP, United Kingdom; BHF Centre for Cardiovascular Science, University of Edinburgh, Edinburgh, EH16 4TJ, United Kingdom
- Xinyi Wang: School of Medicine, Tongji University, Shanghai 200092, China
- Xiang Li: Departments of Cardiology and Nursing, Shanghai Tenth People's Hospital, School of Medicine, Tongji University, Shanghai 200072, China
- Zhaolei Jiang: Department of Cardiothoracic Surgery, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai 200092, China
- Peng Qi: College of Electronics and Information Engineering, Tongji University, Shanghai 200092, China

31
31
|
Ling Y, Zhao Q, Liu W, Wei K, Bao R, Song W, Nie X. Detection and characterization of spike architecture based on deep learning and X-ray computed tomography in barley. Plant Methods 2023; 19:115. [PMID: 37891590] [PMCID: PMC10604417] [DOI: 10.1186/s13007-023-01096-w] [Received: 05/12/2023] [Accepted: 10/19/2023] [Indexed: 10/29/2023]
Abstract
BACKGROUND Spike is the grain-bearing organ in cereal crops, which is a key proxy indicator determining the grain yield and quality. Machine learning methods for image analysis of spike-related phenotypic traits not only hold the promise for high-throughput estimating grain production and quality, but also lay the foundation for better dissection of the genetic basis for spike development. Barley (Hordeum vulgare L.) is one of the most important crops globally, ranking as the fourth largest cereal crop in terms of cultivated area and total yield. However, image analysis of spike-related traits in barley, especially based on CT-scanning, remains elusive at present. RESULTS In this study, we developed a non-invasive, high-throughput approach to quantitatively measuring the multitude of spike architectural traits in barley through combining X-ray computed tomography (CT) and a deep learning model (UNet). Firstly, the spikes of 11 barley accessions, including 2 wild barley, 3 landraces and 6 cultivars were used for X-ray CT scanning to obtain the tomographic images. And then, an optimized 3D image processing method was used to point cloud data to generate the 3D point cloud images of spike, namely 'virtual' spike, which is then used to investigate internal structures and morphological traits of barley spikes. Furthermore, the virtual spike-related traits, such as spike length, grain number per spike, grain volume, grain surface area, grain length and grain width as well as grain thickness were efficiently and non-destructively quantified. The virtual values of these traits were highly consistent with the actual value using manual measurement, demonstrating the accuracy and reliability of the developed model. The reconstruction process took 15 min approximately, 10 min for CT scanning and 5 min for imaging and features extraction, respectively. 
CONCLUSIONS This study provides an efficient, non-invasive and useful tool for dissecting barley spike architecture, which will contribute to high-throughput phenotyping and breeding for high yield in barley and other crops.
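The trait-extraction step described above (binary grain mask → point cloud → size metrics) can be illustrated with a minimal sketch; this is not the authors' pipeline, and the voxel size `voxel_mm` is an assumed parameter:

```python
import numpy as np

def mask_to_points(mask, voxel_mm=0.05):
    """Convert a 3D binary grain mask into an (N, 3) point cloud in millimetres."""
    return np.argwhere(mask.astype(bool)) * voxel_mm

def grain_metrics(mask, voxel_mm=0.05):
    """Grain volume and axis-aligned extents (a stand-in for length/width/thickness)."""
    pts = mask_to_points(mask, voxel_mm)
    # add one voxel so that a single-voxel grain has a nonzero extent
    extents = pts.max(axis=0) - pts.min(axis=0) + voxel_mm
    volume = mask.astype(bool).sum() * voxel_mm ** 3
    return {"volume_mm3": float(volume), "extents_mm": extents.tolist()}
```

A real CT pipeline would first denoise and segment the tomographic slices before such measurements are meaningful.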
Affiliation(s)
- Yimin Ling
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Qinlong Zhao
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Wenxin Liu
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Kexu Wei
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Runfei Bao
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Weining Song
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China
- ICARDA-NWSUAF Joint Research Centre, Northwest A&F University, Yangling, 712100, Shaanxi, China
- Xiaojun Nie
- State Key Laboratory of Crop Stress Biology in Arid Areas and College of Agronomy, Northwest A&F University, Yangling, 712100, Shaanxi, China.
32
Hao D, Li H, Zhang Y, Zhang Q. MUE-CoT: multi-scale uncertainty entropy-aware co-training framework for left atrial segmentation. Phys Med Biol 2023; 68:215008. [PMID: 37567214 DOI: 10.1088/1361-6560/acef8e] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 08/11/2023] [Indexed: 08/13/2023]
Abstract
Objective. Accurate left atrial segmentation is the basis of the recognition and clinical analysis of atrial fibrillation. Supervised learning has achieved competitive segmentation results, but its high annotation cost often limits performance. Semi-supervised learning, which learns from limited labeled data together with a large amount of unlabeled data, shows good potential for solving practical medical problems. Approach. In this study, we proposed a multi-scale uncertainty entropy-aware co-training framework (MUE-CoT) and achieved efficient left atrial segmentation from a small amount of labeled data. Based on a pyramid feature network, learning from unlabeled data is implemented by minimizing the pyramid prediction difference. In addition, novel loss constraints are proposed for co-training: the diversity loss is defined as a soft constraint to accelerate convergence, and a novel multi-scale uncertainty entropy calculation method and a consistency regularization term are proposed to measure the consistency between prediction results. Because the quality of pseudo-labels cannot be guaranteed in the pre-training period, a confidence-dependent empirical Gaussian function is proposed to weight the pseudo-supervised loss. Main results. Experimental results on a publicly available dataset and an in-house clinical dataset showed that our method outperformed existing semi-supervised methods. For the two datasets with a labeled ratio of 5%, the Dice similarity coefficient scores were 84.94% ± 4.31 and 81.24% ± 2.4, the HD95 values were 4.63 mm ± 2.13 and 3.94 mm ± 2.72, and the Jaccard similarity coefficient scores were 74.00% ± 6.20 and 68.49% ± 3.39, respectively. Significance. The proposed model effectively addresses the challenges of limited data samples and the high cost of manual annotation in the medical field, leading to enhanced segmentation accuracy.
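The overlap metrics reported here, Dice and Jaccard, have standard definitions that are worth keeping at hand when comparing such results; a minimal sketch for binary masks:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def jaccard(pred, gt):
    """Jaccard index: |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```

Both metrics equal 1.0 for identical masks; since Dice = 2J/(1+J) for Jaccard J, Dice is always at least as large as Jaccard on the same pair of masks.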
Affiliation(s)
- Dechen Hao
- School of Software, North University of China, Taiyuan Shanxi, People's Republic of China
- Hualing Li
- School of Software, North University of China, Taiyuan Shanxi, People's Republic of China
- Yonglai Zhang
- School of Software, North University of China, Taiyuan Shanxi, People's Republic of China
- Qi Zhang
- Department of Cardiology, The Second Hospital of Shanxi Medical University, Taiyuan Shanxi, People's Republic of China
33
Healthcare Engineering JO. Retracted: U-Net-Based Medical Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:9890389. [PMID: 37886340 PMCID: PMC10599952 DOI: 10.1155/2023/9890389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2023] [Accepted: 10/17/2023] [Indexed: 10/28/2023]
Abstract
[This retracts the article DOI: 10.1155/2022/4189781.].
34
Neininger-Castro AC, Hayes JB, Sanchez ZC, Taneja N, Fenix AM, Moparthi S, Vassilopoulos S, Burnette DT. Independent regulation of Z-lines and M-lines during sarcomere assembly in cardiac myocytes revealed by the automatic image analysis software sarcApp. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.11.523681. [PMID: 36711995 PMCID: PMC9882215 DOI: 10.1101/2023.01.11.523681] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
Sarcomeres are the basic contractile units within cardiac myocytes, and the collective shortening of sarcomeres aligned along myofibrils generates the force driving the heartbeat. The alignment of the individual sarcomeres is important for proper force generation, and misaligned sarcomeres are associated with diseases including cardiomyopathies and COVID-19. The actin bundling protein, α-actinin-2, localizes to the "Z-Bodies" of sarcomere precursors and the "Z-Lines" of sarcomeres, and has been used previously to assess sarcomere assembly and maintenance. Previous measurements of α-actinin-2 organization have been largely accomplished manually, which is time-consuming and has hampered research progress. Here, we introduce sarcApp, an image analysis tool that quantifies several components of the cardiac sarcomere and their alignment in muscle cells and tissue. We first developed sarcApp to utilize deep learning-based segmentation and real space quantification to measure α-actinin-2 structures and determine the organization of both precursors and sarcomeres/myofibrils. We then expanded sarcApp to analyze "M-Lines" using the localization of myomesin and a protein that connects the Z-Lines to the M-Line (titin). sarcApp produces 33 distinct measurements per cell and 24 per myofibril that allow for precise quantification of changes in sarcomeres, myofibrils, and their precursors. We validated this system with perturbations to sarcomere assembly. We found perturbations that affected Z-Lines and M-Lines differently, suggesting that they may be regulated independently during sarcomere assembly.
Affiliation(s)
- Abigail C. Neininger-Castro
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
- James B. Hayes
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
- Zachary C. Sanchez
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
- Nilay Taneja
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
- Aidan M. Fenix
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
- Satish Moparthi
- Sorbonne Université, INSERM, Institut de Myologie, Centre de Recherche en Myologie, Paris, France
- Stéphane Vassilopoulos
- Sorbonne Université, INSERM, Institut de Myologie, Centre de Recherche en Myologie, Paris, France
- Dylan T. Burnette
- Department of Cell and Developmental Biology, Vanderbilt University School of Medicine Basic Sciences, Nashville, TN
35
Szentimrey Z, Ameri G, Hong CX, Cheung RYK, Ukwatta E, Eltahawi A. Automated segmentation and measurement of the female pelvic floor from the mid-sagittal plane of 3D ultrasound volumes. Med Phys 2023; 50:6215-6227. [PMID: 36964964 DOI: 10.1002/mp.16389] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 03/17/2023] [Accepted: 03/17/2023] [Indexed: 03/27/2023] Open
Abstract
BACKGROUND Transperineal ultrasound (TPUS) is a valuable imaging tool for evaluating patients with pelvic floor disorders, including pelvic organ prolapse (POP). Currently, measurements of anatomical structures in the mid-sagittal plane of 2D and 3D US volumes are obtained manually, which is time-consuming, has high intra-rater variability, and requires an expert in pelvic floor US interpretation. Manual segmentation and biometric measurement can take 15 min per 2D mid-sagittal image by an expert operator. An automated segmentation method would provide quantitative data relevant to pelvic floor disorders and improve the efficiency and reproducibility of segmentation-based biometric methods. PURPOSE Develop a fast, reproducible, and automated method of acquiring biometric measurements and organ segmentations from the mid-sagittal plane of female 3D TPUS volumes. METHODS Our method used a nnU-Net segmentation model to segment the pubis symphysis, urethra, bladder, rectum, rectal ampulla, and anorectal angle in the mid-sagittal plane of female 3D TPUS volumes. We developed an algorithm to extract relevant biometrics from the segmentations. Our dataset included 248 3D TPUS volumes, 126/122 rest/Valsalva split, from 135 patients. System performance was assessed by comparing the automated results with manual ground truth data using the Dice similarity coefficient (DSC) and average absolute difference (AD). Intra-class correlation coefficient (ICC) and time difference were used to compare reproducibility and efficiency between manual and automated methods respectively. High ICC, low AD and reduction in time indicated an accurate and reliable automated system, making TPUS an efficient alternative for POP assessment. Paired t-test and non-parametric Wilcoxon signed-rank test were conducted, with p < 0.05 determining significance. 
RESULTS The nnU-Net segmentation model reported average DSC and p values (in brackets), compared to the next best tested model, of 87.4% (<0.0001), 68.5% (<0.0001), 61.0% (0.1), 54.6% (0.04), 49.2% (<0.0001) and 33.7% (0.02) for bladder, rectum, urethra, pubic symphysis, anorectal angle, and rectal ampulla respectively. The average ADs for the bladder neck position, bladder descent, rectal ampulla descent and retrovesical angle were 3.2 mm, 4.5 mm, 5.3 mm and 27.3°, respectively. The biometric algorithm had an ICC > 0.80 for the bladder neck position, bladder descent and rectal ampulla descent when compared to manual measurements, indicating high reproducibility. The proposed algorithms required approximately 1.27 s to analyze one image. The manual ground truths were performed by a single expert operator. In addition, due to high operator dependency for TPUS image collection, we would need to pursue further studies with images collected from multiple operators. CONCLUSIONS Based on our search in scientific databases (i.e., Web of Science, IEEE Xplore Digital Library, Elsevier ScienceDirect and PubMed), this is the first reported work of an automated segmentation and biometric measurement system for the mid-sagittal plane of 3D TPUS volumes. The proposed algorithm pipeline can improve the efficiency (1.27 s compared to 15 min manually) and has high reproducibility (high ICC values) compared to manual TPUS analysis for pelvic floor disorder diagnosis. Further studies are needed to verify this system's viability using multiple TPUS operators and multiple experts for performing manual segmentation and extracting biometrics from the images.
Affiliation(s)
- Christopher X Hong
- Department of Obstetrics & Gynaecology, University of Michigan, Ann Arbor, Michigan, USA
- Rachel Y K Cheung
- Department of Obstetrics & Gynaecology, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong
- Eranga Ukwatta
- School of Engineering, University of Guelph, Guelph, Ontario, Canada
- Ahmed Eltahawi
- Cosm Medical, Toronto, Ontario, Canada
- Information System Department, Faculty of Computers and Informatics, Suez Canal University, Ismailia, Egypt
36
Han Trong T, Nguyen Van H, Vu Dang L. High-Performance Method for Brain Tumor Feature Extraction in MRI Using Complex Network. Appl Bionics Biomech 2023; 2023:8843488. [PMID: 37780200 PMCID: PMC10539089 DOI: 10.1155/2023/8843488] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2023] [Revised: 08/12/2023] [Accepted: 08/26/2023] [Indexed: 10/03/2023] Open
Abstract
Objective To localize brain tumors on MRI and distinguish between benign and malignant tumors. Method This work proposes a high-performance method for brain tumor feature extraction using a combination of a complex network and the U-Net architecture; common machine-learning algorithms are then used to discriminate between benign and malignant tumors. Experiments and Results A brain MRI dataset of 230 brain tumor patients, comprising 77 high-grade glioma patients and 153 low-grade glioma patients, was processed. Classification of benign and malignant tumors achieved an accuracy of 99.84%. Conclusion The high accuracy of the experimental results demonstrates that combining a complex network with the U-Net architecture can significantly improve brain tumor classification. This method could potentially aid clinicians in diagnosis and treatment planning for brain tumor patients.
Affiliation(s)
- Thanh Han Trong
- School of Electronics and Telecommunications, Hanoi University of Science and Technology, Hanoi, Vietnam
- Hinh Nguyen Van
- Department of Science and Technology Management and International Cooperation, Posts and Telecommunications Institute of Technology, Hanoi, Vietnam
37
Ning X, Liu R, Wang N, Xiao X, Wu S, Wang Y, Yi C, He Y, Li D, Chen H. Development of a deep learning-based model to diagnose mixed-type gastric cancer accurately. Int J Biochem Cell Biol 2023; 162:106452. [PMID: 37482265 DOI: 10.1016/j.biocel.2023.106452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 07/16/2023] [Accepted: 07/19/2023] [Indexed: 07/25/2023]
Abstract
OBJECTIVE The accurate diagnosis of mixed-type gastric cancer from pathology images presents a formidable challenge for pathologists, given its intricate features and resemblance to other subtypes of gastric cancer. Artificial intelligence has the potential to overcome this hurdle. This study aimed to leverage deep learning techniques to establish a precise and efficient diagnostic approach for this cancer type, one that can also predict metastatic risk, using two software tools, U-Net and QuPath, which have not previously been trialled in gastric cancers. METHODS A U-Net neural network was trained to recognise and segment differentiated components in 186 pathology images of mixed-type gastric cancer. Undifferentiated components in the same images were annotated using the open-source pathology imaging software QuPath. The outputs from U-Net and QuPath were used to calculate the ratios of differentiated to undifferentiated components, which were correlated with lymph node metastasis. RESULTS The models established by U-Net recognised ∼91% of the regions of interest, with precision, recall and F1 values of 90.2%, 90.9% and 94.6%, respectively, indicating a high level of accuracy and reliability. Furthermore, receiver operating characteristic curve analysis showed an area under the curve of 91%, indicating good performance. A bell-curve correlation between the differentiated/undifferentiated ratio and lymphatic metastasis was found, with the highest risk between 0.683 and 1.03. CONCLUSION U-Net and QuPath exhibit promising accuracy in identifying differentiated and undifferentiated components in mixed-type gastric cancer, as well as in predicting metastasis. These findings bring their potential clinical application one step closer.
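The ratio correlated with lymph node metastasis can be computed directly from the two segmentation outputs; an illustrative sketch (not the authors' code; the risk-band bounds are the values reported in the abstract):

```python
import numpy as np

def differentiation_ratio(diff_mask, undiff_mask):
    """Pixel-area ratio of differentiated to undifferentiated components."""
    undiff = undiff_mask.astype(bool).sum()
    return diff_mask.astype(bool).sum() / undiff if undiff else float("inf")

def in_highest_risk_band(ratio, low=0.683, high=1.03):
    """Flag ratios inside the highest-risk band reported in the abstract."""
    return low <= ratio <= high
```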
Affiliation(s)
- Xinjie Ning
- Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518107, China
- Ruide Liu
- Department of Pathology, The First Affiliated Hospital of Gannan Medical University, Ganzhou 341000, China
- Nan Wang
- Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518107, China
- Xuewen Xiao
- Department of Pathology, The First Affiliated Hospital of Gannan Medical University, Ganzhou 341000, China
- Siqi Wu
- Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518107, China
- Yu Wang
- Department of Respiratory Diseases, Central Medical Branch of PLA General Hospital, Beijing 100081, China
- Chenju Yi
- Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518107, China; Shenzhen Key Laboratory of Chinese Medicine Active substance screening and Translational Research, Shenzhen 518107, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Guangzhou 510080, China.
- Yulong He
- Research Center, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518107, China.
- Dan Li
- Department of Pathology, The First Affiliated Hospital of Gannan Medical University, Ganzhou 341000, China.
- Hui Chen
- School of Life Sciences, Faculty of Science, University of Technology Sydney, Ultimo, NSW 2007, Australia
38
Petsiou DP, Martinos A, Spinos D. Applications of Artificial Intelligence in Temporal Bone Imaging: Advances and Future Challenges. Cureus 2023; 15:e44591. [PMID: 37795060 PMCID: PMC10545916 DOI: 10.7759/cureus.44591] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/02/2023] [Indexed: 10/06/2023] Open
Abstract
The applications of artificial intelligence (AI) in temporal bone (TB) imaging have gained significant attention in recent years, revolutionizing the field of otolaryngology and radiology. Accurate interpretation of imaging features of TB conditions plays a crucial role in diagnosing and treating a range of ear-related pathologies, including middle and inner ear diseases, otosclerosis, and vestibular schwannomas. According to multiple clinical studies published in the literature, AI-powered algorithms have demonstrated exceptional proficiency in interpreting imaging findings, not only saving time for physicians but also enhancing diagnostic accuracy by reducing human error. Although several challenges remain in routinely relying on AI applications, the collaboration between AI and healthcare professionals holds the key to better patient outcomes and significantly improved patient care. This overview delivers a comprehensive update on the advances of AI in the field of TB imaging, summarizes recent evidence provided by clinical studies, and discusses future insights and challenges in the widespread integration of AI in clinical practice.
Affiliation(s)
- Dioni-Pinelopi Petsiou
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Anastasios Martinos
- Otolaryngology-Head and Neck Surgery, National and Kapodistrian University of Athens, School of Medicine, Athens, GRC
- Dimitrios Spinos
- Otolaryngology-Head and Neck Surgery, Gloucestershire Hospitals NHS Foundation Trust, Gloucester, GBR
39
Lin YC, Lin G, Pandey S, Yeh CH, Wang JJ, Lin CY, Ho TY, Ko SF, Ng SH. Fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer on MRI using deep learning. Eur Radiol 2023; 33:6548-6556. [PMID: 37338554 PMCID: PMC10415433 DOI: 10.1007/s00330-023-09827-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 03/29/2023] [Accepted: 04/14/2023] [Indexed: 06/21/2023]
Abstract
OBJECTIVES To use a convolutional neural network for fully automated segmentation and radiomics feature extraction of hypopharyngeal cancer (HPC) tumors on MRI. METHODS MR images were collected from 222 HPC patients, of whom 178 were used for training and 44 for testing. U-Net and DeepLab V3+ architectures were used to train the models. Model performance was evaluated using the Dice similarity coefficient (DSC), Jaccard index, and average surface distance. The reliability of the tumor radiomics parameters extracted by the models was assessed using the intraclass correlation coefficient (ICC). RESULTS The tumor volumes predicted by the DeepLab V3+ and U-Net models were highly correlated with those delineated manually (p < 0.001). The DSC of the DeepLab V3+ model was significantly higher than that of the U-Net model (0.77 vs 0.75, p < 0.05), particularly for small tumor volumes of < 10 cm3 (0.74 vs 0.70, p < 0.001). For extraction of first-order radiomics features, both models exhibited high agreement (ICC: 0.71-0.91) with manual delineation. The radiomics extracted by the DeepLab V3+ model had significantly higher ICCs than those extracted by the U-Net model for 7 of 19 first-order features and for 8 of 17 shape-based features (p < 0.05). CONCLUSION Both DeepLab V3+ and U-Net produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images, with DeepLab V3+ performing better than U-Net. CLINICAL RELEVANCE STATEMENT The deep learning model DeepLab V3+ exhibited promising performance in automated tumor segmentation and radiomics extraction for hypopharyngeal cancer on MRI. This approach holds great potential for enhancing the radiotherapy workflow and facilitating prediction of treatment outcomes. KEY POINTS • DeepLab V3+ and U-Net produced reasonable results in automated segmentation and radiomics feature extraction of HPC on MR images. • DeepLab V3+ was more accurate than U-Net in automated segmentation, especially for small tumors. • DeepLab V3+ exhibited higher agreement with manual delineation than U-Net for about half of the first-order and shape-based radiomics features.
Affiliation(s)
- Yu-Chun Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Clinical Metabolomics Core Laboratory, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Sumit Pandey
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Chih-Hua Yeh
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan
- Jiun-Jie Wang
- Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan, Taiwan
- Chien-Yu Lin
- Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou and Chang Gung University, Taoyuan, Taiwan
- Tsung-Ying Ho
- Department of Nuclear Medicine and Molecular Imaging Center, Chang Gung Memorial Hospital and Chang Gung University, Taoyuan, Taiwan
- Sheung-Fat Ko
- Department of Radiology, Kaohsiung Chang Gung Memorial Hospital and Chang Gung University College of Medicine, Kaohsiung, Taiwan
- Shu-Hang Ng
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou, 5 Fuhsing St., Guishan, Taoyuan, 33382, Taiwan.
40
Rich JM, Bhardwaj LN, Shah A, Gangal K, Rapaka MS, Oberai AA, Fields BKK, Matcuk GR, Duddalwar VA. Deep learning image segmentation approaches for malignant bone lesions: a systematic review and meta-analysis. FRONTIERS IN RADIOLOGY 2023; 3:1241651. [PMID: 37614529 PMCID: PMC10442705 DOI: 10.3389/fradi.2023.1241651] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2023] [Accepted: 07/28/2023] [Indexed: 08/25/2023]
Abstract
Introduction Image segmentation is an important process for quantifying characteristics of malignant bone lesions, but this task is challenging and laborious for radiologists. Deep learning has shown promise in automating image segmentation in radiology, including for malignant bone lesions. The purpose of this review is to investigate deep learning-based image segmentation methods for malignant bone lesions on computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography/CT (PET/CT). Method The literature search for deep learning-based image segmentation of malignant bony lesions on CT and MRI was conducted in the PubMed, Embase, Web of Science, and Scopus electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 41 original articles published between February 2017 and March 2023 were included in the review. Results The majority of papers studied MRI, followed by CT, PET/CT, and PET/MRI. There was a relatively even distribution of papers studying primary vs. secondary malignancies, as well as of papers utilizing 3-dimensional vs. 2-dimensional data. Many papers utilized custom-built models that modify or vary U-Net. The most common evaluation metric was the Dice similarity coefficient (DSC). Most models achieved a DSC above 0.6, with medians for all imaging modalities between 0.85 and 0.9. Discussion Deep learning methods show promising ability to segment malignant osseous lesions on CT, MRI, and PET/CT. Commonly applied strategies to improve performance include data augmentation, utilization of large public datasets, preprocessing such as denoising and cropping, and modification of the U-Net architecture. Future directions include overcoming dataset and annotation homogeneity and generalizing for clinical applicability.
Affiliation(s)
- Joseph M. Rich
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Lokesh N. Bhardwaj
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Aman Shah
- Department of Applied Biostatistics and Epidemiology, University of Southern California, Los Angeles, CA, United States
- Krish Gangal
- Bridge UnderGrad Science Summer Research Program, Irvington High School, Fremont, CA, United States
- Mohitha S. Rapaka
- Department of Biology, University of Texas at Austin, Austin, TX, United States
- Assad A. Oberai
- Department of Aerospace and Mechanical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Brandon K. K. Fields
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- George R. Matcuk
- Department of Radiology, Cedars-Sinai Medical Center, Los Angeles, CA, United States
- Vinay A. Duddalwar
- Department of Radiology, Keck School of Medicine of the University of Southern California, Los Angeles, CA, United States
- Department of Radiology, USC Radiomics Laboratory, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
41
Pu J, Leme AS, de Lima e Silva C, Beeche C, Nyunoya T, Königshoff M, Chandra D. Deep-Masker: A Deep Learning-based Tool to Assess Chord Length from Murine Lung Images. Am J Respir Cell Mol Biol 2023; 69:126-134. [PMID: 37236629 PMCID: PMC10399147 DOI: 10.1165/rcmb.2023-0051ma] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2023] [Accepted: 05/22/2023] [Indexed: 05/28/2023] Open
Abstract
Chord length is an indirect measure of alveolar size and a critical endpoint in animal models of chronic obstructive pulmonary disease (COPD). In assessing chord length, the lumens of nonalveolar structures are eliminated from measurement by various methods, including manual masking. However, manual masking is resource intensive and can introduce variability and bias. To facilitate mechanistic and therapeutic discovery in COPD, we created Deep-Masker (available at http://47.93.0.75:8110/login), a fully automated deep learning-based tool that masks murine lung images and assesses chord length. We trained the Deep-Masker algorithm on 1,217 images from 137 mice of 12 strains exposed to room air or cigarette smoke for 6 months and validated it against manual masking. Deep-Masker demonstrated high accuracy, with an average difference in chord length compared with manual masking of -0.3 ± 1.4% (rs = 0.99) for room-air-exposed mice and 0.7 ± 1.9% (rs = 0.99) for cigarette-smoke-exposed mice. The difference between Deep-Masker and manually masked images for the change in chord length due to cigarette smoke exposure was 6.0 ± 9.2% (rs = 0.95). These values exceed published estimates of interobserver variability for manual masking (rs = 0.65) and the accuracy of published algorithms by a significant margin. We validated the performance of Deep-Masker using an independent set of images. Deep-Masker can serve as an accurate, precise, fully automated method to standardize chord length measurement in murine models of lung disease.
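Once nonalveolar lumens have been masked out, chord length reduces to a run-length statistic over the airspace; an illustrative sketch (not Deep-Masker's code), assuming a 2D binary airspace mask and a known pixel size:

```python
import numpy as np

def mean_chord_length(airspace, pixel_size_um=1.0):
    """Mean chord length: average run length of airspace pixels along image rows."""
    chords = []
    for row in airspace.astype(bool):
        padded = np.concatenate(([0], row.astype(np.int8), [0]))  # pad to catch edge runs
        edges = np.flatnonzero(np.diff(padded))                   # run start/end positions
        starts, ends = edges[0::2], edges[1::2]
        chords.extend(ends - starts)
    return float(np.mean(chords)) * pixel_size_um if chords else 0.0
```

A production tool would also sample chords along multiple orientations and exclude runs touching the image border.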
Affiliation(s)
- Jiantao Pu
- Department of Radiology
- Department of Bioengineering
- Adriana S. Leme
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Camilla de Lima e Silva
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Toru Nyunoya
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Melanie Königshoff
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
- Divay Chandra
- Division of Pulmonary, Allergy, and Critical Care Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania
42
Sui G, Zhang Z, Liu S, Chen S, Liu X. Pulmonary nodules segmentation based on domain adaptation. Phys Med Biol 2023; 68:155015. [PMID: 37406634 DOI: 10.1088/1361-6560/ace498] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Accepted: 07/05/2023] [Indexed: 07/07/2023]
Abstract
With the development of deep learning, methods based on transfer learning have advanced medical image segmentation. However, domain shift and the complex background information of medical images limit further improvement in segmentation accuracy. Domain adaptation can compensate for sample shortage by learning important information from a similar source dataset. Therefore, this paper proposes a segmentation method based on adversarial domain adaptation with background masks (ADAB). First, two ADAB networks are built for source and target data segmentation, respectively. Next, to extract the foreground features that serve as input to the discriminators, background masks are generated using a region-growing algorithm. Then, so that the parameters of the target network can be updated without being affected by the conflict between the discriminator's drive to distinguish domains and the adversarial objective of reducing domain shift, a gradient reversal layer is embedded in the ADAB model for the target data. Finally, an enhanced boundary loss is derived to make the target network sensitive to the edges of the region to be segmented. The performance of the proposed method is evaluated on the segmentation of pulmonary nodules in computed tomography images. Experimental results show that the proposed approach holds promise for medical image processing.
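The background masks in ADAB are generated with a region-growing algorithm. A minimal 4-connected region-growing sketch on a 2D intensity grid (the seed point and tolerance are hypothetical parameters, not values from the paper):

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`. Returns a boolean mask."""
    h, w = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    mask = [[False] * w for _ in range(h)]
    mask[seed[0]][seed[1]] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not mask[nr][nc] \
                    and abs(image[nr][nc] - seed_val) <= tol:
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask
```

Inverting such a mask grown from a background seed yields a foreground mask of the kind fed to the discriminators.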
Affiliation(s)
- Guozheng Sui
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Zaixian Zhang
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shunli Liu
- Radiology Department, The Affiliated Hospital of Qingdao University, People's Republic of China
- Shuang Chen
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
- Xuefeng Liu
- College of Automation and Electronic Engineering, Qingdao University of Science and Technology, People's Republic of China
43
Vahedifard F, Ai HA, Supanich MP, Marathu KK, Liu X, Kocak M, Ansari SM, Akyuz M, Adepoju JO, Adler S, Byrd S. Automatic Ventriculomegaly Detection in Fetal Brain MRI: A Step-by-Step Deep Learning Model for Novel 2D-3D Linear Measurements. Diagnostics (Basel) 2023; 13:2355. [PMID: 37510099 PMCID: PMC10378043 DOI: 10.3390/diagnostics13142355] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 07/07/2023] [Accepted: 07/09/2023] [Indexed: 07/30/2023] Open
Abstract
In this study, we developed an automated workflow using a deep learning (DL) model to measure the lateral ventricle linearly in fetal brain MRI; the measurements are subsequently classified as normal or ventriculomegaly, defined as a diameter wider than 10 mm at the level of the thalamus and choroid plexus. To accomplish this, we first trained a U-Net-based deep learning model to segment the fetal brain into seven tissue categories using a public dataset (FeTA 2022) of fetal T2-weighted images. An automatic workflow was then developed to measure the lateral ventricle at the level of the thalamus and choroid plexus. The test dataset included 22 cases of normal and abnormal T2-weighted fetal brain MRIs. Measurements performed by our AI model were compared with manual measurements by a general radiologist and a neuroradiologist. The AI model correctly classified 95% of fetal brain MRI cases as normal or ventriculomegaly and measured the lateral ventricle diameter in 95% of cases with less than 1.7 mm of error. The average difference between measurements was 0.90 mm for AI vs. the general radiologist and 0.82 mm for AI vs. the neuroradiologist, comparable to the 0.51 mm difference between the two radiologists. In addition, the AI model enabled the researchers to create 3D-reconstructed images, which represent real anatomy better than 2D images, and it can provide both the right and left ventricles in a single cut rather than the two required for manual measurement. The measurement differences between the general radiologist and the algorithm (p = 0.9827) and between the neuroradiologist and the algorithm (p = 0.2378) were not statistically significant, whereas the difference between the general radiologist and the neuroradiologist was (p = 0.0043).
To the best of our knowledge, this is the first study that performs 2D linear measurement of ventriculomegaly with a 3D model based on an artificial intelligence approach. The paper presents a step-by-step approach for designing an AI model based on several radiological criteria. Overall, this study showed that AI can automatically calculate the lateral ventricle in fetal brain MRIs and accurately classify them as abnormal or normal.
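The classification rule above reduces to a distance threshold once the two atrial landmarks are located: convert the pixel distance to millimetres and compare it against 10 mm. A sketch under that assumption (landmark coordinates, pixel spacing, and function names are hypothetical, not the study's code):

```python
import math

VENTRICULOMEGALY_THRESHOLD_MM = 10.0  # atrial diameter criterion cited above

def atrial_diameter_mm(p1, p2, pixel_spacing_mm):
    """Euclidean distance between two landmark points (row, col), in millimetres."""
    return math.dist(p1, p2) * pixel_spacing_mm

def classify_ventricle(p1, p2, pixel_spacing_mm):
    """Return ('ventriculomegaly' | 'normal', diameter_mm) for one ventricle."""
    d = atrial_diameter_mm(p1, p2, pixel_spacing_mm)
    label = "ventriculomegaly" if d > VENTRICULOMEGALY_THRESHOLD_MM else "normal"
    return label, d
```

The hard part, which the paper's U-Net handles, is producing the segmentation from which the two landmark points are derived.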
Affiliation(s)
- Farzan Vahedifard
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- H Asher Ai
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mark P Supanich
- Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Kranthi K Marathu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Xuchu Liu
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mehmet Kocak
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Shehbaz M Ansari
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Melih Akyuz
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Jubril O Adepoju
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Seth Adler
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Sharon Byrd
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
44
Chu EP, Liu KC, Hsieh CY, Chang CY, Tsao Y, Chan CT. Multi-Task Learning U-Net for Functional Shoulder Sub-Task Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-5. [PMID: 38083530 DOI: 10.1109/embc40787.2023.10341137] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
The assessment of a frozen shoulder (FS) is critical for evaluating outcomes and medical treatment. Analysis of functional shoulder sub-tasks provides additional crucial information, but current manual labeling methods are time-consuming and prone to errors. To address this challenge, we propose a deep multi-task learning (MTL) U-Net to provide an automatic and reliable functional shoulder sub-task segmentation (STS) tool for clinical evaluation in FS. The proposed approach contains the main task of STS and the auxiliary task of transition point detection (TPD). For the main STS task, a U-Net architecture comprising an encoder-decoder with skip connections performs shoulder sub-task classification at each time point. The auxiliary TPD task uses a lightweight convolutional neural network architecture to detect the boundaries between shoulder sub-tasks. A shared structure is implemented between the two tasks, and their objective functions are optimized jointly. The fine-grained transition-related information from the auxiliary TPD task is expected to help the main STS task better detect boundaries between functional shoulder sub-tasks. We conducted experiments using wearable inertial measurement units to record 815 shoulder task sequences collected from 20 healthy subjects and 43 patients with FS. The experimental results show that the deep MTL U-Net achieves superior performance compared to single-task models, demonstrating the effectiveness of the proposed method for functional shoulder STS. The code is publicly available at https://github.com/RobinChu9890/MTL-U-Net-for-Functional-Shoulder-STS. Clinical Relevance: This work provides an automatic and reliable functional shoulder sub-task segmentation tool for clinical evaluation in frozen shoulder.
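Ground truth for the auxiliary TPD task can be derived directly from framewise sub-task labels: a transition point is any frame whose label differs from its predecessor. A sketch of that derivation (illustrative, not the authors' released code):

```python
def transition_points(labels):
    """Indices i where the sub-task label changes between frames i-1 and i."""
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]

def segments(labels):
    """Convert framewise labels to (start, end_exclusive, label) sub-task segments."""
    if not labels:
        return []
    bounds = [0] + transition_points(labels) + [len(labels)]
    return [(s, e, labels[s]) for s, e in zip(bounds, bounds[1:])]
```

The main STS task predicts the framewise labels; the TPD task predicts the boundary indices, and joint optimization encourages the two to agree.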
45
Li M, Fan D, Dong Y, Li D. Satellite Video Moving Vehicle Detection and Tracking Based on Spatiotemporal Characteristics. SENSORS (BASEL, SWITZERLAND) 2023; 23:5771. [PMID: 37420935 DOI: 10.3390/s23125771] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2023] [Revised: 06/05/2023] [Accepted: 06/19/2023] [Indexed: 07/09/2023]
Abstract
The complex backgrounds of satellite videos and serious interference from noise and pseudo-motion targets make it difficult to detect and track moving vehicles. Recently, researchers have proposed road-based constraints to remove background interference and achieve highly accurate detection and tracking. However, existing methods for constructing road constraints suffer from poor stability, low computational performance, and missed and false detections. In response, this study proposes a method for detecting and tracking moving vehicles in satellite videos based on constraints from spatiotemporal characteristics (DTSTC), fusing road masks from the spatial domain with motion heat maps from the temporal domain. Detection precision is enhanced by increasing the contrast in the constrained area to accurately detect moving vehicles. Vehicle tracking is achieved through inter-frame vehicle association using position and historical movement information. The method was tested at various stages, and the results show that it outperformed traditional methods in constraint construction, correct detection rate, false detection rate, and missed detection rate. The tracking phase performed well in identity retention and tracking accuracy. Therefore, DTSTC is robust for detecting moving vehicles in satellite videos.
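The core fusion step described above, combining a spatial road mask with a temporal motion heat map, amounts to restricting candidate motion pixels to the road region. A minimal sketch (the mask layout and threshold are assumptions, not DTSTC's actual implementation):

```python
def fuse_constraints(road_mask, motion_heat, heat_thresh):
    """Candidate vehicle pixels: on-road AND temporally 'hot'.

    `road_mask` is a 2D boolean grid (spatial domain); `motion_heat` is a 2D
    grid of accumulated motion scores (temporal domain). Pseudo-motion off
    the road is suppressed by the road mask.
    """
    h, w = len(road_mask), len(road_mask[0])
    return [[road_mask[r][c] and motion_heat[r][c] >= heat_thresh
             for c in range(w)] for r in range(h)]
```

Detection then runs only inside this fused constraint region, which is where the reported precision gains come from.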
Affiliation(s)
- Ming Li
- Institute of Geospatial Information, Information Engineering University, 62 Science Avenue, Zhengzhou 450001, China
- Dazhao Fan
- Institute of Geospatial Information, Information Engineering University, 62 Science Avenue, Zhengzhou 450001, China
- Yang Dong
- Institute of Geospatial Information, Information Engineering University, 62 Science Avenue, Zhengzhou 450001, China
- Dongzi Li
- Institute of Geospatial Information, Information Engineering University, 62 Science Avenue, Zhengzhou 450001, China
46
Qureshi A, Lim S, Suh SY, Mutawak B, Chitnis PV, Demer JL, Wei Q. Deep-Learning-Based Segmentation of Extraocular Muscles from Magnetic Resonance Images. Bioengineering (Basel) 2023; 10:699. [PMID: 37370630 PMCID: PMC10295225 DOI: 10.3390/bioengineering10060699] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2023] [Revised: 05/31/2023] [Accepted: 06/02/2023] [Indexed: 06/29/2023] Open
Abstract
In this study, we investigated the performance of four deep learning frameworks of U-Net, U-NeXt, DeepLabV3+, and ConResNet in multi-class pixel-based segmentation of the extraocular muscles (EOMs) from coronal MRI. Performances of the four models were evaluated and compared with the standard F-measure-based metrics of intersection over union (IoU) and Dice, where the U-Net achieved the highest overall IoU and Dice scores of 0.77 and 0.85, respectively. Centroid distance offset between identified and ground truth EOM centroids was measured where U-Net and DeepLabV3+ achieved low offsets (p > 0.05) of 0.33 mm and 0.35 mm, respectively. Our results also demonstrated that segmentation accuracy varies in spatially different image planes. This study systematically compared factors that impact the variability of segmentation and morphometric accuracy of the deep learning models when applied to segmenting EOMs from MRI.
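The IoU and Dice scores reported above are both computed from the overlap between predicted and ground-truth masks, and they are related by Dice = 2·IoU/(1 + IoU). A minimal sketch for flat binary masks (illustrative, not the study's evaluation code):

```python
def iou_and_dice(pred, truth):
    """Intersection over union and Dice coefficient for two flat 0/1 masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0              # both masks empty: agree
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice
```

The study's values are consistent with this relation: an IoU of 0.77 corresponds to a Dice of roughly 2(0.77)/1.77 ≈ 0.87, close to the reported 0.85 (the small gap arises because averaging over images does not commute with the formula).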
Affiliation(s)
- Amad Qureshi
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Seongjin Lim
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Soh Youn Suh
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Bassam Mutawak
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Parag V. Chitnis
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
- Joseph L. Demer
- Department of Ophthalmology, Neurology and Bioengineering, Jules Stein Eye Institute, University of California, Los Angeles, CA 90095, USA
- Qi Wei
- Department of Bioengineering, George Mason University, Fairfax, VA 22030, USA
47
Alam T, Shia WC, Hsu FR, Hassan T. Improving Breast Cancer Detection and Diagnosis through Semantic Segmentation Using the Unet3+ Deep Learning Framework. Biomedicines 2023; 11:1536. [PMID: 37371631 DOI: 10.3390/biomedicines11061536] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2023] [Revised: 05/23/2023] [Accepted: 05/24/2023] [Indexed: 06/29/2023] Open
Abstract
We present an analysis and evaluation of breast cancer detection and diagnosis using segmentation models. We used an advanced semantic segmentation method and a deep convolutional neural network to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon for breast ultrasound images. To improve the segmentation results, we used six models to analyse 309 patients, including 151 benign and 158 malignant tumour images. We compared the Unet3+ architecture with several other models, such as FCN, Unet, SegNet, DeeplabV3+ and pspNet. The Unet3+ model is a state-of-the-art, semantic segmentation architecture that showed optimal performance with an average accuracy of 82.53% and an average intersection over union (IU) of 52.57%. The weighted IU was found to be 89.14% with a global accuracy of 90.99%. The application of these types of segmentation models to the detection and diagnosis of breast cancer provides remarkable results. Our proposed method has the potential to provide a more accurate and objective diagnosis of breast cancer, leading to improved patient outcomes.
Affiliation(s)
- Taukir Alam
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Wei-Chung Shia
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Molecular Medicine Laboratory, Department of Research, Changhua Christian Hospital, Changhua 500, Taiwan
- Fang-Rong Hsu
- Department of Information Engineering and Computer Science, Feng Chia University, Taichung 407, Taiwan
- Taimoor Hassan
- Institute of Translational Medicine and New Drug Development, China Medical University, Taichung 404333, Taiwan
48
Sha X, Wang H, Sha H, Xie L, Zhou Q, Zhang W, Yin Y. Clinical target volume and organs at risk segmentation for rectal cancer radiotherapy using the Flex U-Net network. Front Oncol 2023; 13:1172424. [PMID: 37324028 PMCID: PMC10266488 DOI: 10.3389/fonc.2023.1172424] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 05/05/2023] [Indexed: 06/17/2023] Open
Abstract
Purpose/Objectives The aim of this study was to improve the accuracy of clinical target volume (CTV) and organs-at-risk (OARs) segmentation for preoperative radiotherapy of rectal cancer. Materials/Methods Computed tomography (CT) scans from 265 rectal cancer patients treated at our institution were collected to train and validate automatic contouring models. The CTV and OAR regions delineated by experienced radiologists served as the ground truth. We improved the conventional U-Net and proposed Flex U-Net, which uses a register model to correct the noise caused by manual annotation, thereby refining the performance of the automatic segmentation model. We then compared its performance with that of U-Net and V-Net. The Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) were calculated for quantitative evaluation. With a Wilcoxon signed-rank test, we found that the differences between our method and the baseline were statistically significant (P < 0.05). Results Our proposed framework achieved DSC values of 0.817 ± 0.071, 0.930 ± 0.076, 0.927 ± 0.03, and 0.925 ± 0.03 for the CTV, bladder, left femoral head, and right femoral head, respectively; the corresponding baseline results were 0.803 ± 0.082, 0.917 ± 0.105, 0.923 ± 0.03, and 0.917 ± 0.03. Conclusion Our proposed Flex U-Net enables satisfactory CTV and OAR segmentation for rectal cancer and yields superior performance compared with conventional methods. It provides an automatic, fast, and consistent solution for CTV and OAR segmentation and has the potential to be widely applied in radiation therapy planning for a variety of cancers.
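The HD and ASSD metrics used above compare the contour surfaces of the predicted and ground-truth regions. A brute-force sketch for small 2D point sets (illustrative only; production evaluation code would typically use distance transforms for speed):

```python
import math

def _directed_distances(a, b):
    """For each point of surface a, distance to the nearest point of surface b."""
    return [min(math.dist(p, q) for q in b) for p in a]

def hausdorff(a, b):
    """Symmetric Hausdorff distance: worst-case surface disagreement."""
    return max(max(_directed_distances(a, b)), max(_directed_distances(b, a)))

def assd(a, b):
    """Average symmetric surface distance: mean disagreement in both directions."""
    d = _directed_distances(a, b) + _directed_distances(b, a)
    return sum(d) / len(d)
```

HD is sensitive to a single outlying contour point, while ASSD averages over the whole surface, which is why the two are usually reported together.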
Affiliation(s)
- Xue Sha
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Hui Wang
- Department of Radiation Oncology, Qingdao Central Hospital, Qingdao, Shandong, China
- Hui Sha
- Hunan Cancer Hospital, Xiangya School of Medicine, Central South University, Changsha, Hunan, China
- Lu Xie
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Qichao Zhou
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Wei Zhang
- Manteia Technologies Co., Ltd, Xiamen, Fujian, China
- Yong Yin
- Department of Radiation Oncology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
49
Shinohara T, Murakami K, Matsumura N. Diagnosis Assistance in Colposcopy by Segmenting Acetowhite Epithelium Using U-Net with Images before and after Acetic Acid Solution Application. Diagnostics (Basel) 2023; 13:diagnostics13091596. [PMID: 37174987 PMCID: PMC10178183 DOI: 10.3390/diagnostics13091596] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2023] [Revised: 04/20/2023] [Accepted: 04/27/2023] [Indexed: 05/15/2023] Open
Abstract
Colposcopy is an essential examination for identifying cervical intraepithelial neoplasia (CIN), a precancerous lesion of the uterine cervix, and for sampling its tissue for histological examination. In colposcopy, gynecologists visually identify the lesion, highlighted by applying an acetic acid solution to the cervix, under magnification. This paper proposes a deep learning method to aid the colposcopic diagnosis of CIN by segmenting lesions. To segment the lesion effectively, the colposcopic images taken before acetic acid application were input to the U-Net deep learning network together with the images taken after application. We conducted experiments using 30 actual colposcopic images of acetowhite epithelium, one of the representative findings of CIN. The results confirmed that accuracy, precision, and F1 scores (0.894, 0.837, and 0.834, respectively) were significantly better when images taken both before and after acetic acid application were used than when only post-application images were used (0.882, 0.823, and 0.823, respectively). This indicates that the pre-application image helps the deep learning model segment CIN accurately.
Affiliation(s)
- Toshihiro Shinohara
- Department of Computational Systems Biology, Faculty of Biology-Oriented Science and Technology, Kindai University, Kinokawa 649-6493, Wakayama, Japan
- Kosuke Murakami
- Department of Obstetrics and Gynecology, Faculty of Medicine, Kindai University, Osakasayama 589-8511, Osaka, Japan
- Noriomi Matsumura
- Department of Obstetrics and Gynecology, Faculty of Medicine, Kindai University, Osakasayama 589-8511, Osaka, Japan
50
Feng Q, Liu S, Peng JX, Yan T, Zhu H, Zheng ZJ, Feng HC. Deep learning-based automatic sella turcica segmentation and morphology measurement in X-ray images. BMC Med Imaging 2023; 23:41. [PMID: 36964517 PMCID: PMC10039601 DOI: 10.1186/s12880-023-00998-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 03/14/2023] [Indexed: 03/26/2023] Open
Abstract
BACKGROUND Although the morphological changes of the sella turcica have been drawing increasing attention, acquisition of its linear parameters still relies on manual measurement, which is laborious, time-consuming, and may introduce subjective bias. This paper aims to develop and evaluate a deep learning-based model for automatic segmentation and measurement of the sella turcica in cephalometric radiographs. METHODS 1129 images were used to develop a deep learning-based network for automatic sella turcica segmentation, and an additional 50 images were used to test the generalization ability of the model. The performance of the segmentation network was evaluated by the Dice coefficient. Images in the test datasets were segmented by the trained network, and the segmentation results were saved as binary images. The extremum and corner points were then detected using functions from the OpenCV library to obtain the coordinates of the four landmarks of the sella turcica. Finally, the length, diameter, and depth of the sella turcica were obtained by calculating the distance between two points and the distance from a point to a straight line. Meanwhile, the images were measured manually using Digimizer. Intraclass correlation coefficients (ICCs) and Bland-Altman plots were used to analyze the consistency between automatic and manual measurements to evaluate the reliability of the proposed methodology. RESULTS The Dice coefficient of the segmentation network is 92.84%. For the measurement of the sella turcica, there is excellent agreement between the automatic and manual measurements. In Test1, the ICCs of length, diameter, and depth are 0.954, 0.953, and 0.912, respectively; in Test2, they are 0.906, 0.921, and 0.915, respectively. In addition, Bland-Altman plots showed the excellent reliability of the automated measurement method, with most measurement differences falling within the ± 1.96 SD interval around the mean difference and no apparent bias. CONCLUSIONS Our experimental results indicate that the proposed methodology can segment the sella turcica automatically and efficiently, and reliably predict its length, diameter, and depth. Moreover, the proposed method generalizes well, as shown by its excellent performance on Test2.
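The linear parameters above reduce to two geometric primitives once the four landmarks are located: point-to-point distance (length, diameter) and point-to-line distance (depth). A sketch of both (the landmark roles in the comments are illustrative, not the paper's exact definitions):

```python
import math

def distance(p, q):
    """Point-to-point Euclidean distance, e.g. sellar length or diameter."""
    return math.dist(p, q)

def point_to_line_distance(p, a, b):
    """Perpendicular distance from point p to the infinite line through a and b.

    With a and b as the two upper landmarks and p as the deepest point of the
    sellar floor, this would give a depth-style measurement.
    """
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.dist(a, b)
```

Pixel coordinates would be scaled by the radiograph's calibration factor to obtain millimetres, as in the manual Digimizer measurements.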
Affiliation(s)
- Qi Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Shu Liu
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ju-Xiang Peng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Ting Yan
- Department of Radiology, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong Zhu
- Department of Medical Information, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Zhi-Jun Zheng
- Department of Orthodontics, Guiyang Hospital of Stomatology, Guiyang, 550002, China
- Hong-Chao Feng
- College of Medicine, Guizhou University, Guiyang, 550025, China
- Department of Oral and Maxillofacial Surgery, Guiyang Hospital of Stomatology, Guiyang, 550002, China