1. Zhang C, Piccini D, Demirel OB, Bonanno G, Roy CW, Yaman B, Moeller S, Shenoy C, Stuber M, Akçakaya M. Large-scale 3D non-Cartesian coronary MRI reconstruction using distributed memory-efficient physics-guided deep learning with limited training data. MAGMA 2024;37:429-438. [PMID: 38743377] [DOI: 10.1007/s10334-024-01157-8]
Abstract
OBJECT To enable high-quality physics-guided deep learning (PG-DL) reconstruction of large-scale 3D non-Cartesian coronary MRI by overcoming challenges of hardware limitations and limited training data availability. MATERIALS AND METHODS While PG-DL has emerged as a powerful image reconstruction method, its application to large-scale 3D non-Cartesian MRI is hindered by hardware limitations and limited availability of training data. We combine several recent advances in deep learning and MRI reconstruction to tackle the former challenge, and we further propose a 2.5D reconstruction using 2D convolutional neural networks, which treat 3D volumes as batches of 2D images to train the network with a limited amount of training data. Both 3D and 2.5D variants of the PG-DL networks were compared to conventional methods for high-resolution 3D kooshball coronary MRI. RESULTS Proposed PG-DL reconstructions of 3D non-Cartesian coronary MRI with 3D and 2.5D processing outperformed all conventional methods both quantitatively and qualitatively in terms of image assessment by an experienced cardiologist. The 2.5D variant further improved vessel sharpness compared to 3D processing, and scored higher in terms of qualitative image quality. DISCUSSION PG-DL reconstruction of large-scale 3D non-Cartesian MRI without compromising image size or network complexity is achieved, and the proposed 2.5D processing enables high-quality reconstruction with limited training data.
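As an illustration of the 2.5D idea described above (treating the slices of a 3D volume as a batch of 2D images so that a 2D network can be trained from few volumes), here is a minimal PyTorch sketch; the tensor shape and the toy two-layer network are illustrative assumptions, not the authors' PG-DL architecture:

import torch
import torch.nn as nn

# Hypothetical volume with real/imaginary channels: (2, H, W, D)
volume = torch.randn(2, 256, 256, 64)

# 2.5D: move the depth axis into the batch dimension -> D "images" of shape (2, H, W)
slices = volume.permute(3, 0, 1, 2)              # (64, 2, 256, 256)

# Toy 2D CNN standing in for the learned regularizer
net2d = nn.Sequential(
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 3, padding=1),
)

denoised = net2d(slices).permute(1, 2, 3, 0)     # back to (2, 256, 256, 64)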
Affiliation(s)
- Chi Zhang
- Electrical and Computer Engineering, University of Minnesota, 200 Union Street S.E., Minneapolis, MN, 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Davide Piccini
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Advanced Clinical Imaging Technology, Siemens Healthineers International, Lausanne, Switzerland
- Omer Burak Demirel
- Electrical and Computer Engineering, University of Minnesota, 200 Union Street S.E., Minneapolis, MN, 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Gabriele Bonanno
- Advanced Clinical Imaging Technology, Siemens Healthineers International, Lausanne, Switzerland
- Christopher W Roy
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Burhaneddin Yaman
- Electrical and Computer Engineering, University of Minnesota, 200 Union Street S.E., Minneapolis, MN, 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Chetan Shenoy
- Department of Medicine (Cardiology), University of Minnesota, Minneapolis, MN, 55455, USA
- Matthias Stuber
- Department of Diagnostic and Interventional Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Center for Biomedical Imaging, Lausanne, Switzerland
- Mehmet Akçakaya
- Electrical and Computer Engineering, University of Minnesota, 200 Union Street S.E., Minneapolis, MN, 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
2. Zhao J, Jiang T, Lin Y, Chan LC, Chan PK, Wen C, Chen H. Adaptive Fusion of Deep Learning With Statistical Anatomical Knowledge for Robust Patella Segmentation From CT Images. IEEE J Biomed Health Inform 2024;28:2842-2853. [PMID: 38446653] [DOI: 10.1109/jbhi.2024.3372576]
Abstract
Knee osteoarthritis (KOA), a leading joint disease, can be assessed by examining the shape of the patella to spot potentially abnormal variations. To assist doctors in the diagnosis of KOA, a robust automatic patella segmentation method is in high demand in clinical practice. Deep learning methods, especially convolutional neural networks (CNNs), have been widely applied to medical image segmentation in recent years. Nevertheless, poor image quality and limited data still pose challenges to segmentation via CNNs. On the other hand, statistical shape models (SSMs) can generate shape priors, which give anatomically reliable segmentation across varying instances. Thus, in this work, we propose an adaptive fusion framework that explicitly combines deep neural networks and anatomical knowledge from an SSM for robust patella segmentation. The framework adaptively adjusts the weight of each segmentation candidate during fusion based on its segmentation performance. We also propose a voxel-wise refinement strategy to make the segmentation of CNNs more anatomically correct. Extensive experiments and thorough assessment have been conducted on various mainstream CNN backbones for patella segmentation in low-data regimes, demonstrating that our framework can be flexibly attached to a CNN model, significantly improving its performance when labeled training data are limited and input image data are of poor quality.
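A minimal sketch of the adaptive-fusion idea, weighting each segmentation candidate by an estimate of its quality before combining, is given below; the quality scores, threshold, and random masks are placeholders, not the paper's actual weighting scheme:

import numpy as np

def adaptive_fuse(candidates, quality_scores, threshold=0.5):
    # Weight each binary candidate (e.g., CNN and SSM outputs) by its
    # estimated segmentation performance, then threshold the weighted average.
    w = np.asarray(quality_scores, dtype=float)
    w = w / w.sum()
    prob = sum(wi * c.astype(float) for wi, c in zip(w, candidates))
    return (prob >= threshold).astype(np.uint8)

cnn_mask = np.random.rand(64, 64, 64) > 0.5   # stand-ins for real candidates
ssm_mask = np.random.rand(64, 64, 64) > 0.5
fused = adaptive_fuse([cnn_mask, ssm_mask], quality_scores=[0.8, 0.6])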
3. Chen W, Lim LJR, Lim RQR, Yi Z, Huang J, He J, Yang G, Liu B. Artificial intelligence powered advancements in upper extremity joint MRI: A review. Heliyon 2024;10:e28731. [PMID: 38596104] [PMCID: PMC11002577] [DOI: 10.1016/j.heliyon.2024.e28731]
Abstract
Magnetic resonance imaging (MRI) is an indispensable medical imaging examination technique in musculoskeletal medicine. Modern MRI techniques achieve superior high-quality multiplanar imaging of soft tissue and skeletal pathologies without the harmful effects of ionizing radiation. Current limitations of MRI include long acquisition times, artifacts, and noise. In addition, it is often challenging to distinguish abutting or closely applied soft tissue structures with similar signal characteristics. In the past decade, artificial intelligence (AI) has been widely employed in musculoskeletal MRI to help reduce image acquisition time and improve image quality. Apart from reducing medical costs, AI can assist clinicians in diagnosing diseases more accurately, which helps formulate appropriate treatment plans and ultimately improves patient care. This review article summarizes current research on and applications of AI in musculoskeletal MRI, particularly advances in deep learning (DL) for identifying the structures and lesions of upper extremity joints in MRI images.
Affiliation(s)
- Wei Chen
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Lincoln Jian Rong Lim
- Department of Medical Imaging, Western Health, Footscray Hospital, Victoria, Australia
- Department of Surgery, The University of Melbourne, Victoria, Australia
- Rebecca Qian Ru Lim
- Department of Hand & Reconstructive Microsurgery, Singapore General Hospital, Singapore
- Zhe Yi
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
- Jiaxing Huang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Jia He
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ge Yang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Bo Liu
- Department of Hand Surgery, Beijing Jishuitan Hospital, Capital Medical University, Beijing, China
4. Ye Y, Zhang N, Wu D, Huang B, Cai X, Ruan X, Chen L, Huang K, Li ZP, Wu PM, Jiang J, Dan G, Peng Z. Deep Learning Combined with Radiologist's Intervention Achieves Accurate Segmentation of Hepatocellular Carcinoma in Dual-Phase Magnetic Resonance Images. Biomed Res Int 2024;2024:9267554. [PMID: 38464681] [PMCID: PMC10923620] [DOI: 10.1155/2024/9267554]
Abstract
Purpose Segmentation of hepatocellular carcinoma (HCC) is crucial; however, manual segmentation is subjective and time-consuming. Accurate and automatic lesion contouring for HCC is desirable in clinical practice. In response to this need, our study introduced a segmentation approach for HCC combining deep convolutional neural networks (DCNNs) and radiologist intervention in magnetic resonance imaging (MRI). We sought to design a deep learning method that segments lesions automatically from manually provided location information, and to verify its viability in assisting radiologists with accurate and fast lesion segmentation. Method We developed a semiautomatic approach for segmenting HCC using a DCNN in conjunction with radiologist intervention in dual-phase gadolinium-ethoxybenzyl-diethylenetriamine penta-acetic acid- (Gd-EOB-DTPA-) enhanced MRI. We developed a DCNN and a deep fusion network (DFN) trained on full-size images, namely DCNN-F and DFN-F. Furthermore, the DFN was applied to image blocks containing tumor lesions that were roughly contoured by a radiologist with 10 years of experience in abdominal MRI; this method was named DFN-R. Another radiologist with five years of experience (moderate experience) performed tumor lesion contouring for comparison with our proposed methods. The ground truth was contoured by an experienced radiologist and reviewed by an independent experienced radiologist. Results The mean DSC of DCNN-F, DFN-F, and DFN-R was 0.69 ± 0.20 (median, 0.72), 0.74 ± 0.21 (median, 0.77), and 0.83 ± 0.13 (median, 0.88), respectively. The mean DSC of segmentation by the radiologist with moderate experience was 0.79 ± 0.11 (median, 0.83), lower than the performance of DFN-R. Conclusions Deep learning using dual-phase MRI shows great potential for HCC lesion segmentation. The radiologist-aided semiautomatic method (DFN-R) achieved improved performance compared to manual contouring by the radiologist with moderate experience, although the difference was not statistically significant.
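For reference, the Dice similarity coefficient (DSC) reported above measures voxel overlap between a predicted mask and the ground truth; a minimal NumPy version:

import numpy as np

def dice(pred, truth, eps=1e-8):
    # DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

a = np.zeros((32, 32), bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), bool); b[10:26, 10:26] = True
print(round(dice(a, b), 3))   # 0.766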
Affiliation(s)
- Yufeng Ye
- The First Clinical College of Jinan University, Guangzhou, China
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Naiwen Zhang
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dasheng Wu
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Bingsheng Huang
- Department of Radiology, Panyu Central Hospital, Guangzhou, China
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, Guangdong, China
- Xun Cai
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Xiaolei Ruan
- Jiuquan Satellite Launch Center, Lanzhou, Gansu, China
- Liangliang Chen
- Medical AI Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Kun Huang
- Department of Radiology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Zi-Ping Li
- Department of Radiology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Po-Man Wu
- Medical Physics and Research Department, Hong Kong Sanatorium & Hospital, Hong Kong, China
- Jinzhao Jiang
- Department of Radiology, Shenzhen University General Hospital, Shenzhen, China
- Guo Dan
- School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhenpeng Peng
- Department of Radiology, First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
5. Zheng R, Wen H, Zhu F, Lan W. Attention-guided deep neural network with a multichannel architecture for lung nodule classification. Heliyon 2024;10:e23508. [PMID: 38169878] [PMCID: PMC10758786] [DOI: 10.1016/j.heliyon.2023.e23508]
Abstract
Detecting and accurately identifying malignant lung nodules in chest CT scans in a timely manner is crucial for effective lung cancer treatment. This study introduces a deep learning model featuring a multi-channel attention mechanism, specifically designed for the precise diagnosis of malignant lung nodules. First, we standardized the voxel size of CT images and generated three RGB images of varying scales for each lung nodule, viewed from three different angles. Subsequently, we applied three attention submodels to extract class-specific characteristics from these RGB images. Finally, the nodule features were consolidated in the model's final layer to make the ultimate predictions. Through the use of an attention mechanism, we could dynamically pinpoint the exact location of lung nodules in the images without the need for prior segmentation. This approach enhances the accuracy and efficiency of lung nodule classification. We evaluated and tested our model using a dataset of 1018 CT scans sourced from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The experimental results demonstrate that our model achieved a lung nodule classification accuracy of 90.11%, with an area under the receiver operating characteristic curve (AUC) of 95.66%. Notably, our method achieved this level of performance while using only 29.09% of the time needed by the mainstream model.
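One generic way to let a classifier pinpoint nodule locations without prior segmentation is spatial attention pooling, sketched below in PyTorch; this is a common formulation offered for illustration, not the authors' exact submodel:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    # Score each spatial position of a feature map, softmax the scores into
    # an attention map, and return the attention-weighted average feature.
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feat):                        # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        a = F.softmax(self.score(feat).view(b, 1, h * w), dim=-1)
        a = a.view(b, 1, h, w)                      # attention map sums to 1
        return (feat * a).sum(dim=(2, 3))           # (B, C) descriptor

pooled = AttentionPool(32)(torch.randn(4, 32, 28, 28))   # -> (4, 32)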
Affiliation(s)
- Rong Zheng
- Department of Gynecology, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
- Hongqiao Wen
- School of Information Engineering, Wuhan University of Technology, Wuhan 430070, China
- Feng Zhu
- Department of Cardiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Clinic Center of Human Gene Research, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weishun Lan
- Department of Medical Imaging, Maternal and Child Health Hospital of Hubei Province, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430070, China
6. Mahendrakar P, Kumar D, Patil U. A Comprehensive Review on MRI-based Knee Joint Segmentation and Analysis Techniques. Curr Med Imaging 2024;20:e150523216894. [PMID: 37189281] [DOI: 10.2174/1573405620666230515090557]
Abstract
Using magnetic resonance imaging (MRI) in osteoarthritis pathogenesis research has proven extremely beneficial. However, it is always challenging for both clinicians and researchers to detect morphological changes in knee joints from magnetic resonance (MR) imaging, since the surrounding tissues produce identical signals in MR studies, making it difficult to distinguish between them. Segmenting the knee bone, articular cartilage, and menisci from MR images allows one to examine the complete volume of these structures and to assess certain characteristics quantitatively. However, segmentation is a laborious and time-consuming operation that requires sufficient training to complete correctly. With the advancement of MRI technology and computational methods, researchers have developed several algorithms over the last two decades to automate the segmentation of individual knee bones, articular cartilage, and menisci. This systematic review presents the available fully and semi-automatic segmentation methods for knee bone, cartilage, and meniscus published in different scientific articles. It provides a vivid description of the scientific advancements to clinicians and researchers in this field of image analysis and segmentation, which supports the development of novel automated methods for clinical applications. The review also covers the recently developed fully automated deep learning-based segmentation methods, which not only provide better results than conventional techniques but also open a new field of research in medical imaging.
Affiliation(s)
- Pavan Mahendrakar
- BLDEA's V. P. Dr. P. G. Halakatti College of Engineering and Technology, Vijayapur, Karnataka, India
- Uttam Patil
- Jain College of Engineering, T.S. Nagar, Hunchanhatti Road, Machhe, Belagavi, Karnataka, India
7. Ding AS, Lu A, Li Z, Sahu M, Galaiya D, Siewerdsen JH, Unberath M, Taylor RH, Creighton FX. A Self-Configuring Deep Learning Network for Segmentation of Temporal Bone Anatomy in Cone-Beam CT Imaging. Otolaryngol Head Neck Surg 2023;169:988-998. [PMID: 36883992] [PMCID: PMC11060418] [DOI: 10.1002/ohn.317]
Abstract
OBJECTIVE Preoperative planning for otologic or neurotologic procedures often requires manual segmentation of relevant structures, which can be tedious and time-consuming. Automated methods for segmenting multiple geometrically complex structures can not only streamline preoperative planning but also augment minimally invasive and/or robot-assisted procedures in this space. This study evaluates a state-of-the-art deep learning pipeline for semantic segmentation of temporal bone anatomy. STUDY DESIGN A descriptive study of a segmentation network. SETTING Academic institution. METHODS A total of 15 high-resolution cone-beam temporal bone computed tomography (CT) data sets were included in this study. All images were co-registered, with relevant anatomical structures (eg, ossicles, inner ear, facial nerve, chorda tympani, bony labyrinth) manually segmented. Predicted segmentations from no new U-Net (nnU-Net), an open-source 3-dimensional semantic segmentation neural network, were compared against ground-truth segmentations using modified Hausdorff distances (mHD) and Dice scores. RESULTS Fivefold cross-validation of nnU-Net predictions against ground-truth labels yielded the following: malleus (mHD: 0.044 ± 0.024 mm, Dice: 0.914 ± 0.035), incus (mHD: 0.051 ± 0.027 mm, Dice: 0.916 ± 0.034), stapes (mHD: 0.147 ± 0.113 mm, Dice: 0.560 ± 0.106), bony labyrinth (mHD: 0.038 ± 0.031 mm, Dice: 0.952 ± 0.017), and facial nerve (mHD: 0.139 ± 0.072 mm, Dice: 0.862 ± 0.039). Comparison against atlas-based segmentation propagation showed significantly higher Dice scores for all structures (p < .05). CONCLUSION Using an open-source deep learning pipeline, we demonstrate consistently submillimeter accuracy for semantic CT segmentation of temporal bone anatomy compared to hand-segmented labels. This pipeline has the potential to greatly improve preoperative planning workflows for a variety of otologic and neurotologic procedures and to augment existing image guidance and robot-assisted systems for the temporal bone.
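The modified Hausdorff distance (mHD) used above is commonly taken to be the Dubuisson-Jain variant; a sketch under that assumption, with surfaces given as point sets in millimeter coordinates:

import numpy as np
from scipy.spatial import cKDTree

def modified_hausdorff(a, b):
    # max of the two mean nearest-neighbor distances between point sets
    d_ab = cKDTree(b).query(a)[0].mean()
    d_ba = cKDTree(a).query(b)[0].mean()
    return max(d_ab, d_ba)

a = np.random.rand(200, 3)   # stand-ins for predicted / ground-truth surfaces
b = np.random.rand(180, 3)
print(modified_hausdorff(a, b))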
Affiliation(s)
- Andy S. Ding
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Alexander Lu
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Zhaoshuo Li
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Manish Sahu
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Deepa Galaiya
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Jeffrey H. Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
- Francis X. Creighton
- Department of Otolaryngology–Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
8. Lewis S, Inglis S, Doyle S. The role of anatomical context in soft-tissue multi-organ segmentation of cadaveric non-contrast-enhanced whole body CT. Med Phys 2023;50:5061-5074. [PMID: 36847064] [PMCID: PMC10440264] [DOI: 10.1002/mp.16330]
Abstract
BACKGROUND Cadaveric computed tomography (CT) image segmentation is a difficult task, especially when applied to whole-body image volumes. Traditional algorithms require preprocessing using registration, or highly conserved organ morphologies. These requirements cannot be fulfilled by cadaveric specimens, so deep learning must be used to overcome this limitation. Further, the widespread use of 2D algorithms for volumetric data ignores the role of anatomical context. The use of 3D spatial context for volumetric segmentation of CT scans, as well as the amount of anatomical context required to optimize the segmentation, has not been adequately explored. PURPOSE To determine whether 2D slice-by-slice UNet algorithms or 3D volumetric UNet (VNet) algorithms provide a more effective method for segmenting 3D volumes, and what role anatomical context plays in the segmentation of soft-tissue organs in cadaveric, noncontrast-enhanced (NCE) CT. METHODS We tested five CT segmentation algorithms: 2D UNets with and without 3D data augmentation (3D rotations), as well as VNets with three levels of anatomical context (implemented via image downsampling at 1X, 2X, and 3X). The classifiers were trained to segment the kidneys and liver, and performance was evaluated using 3D Dice coefficients and Hausdorff distances between the segmentation and the ground truth annotation. RESULTS Our results demonstrate that VNet algorithms perform significantly better (p < 0.05) than 2D models. Among the VNet classifiers, those that use some level of image downsampling outperform (as measured by Dice coefficients) the VNet without downsampling. Additionally, the optimal amount of downsampling depends on the target organ. CONCLUSIONS Anatomical context is an important component of soft-tissue, multi-organ segmentation in cadaveric, NCE CT imaging of the whole body. Different amounts of anatomical context are optimal depending on the size, position, and surrounding tissue of the organ.
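The 1X/2X/3X anatomical-context levels above amount to downsampling the volume before the network sees it, so that a fixed-size receptive field covers more anatomy; a sketch (the volume size is an assumption):

import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(240, 240, 240)   # stand-in for a whole-body CT block

def with_context(vol, factor):
    # factor=1 keeps native resolution; 2 and 3 trade detail for context
    return zoom(vol, 1.0 / factor, order=1)

for f in (1, 2, 3):
    print(f, with_context(volume, f).shape)   # (240,...), (120,...), (80,...)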
Affiliation(s)
- Steven Lewis
- Department of Pathology and Anatomical Sciences, University at Buffalo, Buffalo, New York, USA
- Stuart Inglis
- Department of Pathology and Anatomical Sciences, University at Buffalo, Buffalo, New York, USA
- Scott Doyle
- Department of Pathology and Anatomical Sciences, University at Buffalo, Buffalo, New York, USA
9. Mirmojarabian SA, Kajabi AW, Ketola JHJ, Nykänen O, Liimatainen T, Nieminen MT, Nissi MJ, Casula V. Machine Learning Prediction of Collagen Fiber Orientation and Proteoglycan Content From Multiparametric Quantitative MRI in Articular Cartilage. J Magn Reson Imaging 2023;57:1056-1068. [PMID: 35861162] [DOI: 10.1002/jmri.28353]
Abstract
BACKGROUND Machine learning models trained with multiparametric quantitative MRI (qMRI) have the potential to provide valuable information about the structural composition of articular cartilage. PURPOSE To study the performance and feasibility of machine learning models combined with qMRI for noninvasive assessment of collagen fiber orientation and proteoglycan content. STUDY TYPE Retrospective, animal model. ANIMAL MODEL An open-source single-slice MRI dataset obtained from 20 samples of 10 Shetland ponies (seven with surgically induced cartilage lesions followed by treatment and three healthy controls) yielded 1600 data points, with 10% for testing and 90% for training/validation. FIELD STRENGTH/SEQUENCE A 9.4 T MRI scanner; qMRI sequences: T1, T2, adiabatic T1ρ and T2ρ, continuous-wave T1ρ, and relaxation along a fictitious field (TRAFF) maps. ASSESSMENT Five machine learning regression models were developed: random forest (RF), support vector regression (SVR), gradient boosting (GB), multilayer perceptron (MLP), and Gaussian process regression (GPR). Nested cross-validation was used for performance evaluation. For reference, proteoglycan content and collagen fiber orientation were determined by quantitative histology from digital densitometry (DD) and polarized light microscopy (PLM), respectively. STATISTICAL TESTS Normality was tested using the Shapiro-Wilk test, and association between predicted and measured values was evaluated using Spearman's Rho test. A P-value of 0.05 was considered the limit of statistical significance. RESULTS Four of the five models (RF, GB, MLP, and GPR) yielded high accuracy (R2 = 0.68-0.75 for PLM and 0.62-0.66 for DD) and strong, significant correlations between the reference measurements and predicted cartilage matrix properties (Spearman's Rho = 0.72-0.88 for PLM and 0.61-0.83 for DD). The GPR algorithm had the highest accuracy (R2 = 0.75 and 0.66) and lowest prediction error (root mean square error [RMSE] = 1.34 and 2.55) for PLM and DD, respectively. DATA CONCLUSION Multiparametric qMRI in combination with regression models can determine cartilage compositional and structural features, with higher accuracy for collagen fiber orientation than for proteoglycan content. LEVEL OF EVIDENCE 2 TECHNICAL EFFICACY Stage 2.
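Nested cross-validation, an inner loop for model selection inside an outer loop for performance estimation, can be set up with scikit-learn as below; the GPR hyperparameter grid and random features are illustrative, not the study's configuration:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X = np.random.rand(160, 6)    # stand-in: six qMRI parameters per data point
y = np.random.rand(160)       # stand-in: PLM angle or DD optical density

inner = KFold(n_splits=5, shuffle=True, random_state=0)
outer = KFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(GaussianProcessRegressor(),
                      param_grid={"alpha": [1e-10, 1e-3, 1e-1]}, cv=inner)
scores = cross_val_score(search, X, y, cv=outer, scoring="r2")
print(scores.mean())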
Affiliation(s)
- Abdul Wahed Kajabi
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, USA
- Juuso H J Ketola
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Olli Nykänen
- Department of Applied Physics, University of Eastern Finland, Kuopio, Finland
- Timo Liimatainen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Miika T Nieminen
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- Medical Research Center, University of Oulu and Oulu University Hospital, Oulu, Finland
- Mikko J Nissi
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Department of Applied Physics, University of Eastern Finland, Kuopio, Finland
- Victor Casula
- Research Unit of Medical Imaging, Physics and Technology, University of Oulu, Oulu, Finland
- Medical Research Center, University of Oulu and Oulu University Hospital, Oulu, Finland
10. Kim-Wang SY, Bradley PX, Cutcliffe HC, Collins AT, Crook BS, Paranjape CS, Spritzer CE, DeFrate LE. Auto-segmentation of the tibia and femur from knee MR images via deep learning and its application to cartilage strain and recovery. J Biomech 2023;149:111473. [PMID: 36791514] [PMCID: PMC10281551] [DOI: 10.1016/j.jbiomech.2023.111473]
Abstract
The ability to efficiently and reproducibly generate subject-specific 3D models of bone and soft tissue is important to many areas of musculoskeletal research. However, methodologies requiring such models have largely been limited by lengthy manual segmentation times. Recently, machine learning, and more specifically convolutional neural networks, have shown potential to alleviate this bottleneck in research throughput. Thus, the purpose of this work was to develop a modified version of the convolutional neural network architecture U-Net to automate segmentation of the tibia and femur from double echo steady state knee magnetic resonance (MR) images. Our model was trained on a dataset of over 4,000 MR images from 34 subjects, segmented by three experienced researchers and reviewed by a musculoskeletal radiologist. For our validation and testing sets, we achieved Dice coefficients of 0.985 and 0.984, respectively. As further testing, we applied our trained model to a prior study of tibial cartilage strain and recovery. In this analysis, across all subjects, there were no statistically significant differences in cartilage strain between the machine learning and ground truth bone models, with a mean difference of 0.2 ± 0.7% (mean ± 95% confidence interval). This difference is within the measurement resolution of previous cartilage strain studies from our lab using manual segmentation. In summary, we successfully trained, validated, and tested a machine learning model capable of segmenting MR images of the knee, achieving results comparable to those of trained human segmenters.
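Cartilage strain in studies like this is typically the normalized change in cartilage thickness between pre- and post-activity models; a one-line illustration with invented numbers:

# strain = (t_post - t_pre) / t_pre, usually reported as a percentage
t_pre, t_post = 2.40, 2.28   # cartilage thickness in mm (made-up values)
strain = 100.0 * (t_post - t_pre) / t_pre
print(f"{strain:.1f}% strain")   # -5.0% (compression)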
Affiliation(s)
- Sophia Y Kim-Wang
- Duke University School of Medicine, United States
- Department of Biomedical Engineering, Duke University, United States
- Patrick X Bradley
- Department of Mechanical Engineering and Materials Science, Duke University, United States
- Amber T Collins
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Bryan S Crook
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Chinmay S Paranjape
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
- Charles E Spritzer
- Department of Radiology, Duke University School of Medicine, United States
- Louis E DeFrate
- Department of Biomedical Engineering, Duke University, United States
- Department of Mechanical Engineering and Materials Science, Duke University, United States
- Department of Orthopaedic Surgery, Duke University School of Medicine, United States
11. Fei H, Wang Q, Shang F, Xu W, Chen X, Chen Y, Li H. HC-Net: A hybrid convolutional network for non-human primate brain extraction. Front Comput Neurosci 2023;17:1113381. [PMID: 36846727] [PMCID: PMC9947775] [DOI: 10.3389/fncom.2023.1113381]
Abstract
Brain extraction (skull stripping) is an essential step in magnetic resonance imaging (MRI) analysis in the brain sciences. However, most current brain extraction methods that achieve satisfactory results for human brains are often challenged by non-human primate brains. Due to the small sample sizes and thick-slice scanning typical of macaque MRI data, traditional deep convolutional neural networks (DCNNs) are unable to obtain excellent results. To overcome this challenge, this study proposed a symmetrical, end-to-end trainable hybrid convolutional neural network (HC-Net). It makes full use of the spatial information between adjacent slices of the MRI image sequence and combines three consecutive slices from three axes for 3D convolutions, which reduces computational cost and improves accuracy. The HC-Net consists of encoding and decoding structures of 3D convolutions and 2D convolutions in series. The effective combination of 2D and 3D convolutions relieves the underfitting of 2D convolutions to spatial features and the overfitting of 3D convolutions to small samples. Evaluation on macaque brain data from different sites showed that HC-Net performed well in both inference time (approximately 13 s per volume) and accuracy (mean Dice coefficient of 95.46%). The HC-Net model also had good generalization ability and stability across different modes of brain extraction tasks.
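The hybrid idea (a thin 3D convolution over each slice and its two neighbors, followed by 2D processing) can be sketched in PyTorch as follows; shapes and channel counts are illustrative, not HC-Net's actual configuration:

import torch
import torch.nn as nn

mri = torch.randn(1, 1, 60, 256, 256)   # (batch, channel, slices, H, W)

# Depth-3 kernel: each output slice sees itself and its two neighbors only
conv3d_thin = nn.Conv3d(1, 8, kernel_size=3, padding=1)
feat = conv3d_thin(mri)                  # (1, 8, 60, 256, 256)

# Hand off to 2D convolutions by folding slices into the batch dimension
feat2d = feat.permute(0, 2, 1, 3, 4).reshape(-1, 8, 256, 256)
print(feat2d.shape)                      # torch.Size([60, 8, 256, 256])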
Affiliation(s)
- Hong Fei
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Qianshan Wang
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Fangxin Shang
- Intelligent Healthcare Unit, Baidu, Beijing, China
- Wenyi Xu
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Xiaofeng Chen
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Yifei Chen
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
- Haifang Li
- College of Information and Computer, Taiyuan University of Technology, Taiyuan, China
12. Kulseng CPS, Nainamalai V, Grøvik E, Geitung JT, Årøen A, Gjesdal KI. Automatic segmentation of human knee anatomy by a convolutional neural network applying a 3D MRI protocol. BMC Musculoskelet Disord 2023;24:41. [PMID: 36650496] [PMCID: PMC9847207] [DOI: 10.1186/s12891-023-06153-y]
Abstract
BACKGROUND To study deep learning segmentation of knee anatomy with 13 anatomical classes by using a magnetic resonance (MR) protocol of four three-dimensional (3D) pulse sequences, and to evaluate possible clinical usefulness. METHODS The sample comprised 40 healthy right-knee volumes from adult participants. Further, a single recently injured left knee with a known previous ACL reconstruction was included as a test subject. The MR protocol consisted of the following 3D pulse sequences: T1 TSE, PD TSE, PD FS TSE, and Angio GE. The DenseVNet neural network was considered for these experiments. Five input combinations of sequences ((i) T1; (ii) T1 and FS; (iii) PD and FS; (iv) T1, PD, and FS; and (v) T1, PD, FS, and Angio) were trained using the deep learning algorithm. The Dice similarity coefficient (DSC), Jaccard index, and Hausdorff distance were used to compare the performance of the networks. RESULTS Combining all sequences performed significantly better than the alternatives. The following DSCs (± standard deviation) were obtained for the test dataset: bone medulla 0.997 (±0.002), PCL 0.973 (±0.015), ACL 0.964 (±0.022), muscle 0.998 (±0.001), cartilage 0.966 (±0.018), bone cortex 0.980 (±0.010), arteries 0.943 (±0.038), collateral ligaments 0.919 (±0.069), tendons 0.982 (±0.005), meniscus 0.955 (±0.032), adipose tissue 0.998 (±0.001), veins 0.980 (±0.010), and nerves 0.921 (±0.071). The deep learning network correctly identified the anterior cruciate ligament (ACL) tear of the left knee, indicating its potential as a future aid to orthopaedics. CONCLUSIONS The convolutional neural network proves highly capable of correctly labeling all anatomical structures of the knee joint when applied to 3D MR sequences. We have demonstrated that this deep learning model is capable of automated segmentation that can yield 3D models and reveal pathology, both useful for preoperative evaluation.
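Combining several co-registered 3D pulse sequences as network input, as in the best-performing configuration above, amounts to stacking them along a channel axis; a minimal sketch with placeholder volumes:

import numpy as np

t1, pd, fs, angio = (np.random.rand(160, 160, 160) for _ in range(4))

# (channels, H, W, D): one channel per co-registered pulse sequence
x = np.stack([t1, pd, fs, angio], axis=0)
print(x.shape)   # (4, 160, 160, 160)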
Affiliation(s)
- Varatharajan Nainamalai
- Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025, Norway
- Endre Grøvik
- Norwegian University of Science and Technology, Høgskoleringen 5, Trondheim, 7491, Norway
- Møre og Romsdal Hospital Trust, Postboks 1600, Ålesund, 6025, Norway
- Jonn-Terje Geitung
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010, Norway
- Faculty of Medicine, University of Oslo, Klaus Torgårds vei 3, Oslo, 0372, Norway
- Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478, Norway
- Asbjørn Årøen
- Department of Orthopedic Surgery, Institute of Clinical Medicine, Akershus University Hospital, Problemveien 7, Oslo, 0315, Norway
- Oslo Sports Trauma Research Center, Norwegian School of Sport Sciences, Postboks 4014 Ullevål Stadion, Oslo, 0806, Norway
- Kjell-Inge Gjesdal
- Sunnmøre MR-klinikk, Langelandsvegen 15, Ålesund, 6010, Norway
- Norwegian University of Science and Technology, Larsgaardvegen 2, Ålesund, 6025, Norway
- Department of Radiology, Akershus University Hospital, Postboks 1000, Lørenskog, 1478, Norway
13. Bao D, Wang L, Zhou X, Yang S, He K, Xu M. Automated detection and growth tracking of 3D bio-printed organoid clusters using optical coherence tomography with deep convolutional neural networks. Front Bioeng Biotechnol 2023;11:1133090. [PMID: 37122853] [PMCID: PMC10130530] [DOI: 10.3389/fbioe.2023.1133090]
Abstract
Organoids are advancing accurate in vitro prediction of drug efficacy and toxicity, owing to their ability to recapitulate key structural and functional features of organs and parent tumors. However, organoids are self-organized assemblies with a multi-scale structure of 30-800 μm, which makes non-destructive three-dimensional (3D) imaging, tracking, and classification analysis of organoid clusters difficult for traditional microscopy techniques. Here, we devise a 3D imaging, segmentation, and analysis method based on optical coherence tomography (OCT) and deep convolutional neural networks (CNNs) for printed organoid clusters (Organoid Printing and OCT-based analysis, OPO). The results demonstrate that organoid scale influences the segmentation performance of the neural network. The multi-scale information-guided optimized EGO-Net we designed achieved the best results, showing better recognition performance than other neural networks, especially for biologically significant organoids with diameter ≥50 μm. Moreover, OPO reconstructs the multiscale structure of organoid clusters within printed microbeads and calibrates printing errors by segmenting the edges of the printed microbeads. Overall, image-based classification, tracking, and quantitative analysis reveal that organoid growth involves morphological changes such as volume growth, cavity creation, and fusion, and quantitative volume calculation demonstrates that the growth rate of an organoid is associated with its initial scale. The proposed method enables the study of growth, structural evolution, and heterogeneity of organoid clusters, which is valuable for drug screening and organoid-based tumor drug sensitivity testing.
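The volume quantification behind the growth-rate result reduces to counting segmented voxels and scaling by the voxel size; a sketch with an assumed isotropic voxel spacing:

import numpy as np

mask = np.random.rand(128, 128, 128) > 0.7   # stand-in for an OCT segmentation
voxel_mm3 = 0.01 ** 3                        # assumed 10 um isotropic voxels
v0 = mask.sum() * voxel_mm3                  # organoid volume at time t0

v1, dt_days = v0 * 1.3, 2.0                  # invented later time point
growth_rate = (v1 - v0) / dt_days            # mm^3 per day
print(f"{growth_rate:.4f} mm^3/day")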
Affiliation(s)
- Di Bao
- School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Ling Wang
- School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Key Laboratory of Medical Information and 3D Bioprinting of Zhejiang Province, Hangzhou, China
- Xiaofei Zhou
- School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Key Laboratory of Medical Information and 3D Bioprinting of Zhejiang Province, Hangzhou, China
- Shanshan Yang
- School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Key Laboratory of Medical Information and 3D Bioprinting of Zhejiang Province, Hangzhou, China
- Kangxin He
- Key Laboratory of Medical Information and 3D Bioprinting of Zhejiang Province, Hangzhou, China
- Mingen Xu
- School of Automation, Hangzhou Dianzi University, Hangzhou, China
- Key Laboratory of Medical Information and 3D Bioprinting of Zhejiang Province, Hangzhou, China
14. Zhu Y, Wang M, Yin X, Zhang J, Meijering E, Hu J. Deep Learning in Diverse Intelligent Sensor Based Systems. Sensors (Basel) 2022;23:62. [PMID: 36616657] [PMCID: PMC9823653] [DOI: 10.3390/s23010062]
Abstract
Deep learning has become a predominant method for solving data analysis problems in virtually all fields of science and engineering. The increasing complexity and the large volume of data collected by diverse sensor systems have spurred the development of deep learning methods and have fundamentally transformed the way the data are acquired, processed, analyzed, and interpreted. With the rapid development of deep learning technology and its ever-increasing range of successful applications across diverse sensor systems, there is an urgent need to provide a comprehensive investigation of deep learning in this domain from a holistic view. This survey paper aims to contribute to this by systematically investigating deep learning models/methods and their applications across diverse sensor systems. It also provides a comprehensive summary of deep learning implementation tips and links to tutorials, open-source codes, and pretrained models, which can serve as an excellent self-contained reference for deep learning practitioners and those seeking to innovate deep learning in this space. In addition, this paper provides insights into research topics in diverse sensor systems where deep learning has not yet been well-developed, and highlights challenges and future opportunities. This survey serves as a catalyst to accelerate the application and transformation of deep learning in diverse sensor systems.
Affiliation(s)
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Min Wang
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Xuefei Yin
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Jue Zhang
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Jiankun Hu
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
15. Cheng HLM. Emerging MRI techniques for molecular and functional phenotyping of the diseased heart. Front Cardiovasc Med 2022;9:1072828. [PMID: 36545017] [PMCID: PMC9760746] [DOI: 10.3389/fcvm.2022.1072828]
Abstract
Recent advances in cardiac MRI (CMR) capabilities have truly transformed its potential for deep phenotyping of the diseased heart. Long known for its unparalleled soft tissue contrast and excellent depiction of three-dimensional (3D) structure, CMR now boasts a range of unique capabilities for probing disease at the tissue and molecular level. We can look beyond coronary vessel blockages and detect vessel disease not visible on a structural level. We can assess if early fibrotic tissue is being laid down in between viable cardiac muscle cells. We can measure deformation of the heart wall to determine early presentation of stiffening. We can even assess how cardiomyocytes are utilizing energy, where abnormalities are often precursors to overt structural and functional deficits. Finally, with artificial intelligence gaining traction due to the high computing power available today, deep learning has proven itself a viable contender with traditional acceleration techniques for real-time CMR. In this review, we will survey five key emerging MRI techniques that have the potential to transform the CMR clinic and permit early detection and intervention. The emerging areas are: (1) imaging microvascular dysfunction, (2) imaging fibrosis, (3) imaging strain, (4) imaging early metabolic changes, and (5) deep learning for acceleration. Through a concerted effort to develop and translate these areas into the CMR clinic, we are committing ourselves to actualizing early diagnostics for the most intractable heart disease phenotypes.
Affiliation(s)
- Hai-Ling Margaret Cheng
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Institute of Biomedical Engineering, University of Toronto, Toronto, ON, Canada
- Ted Rogers Centre for Heart Research, Translational Biology & Engineering Program, Toronto, ON, Canada
16. Mazher M, Qayyum A, Puig D, Abdel-Nasser M. Effective Approaches to Fetal Brain Segmentation in MRI and Gestational Age Estimation by Utilizing a Multiview Deep Inception Residual Network and Radiomics. Entropy (Basel) 2022;24:1708. [PMID: 36554113] [PMCID: PMC9778347] [DOI: 10.3390/e24121708]
Abstract
To fully comprehend neurodevelopment in healthy and congenitally abnormal fetuses, quantitative analysis of the human fetal brain is essential. This analysis requires automatic multi-tissue fetal brain segmentation techniques. This paper proposes an end-to-end, automatic yet effective multi-tissue fetal brain segmentation model called IRMMNET. It includes an inception residual encoder block (EB) and a dense spatial attention (DSAM) block, which facilitate the extraction of multi-scale, fetal-brain-tissue-relevant information from multi-view MRI images, enhance feature reuse, and substantially reduce the number of parameters of the segmentation model. Additionally, we propose three methods for predicting gestational age (GA): GA prediction using a 3D autoencoder, GA prediction using radiomics features, and GA prediction using the IRMMNET segmentation model's encoder. Our experiments were performed on a dataset of 80 pathological and non-pathological magnetic resonance fetal brain volume reconstructions across a range of gestational ages (20 to 33 weeks) that were manually segmented into seven different tissue categories. The results showed that the proposed fetal brain segmentation model achieved a Dice score of 0.791 ± 0.18, outperforming state-of-the-art methods. The radiomics-based GA prediction method achieved the best results (RMSE: 1.42). We also demonstrated the generalization capabilities of the proposed methods for tasks such as head and neck tumor segmentation and the prediction of patients' survival days.
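The reported GA error is a root-mean-square error (RMSE) over held-out subjects, in weeks; for reference, with invented predictions:

import numpy as np

def rmse(y_true, y_pred):
    # RMSE in the same unit as GA (weeks)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# invented predictions for three fetuses (weeks)
print(rmse([25.0, 30.0, 22.0], [26.1, 28.8, 23.0]))   # ~1.10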
Affiliation(s)
- Moona Mazher
- Departament d'Enginyeria Informatica i Matemátiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain
- Abdul Qayyum
- School of Biomedical Engineering and Imaging Sciences, King's College London, London SE1 9RT, UK
- Domenec Puig
- Departament d'Enginyeria Informatica i Matemátiques, Universitat Rovira i Virgili, 43007 Tarragona, Spain
- Mohamed Abdel-Nasser
- Electronics and Communication Engineering Section, Electrical Engineering Department, Aswan University, Aswan 81528, Egypt
17. Zarychta P. Atlas-Based Segmentation in Extraction of Knee Joint Bone Structures from CT and MR. Sensors (Basel) 2022;22:8960. [PMID: 36433556] [PMCID: PMC9694670] [DOI: 10.3390/s22228960]
Abstract
The main goal of the approach proposed in this study, which is dedicated to the extraction of the bone structures of the knee joint (femoral head, tibia, and patella), was to demonstrate a fully automated method of extracting these structures based on atlas segmentation. To this end, the algorithm first employed automated image matching, followed by normalization of the clinical images and the determination of an 11-element dataset to which all scans in the series were allocated. This allowed the average feature vector for the teaching group to be delineated in the next step, which automated and streamlined the known fuzzy segmentation methods (fuzzy c-means (FCM) and fuzzy connectedness (FC)). These averaged features were then passed to the FCM and FC methods, which were implemented for the testing group and correspondingly for each scan. In this approach, two features are important: the centroids (which become starting points for the fuzzy methods) and the surface area of the extracted bone structure (which protects against over-segmentation). The proposed approach was implemented in MATLAB and tested on 61 clinical CT studies of the lower limb in the transverse plane and 107 T1-weighted MRI studies of the knee joint in the sagittal plane. The atlas-based segmentation combined with the fuzzy methods achieved a Dice index of 85.52-89.48% for the bone structures of the knee joint.
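The atlas's role here is to hand the fuzzy methods their starting centroids; a generic fuzzy c-means update loop seeded that way might look like this in Python (a sketch, not the author's MATLAB implementation):

import numpy as np

def fcm_1d(x, centroids, m=2.0, iters=50, eps=1e-9):
    # Fuzzy c-means on voxel intensities, seeded with atlas-derived centroids
    x = x.reshape(-1, 1).astype(float)
    c = np.asarray(centroids, float).reshape(1, -1)
    for _ in range(iters):
        d = np.abs(x - c) + eps                   # (N, K) distances to centroids
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # membership degrees
        c = (u ** m * x).sum(axis=0, keepdims=True) / (u ** m).sum(axis=0, keepdims=True)
    return u, c.ravel()

u, c = fcm_1d(np.random.rand(10000), centroids=[0.2, 0.5, 0.8])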
Affiliation(s)
- Piotr Zarychta
- Faculty of Biomedical Engineering, Silesian University of Technology, Roosevelta 40 St., 41-800 Zabrze, Poland
18. Breast cancer image analysis using deep learning techniques – a survey. Health Technol 2022. [DOI: 10.1007/s12553-022-00703-5]
19. Chen Q, Xie W, Zhou P, Zheng C, Wu D. Multi-Crop Convolutional Neural Networks for Fast Lung Nodule Segmentation. IEEE Trans Emerg Top Comput Intell 2022. [DOI: 10.1109/tetci.2021.3051910]
Affiliation(s)
- Quan Chen
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wei Xie
- School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan, China
- Pan Zhou
- Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology, China
- Chuansheng Zheng
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Dapeng Wu
- Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, USA
20. Zambrano-Vizuete M, Botto-Tobar M, Huerta-Suárez C, Paredes-Parada W, Patiño Pérez D, Ahanger TA, Gonzalez N. Segmentation of Medical Image Using Novel Dilated Ghost Deep Learning Model. Comput Intell Neurosci 2022;2022:6872045. [PMID: 35990113] [PMCID: PMC9391132] [DOI: 10.1155/2022/6872045]
Abstract
Image segmentation and computer vision are becoming increasingly important in computer-aided design. A computer algorithm can extract image borders, colours, and textures, but doing so consumes substantial resources, and extracting information about distinctive features requires technical knowledge. The proposed model has 13 layers and uses dilated convolution and max-pooling to extract small features. A ghost module removes duplicated features, simplifying the process and reducing complexity. The convolutional neural network (CNN) generates a feature vector map and improves the accuracy of region or bounding-box proposals, allowing convolutional neural networks to segment medical images and recover the initial region of a segmented medical image. The proposed model gives better results than traditional models, achieving an accuracy of 96.05%, a precision of 98.2%, and a recall of 95.78%. The first findings are improved by thickening and categorising the image's pixels, and morphological techniques may be used to refine the segmented medical images. Experiments demonstrate that the recommended segmentation strategy is effective. This study rethinks medical image segmentation methods.
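The two named ingredients are standard building blocks: dilated convolutions enlarge the receptive field without adding parameters, and a ghost module produces part of its output channels with cheap depthwise operations. A generic PyTorch sketch (not the paper's 13-layer network):

import torch
import torch.nn as nn

class DilatedGhostConv(nn.Module):
    # Half the output channels from a dilated dense conv, the other half
    # generated cheaply from them by a depthwise conv, then concatenated.
    def __init__(self, cin, cout, dilation=2):
        super().__init__()
        half = cout // 2
        self.primary = nn.Conv2d(cin, half, 3, padding=dilation, dilation=dilation)
        self.cheap = nn.Conv2d(half, half, 3, padding=1, groups=half)

    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

out = DilatedGhostConv(1, 32)(torch.randn(2, 1, 64, 64))
print(out.shape)   # torch.Size([2, 32, 64, 64])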
Affiliation(s)
- Marcelo Zambrano-Vizuete: Instituto Tecnológico Universitario Rumiñahui, Sangolquí, Ecuador; Universidad Técnica del Norte, Ibarra, Ecuador
- Miguel Botto-Tobar: Eindhoven University of Technology, Eindhoven, Netherlands; Research Group in Artificial Intelligence and Information Technology, University of Guayaquil, Guayaquil, Ecuador
- Darwin Patiño Pérez: Research Group in Artificial Intelligence and Information Technology, University of Guayaquil, Guayaquil, Ecuador
- Tariq Ahamed Ahanger: Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
21
Mei J, Cheng MM, Xu G, Wan LR, Zhang H. SANet: A Slice-Aware Network for Pulmonary Nodule Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:4374-4387. [PMID: 33687839] [DOI: 10.1109/tpami.2021.3065086]
Abstract
Lung cancer is the most common cause of cancer death worldwide. A timely diagnosis of pulmonary nodules makes it possible to detect lung cancer at an early stage, and thoracic computed tomography (CT) provides a convenient way to diagnose nodules. However, it is hard even for experienced doctors to identify nodules among the massive number of CT slices. Currently existing nodule datasets are limited in both scale and category, which greatly restricts their applications. In this paper, we collect the largest and most diverse dataset to date for pulmonary nodule detection, named PN9. Specifically, it contains 8,798 CT scans and 40,439 annotated nodules from 9 common classes. We further propose a slice-aware network (SANet) for pulmonary nodule detection. A slice grouped non-local (SGNL) module is developed to capture long-range dependencies among any positions and any channels of one slice group in the feature map. We also introduce a 3D region proposal network to generate pulmonary nodule candidates with high sensitivity; since this detection stage usually comes with many false positives, a false positive reduction module (FPR) is subsequently proposed using multi-scale feature maps. To verify the performance of SANet and the significance of PN9, we perform extensive experiments comparing against several state-of-the-art 2D CNN-based and 3D CNN-based detection methods. Promising evaluation results on PN9 prove the effectiveness of the proposed SANet. The dataset and source code are available at https://mmcheng.net/SANet/.
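To make the slice-grouping idea concrete, here is a heavily simplified sketch of a non-local block that restricts self-attention to groups of neighbouring slices. It captures positional dependencies within each slice group but omits the channel-attention and other details of the paper's SGNL module; all shapes and names are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceGroupedNonLocal(nn.Module):
    """Simplified slice-grouped non-local block: depth slices are split into
    groups and self-attention is computed within each group only (assumes the
    depth is divisible by `groups`)."""
    def __init__(self, channels, groups=4, reduction=2):
        super().__init__()
        inner = channels // reduction
        self.theta, self.phi, self.g = (nn.Conv3d(channels, inner, 1) for _ in range(3))
        self.out = nn.Conv3d(inner, channels, 1)
        self.groups = groups

    def forward(self, x):
        b, c, d, h, w = x.shape
        gsz = d // self.groups
        t, p, v = self.theta(x), self.phi(x), self.g(x)
        ci = t.shape[1]

        def split(f):  # fold the group dimension into the batch
            return (f.view(b, ci, self.groups, gsz, h, w)
                     .permute(0, 2, 1, 3, 4, 5)
                     .reshape(b * self.groups, ci, gsz * h * w))

        t, p, v = split(t), split(p), split(v)
        attn = F.softmax(t.transpose(1, 2) @ p, dim=-1)   # (bG, N, N)
        y = v @ attn.transpose(1, 2)                      # (bG, ci, N)
        y = (y.view(b, self.groups, ci, gsz, h, w)
              .permute(0, 2, 1, 3, 4, 5)
              .reshape(b, ci, d, h, w))
        return x + self.out(y)                            # residual connection

x = torch.randn(1, 16, 8, 24, 24)
print(SliceGroupedNonLocal(16)(x).shape)  # torch.Size([1, 16, 8, 24, 24])
```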
22
Diabetic retinopathy screening using improved support vector domain description: a clinical study. Soft Comput 2022. [DOI: 10.1007/s00500-022-07387-z]
23
Minamoto Y, Akagi R, Maki S, Shiko Y, Tozawa R, Kimura S, Yamaguchi S, Kawasaki Y, Ohtori S, Sasho T. Automated detection of anterior cruciate ligament tears using a deep convolutional neural network. BMC Musculoskelet Disord 2022; 23:577. [PMID: 35705930] [PMCID: PMC9199233] [DOI: 10.1186/s12891-022-05524-1]
Abstract
Background The development of computer-assisted technologies to diagnose anterior cruciate ligament (ACL) injury by analyzing knee magnetic resonance images (MRI) would be beneficial, and convolutional neural network (CNN)-based deep learning approaches may offer a solution. This study aimed to evaluate the accuracy of a CNN system in diagnosing ACL ruptures from a single slice of a knee MRI and to compare the results with those of experienced human readers. Methods One hundred sagittal MR images from patients with and without ACL injuries, confirmed by arthroscopy, were cropped and used for CNN training. The final decision by the CNN for intact or torn ACL was based on the probability of an ACL tear on a single MRI slice. Twelve board-certified physicians reviewed the same images used by the CNN. Results The sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of the CNN classification were 91.0%, 86.0%, 88.5%, 87.0%, and 91.0%, respectively. The overall values of the physicians' readings were similar, but the specificity was lower than that of the CNN classification for some of the physicians, resulting in lower accuracy for the human readers. Conclusions The trained CNN automatically detected ACL tears with acceptable accuracy, comparable to that of human readers.
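The diagnostic figures quoted above all derive from the four cells of a binary confusion matrix; a small helper makes the definitions explicit (toy labels only, not the study data):

```python
import numpy as np

def diagnostic_metrics(y_true, y_pred):
    """Binary diagnostic metrics (1 = torn ACL, 0 = intact):
    sensitivity, specificity, accuracy, PPV and NPV."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example with made-up labels:
print(diagnostic_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```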
Affiliation(s)
- Yusuke Minamoto: Center for Preventive Medical Sciences, Chiba University, Chiba, Japan; Department of Physical Therapy, Faculty of Health Science, Ryotokuji University, Urayasu, Japan
- Ryuichiro Akagi: Center for Preventive Medical Sciences, Chiba University, Chiba, Japan; Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan; Sportsmedics Center, Chiba University Hospital, Chiba, Japan
- Satoshi Maki: Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan
- Yuki Shiko: Biostatistics Section, Clinical Research Center, Chiba University Hospital, Chiba, Japan
- Ryosuke Tozawa: Center for Preventive Medical Sciences, Chiba University, Chiba, Japan; Department of Physical Therapy, Faculty of Health Science, Ryotokuji University, Urayasu, Japan
- Seiji Kimura: Center for Preventive Medical Sciences, Chiba University, Chiba, Japan; Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan; Sportsmedics Center, Chiba University Hospital, Chiba, Japan
- Satoshi Yamaguchi: Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan; Graduate School of Global and Transdisciplinary Studies, Chiba University, Chiba, Japan
- Yohei Kawasaki: Faculty of Nursing, Japanese Red Cross College of Nursing, Tokyo, Japan
- Seiji Ohtori: Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan; Sportsmedics Center, Chiba University Hospital, Chiba, Japan
- Takahisa Sasho: Center for Preventive Medical Sciences, Chiba University, Chiba, Japan; Department of Orthopaedic Surgery, Graduate School of Medicine, Chiba University, 1-8-1 Inohana, Chuo-ku, Chiba, 260-8670, Japan
24
Chen H, Zhao N, Tan T, Kang Y, Sun C, Xie G, Verdonschot N, Sprengers A. Knee Bone and Cartilage Segmentation Based on a 3D Deep Neural Network Using Adversarial Loss for Prior Shape Constraint. Front Med (Lausanne) 2022; 9:792900. [PMID: 35669917] [PMCID: PMC9163741] [DOI: 10.3389/fmed.2022.792900]
Abstract
Fast and accurate segmentation of knee bone and cartilage on MRI images is becoming increasingly important in the orthopaedic area, as segmentation is an essential prerequisite for patient-specific diagnosis, optimised implant design, and preoperative and intraoperative planning. However, manual segmentation is time-intensive and subject to inter- and intra-observer variations. Hence, in this study, a three-dimensional (3D) deep neural network using adversarial loss was proposed to automatically segment the knee bone in a resampled image volume, in order to enlarge the contextual information and incorporate prior shape constraints. A restoration network was proposed to further improve the bone segmentation accuracy by restoring the segmentation to the original resolution. A conventional U-Net-like network was used to segment the cartilage. The final results combined the bone and cartilage outcomes through post-processing. The quality of the proposed method was thoroughly assessed using various measures on the dataset from the Grand Challenge Segmentation of Knee Images 2010 (SKI10), together with a comparison against a baseline U-Net. A fine-tuned U-Net-like network can achieve state-of-the-art results without any post-processing operations. The method achieved a total score higher than 76 on the SKI10 validation dataset and proved robust in extracting bone and cartilage masks from the MRI dataset, even for pathological cases.
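A common way to realise an "adversarial loss for prior shape constraint" like the one described above is to add a discriminator term to the voxel-wise segmentation loss. The following sketch shows one training step with assumed toy networks and an assumed weighting `lambda_adv`; the discriminator's own update is omitted for brevity, and no claim is made that this matches the authors' architecture:

```python
import torch
import torch.nn as nn

# Hypothetical networks: `segmenter` maps an MR volume to a bone-probability
# map; `discriminator` scores whether a mask looks like a plausible bone shape.
segmenter = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                              nn.Flatten(), nn.LazyLinear(1))

bce = nn.BCEWithLogitsLoss()
seg_loss_fn = nn.BCELoss()
lambda_adv = 0.1  # assumed weighting of the shape-prior term

image = torch.randn(2, 1, 16, 32, 32)                    # toy MR sub-volumes
target = (torch.rand(2, 1, 16, 32, 32) > 0.5).float()    # toy manual masks

pred = segmenter(image)
# Segmentation term: voxel-wise agreement with the manual mask.
loss_seg = seg_loss_fn(pred, target)
# Adversarial term: push predictions the discriminator accepts as "real",
# implicitly imposing a prior on plausible bone shapes.
loss_adv = bce(discriminator(pred), torch.ones(2, 1))
loss = loss_seg + lambda_adv * loss_adv
loss.backward()
```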
Affiliation(s)
- Hao Chen: Department of Biomechanical Engineering, University of Twente, Enschede, Netherlands
- Na Zhao: School of Instrument Science and Engineering, Southeast University, Nanjing, China
- Tao Tan: Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands
- Yan Kang: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Chuanqi Sun: Department of Biomedical Engineering, The Sixth Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Guoxi Xie: Department of Biomedical Engineering, The Sixth Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Nico Verdonschot: Orthopaedic Research Laboratory, Radboud University Medical Center, Nijmegen, Netherlands
- André Sprengers: Department of Biomedical Engineering and Physics, Amsterdam UMC, University of Amsterdam, Amsterdam, Netherlands
25
Mäkelä T, Öman O, Hokkinen L, Wilppu U, Salli E, Savolainen S, Kangasniemi M. Automatic CT Angiography Lesion Segmentation Compared to CT Perfusion in Ischemic Stroke Detection: a Feasibility Study. J Digit Imaging 2022; 35:551-563. [PMID: 35211838] [PMCID: PMC9156593] [DOI: 10.1007/s10278-022-00611-0]
Abstract
In stroke imaging, CT angiography (CTA) is used for detecting arterial occlusions. These images could also provide information on the extent of ischemia. The study aim was to develop and evaluate a convolutional neural network (CNN)-based algorithm for detecting and segmenting acute ischemic lesions from CTA images of patients with suspected middle cerebral artery stroke. The results were compared to volumes reported by the widely used CT perfusion-based RAPID software (IschemaView). A 42-layer-deep CNN was trained on 50 CTA volumes with manually delineated targets. The lower bound on predicted lesion size needed to reliably discern stroke from false positives was estimated. The severity of false positives and false negatives was reviewed visually to assess clinical applicability and to further guide method development. The CNN model corresponded to the manual segmentations with voxel-wise sensitivity 0.54 (95% confidence interval: 0.44-0.63), precision 0.69 (0.60-0.76), and Sørensen-Dice coefficient 0.61 (0.52-0.67). Stroke/nonstroke differentiation accuracy of 0.88 (0.81-0.94) was achieved when considering only the predicted lesion size (i.e., regardless of location). By visual estimation, 46% of cases showed some false findings, such as the CNN highlighting chronic periventricular white matter changes or beam hardening artifacts, but in only 9% were the errors severe, translating to 0.91 accuracy. The CNN model had a moderately strong correlation with RAPID-reported Tmax > 10 s volumes (Pearson's r = 0.76 (0.58-0.86)). The results suggest that detecting anterior circulation ischemic strokes from CTA using a CNN-based algorithm can be feasible when accompanied by physiological knowledge to rule out false positives.
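Intervals such as r = 0.76 (0.58-0.86) for a Pearson correlation are conventionally obtained via the Fisher z-transform; a short sketch with synthetic volumes (not the study data):

```python
import numpy as np
from scipy import stats

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson correlation with a Fisher z-transform confidence interval."""
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                 # Fisher transform of r
    se = 1.0 / np.sqrt(n - 3)         # standard error of z
    zcrit = stats.norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, (lo, hi)

rng = np.random.default_rng(0)
cnn_volumes = rng.gamma(2.0, 20.0, size=50)                 # toy lesion volumes
rapid_volumes = 0.8 * cnn_volumes + rng.normal(0, 15, 50)   # toy RAPID volumes
print(pearson_with_ci(cnn_volumes, rapid_volumes))
```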
Affiliation(s)
- Teemu Mäkelä: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- Olli Öman: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland
- Lasse Hokkinen: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland
- Ulla Wilppu: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland
- Eero Salli: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland
- Sauli Savolainen: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland
- Marko Kangasniemi: HUS Medical Imaging Center, Radiology, University of Helsinki and Helsinki University Hospital, Haartmaninkatu 4, P.O. Box 340, 00290 Helsinki, Finland
26
Ahmad M, Qadri SF, Ashraf MU, Subhi K, Khan S, Zareen SS, Qadri S. Efficient Liver Segmentation from Computed Tomography Images Using Deep Learning. Computational Intelligence and Neuroscience 2022; 2022:2665283. [PMID: 35634046] [PMCID: PMC9132625] [DOI: 10.1155/2022/2665283]
Abstract
Segmentation of the liver in computed tomography (CT) images is an important step toward quantitative biomarkers for computer-aided decision support and precise medical diagnosis. To overcome the difficulties in liver segmentation caused by fuzzy boundaries, a stacked autoencoder (SAE) is applied to learn the most discriminative features of the liver among other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for liver segmentation from CT images using an SAE. Unlike traditional machine learning methods that learn pixel by pixel, our algorithm uses patches to learn representations and identify the liver area. We preprocessed the whole dataset to obtain enhanced images and converted each image into many overlapping patches. These patches are given as input to the SAE for unsupervised feature learning. Finally, the learned features are fine-tuned with the image labels, and classification is performed to develop the probability map in a supervised way. Experimental results demonstrate that the proposed algorithm performs satisfactorily on test images, achieving a 96.47% Dice similarity coefficient (DSC), which is better than that of other methods in the same domain.
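The overlapping-patch representation fed to the SAE can be produced with a few lines of NumPy; the patch and stride sizes below are arbitrary choices for illustration:

```python
import numpy as np

def extract_patches(img, patch=32, stride=16):
    """Collect overlapping 2D patches from a (preprocessed) CT slice,
    flattened into rows of a design matrix for unsupervised feature learning."""
    patches = []
    h, w = img.shape
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            patches.append(img[r:r + patch, c:c + patch].ravel())
    return np.stack(patches)

slice_ct = np.random.rand(512, 512)   # stand-in for an enhanced CT slice
X = extract_patches(slice_ct)
print(X.shape)                        # (961, 1024) with these settings
```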
Affiliation(s)
- Mubashir Ahmad: College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China; Department of Computer Science and IT, The University of Lahore, Sargodha Campus, 40100, Lahore, Pakistan
- Syed Furqan Qadri: College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- M. Usman Ashraf: Department of Computer Science, GC Women University, Sialkot 51310, Pakistan
- Khalid Subhi: Department of Computer Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Salabat Khan: College of Computer Science and Software Engineering, Computer Vision Institute, Shenzhen University, Shenzhen, Guangdong Province 518060, China
- Syeda Shamaila Zareen: Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Salman Qadri: Department of Computer Science, MNS University of Agriculture, Multan 60650, Pakistan
27
Felfeliyan B, Hareendranathan A, Kuntze G, Jaremko JL, Ronsky JL. Improved-Mask R-CNN: Towards an Accurate Generic MSK MRI instance segmentation platform (Data from the Osteoarthritis Initiative). Comput Med Imaging Graph 2022; 97:102056. [DOI: 10.1016/j.compmedimag.2022.102056]
28
Awan MJ, Rahim MSM, Salim N, Rehman A, Garcia-Zapirain B. Automated Knee MR Images Segmentation of Anterior Cruciate Ligament Tears. Sensors 2022; 22:1552. [PMID: 35214451] [PMCID: PMC8876207] [DOI: 10.3390/s22041552]
Abstract
The anterior cruciate ligament (ACL) is one of the main stabilizing structures of the knee. ACL injury increases the risk of osteoarthritis, and ACL rupture is common in the young athletic population. Accurate segmentation at an early stage can improve the analysis and classification of ACL tears. This study automatically segmented ACL tears from magnetic resonance imaging through deep learning. A knee mask was generated on the original magnetic resonance (MR) images to apply a semantic segmentation technique with the convolutional neural network architecture U-Net. The proposed segmentation method achieved accuracy, intersection over union (IoU), Dice similarity coefficient (DSC), precision, recall, and F1-score of 98.4%, 99.0%, 99.4%, 99.6%, 99.6%, and 99.6% on 11,451 training images, and of 97.7%, 93.8%, 96.8%, 96.5%, 97.3%, and 96.9%, respectively, on 3,817 validation images. The Dice loss remained at 0.005 and 0.031 on the training and test datasets, respectively. The experimental results show that ACL segmentation on JPEG MRI images with U-Nets achieves accuracy that outperforms human segmentation. The strategy has promising potential applications in medical image analytics for the segmentation of knee ACL tears in MR images.
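The DSC and Dice-loss figures reported above follow from the Sørensen-Dice definition; a minimal sketch of both the binary coefficient and the soft loss minimised during training:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Sørensen-Dice similarity between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_loss(prob, target, eps=1e-7):
    """Soft Dice loss on predicted probabilities (1 - soft Dice)."""
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

mask_a = np.zeros((64, 64)); mask_a[16:48, 16:48] = 1
mask_b = np.zeros((64, 64)); mask_b[20:52, 20:52] = 1
print(dice_coefficient(mask_a, mask_b))   # ~0.77 for this toy overlap
```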
Affiliation(s)
- Mazhar Javed Awan: Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia; Department of Software Engineering, University of Management and Technology, Lahore 54770, Pakistan
- Mohd Shafry Mohd Rahim: Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Naomie Salim: Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia (UTM), Skudai 81310, Johor, Malaysia
- Amjad Rehman: Artificial Intelligence and Data Analytics Laboratory, College of Computer and Information Sciences (CCIS), Prince Sultan University, Riyadh 11586, Saudi Arabia
29
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022; 18:112-121. [PMID: 34848883] [DOI: 10.1038/s41584-021-00719-7]
Abstract
The 3D nature and soft-tissue contrast of MRI make it an invaluable tool for osteoarthritis research, by facilitating the elucidation of disease pathogenesis and progression. The recent increase in the use of MRI has certainly been stimulated by major advances resulting from considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and a progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Affiliation(s)
- Francesco Calivà: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil: Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
30
Entropy and distance maps-guided segmentation of articular cartilage: data from the Osteoarthritis Initiative. Int J Comput Assist Radiol Surg 2022; 17:553-560. [PMID: 34988758] [DOI: 10.1007/s11548-021-02555-2]
Abstract
PURPOSE Accurate segmentation of articular cartilage from MR images is crucial for quantitative investigation of pathoanatomical conditions such as osteoarthritis (OA). Recently, deep learning-based methods have made significant progress in hard tissue segmentation. However, it remains a challenge to develop accurate methods for automatic segmentation of articular cartilage. METHODS We propose a two-stage method for automatic segmentation of articular cartilage. At the first stage, nnU-Net is employed to get segmentation of both hard tissues and articular cartilage. Based on the initial segmentation, we compute distance maps as well as entropy maps, which encode the uncertainty information about the initial cartilage segmentation. At the second stage, both distance maps and entropy maps are concatenated to the original image. We then crop a sub-volume around the cartilage region based on the initial segmentation, which is used as the input to another nnU-Net for segmentation refinement. RESULTS We designed and conducted comprehensive experiments on segmenting three different types of articular cartilage from two datasets, i.e., an in-house dataset consisting of 25 hip MR images and a publicly available dataset from Osteoarthritis Initiative (OAI). Our method achieved an average Dice similarity coefficient (DSC) of [Formula: see text] for the combined hip cartilage, [Formula: see text] for the femoral cartilage and [Formula: see text] for the tibial cartilage, respectively. CONCLUSION In summary, we developed a new approach for automatic segmentation of articular cartilage from MR images. Comprehensive experiments conducted on segmenting articular cartilage of the knee and hip joints demonstrated the efficacy of the present approach. Our method achieved equivalent or better results than the state-of-the-art methods.
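The two auxiliary inputs of the second stage — an entropy map encoding uncertainty and a distance map derived from the initial segmentation — are straightforward to compute; a sketch with SciPy on toy probabilities (the signed-distance convention below is an assumption, and the paper may define its maps differently):

```python
import numpy as np
from scipy import ndimage

def entropy_map(prob):
    """Voxel-wise entropy of softmax probabilities (n_classes, D, H, W);
    high values flag uncertain regions of the initial segmentation."""
    p = np.clip(prob, 1e-7, 1.0)
    return -(p * np.log(p)).sum(axis=0)

def signed_distance_map(mask):
    """Distance to the boundary of an initial binary segmentation,
    positive outside the structure and negative inside."""
    outside = ndimage.distance_transform_edt(~mask)
    inside = ndimage.distance_transform_edt(mask)
    return outside - inside

# Toy 3-class softmax output over an 8 x 32 x 32 volume
prob = np.random.dirichlet([1, 1, 1], size=(8, 32, 32)).transpose(3, 0, 1, 2)
mask = prob.argmax(axis=0) == 1                     # initial cartilage mask
extra = np.stack([entropy_map(prob), signed_distance_map(mask)])
print(extra.shape)   # (2, 8, 32, 32) -- channels concatenated to the image
```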
31
Song H, Chen L, Cui Y, Li Q, Wang Q, Fan J, Yang J, Zhang L. Denoising of MR and CT images using cascaded multi-supervision convolutional neural networks with progressive training. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2020.10.118]
32
Suganyadevi S, Seethalakshmi V, Balasamy K. A review on deep learning in medical image analysis. International Journal of Multimedia Information Retrieval 2022; 11:19-38. [PMID: 34513553] [PMCID: PMC8417661] [DOI: 10.1007/s13735-021-00218-1]
Abstract
Ongoing improvements in AI, particularly concerning deep learning techniques, are assisting in identifying, classifying, and quantifying patterns in clinical images. Deep learning is the fastest-developing field in artificial intelligence and has lately been applied effectively in many areas, including medicine. A brief outline is given of studies across application areas: neurological, brain, retinal, pulmonary, digital pathology, breast, cardiac, bone, abdominal, and musculoskeletal imaging. For information exploration, knowledge deployment, and knowledge-based prediction, deep learning networks can be successfully applied to big data. In the field of medical image processing methods and analysis, fundamental information and state-of-the-art approaches with deep learning are presented in this paper. The primary goals of this paper are to present research on medical image processing and to define and implement the key guidelines that are identified and addressed.
Affiliation(s)
- S. Suganyadevi: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- V. Seethalakshmi: Department of ECE, KPR Institute of Engineering and Technology, Coimbatore, India
- K. Balasamy: Department of IT, Dr. Mahalingam College of Engineering and Technology, Coimbatore, India
33
Elsayed OM, Ismail SM, Abd El Ghany MA. Register Transfer Level Model For CNN Tumor Detection on FPGA. 2021 International Conference on Microelectronics (ICM) 2021. [DOI: 10.1109/icm52667.2021.9664940]
34
Gao Z, Liu W, McDonough DJ, Zeng N, Lee JE. The Dilemma of Analyzing Physical Activity and Sedentary Behavior with Wrist Accelerometer Data: Challenges and Opportunities. J Clin Med 2021; 10:5951. [PMID: 34945247] [PMCID: PMC8706489] [DOI: 10.3390/jcm10245951]
Abstract
Physical behaviors (e.g., physical activity and sedentary behavior) have been the focus among many researchers in the biomedical and behavioral science fields. The recent shift from hip- to wrist-worn accelerometers in these fields has signaled the need to develop novel approaches to process raw acceleration data of physical activity and sedentary behavior. However, there is currently no consensus regarding the best practices for analyzing wrist-worn accelerometer data to accurately predict individuals' energy expenditure and the times spent in different intensities of free-living physical activity and sedentary behavior. To this end, accurately analyzing and interpreting wrist-worn accelerometer data has become a major challenge facing many clinicians and researchers. In response, this paper attempts to review different methodologies for analyzing wrist-worn accelerometer data and offer cutting edge, yet appropriate analysis plans for wrist-worn accelerometer data in the assessment of physical behavior. In this paper, we first discuss the fundamentals of wrist-worn accelerometer data, followed by various methods of processing these data (e.g., cut points, steps per minute, machine learning), and then we discuss the opportunities, challenges, and directions for future studies in this area of inquiry. This is the most comprehensive review paper to date regarding the analysis and interpretation of free-living physical activity data derived from wrist-worn accelerometers, aiming to help establish a blueprint for processing wrist-derived accelerometer data.
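As a point of reference for the cut-point approach the review contrasts with machine-learning methods, here is a toy epoch classifier; the thresholds below are illustrative placeholders, not validated wrist cut points:

```python
import numpy as np

# Illustrative only: lower band boundaries in milli-g, not validated values.
CUT_POINTS_MG = {"sedentary": 0, "light": 45, "moderate": 100, "vigorous": 430}

def classify_epochs(enmo_mg):
    """Assign each epoch's mean acceleration (ENMO, milli-g) to an intensity
    band using simple cut points."""
    names = list(CUT_POINTS_MG)
    edges = list(CUT_POINTS_MG.values())[1:]   # upper boundaries of lower bands
    return [names[np.searchsorted(edges, v, side="right")] for v in enmo_mg]

epochs = [12.0, 60.0, 250.0, 500.0]   # toy 5-s epoch ENMO values in milli-g
print(classify_epochs(epochs))        # ['sedentary', 'light', 'moderate', 'vigorous']
```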
Affiliation(s)
- Zan Gao: School of Kinesiology, University of Minnesota-Twin Cities, 1900 University Ave. SE, Minneapolis, MN 55455, USA
- Wenxi Liu: Department of Physical Education, Shanghai Jiao Tong University, Shanghai 200240, China
- Daniel J McDonough: Division of Epidemiology and Community Health, School of Public Health, University of Minnesota-Twin Cities, 420 Delaware St. SE, Minneapolis, MN 55455, USA
- Nan Zeng: Prevention Research Center, Department of Pediatrics, University of New Mexico Health Sciences Center, Albuquerque, NM 87131, USA
- Jung Eun Lee: Department of Applied Human Sciences, University of Minnesota-Duluth, 1216 Ordean Court SpHC 109, Duluth, MN 55812, USA
35
Perslev M, Pai A, Runhaar J, Igel C, Dam EB. Cross-Cohort Automatic Knee MRI Segmentation With Multi-Planar U-Nets. J Magn Reson Imaging 2021; 55:1650-1663. [PMID: 34918423] [PMCID: PMC9106804] [DOI: 10.1002/jmri.27978]
Abstract
Background Segmentation of medical image volumes is a time-consuming manual task. Automatic tools are often tailored toward specific patient cohorts, and it is unclear how they behave in other clinical settings. Purpose To evaluate the performance of the open-source Multi-Planar U-Net (MPUnet), the validated Knee Imaging Quantification (KIQ) framework, and a state-of-the-art two-dimensional (2D) U-Net architecture on three clinical cohorts without extensive adaptation of the algorithms. Study Type Retrospective cohort study. Subjects A total of 253 subjects (146 females, 107 males, ages 57 ± 12 years) from three knee osteoarthritis (OA) studies (Center for Clinical and Basic Research [CCBR], Osteoarthritis Initiative [OAI], and Prevention of OA in Overweight Females [PROOF]) with varying demographics and OA severity (64/37/24/53/2 scans of Kellgren and Lawrence [KL] grades 0-4). Field Strength/Sequence 0.18 T, 1.0 T/1.5 T, and 3 T sagittal three-dimensional fast-spin echo T1w and dual-echo steady-state sequences. Assessment All models were fit without tuning to knee magnetic resonance imaging (MRI) scans with manual segmentations from three clinical cohorts. All models were evaluated across KL grades. Statistical Tests Segmentation performance differences as measured by Dice coefficients were tested with paired, two-sided Wilcoxon signed-rank statistics with significance threshold α = 0.05. Results The MPUnet performed superior or equal to KIQ and the 2D U-Net on all compartments across the three cohorts. Mean Dice overlap was significantly higher for MPUnet compared to KIQ and U-Net on CCBR (0.83 ± 0.04 vs. 0.81 ± 0.06 and 0.82 ± 0.05), significantly higher than KIQ and U-Net on OAI (0.86 ± 0.03 vs. 0.84 ± 0.04 and 0.85 ± 0.03), and not significantly different from KIQ while significantly higher than the 2D U-Net on PROOF (0.78 ± 0.07 vs. 0.77 ± 0.07, P = 0.10, and 0.73 ± 0.07). The MPUnet performed significantly better on N = 22 KL grade 3 CCBR scans, with 0.78 ± 0.06 vs. 0.75 ± 0.08 for KIQ and 0.76 ± 0.06 for the 2D U-Net. Data Conclusion The MPUnet matched or exceeded the performance of state-of-the-art knee MRI segmentation models across cohorts of variable sequences and patient demographics. The MPUnet required no manual tuning, making it both accurate and easy to use. Level of Evidence: 3. Technical Efficacy: Stage 2.
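The statistical comparison described above — paired, two-sided Wilcoxon signed-rank tests on per-scan Dice scores — looks like this in SciPy (synthetic scores standing in for the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Toy paired per-scan Dice scores for two models on the same cohort.
dice_model_a = np.clip(rng.normal(0.86, 0.03, 40), 0, 1)
dice_model_b = np.clip(dice_model_a - rng.normal(0.02, 0.02, 40), 0, 1)

# Paired, two-sided Wilcoxon signed-rank test at alpha = 0.05
stat, p = stats.wilcoxon(dice_model_a, dice_model_b, alternative="two-sided")
print(f"W = {stat:.1f}, p = {p:.4g}")
```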
Affiliation(s)
- Mathias Perslev: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Akshay Pai: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; Cerebriu A/S, Copenhagen, Denmark
- Christian Igel: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Erik B Dam: Department of Computer Science, University of Copenhagen, Copenhagen, Denmark; Cerebriu A/S, Copenhagen, Denmark
36
Emergence of Deep Learning in Knee Osteoarthritis Diagnosis. Computational Intelligence and Neuroscience 2021; 2021:4931437. [PMID: 34804143] [PMCID: PMC8598325] [DOI: 10.1155/2021/4931437]
Abstract
Osteoarthritis (OA), especially knee OA, is the most common form of arthritis, causing significant disability in patients worldwide. Manual diagnosis, segmentation, and annotation of knee joints remain the most popular method to diagnose OA in clinical practice, although they are tedious and greatly subject to user variation. Therefore, to overcome the limitations of this commonly used method, numerous deep learning approaches, especially the convolutional neural network (CNN), have been developed to improve clinical workflow efficiency. Medical imaging processes, especially those that produce 3-dimensional (3D) images such as MRI, can reveal hidden structures in a volumetric view. Acknowledging that changes in a knee joint are a 3D complexity, 3D CNNs have been employed to analyse the joint for more accurate diagnosis in recent years. In this review, we provide a broad overview of current 2D and 3D CNN approaches in the OA research field. We reviewed 74 studies related to classification and segmentation of knee osteoarthritis from the Web of Science database and discussed the various state-of-the-art deep learning approaches proposed. We highlight the potential of 3D CNNs in the knee osteoarthritis field and conclude by discussing the challenges faced as well as potential advancements in adopting 3D CNNs in this field.
37
Takata T, Sasaki H, Yamano H, Honma M, Shikano M. Study on Horizon Scanning with a Focus on the Development of AI-Based Medical Products: Citation Network Analysis. Ther Innov Regul Sci 2021; 56:263-275. [PMID: 34811711] [PMCID: PMC8854249] [DOI: 10.1007/s43441-021-00355-z]
Abstract
Horizon scanning identifies innovative technologies that might be applied to medical products and that require new assessment approaches, preparing regulators and thereby allowing patients earlier access to products with an improved benefit/risk ratio. The purpose of this study is to confirm that citation network analysis and text mining of bibliographic information can be used for horizon scanning of the rapidly developing field of AI-based medical technologies and can extract the latest research-trend information from the field. We classified 119,553 publications obtained from SCI with the keywords "conventional," "machine-learning," or "deep-learning" and grouped them into 36 clusters, which demonstrated the academic landscape of AI applications. We also confirmed that one or two closely related clusters included the key articles on AI-based medical image analysis, suggesting that clusters specific to the technology were appropriately formed. Significant research progress could be detected as a quick increase in constituent papers and in the number of citations of hub papers in a cluster. We then tracked recent research trends by re-analyzing "young" clusters based on the average publication year of the constituent papers of each cluster. The latest topics in AI-based medical technologies include electrocardiograms and electroencephalograms (ECG/EEG), human activity recognition, natural language processing of clinical records, and drug discovery. We could detect a rapid increase in research activity on AI-based ECG/EEG a few years prior to the issuance of the draft guidance by the US FDA. Our study shows that citation network analysis and text mining of scientific papers can be useful objective tools for horizon scanning of rapidly developing AI-based medical technologies.
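The core pipeline — build a citation graph, cluster it, and rank hub papers by citations — can be sketched with networkx; the toy edges and the modularity-based clustering below are illustrative stand-ins for the paper's 119,553-publication analysis:

```python
import networkx as nx
from networkx.algorithms import community

# Toy directed citation graph: an edge u -> v means paper u cites paper v.
edges = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"),
         ("p4", "p5"), ("p5", "p6"), ("p4", "p6"), ("p3", "p6")]
G = nx.DiGraph(edges)

# Cluster the undirected citation network by modularity, analogous to
# grouping publications into research-topic clusters.
clusters = community.greedy_modularity_communities(G.to_undirected())
for i, c in enumerate(clusters):
    print(f"cluster {i}: {sorted(c)}")

# Hub papers can be ranked by how often they are cited within the graph.
print(sorted(G.in_degree(), key=lambda kv: -kv[1])[:3])
```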
Affiliation(s)
- Takuya Takata: Faculty of Pharmaceutical Sciences, Tokyo University of Science, Tokyo, Japan
- Hajime Sasaki: Institute for Future Initiatives, The University of Tokyo, Tokyo, Japan
- Hiroko Yamano: Institute for Future Initiatives, The University of Tokyo, Tokyo, Japan
- Masashi Honma: Department of Pharmacy, The University of Tokyo Hospital, Tokyo, Japan
- Mayumi Shikano: Faculty of Pharmaceutical Sciences, Tokyo University of Science, Tokyo, Japan
38
Penarrubia L, Pinon N, Roux E, Dávila Serrano EE, Richard JC, Orkisz M, Sarrut D. Improving motion-mask segmentation in thoracic CT with multiplanar U-nets. Med Phys 2021; 49:420-431. [PMID: 34778978] [DOI: 10.1002/mp.15347]
Abstract
PURPOSE Motion-mask segmentation from thoracic computed tomography (CT) images is the process of extracting the region that encompasses lungs and viscera, where large displacements occur during breathing. It has been shown to help image registration between different respiratory phases. This registration step is, for example, useful for radiotherapy planning or calculating local lung ventilation. Knowing the location of motion discontinuity, that is, sliding motion near the pleura, allows better control of the registration, preventing unrealistic estimates. Nevertheless, existing methods for motion-mask segmentation are not robust enough to be used in clinical routine. This article shows that it is feasible to overcome this lack of robustness by using a lightweight deep-learning approach usable on a standard computer, even without data augmentation or advanced model design. METHODS A convolutional neural-network architecture with three 2D U-nets for the three main orientations (sagittal, coronal, axial) was proposed. Predictions generated by the three U-nets were combined by majority voting to provide a single 3D segmentation of the motion mask. The networks were trained on a database of non-small cell lung cancer 4D CT images of 43 patients. Training and evaluation were done with a K-fold cross-validation strategy. Evaluation was based on a visual grading by two experts according to the appropriateness of the segmented motion mask for the registration task, and on a comparison with motion masks obtained by a baseline method using level sets. A second database (76 CT images of patients with early-stage COVID-19), unseen during training, was used to assess the generalizability of the trained neural network. RESULTS The proposed approach outperformed the baseline method in terms of quality and robustness: the success rate increased from 53% to 79% without producing any failure. It also achieved a speed-up factor of 60 with GPU, or 17 with CPU. The memory footprint was low: less than 5 GB of GPU RAM for training and less than 1 GB of GPU RAM for inference. When evaluated on a dataset with images differing by several characteristics (CT device, pathology, and field of view), the proposed method improved the success rate from 53% to 83%. CONCLUSION With 5-s processing time on a mid-range GPU and success rates around 80%, the proposed approach seems fast and robust enough to be routinely used in clinical practice. The success rate can be further improved by incorporating more diversity in training data via data augmentation and additional annotated images from different scanners and diseases. The code and trained model are publicly available.
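The fusion step is plain voxel-wise majority voting over the three per-orientation predictions, assuming they have been resampled onto a common 3D grid:

```python
import numpy as np

def majority_vote(mask_axial, mask_coronal, mask_sagittal):
    """Fuse three binary mask predictions by voxel-wise majority voting:
    keep voxels predicted by at least two of the three per-orientation U-nets."""
    votes = mask_axial.astype(np.uint8) + mask_coronal + mask_sagittal
    return votes >= 2

rng = np.random.default_rng(0)
masks = [rng.random((16, 32, 32)) > 0.5 for _ in range(3)]  # toy predictions
fused = majority_vote(*masks)
print(fused.shape, fused.dtype)   # (16, 32, 32) bool
```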
Affiliation(s)
- Ludmilla Penarrubia: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Nicolas Pinon: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Emmanuel Roux: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- Jean-Christophe Richard: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France; Service de Réanimation Médicale, Hôpital de la Croix Rousse, Hospices Civils de Lyon, France
- Maciej Orkisz: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
- David Sarrut: Univ Lyon, Université Claude Bernard Lyon 1, INSA-Lyon, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621, Lyon, France
39
Ding Y, Zheng W, Geng J, Qin Z, Choo KKR, Qin Z, Hou X. MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation. IEEE J Biomed Health Inform 2021; 26:1570-1581. [PMID: 34699375] [DOI: 10.1109/jbhi.2021.3122328]
Abstract
Medical practitioners generally rely on multimodal brain images, for example based on information from the axial, coronal, and sagittal views, to inform brain tumor diagnosis. Hence, to further utilize the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter referred to as MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of three key building blocks. First, a multi-view deep neural network architecture, which comprises multiple learning networks for segmenting the brain tumor from different views, with each deep neural network corresponding to multi-modal brain images from one single view. Second, a dynamic decision fusion method, used to fuse the segmentation results from the multiple views into an integrated result; two different fusion methods (voting and weighted averaging) are used to evaluate the fusing process. Third, a multi-view fusion loss (comprising segmentation loss, transition loss, and decision loss) is proposed to facilitate the training of the multi-view learning networks, so as to ensure consistency in appearance and space, both for fusing segmentation results and for training the learning network. We evaluate the performance of MVFusFra on the BRATS 2015 and BRATS 2018 datasets. Findings from the evaluations suggest that fusion results from multiple views achieve better performance than segmentation results from a single view, implying the effectiveness of the proposed multi-view fusion loss. A comparative summary also shows that MVFusFra achieves better segmentation performance, in terms of efficiency, in comparison to other competing approaches.
40
3D cephalometric landmark detection by multiple stage deep reinforcement learning. Sci Rep 2021; 11:17509. [PMID: 34471202] [PMCID: PMC8410904] [DOI: 10.1038/s41598-021-97116-7]
Abstract
The lengthy time needed for manual landmarking has delayed the widespread adoption of three-dimensional (3D) cephalometry. We here propose an automatic 3D cephalometric annotation system based on multi-stage deep reinforcement learning (DRL) and volume-rendered imaging. This system considers geometrical characteristics of landmarks and simulates the sequential decision process underlying human professional landmarking patterns. It consists mainly of constructing an appropriate two-dimensional cutaway or 3D model view, then implementing single-stage DRL with gradient-based boundary estimation or multi-stage DRL to dictate the 3D coordinates of target landmarks. This system clearly shows sufficient detection accuracy and stability for direct clinical applications, with a low level of detection error and low inter-individual variation (1.96 ± 0.78 mm). Our system, moreover, requires no additional steps of segmentation and 3D mesh-object construction for landmark detection. We believe these system features will enable fast-track cephalometric analysis and planning and expect it to achieve greater accuracy as larger CT datasets become available for training and testing.
41
Yang J, Huang X, He Y, Xu J, Yang C, Xu G, Ni B. Reinventing 2D Convolutions for 3D Images. IEEE J Biomed Health Inform 2021; 25:3009-3018. [PMID: 33406047] [DOI: 10.1109/jbhi.2021.3049452]
Abstract
There have been considerable debates over 2D and 3D representation learning on 3D medical images. 2D approaches could benefit from large-scale 2D pretraining, whereas they are generally weak in capturing large 3D contexts. 3D approaches are natively strong in 3D contexts; however, few publicly available 3D medical datasets are large and diverse enough for universal 3D pretraining. Even for hybrid (2D + 3D) approaches, the intrinsic disadvantages within the 2D/3D parts still exist. In this study, we bridge the gap between 2D and 3D convolutions by reinventing the 2D convolutions. We propose ACS (axial-coronal-sagittal) convolutions to perform natively 3D representation learning while utilizing weights pretrained on 2D datasets. In ACS convolutions, 2D convolution kernels are split by channel into three parts and convolved separately on the three views (axial, coronal, and sagittal) of 3D representations. Theoretically, ANY 2D CNN (ResNet, DenseNet, or DeepLab) can be converted into a 3D ACS CNN with pretrained weights of the same parameter size. Extensive experiments validate the consistent superiority of pretrained ACS CNNs over their 2D/3D CNN counterparts with and without pretraining. Even without pretraining, the ACS convolution can be used as a plug-and-play replacement for standard 3D convolution, with a smaller model size and less computation.
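A minimal rendering of the ACS idea — split a 2D kernel bank by output channel into three parts and apply each as a 3D convolution oriented in the axial, coronal, or sagittal plane — might look as follows; this toy layer ignores the pretrained-weight transfer and other details of the published implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ACSConv(nn.Module):
    """Toy ACS convolution: a 2D kernel bank (C_out, C_in, k, k) is split by
    output channel into three parts, each applied as a 3D convolution with
    its k x k kernel lying in one anatomical plane."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Could be loaded from a pretrained 2D model; randomly initialised here.
        self.weight2d = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.splits = [out_ch // 3, out_ch // 3, out_ch - 2 * (out_ch // 3)]
        self.pad = k // 2

    def forward(self, x):  # x: (B, C_in, D, H, W)
        wa, wc, ws = torch.split(self.weight2d, self.splits, dim=0)
        p = self.pad
        ya = F.conv3d(x, wa.unsqueeze(2), padding=(0, p, p))  # axial (H, W)
        yc = F.conv3d(x, wc.unsqueeze(3), padding=(p, 0, p))  # coronal (D, W)
        ys = F.conv3d(x, ws.unsqueeze(4), padding=(p, p, 0))  # sagittal (D, H)
        return torch.cat([ya, yc, ys], dim=1)

x = torch.randn(1, 4, 16, 32, 32)
print(ACSConv(4, 12)(x).shape)   # torch.Size([1, 12, 16, 32, 32])
```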
42
Ro K, Kim JY, Park H, Cho BH, Kim IY, Shim SB, Choi IY, Yoo JC. Deep-learning framework and computer assisted fatty infiltration analysis for the supraspinatus muscle in MRI. Sci Rep 2021; 11:15065. [PMID: 34301978] [PMCID: PMC8302634] [DOI: 10.1038/s41598-021-93026-w]
Abstract
Occupation ratio and fatty infiltration are important parameters for evaluating patients with rotator cuff tears. We analyzed the occupation ratio of the supraspinatus muscle using a deep-learning framework and quantified its fatty infiltration using an automated region-based Otsu thresholding technique. The mean Dice similarity coefficient, accuracy, sensitivity, specificity, and relative area difference for the segmented lesion, measuring the agreement between the clinician assessment and the deep neural network, were 0.97, 99.84, 96.89, 99.92, and 0.07, respectively, for the supraspinatus fossa, and 0.94, 99.89, 93.34, 99.95, and 2.03, respectively, for the supraspinatus muscle. The fatty infiltration measure obtained with the Otsu thresholding method differed significantly among the Goutallier grades (Grade 0: 0.06; Grade 1: 4.68; Grade 2: 20.10; Grade 3: 42.86; Grade 4: 55.79; p < 0.0001). The occupation ratio and fatty infiltration showed a moderate negative correlation (ρ = -0.75, p < 0.0001). The study included 240 randomly selected patients who underwent shoulder magnetic resonance imaging (MRI) from January 2015 to December 2016. A fully convolutional deep-learning algorithm quantitatively detected the fossa and muscle regions, from which the occupation ratio of the supraspinatus muscle was measured, and fatty infiltration was objectively evaluated using the Otsu thresholding method. The proposed convolutional neural network exhibited fast and accurate segmentation of the supraspinatus muscle and fossa from shoulder MRI, allowing automatic calculation of the occupation ratio. Quantitative evaluation using a modified Otsu thresholding method can be used to calculate the proportion of fatty infiltration in the supraspinatus muscle. We expect this to improve the efficiency and objectivity of diagnosis by quantifying the index used for shoulder MRI.
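The region-based Otsu step can be reproduced in a few lines with scikit-image: the threshold is computed from intensities inside the muscle mask only, and the fat fraction is the proportion of muscle pixels above it. This is a simplified reading of the method, shown on synthetic data:

```python
import numpy as np
from skimage.filters import threshold_otsu

def fatty_infiltration_ratio(t1_slice, muscle_mask):
    """Region-based Otsu: threshold the intensities inside the segmented
    muscle only, then report the fraction of muscle pixels above it
    (bright pixels ~ fat on T1-weighted images)."""
    vals = t1_slice[muscle_mask]
    t = threshold_otsu(vals)
    return float((vals > t).mean())

rng = np.random.default_rng(0)
img = rng.normal(100, 10, (128, 128))            # toy T1-weighted slice
mask = np.zeros((128, 128), bool); mask[40:90, 30:100] = True
img[50:60, 40:60] += 80                          # bright "fatty" streak
print(f"fat fraction: {fatty_infiltration_ratio(img, mask):.3f}")
```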
Affiliation(s)
- Kyunghan Ro: Gangnambon Research Institute, Gangnambon Orthopaedic Clinic, Seoul, Republic of Korea
- Joo Young Kim: Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Heeseol Park: Department of Orthopaedic Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- Baek Hwan Cho: Medical AI Research Center, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea; Department of Medical Device Management and Research, SAIHST, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
- In Young Kim: Department of Biomedical Engineering, Hanyang University, Seoul, Republic of Korea
- Seung Bo Shim: Department of Orthopaedic Surgery, Yonsei Thebaro Hospital, Seoul, Republic of Korea
- In Young Choi: Department of Radiology, Korea University Ansan Hospital, Korea University, Ansan-si, Gyeonggi-do, Republic of Korea
- Jae Chul Yoo: Department of Orthopaedic Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
43
Wirth W, Eckstein F, Kemnitz J, Baumgartner CF, Konukoglu E, Fuerst D, Chaudhari AS. Accuracy and longitudinal reproducibility of quantitative femorotibial cartilage measures derived from automated U-Net-based segmentation of two different MRI contrasts: data from the osteoarthritis initiative healthy reference cohort. MAGMA (New York, N.Y.) 2021; 34:337-354. [PMID: 33025284] [PMCID: PMC8154803] [DOI: 10.1007/s10334-020-00889-7]
Abstract
OBJECTIVE To evaluate the agreement, accuracy, and longitudinal reproducibility of quantitative cartilage morphometry from 2D U-Net-based automated segmentations for 3T coronal fast low angle shot (corFLASH) and sagittal double echo at steady-state (sagDESS) MRI. METHODS 2D U-Nets were trained using manual, quality-controlled femorotibial cartilage segmentations available for 92 Osteoarthritis Initiative healthy reference cohort participants from both corFLASH and sagDESS (n = 50/21/21 training/validation/test-set). Cartilage morphometry was computed from automated and manual segmentations for knees from the test-set. Agreement and accuracy were evaluated from baseline visits (dice similarity coefficient: DSC, correlation analysis, systematic offset). The longitudinal reproducibility was assessed from year-1 and -2 follow-up visits (root-mean-squared coefficient of variation, RMSCV%). RESULTS Automated segmentations showed high agreement (DSC 0.89-0.92) and high correlations (r ≥ 0.92) with manual ground truth for both corFLASH and sagDESS and only small systematic offsets (≤ 10.1%). The automated measurements showed a similar test-retest reproducibility over 1 year (RMSCV% 1.0-4.5%) as manual measurements (RMSCV% 0.5-2.5%). DISCUSSION The 2D U-Net-based automated segmentation method yielded high agreement compared with manual segmentation and also demonstrated high accuracy and longitudinal test-retest reproducibility for morphometric analysis of articular cartilage derived from it, using both corFLASH and sagDESS.
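The longitudinal metric used here, RMSCV%, aggregates per-subject coefficients of variation across test-retest pairs; a small sketch with synthetic thickness values:

```python
import numpy as np

def rmscv_percent(year1, year2):
    """Root-mean-square coefficient of variation (%) for test-retest pairs:
    per subject, CV = SD / mean over the two visits; RMSCV% aggregates these."""
    pairs = np.stack([year1, year2], axis=1)
    cv = pairs.std(axis=1, ddof=1) / pairs.mean(axis=1)
    return 100.0 * np.sqrt(np.mean(cv ** 2))

rng = np.random.default_rng(0)
thickness_y1 = rng.normal(2.4, 0.3, 21)                 # toy cartilage thickness (mm)
thickness_y2 = thickness_y1 + rng.normal(0, 0.04, 21)   # small test-retest change
print(f"RMSCV = {rmscv_percent(thickness_y1, thickness_y2):.2f}%")
```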
Affiliation(s)
- Wolfgang Wirth: Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria; Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria; Chondrometrics GmbH, Ainring, Germany
- Felix Eckstein: Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria; Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria; Chondrometrics GmbH, Ainring, Germany
- Jana Kemnitz: Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria
- David Fuerst: Department of Imaging and Functional Musculoskeletal Research, Institute of Anatomy and Cell Biology, Paracelsus Medical University Salzburg and Nuremberg, Strubergasse 21, 5020, Salzburg, Austria; Ludwig Boltzmann Institute for Arthritis and Rehabilitation, Paracelsus Medical University, Salzburg, Austria; Chondrometrics GmbH, Ainring, Germany
44
Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06064-w]
45
Automatic segmentation of gross target volume of nasopharynx cancer using ensemble of multiscale deep neural networks with spatial attention. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.06.146]
|
46
|
Auf der Mauer M, Jopp-van Well E, Herrmann J, Groth M, Morlock MM, Maas R, Säring D. Automated age estimation of young individuals based on 3D knee MRI using deep learning. Int J Legal Med 2021; 135:649-663. [PMID: 33331995 PMCID: PMC7870623 DOI: 10.1007/s00414-020-02465-z] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2020] [Accepted: 11/09/2020] [Indexed: 01/05/2023]
Abstract
Age estimation is a crucial element of forensic medicine for assessing the chronological age of living individuals who lack valid legal documentation. Methods used in practice are labor-intensive, subjective, and frequently involve radiation exposure. Recently, non-invasive methods using magnetic resonance imaging (MRI) have been evaluated and have confirmed a correlation between growth-plate ossification in long bones and the chronological age of young subjects. However, automated, user-independent approaches are required to perform reliable assessments on large datasets. The aim of this study was to develop a fully automated, computer-based method for age estimation based on 3D knee MRIs using machine learning. The proposed solution consists of three parts: image preprocessing, bone segmentation, and age estimation. A total of 185 coronal and 404 sagittal MR volumes from Caucasian male subjects in the age range of 13 to 21 years were available. The best result of the fivefold cross-validation was a mean absolute error of 0.67 ± 0.49 years in age regression, and an accuracy of 90.9%, a sensitivity of 88.6%, and a specificity of 94.2% in classification (18-year age limit), using a combination of convolutional neural networks and tree-based machine learning algorithms. The potential of deep learning for age estimation is reflected in these results and can be further improved by training on even larger and more diverse datasets.
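A CNN combined with tree-based learners can be approached by extracting CNN features from the segmented bones and feeding them to a random forest. The sketch below is an assumed pipeline for illustration, not the published implementation; cnn_features and ages are placeholders:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # Placeholder inputs: in the assumed pipeline, cnn_features would be
    # embeddings extracted from the segmented knee bones by a CNN backbone.
    rng = np.random.default_rng(0)
    cnn_features = rng.normal(size=(185, 128))  # one row per MR volume
    ages = rng.uniform(13, 21, size=185)        # chronological age in years

    # Age regression: mean absolute error over 5 folds
    reg = RandomForestRegressor(n_estimators=500, random_state=0)
    mae = -cross_val_score(reg, cnn_features, ages, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"5-fold MAE: {mae:.2f} years")

    # Binary classification at the legally relevant 18-year threshold
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    acc = cross_val_score(clf, cnn_features, ages >= 18, cv=5).mean()
    print(f"5-fold accuracy (18-year limit): {acc:.3f}")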
Affiliation(s)
- Markus Auf der Mauer
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
| | - Eilin Jopp-van Well
- Department of Legal Medicine, University Medical Center Hamburg-Eppendorf (UKE), Butenfeld 34, 22529 Hamburg, Germany
| | - Jochen Herrmann
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
| | - Michael Groth
- Section of Pediatric Radiology, Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf (UKE), Martinistr. 52, 20246 Hamburg, Germany
| | - Michael M. Morlock
- Institute of Biomechanics M3, Hamburg University of Technology (TUHH), Denickestraße 15, 21073 Hamburg, Germany
| | - Rainer Maas
- Radiologie Raboisen 38, Raboisen 38, 20095 Hamburg, Germany
| | - Dennis Säring
- Medical and Industrial Image Processing, University of Applied Sciences of Wedel, Feldstraße 143, 22880 Wedel, Germany
| |
|
47
|
Chandrasekaran AC, Fu Z, Kraniski R, Wilson FP, Teaw S, Cheng M, Wang A, Ren S, Omar IM, Hinchcliff ME. Computer vision applied to dual-energy computed tomography images for precise calcinosis cutis quantification in patients with systemic sclerosis. Arthritis Res Ther 2021; 23:6. [PMID: 33407814 PMCID: PMC7788847 DOI: 10.1186/s13075-020-02392-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Accepted: 12/09/2020] [Indexed: 01/12/2023] Open
Abstract
Background Although treatments have been proposed for calcinosis cutis (CC) in patients with systemic sclerosis (SSc), a standardized and validated method for quantifying CC burden is necessary to enable valid clinical trials. We tested the hypothesis that computer vision applied to dual-energy computed tomography (DECT) finger images is a useful approach for precise and accurate CC quantification in SSc patients. Methods De-identified 2-dimensional (2D) DECT images from SSc patients with clinically evident lesser-finger CC lesions were obtained. An expert musculoskeletal radiologist confirmed accurate manual segmentation (subtraction) of the phalanges for each image as a gold standard, and a U-Net convolutional neural network (CNN) computer vision model for segmentation of healthy phalanges was developed and tested. A validation study was performed in an independent dataset: two independent radiologists manually measured the longest length and perpendicular short axis of each lesion and calculated an estimated area by assuming the lesion was elliptical, using the formula (long axis/2) × (short axis/2) × π, while a computer scientist used a region-growing technique to calculate the area of the CC lesions. Spearman's correlation coefficient, Lin's concordance correlation coefficient with 95% confidence intervals (CI), and Bland-Altman plots (Stata V 15.1, College Station, TX) were used to test for equivalence between the radiologists' and the CNN algorithm-generated area estimates. Results Forty de-identified 2D DECT images from SSc patients with clinically evident finger CC lesions were obtained and divided into training (N = 30, expanded to N = 120 by three image rotations) and test sets (N = 10). In the training set, five hundred epochs (iterations) were required to train the CNN algorithm to segment phalanges from adjacent CC, and segmentation accuracy was evaluated on the ten held-out images. To test model performance, CC lesional area estimates calculated by two independent radiologists and a computer scientist were compared (radiologist 1 vs. radiologist 2, and radiologist 1 vs. the computer vision approach) using an independent test dataset of 31 images (8 index fingers and 23 other fingers). For the radiologist vs. radiologist and radiologist vs. computer vision measurements, respectively, Spearman's rho was 0.91 and 0.94 (both p < 0.0001); Lin's concordance correlation coefficient was 0.91 (95% CI 0.85-0.98, p < 0.001) and 0.95 (95% CI 0.91-0.99, p < 0.001); and Bland-Altman plots demonstrated a mean difference of −0.5 mm² (95% limits of agreement −10.0 to 9.0 mm²) and 1.7 mm² (95% limits of agreement −6.0 to 9.5 mm²). Conclusions We demonstrate that CNN quantification correlates highly with expert radiologist measurements of finger CC area. Future work will include segmentation of 3-dimensional (3D) images for volumetric and density quantification, as well as validation in larger, independent cohorts.
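The area arithmetic and agreement statistics above are straightforward to reproduce. A minimal sketch (hypothetical lesion measurements, not study data) of the elliptical area estimate and the Bland-Altman bias and 95% limits of agreement:

    import numpy as np

    def elliptical_area(long_axis_mm, short_axis_mm):
        """Area estimate from the paper's formula: (long/2) * (short/2) * pi."""
        return (long_axis_mm / 2.0) * (short_axis_mm / 2.0) * np.pi

    def bland_altman(a, b):
        """Mean difference and 95% limits of agreement between two raters."""
        diff = a - b
        bias = diff.mean()
        spread = 1.96 * diff.std(ddof=1)
        return bias, bias - spread, bias + spread

    # Hypothetical areas (mm^2) from a radiologist and the CNN pipeline
    radiologist = np.array([12.4, 30.8, 7.9, 55.1, 21.6])
    computer = np.array([11.0, 29.5, 9.2, 53.0, 20.1])
    bias, lo, hi = bland_altman(radiologist, computer)
    print(f"bias = {bias:.1f} mm^2, 95% LoA = [{lo:.1f}, {hi:.1f}] mm^2")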
Affiliation(s)
- Anita C Chandrasekaran
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Zhicheng Fu
- Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA
- Motorola Mobility LLC, 222 W Merchandise Mart Plaza #1800, Chicago, IL, 60654, USA
| | - Reid Kraniski
- Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
| | - F Perry Wilson
- Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA
| | - Shannon Teaw
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Michelle Cheng
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
| | - Annie Wang
- Department of Radiology, Yale School of Medicine, 330 Cedar St, New Haven, CT, 06520, USA
| | - Shangping Ren
- Department of Computer Science, Illinois Institute of Technology, 10 W 31st St, Chicago, IL, 60616, USA
- Department of Computer Science, San Diego State University, 5500 Campanile Drive, San Diego, CA, 92182, USA
| | - Imran M Omar
- Department of Radiology, Northwestern University Feinberg School of Medicine, 676 N St Clair St, Chicago, IL, 60611, USA
| | - Monique E Hinchcliff
- Yale School of Medicine, Section of Rheumatology, Allergy & Immunology, The Anlyan Center, 300 Cedar Street, PO BOX 208031, New Haven, CT, 06520, USA
- Clinical and Translational Research Accelerator, Department of Medicine, Yale School of Medicine, Temple Medical Center, 60 Temple Street Suite 6C, New Haven, CT, 06510, USA
- Department of Medicine, Division of Rheumatology, Northwestern University Feinberg School of Medicine, 240 E. Huron Street, Suite M-300, Chicago, IL, 60611, USA
| |
|
48
|
CARNet: Automatic Cerebral Aneurysm Classification in Time-of-Flight MR Angiography by Leveraging Recurrent Neural Networks. ARTIF INTELL 2021. [DOI: 10.1007/978-3-030-93046-2_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
49
|
Zhou T, Tan T, Pan X, Tang H, Li J. Fully automatic deep learning trained on limited data for carotid artery segmentation from large image volumes. Quant Imaging Med Surg 2021; 11:67-83. [PMID: 33392012 PMCID: PMC7719941 DOI: 10.21037/qims-20-286] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Accepted: 07/21/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND The objectives of this study were to develop a 3D convolutional deep learning framework (CarotidNet) for fully automatic segmentation of carotid bifurcations in computed tomography angiography (CTA) images and to facilitate the quantification of carotid stenosis and risk assessment of stroke. METHODS Our pipeline was a two-stage cascade network comprising a localization phase and a segmentation phase. The network framework was based on the 3D version of U-Net, but was refined in three ways: (I) adding residual connections and a deep supervision strategy to cope with the vanishing gradient problem in back-propagation; (II) adopting dilated convolutions to strengthen the capacity to capture contextual information; and (III) establishing a hybrid objective function to address the extreme imbalance between foreground and background voxels. RESULTS We trained our networks on 15 cases and evaluated their performance on 41 cases from the MICCAI Challenge 2009 dataset. A Dice similarity coefficient of 82.3% was achieved for the test cases. CONCLUSIONS We developed a carotid segmentation method based on U-Net that can segment tiny carotid bifurcation lumens from very large backgrounds with no manual intervention. This was the first attempt to use deep learning for carotid bifurcation segmentation in 3D CTA images. Our results indicate that deep learning is a promising method for automatically extracting carotid bifurcation lumens.
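The three refinements, residual connections, dilated convolutions, and a hybrid objective for class imbalance, might be sketched in PyTorch as below. This is a schematic illustration under assumed design choices (instance normalization, sigmoid output, Dice plus binary cross-entropy), not the CarotidNet code:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DilatedResBlock3D(nn.Module):
        """3D residual block with dilated convolutions (wider receptive field)."""
        def __init__(self, channels, dilation=2):
            super().__init__()
            self.conv1 = nn.Conv3d(channels, channels, 3,
                                   padding=dilation, dilation=dilation)
            self.conv2 = nn.Conv3d(channels, channels, 3,
                                   padding=dilation, dilation=dilation)
            self.norm1 = nn.InstanceNorm3d(channels)
            self.norm2 = nn.InstanceNorm3d(channels)

        def forward(self, x):
            y = F.relu(self.norm1(self.conv1(x)))
            y = self.norm2(self.conv2(y))
            return F.relu(x + y)  # residual connection eases gradient flow

    def hybrid_loss(logits, target, eps=1e-6):
        """Soft Dice + binary cross-entropy: one plausible hybrid objective
        for the extreme lumen/background voxel imbalance (target is float)."""
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum()
        dice = 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)
        return dice + F.binary_cross_entropy_with_logits(logits, target)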
Affiliation(s)
- Tianshu Zhou
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
| | - Tao Tan
- Department of Mathematics and Computer Science, Eindhoven University of Technology and Radiology, Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Xiaoyan Pan
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
| | - Hui Tang
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus MC, 3000 CA Rotterdam, the Netherlands
| | - Jingsong Li
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
| |
|
50
|
Ding Y, Gudapati V, Lin R, Fei Y, Sevag Packard RR, Song S, Chang CC, Baek KI, Wang Z, Roustaei M, Kuang D, Jay Kuo CC, Hsiai TK. Saak Transform-Based Machine Learning for Light-Sheet Imaging of Cardiac Trabeculation. IEEE Trans Biomed Eng 2021; 68:225-235. [PMID: 32365015 PMCID: PMC7606319 DOI: 10.1109/tbme.2020.2991754] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
Abstract
OBJECTIVE Recent advances in light-sheet fluorescence microscopy (LSFM) enable 3-dimensional (3-D) imaging of cardiac architecture and mechanics in toto. However, segmentation of the cardiac trabecular network to quantify cardiac injury remains a challenge. METHODS We employed the subspace approximation with augmented kernels (Saak) transform for accurate and efficient quantification of light-sheet image stacks following chemotherapy treatment. We established a machine learning framework with augmented kernels based on the Karhunen-Loève transform (KLT) to preserve the linearity and reversibility of rectification. RESULTS Saak transform-based machine learning enhances computational efficiency and obviates the iterative optimization of a cost function needed for neural networks, minimizing the number of training datasets required for segmentation in our scenario. The integration of forward and inverse Saak transforms can also serve as a lightweight module to filter adversarial perturbations and reconstruct estimated images, preserving the robustness of existing classification methods. The accuracy and robustness of the Saak transform are evident from tests of Dice similarity coefficients and various adversarial perturbation algorithms, respectively. The addition of edge detection further allows the surface-area-to-volume ratio (SVR) of the myocardium to be quantified in response to chemotherapy-induced cardiac remodeling. CONCLUSION The combination of the Saak transform, random forests, and edge detection makes segmentation 20-fold more efficient than manual processing. SIGNIFICANCE This new methodology establishes a robust framework for post-acquisition light-sheet image processing and creates a data-driven machine learning approach for automated quantification of cardiac ultrastructure.
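The core Saak idea, KLT kernels augmented with sign-flipped copies so that rectification remains invertible, can be illustrated for a single stage. A sketch using PCA/SVD on flattened patches (illustrative only; the published method is multi-stage and more elaborate):

    import numpy as np

    def saak_stage(patches):
        """One Saak-like stage: KLT kernels augmented with sign-flipped copies.

        patches: (n, d) matrix of flattened, zero-mean image patches.
        Returns rectified coefficients (n, 2d) and the KLT kernels (d, d).
        """
        # KLT kernels = principal directions of the patch ensemble (via SVD)
        _, _, vt = np.linalg.svd(patches, full_matrices=False)
        proj = patches @ vt.T                 # KLT coefficients
        # Augmentation: keep both +k and -k responses, then apply ReLU;
        # rectification stays reversible because one of each pair is active.
        coeffs = np.maximum(np.concatenate([proj, -proj], axis=1), 0.0)
        return coeffs, vt

    def inverse_saak_stage(coeffs, vt):
        """Exact inverse: recombine the positive and negative halves."""
        d = vt.shape[0]
        proj = coeffs[:, :d] - coeffs[:, d:]  # undo ReLU + sign augmentation
        return proj @ vt                      # orthogonal kernels invert exactly

    # Round-trip check on random zero-mean "patches"
    x = np.random.randn(100, 16)
    x -= x.mean(axis=0)
    c, vt = saak_stage(x)
    assert np.allclose(inverse_saak_stage(c, vt), x)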
Affiliation(s)
- Yichen Ding
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Varun Gudapati
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Ruiyuan Lin
- Ming-Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
| | - Yanan Fei
- Ming-Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
| | - René R Sevag Packard
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Sibo Song
- Ming-Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
| | - Chih-Chiang Chang
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Kyung In Baek
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Zhaoqiang Wang
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Mehrdad Roustaei
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| | - Dengfeng Kuang
- Tianjin Key Laboratory of Optoelectronic Sensor and Sensing Network Technology, and Institute of Modern Optics, Nankai University, Tianjin 300350, China
| | - C.-C. Jay Kuo
- Ming-Hsieh Department of Electrical Engineering, University of Southern California, Los Angeles, CA 90089 USA
| | - Tzung K. Hsiai
- Henry Samueli School of Engineering and David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
| |
|