1. Sauer ST, Christner SA, Lois AM, Woznicki P, Curtaz C, Kunz AS, Weiland E, Benkert T, Bley TA, Baeßler B, Grunz JP. Deep Learning k-Space-to-Image Reconstruction Facilitates High Spatial Resolution and Scan Time Reduction in Diffusion-Weighted Imaging Breast MRI. J Magn Reson Imaging 2024; 60:1190-1200. [PMID: 37974498; DOI: 10.1002/jmri.29139]
Abstract
BACKGROUND For time-consuming diffusion-weighted imaging (DWI) of the breast, deep learning-based imaging acceleration appears particularly promising. PURPOSE To investigate a combined k-space-to-image reconstruction approach for scan time reduction and improved spatial resolution in breast DWI. STUDY TYPE Retrospective. POPULATION 133 women (age 49.7 ± 12.1 years) underwent multiparametric breast MRI. FIELD STRENGTH/SEQUENCE 3.0T/T2 turbo spin echo, T1 3D gradient echo, DWI (800 and 1600 sec/mm2). ASSESSMENT DWI data were retrospectively processed using deep learning-based k-space-to-image reconstruction (DL-DWI) and an additional super-resolution algorithm (SRDL-DWI). In addition to signal-to-noise ratio and apparent diffusion coefficient (ADC) comparisons among standard, DL- and SRDL-DWI, a range of quantitative similarity (e.g., structural similarity index [SSIM]) and error metrics (e.g., normalized root mean square error [NRMSE], symmetric mean absolute percent error [SMAPE], log accuracy error [LOGAC]) was calculated to analyze structural variations. Subjective image evaluation was performed independently by three radiologists on a seven-point rating scale. STATISTICAL TESTS Friedman's rank-based analysis of variance with Bonferroni-corrected pairwise post-hoc tests. P < 0.05 was considered significant. RESULTS Both DL- and SRDL-DWI allowed for a 39% reduction in simulated scan time over standard DWI (5 vs. 3 minutes). The highest image quality ratings were assigned to SRDL-DWI with good interreader agreement (ICC 0.834; 95% confidence interval 0.818-0.848). Irrespective of b-value, both standard and DL-DWI produced superior SNR compared to SRDL-DWI. ADC values were slightly higher in SRDL-DWI (+0.5%) and DL-DWI (+3.4%) than in standard DWI. Structural similarity was excellent between DL-/SRDL-DWI and standard DWI for either b value (SSIM ≥ 0.86). 
Calculation of error metrics (NRMSE ≤ 0.05, SMAPE ≤ 0.02, and LOGAC ≤ 0.04) supported the assumption of low voxel-wise error. DATA CONCLUSION Deep learning-based k-space-to-image reconstruction reduces simulated scan time of breast DWI by 39% without influencing structural similarity. Additionally, super-resolution interpolation allows for substantial improvement of subjective image quality. EVIDENCE LEVEL 4 TECHNICAL EFFICACY: Stage 1.
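The similarity and error metrics named in this abstract reduce to simple voxel-wise comparisons. The study does not spell out its exact formulations, so the following is a minimal sketch assuming common definitions: RMSE normalized by the reference intensity range, SMAPE with an |r| + |t| denominator, and LOGAC as the mean absolute log10 accuracy ratio.

```python
import math

def nrmse(ref, test):
    """Root mean square error, normalized by the reference intensity range."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    rng = (max(ref) - min(ref)) or 1  # guard against constant reference
    return math.sqrt(mse) / rng

def smape(ref, test):
    """Symmetric mean absolute percent error, expressed as a fraction."""
    return sum(abs(r - t) / ((abs(r) + abs(t)) or 1)
               for r, t in zip(ref, test)) / len(ref)

def logac(ref, test, eps=1e-9):
    """Mean absolute log10 accuracy ratio between test and reference."""
    return sum(abs(math.log10((t + eps) / (r + eps)))
               for r, t in zip(ref, test)) / len(ref)
```

Identical images yield zero for all three metrics; larger values indicate greater voxel-wise deviation of the DL reconstruction from the reference.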
Affiliation(s)
- Stephanie Tina Sauer
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Sara Aniki Christner
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Anna-Maria Lois
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Piotr Woznicki
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Carolin Curtaz
- Department of Obstetrics and Gynecology, University Hospital Würzburg, Würzburg, Germany
- Andreas Steven Kunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Elisabeth Weiland
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Thomas Benkert
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Thorsten Alexander Bley
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Bettina Baeßler
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
2. Xu Z, Rauch DE, Mohamed RM, Pashapoor S, Zhou Z, Panthi B, Son JB, Hwang KP, Musall BC, Adrada BE, Candelaria RP, Leung JWT, Le-Petross HTC, Lane DL, Perez F, White J, Clayborn A, Reed B, Chen H, Sun J, Wei P, Thompson A, Korkut A, Huo L, Hunt KK, Litton JK, Valero V, Tripathy D, Yang W, Yam C, Ma J. Deep Learning for Fully Automatic Tumor Segmentation on Serially Acquired Dynamic Contrast-Enhanced MRI Images of Triple-Negative Breast Cancer. Cancers (Basel) 2023; 15:4829. [PMID: 37835523; PMCID: PMC10571741; DOI: 10.3390/cancers15194829]
Abstract
Accurate tumor segmentation is required for quantitative image analyses, which are increasingly used for evaluation of tumors. We developed a fully automated and high-performance segmentation model of triple-negative breast cancer using a self-configurable deep learning framework and a large set of dynamic contrast-enhanced MRI images acquired serially over the patients' treatment course. Among all models, the top-performing one that was trained with the images across different time points of a treatment course yielded a Dice similarity coefficient of 93% and a sensitivity of 96% on baseline images. The top-performing model also produced accurate tumor size measurements, which is valuable for practical clinical applications.
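The two headline numbers above, the Dice similarity coefficient and sensitivity, reduce to simple overlap counts between predicted and ground-truth masks. A minimal sketch over flattened binary masks (an illustration of the metrics, not the authors' implementation):

```python
def dice_and_sensitivity(gt, pred):
    """Dice similarity coefficient and sensitivity (recall) for two
    flat binary masks given as sequences of 0/1 labels."""
    tp = sum(1 for g, p in zip(gt, pred) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gt, pred) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gt, pred) if g == 1 and p == 0)
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sens = tp / (tp + fn) if (tp + fn) else 1.0
    return dice, sens
```

For 3D MRI volumes the same counts are simply accumulated over all voxels of the tumor mask.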
Affiliation(s)
- Zhan Xu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- David E. Rauch
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rania M. Mohamed
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Sanaz Pashapoor
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Zijian Zhou
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Bikash Panthi
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jong Bum Son
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ken-Pin Hwang
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Benjamin C. Musall
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Beatriz E. Adrada
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rosalind P. Candelaria
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jessica W. T. Leung
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Huong T. C. Le-Petross
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Deanna L. Lane
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Frances Perez
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jason White
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alyson Clayborn
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Brandy Reed
- Department of Clinical Research Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Huiqin Chen
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jia Sun
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Peng Wei
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Alastair Thompson
- Section of Breast Surgery, Baylor College of Medicine, Houston, TX 77030, USA
- Anil Korkut
- Department of Bioinformatics & Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Lei Huo
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Kelly K. Hunt
- Department of Breast Surgical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jennifer K. Litton
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Vicente Valero
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Debu Tripathy
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wei Yang
- Department of Breast Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Clinton Yam
- Department of Breast Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jingfei Ma
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
3. Kuang S, Woodruff HC, Granzier R, van Nijnatten TJA, Lobbes MBI, Smidt ML, Lambin P, Mehrkanoon S. MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets. Neural Netw 2023; 165:119-134. [PMID: 37285729; DOI: 10.1016/j.neunet.2023.05.014]
Abstract
Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging (MRI) has received increased attention in the last decade, however, the domain shift which arises from different vendors, acquisition protocols, and biological heterogeneity, remains an important but challenging obstacle on the path towards clinical implementation. In this paper, we propose a novel Multi-level Semantic-guided Contrastive Domain Adaptation (MSCDA) framework to address this issue in an unsupervised manner. Our approach incorporates self-training with contrastive learning to align feature representations between domains. In particular, we extend the contrastive loss by incorporating pixel-to-pixel, pixel-to-centroid, and centroid-to-centroid contrasts to better exploit the underlying semantic information of the image at different levels. To resolve the data imbalance problem, we utilize a category-wise cross-domain sampling strategy to sample anchors from target images and build a hybrid memory bank to store samples from source images. We have validated MSCDA with a challenging task of cross-domain breast MRI segmentation between datasets of healthy volunteers and invasive breast cancer patients. Extensive experiments show that MSCDA effectively improves the model's feature alignment capabilities between domains, outperforming state-of-the-art methods. Furthermore, the framework is shown to be label-efficient, achieving good performance with a smaller source dataset. The code is publicly available at https://github.com/ShengKuangCN/MSCDA.
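The centroid-level contrast described above can be illustrated with a toy InfoNCE-style loss over class centroids. This is a simplified sketch of the general idea (cosine similarity and a temperature `tau` are standard choices), not the exact MSCDA loss; see the linked repository for the real implementation:

```python
import math

def centroid(features):
    """Mean vector of a list of equal-length feature vectors,
    i.e. the class prototype."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid_contrast_loss(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style loss: pull the anchor centroid toward the positive
    (same class, other domain) and away from negative centroids."""
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

Pixel-to-centroid and pixel-to-pixel contrasts follow the same pattern, with individual pixel embeddings taking the place of one or both centroids.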
Affiliation(s)
- Sheng Kuang
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Henry C Woodruff
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Renee Granzier
- Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Thiemo J A van Nijnatten
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Marc B I Lobbes
- Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Medical Imaging, Zuyderland Medical Center, Sittard-Geleen, The Netherlands
- Marjolein L Smidt
- Department of Surgery, Maastricht University Medical Centre+, Maastricht, The Netherlands; GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin
- The D-Lab, Department of Precision Medicine, GROW - School for Oncology and Reproduction, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Medicine, Maastricht University Medical Centre+, Maastricht, The Netherlands
- Siamak Mehrkanoon
- Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands
4. Ham S, Kim M, Lee S, Wang CB, Ko B, Kim N. Improvement of semantic segmentation through transfer learning of multi-class regions with convolutional neural networks on supine and prone breast MRI images. Sci Rep 2023; 13:6877. [PMID: 37106024; PMCID: PMC10140273; DOI: 10.1038/s41598-023-33900-x]
Abstract
Semantic segmentation of the breast and surrounding tissues in supine and prone breast magnetic resonance imaging (MRI) is required for various kinds of computer-assisted diagnosis in surgical applications. Variability of breast shape between the supine and prone poses, along with various MRI artifacts, makes robust segmentation of the breast and surrounding tissues difficult. We therefore evaluated semantic segmentation with transfer learning of convolutional neural networks to achieve robust breast segmentation regardless of supine or prone positioning. A total of 29 patients with T1-weighted contrast-enhanced images were collected at Asan Medical Center, and two breast MRI examinations were performed per patient, one in the prone position and one in the supine position. Four classes, comprising lungs and heart, muscles and bones, parenchyma with cancer, and skin and fat, were manually annotated by an expert. Semantic segmentation models trained on supine, prone, transferred from prone to supine, and pooled supine and prone MRI were compared using 2D U-Net, 3D U-Net, 2D nnU-Net, and 3D nnU-Net. The best performance was achieved by the 2D models with transfer learning. Our results showed excellent performance and could be used for clinical purposes such as breast registration and computer-aided diagnosis.
Affiliation(s)
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, 123 Jeokgeum-ro, Danwon-gu, Ansan city, Gyeonggi-do, Republic of Korea
- Minjee Kim
- Promedius Inc., 4 Songpa-daero 49-gil, Songpa-gu, Seoul, South Korea
- Sangwook Lee
- ANYMEDI Inc., 388-1 Pungnap-dong, Songpa-gu, Seoul, South Korea
- Chuan-Bing Wang
- Department of Radiology, First Affiliated Hospital of Nanjing Medical University, 300, Guangzhou Road, Nanjing, Jiangsu, China
- BeomSeok Ko
- Department of Breast Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea
- Namkug Kim
- Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, Asan Medical Institute of Convergence Science and Technology, University of Ulsan College of Medicine, 5F, 26, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
5. Automated methods for sella turcica segmentation on cephalometric radiographic data using deep learning (CNN) techniques. Oral Radiol 2023; 39:248-265. [PMID: 35737215; DOI: 10.1007/s11282-022-00629-8]
Abstract
OBJECTIVE The objective of this work is to present a novel technique using convolutional neural network (CNN) architectures for automatic segmentation of the sella turcica (ST) on a cephalometric radiographic image dataset. The proposed work explores deep learning approaches for distinguishing the ST on complex cephalometric radiographs. MATERIALS AND METHODS A dataset of 525 lateral cephalometric images was employed and randomly split into different training and testing subset ratios. The ground truth (annotated images) represents pixel-wise annotation of the ST, performed by dental specialists on an online annotation platform. This study compared convolutional neural network architectures based on fine-tuned versions of the VGG19, ResNet34, InceptionV3, and ResNext50 architectures to select an appropriate model for autonomous segmentation of the nonlinear structure of the ST. RESULTS The study compared training and prediction results of the selected models. The mean IoU scores for VGG19, ResNet34, InceptionV3, and ResNext50 are 0.7651, 0.7241, 0.4717, and 0.4287; Dice coefficients are 0.7794, 0.7487, 0.4714, and 0.4363; and loss scores are 0.0973, 0.1299, 0.2049, and 0.2251, respectively. CONCLUSION The findings suggest that the VGG19 and ResNet34 architectures (mean IoU and Dice coefficient around 75%) comparatively outperformed the InceptionV3 and ResNext50 architectures (mean IoU and Dice coefficients around 45%) on the considered cephalometric radiographic dataset. The study findings can be used as a reference model for future investigation of nonlinear ST morphological characteristics and related biological anomalies.
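The IoU and Dice numbers reported above are per-class overlap measures between predicted and ground-truth masks. A minimal mean-IoU sketch over flattened label masks (for illustration only, not the study's evaluation code):

```python
def iou(gt, pred, cls):
    """Intersection over union for one class label over flat masks."""
    inter = sum(1 for g, p in zip(gt, pred) if g == cls and p == cls)
    union = sum(1 for g, p in zip(gt, pred) if g == cls or p == cls)
    return inter / union if union else 1.0

def mean_iou(gt, pred, classes):
    """Mean IoU across the given class labels."""
    return sum(iou(gt, pred, c) for c in classes) / len(classes)
```

Dice relates to IoU by Dice = 2·IoU / (1 + IoU), which is why the two scores track each other closely in the results above.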
6. Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377; DOI: 10.1016/j.bbcan.2023.188864]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren
- Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang
- Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
7. Iima M, Le Bihan D. The road to breast cancer screening with diffusion MRI. Front Oncol 2023; 13:993540. [PMID: 36895474; PMCID: PMC9989267; DOI: 10.3389/fonc.2023.993540]
Abstract
Breast cancer is the most common cancer in women, with a huge medical, social and economic impact. Mammography (MMG) has been the gold standard method until now because it is relatively inexpensive and widely available. However, MMG suffers from certain limitations, such as exposure to X-rays and difficulty of interpretation in dense breasts. Among other imaging methods, MRI clearly has the highest sensitivity and specificity, and breast MRI is the gold standard for the investigation and management of suspicious lesions revealed by MMG. Despite this performance, MRI, which does not rely on X-rays, is not used for screening except for a well-defined category of women at risk, because of its high cost and limited availability. In addition, the standard approach to breast MRI relies on Dynamic Contrast Enhanced (DCE) MRI with the injection of gadolinium-based contrast agents (GBCA), which have their own contraindications and can lead to deposition of gadolinium in tissues, including the brain, when examinations are repeated. On the other hand, diffusion MRI of the breast, which provides information on tissue microstructure and tumor perfusion without the use of contrast agents, has been shown to offer higher specificity than DCE MRI with similar sensitivity, superior to MMG. Diffusion MRI thus appears to be a promising alternative approach to breast cancer screening, with the primary goal of ruling out, with very high probability, the existence of a life-threatening lesion. To achieve this goal, it is first necessary to standardize the protocols for acquisition and analysis of diffusion MRI data, which have been found to vary widely in the literature. Second, the accessibility and cost-effectiveness of MRI examinations must be significantly improved, which may become possible with the development of dedicated low-field MRI units for breast cancer screening.
In this article, we will first review the principles and current status of diffusion MRI, comparing its clinical performance with MMG and DCE MRI. We will then look at how breast diffusion MRI could be implemented and standardized to optimize accuracy of results. Finally, we will discuss how a dedicated, low-cost prototype of breast MRI system could be implemented and introduced to the healthcare market.
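As context for the quantification this review discusses (and for the b = 800 and 1600 sec/mm² protocol in reference 1), the apparent diffusion coefficient is conventionally estimated from the signal decay between two b-values under a mono-exponential model. A minimal sketch, which deliberately ignores perfusion (IVIM) and non-Gaussian diffusion effects:

```python
import math

def adc_two_point(s1, b1, s2, b2):
    """Apparent diffusion coefficient (mm^2/s) from signals s1, s2 at
    b-values b1 < b2 (s/mm^2), assuming S(b) = S0 * exp(-b * ADC)."""
    return math.log(s1 / s2) / (b2 - b1)
```

For a synthetic lesion with ADC = 1.0 × 10⁻³ mm²/s, signals simulated at b = 800 and 1600 sec/mm² recover exactly that value, which is the sanity check usually run on such pipelines.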
Affiliation(s)
- Mami Iima
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Kyoto, Japan; Department of Clinical Innovative Medicine, Institute for Advancement of Clinical and Translational Science, Kyoto University Hospital, Kyoto, Japan
- Denis Le Bihan
- NeuroSpin, Joliot Institute, Department of Fundamental Research, Commissariat à l'Energie Atomique (CEA)-Saclay, Gif-sur-Yvette, France
8. Richter L, Fetit AE. Accurate segmentation of neonatal brain MRI with deep learning. Front Neuroinform 2022; 16:1006532. [PMID: 36246394; PMCID: PMC9554654; DOI: 10.3389/fninf.2022.1006532]
Abstract
An important step toward delivering an accurate connectome of the human brain is robust segmentation of 3D Magnetic Resonance Imaging (MRI) scans, which is particularly challenging when carried out on perinatal data. In this paper, we present an automated, deep learning-based pipeline for accurate segmentation of tissues from neonatal brain MRI and extend it by introducing an age prediction pathway. A major constraint to using deep learning techniques on developing brain data is the need to collect large numbers of ground truth labels. We therefore also investigate two practical approaches that can help alleviate the problem of label scarcity without loss of segmentation performance. First, we examine the efficiency of different strategies of distributing a limited budget of annotated 2D slices over 3D training images. In the second approach, we compare the segmentation performance of pre-trained models with different strategies of fine-tuning on a small subset of preterm infants. Our results indicate that distributing labels over a larger number of brain scans can improve segmentation performance. We also show that even partial fine-tuning can be superior in performance to a model trained from scratch, highlighting the relevance of transfer learning strategies under conditions of label scarcity. We illustrate our findings on large, publicly available T1- and T2-weighted MRI scans (n = 709, range of ages at scan: 26–45 weeks) obtained retrospectively from the Developing Human Connectome Project (dHCP) cohort.
Affiliation(s)
- Leonie Richter
- Department of Computing, Imperial College London, London, United Kingdom
- Correspondence: Leonie Richter
- Ahmed E. Fetit
- Department of Computing, Imperial College London, London, United Kingdom
- UKRI CDT in Artificial Intelligence for Healthcare, Imperial College London, London, United Kingdom
9. Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427; PMCID: PMC9459862; DOI: 10.1259/bjro.20210060]
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
10. A U-Net Approach to Apical Lesion Segmentation on Panoramic Radiographs. Biomed Res Int 2022; 2022:7035367. [PMID: 35075428; PMCID: PMC8783705; DOI: 10.1155/2022/7035367]
Abstract
The purpose of this paper was to assess the performance of an artificial intelligence (AI) algorithm based on a deep convolutional neural network (D-CNN) model for the segmentation of apical lesions on dental panoramic radiographs. A total of 470 anonymized panoramic radiographs were used to develop the D-CNN AI model based on the U-Net algorithm (CranioCatch, Eskisehir, Turkey) for the segmentation of apical lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Eskisehir Osmangazi University. A U-Net implemented in PyTorch (version 1.4.0) was used for the segmentation of apical lesions. In the test data set, the AI model segmented 63 periapical lesions on 47 panoramic radiographs. The sensitivity, precision, and F1-score for segmentation of periapical lesions at a 70% IoU threshold were 0.92, 0.84, and 0.88, respectively. AI systems have the potential to address such clinical problems and may facilitate the assessment of periapical pathology on panoramic radiographs.
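The reported sensitivity, precision, and F1-score follow from lesion-level true-positive, false-positive, and false-negative counts once each predicted lesion has been matched to ground truth at the 70% IoU threshold. A minimal sketch of that final step (the counts here are illustrative, not taken from the study):

```python
def detection_scores(tp, fp, fn):
    """Precision, sensitivity (recall), and F1 from lesion-level counts,
    where a predicted lesion counts as a true positive if its IoU with a
    ground-truth lesion meets the chosen threshold."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1
```

F1 is the harmonic mean of precision and sensitivity, which is why the study's F1 of 0.88 sits between its precision (0.84) and sensitivity (0.92).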
11
Automated segmentation of articular disc of the temporomandibular joint on magnetic resonance images using deep learning. Sci Rep 2022; 12:221. [PMID: 34997167 PMCID: PMC8741780 DOI: 10.1038/s41598-021-04354-w] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 12/20/2021] [Indexed: 02/06/2023] Open
Abstract
Temporomandibular disorders are typically accompanied by a number of clinical manifestations that involve pain and dysfunction of the masticatory muscles and temporomandibular joint. The most important subgroup of articular abnormalities in patients with temporomandibular disorders includes patients with different forms of articular disc displacement and deformation. Here, we propose a fully automated articular disc detection and segmentation system to support the diagnosis of temporomandibular disorder on magnetic resonance imaging. This system uses deep learning-based semantic segmentation approaches. The study included a total of 217 magnetic resonance images from 10 patients with anterior displacement of the articular disc and 10 healthy control subjects with normal articular discs. These images were used to evaluate three deep learning-based semantic segmentation approaches: our proposed convolutional neural network encoder-decoder named 3DiscNet (Detection for Displaced articular DISC using convolutional neural NETwork), U-Net, and SegNet-Basic. Of the three algorithms, 3DiscNet and SegNet-Basic showed comparably good metrics (Dice coefficient, sensitivity, and positive predictive value). This study provides a proof-of-concept for a fully automated deep learning-based segmentation methodology for articular discs on magnetic resonance images, and obtained promising initial results, indicating that the method could potentially be used in clinical practice for the assessment of temporomandibular disorders.
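The overlap metrics reported for 3DiscNet and SegNet-Basic (Dice coefficient, sensitivity, and positive predictive value) can all be computed from a pair of binary masks. A minimal Python sketch for reference; the masks and values below are toy illustrations, not data from the study:

```python
def segmentation_metrics(pred, truth):
    """Dice coefficient, sensitivity, and positive predictive value
    for two equal-length binary masks (flattened, 0/1 values)."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return dice, sensitivity, ppv

# Toy masks: 3 true positives, 1 false positive, 1 false negative
pred = [1, 1, 1, 1, 0, 0]
truth = [1, 1, 1, 0, 1, 0]
dice, sens, ppv = segmentation_metrics(pred, truth)
```

For 3-D MR volumes, the same counts are simply accumulated over all voxels of the flattened masks.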
12
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225 PMCID: PMC8656730 DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 11/25/2021] [Accepted: 12/01/2021] [Indexed: 12/11/2022] Open
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery from breast cancer if it is identified and treated at an early stage. Several researchers have therefore developed deep-learning-based automated methods, valued for their efficiency and accuracy, for predicting the growth of cancer cells from medical imaging modalities. To date, only a few review studies on breast cancer diagnosis are available, and they summarize a limited set of existing work without addressing emerging architectures and modalities. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, a concrete review of diverse imaging modalities, performance metrics and results, challenges, and research directions for future researchers is presented.
Affiliation(s)
- Muhammad Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Md. Abdul Hamid
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
- Muhammad Mostafa Monowar
- Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia; (M.A.H.); (M.M.M.)
- Ashfia Jannat Keya
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Abu Quwsar Ohi
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh; (M.F.M.); (A.J.K.); (A.Q.O.)
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim
- Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea

13
Liu Y, Wang X, Li J, Hao L, Zhao T, Zou H, Xu D. Deep Learning Technology in Pathological Image Analysis of Breast Tissue. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9610830. [PMID: 34868535 PMCID: PMC8635881 DOI: 10.1155/2021/9610830] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Revised: 09/16/2021] [Accepted: 10/15/2021] [Indexed: 12/28/2022]
Abstract
To explore the application value of the multilevel pyramid convolutional neural network (MPCNN) model in breast histopathology image analysis, this study optimized a convolutional neural network (CNN) with a softmax classifier (SMC) by introducing a sparse autoencoder (SAE). A sliding-window method was used to identify cells, establishing the CNN + SMC pathological image cell detection method. A local region active contour (LRAC) was then introduced to establish an LRAC fine segmentation model driven by local Gaussian distributions, and on this basis the sparse autoencoder was further incorporated to establish the MPCNN model. The proposed algorithm was evaluated on a pathological image dataset. The Acc, F, and Re values for pathological cell detection with the CNN + SMC algorithm were significantly higher than those of the other two algorithms (P < 0.05), and the Dice, OL, Sen, and Spe values for pathological image region segmentation with the CNN algorithm were also significantly higher than those of the other two algorithms (P < 0.05). The accuracy, recall, and F-measure of the optimized CNN algorithm for detecting breast histopathological images were 85.25%, 89.27%, and 80.09%, respectively. In the two databases with segmentation standards, the segmentation accuracy of MPCNN was 55%, 73.1%, 78.8%, and 82.1%. In the deep convolutional network model, the training time of the MPCNN algorithm was approximately 80 minutes. These results indicate that, when the feature dimension is low, the feature maps extracted by MPCNN are more effective than those obtained with traditional feature extraction methods.
Affiliation(s)
- Yanan Liu
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China
- Xiaoyan Wang
- Breast Department, Qiqihar First Hospital, Qiqihar 161006, Heilongjiang, China
- Jingyu Li
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China
- Liguo Hao
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China
- Tianyu Zhao
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China
- He Zou
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China
- Dongbin Xu
- Medical Technology Department, Qiqihar Medical University, Qiqihar 161006, Heilongjiang, China

14
Pang X, Wang F, Zhang Q, Li Y, Huang R, Yin X, Fan X. A Pipeline for Predicting the Treatment Response of Neoadjuvant Chemoradiotherapy for Locally Advanced Rectal Cancer Using Single MRI Modality: Combining Deep Segmentation Network and Radiomics Analysis Based on "Suspicious Region". Front Oncol 2021; 11:711747. [PMID: 34422664 PMCID: PMC8371269 DOI: 10.3389/fonc.2021.711747] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Accepted: 07/06/2021] [Indexed: 12/11/2022] Open
Abstract
Patients with locally advanced rectal cancer (LARC) who achieve a pathologic complete response (pCR) after neoadjuvant chemoradiotherapy (nCRT) typically have a good prognosis. An early and accurate prediction of the treatment response, i.e., whether a patient achieves pCR, could significantly help doctors make tailored plans for LARC patients. This study proposes a pipeline for pCR prediction that combines deep learning and radiomics analysis. To account for missing pre-nCRT magnetic resonance imaging (MRI), and to improve efficiency for clinical application, the pipeline uses only post-nCRT T2-weighted (T2-w) MRI. Unlike other studies that carefully locate the region of interest (ROI) using pre-nCRT MRI as a reference, we placed the ROI on a "suspicious region", a continuous area judged by radiologists to have a high probability of containing tumor or fibrosis. A deep segmentation network, termed the two-stage rectum-aware U-Net (tsraU-Net), is designed to segment the ROI and substitute for time-consuming manual delineation. This is followed by a radiomics analysis model based on the ROI to extract the hidden information and predict pCR status. Data from a total of 275 patients were collected from two hospitals and partitioned into four datasets: Seg-T (N = 88) for training the tsraU-Net, Rad-T (N = 107) for building the radiomics model, In-V (N = 46) for internal validation, and Ex-V (N = 34) for external validation. The proposed method achieved an area under the curve (AUC) of 0.829 (95% confidence interval [CI]: 0.821, 0.837) on In-V and 0.815 (95% CI: 0.801, 0.830) on Ex-V. The method performed well and stably on both validation sets, indicating that the well-designed pipeline has the potential to be used in real clinical procedures.
Affiliation(s)
- Xiaolin Pang
- Department of Radiation Oncology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Supported by National Key Clinical Discipline, Guangzhou, China
- Fang Wang
- Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Supported by National Key Clinical Discipline, Guangzhou, China
- Qianru Zhang
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Yan Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Ruiyan Huang
- Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Supported by National Key Clinical Discipline, Guangzhou, China
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xinke Yin
- Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Supported by National Key Clinical Discipline, Guangzhou, China
- Xinjuan Fan
- Guangdong Institute of Gastroenterology, Guangdong Provincial Key Laboratory of Colorectal and Pelvic Floor Diseases, Supported by National Key Clinical Discipline, Guangzhou, China
- Department of Pathology, The Sixth Affiliated Hospital of Sun Yat-sen University, Guangzhou, China

15
Whole Volume Apparent Diffusion Coefficient (ADC) Histogram as a Quantitative Imaging Biomarker to Differentiate Breast Lesions: Correlation with the Ki-67 Proliferation Index. BIOMED RESEARCH INTERNATIONAL 2021; 2021:4970265. [PMID: 34258262 PMCID: PMC8249125 DOI: 10.1155/2021/4970265] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Accepted: 06/09/2021] [Indexed: 11/18/2022]
Abstract
Objectives To evaluate the value of the whole volume apparent diffusion coefficient (ADC) histogram in distinguishing between benign and malignant breast lesions and differentiating different molecular subtypes of breast cancers and to assess the correlation between ADC histogram parameters and Ki-67 expression in breast cancers. Methods The institutional review board approved this retrospective study. Between September 2016 and February 2019, 189 patients with 84 benign lesions and 105 breast cancers underwent magnetic resonance imaging (MRI). Volumetric ADC histograms were created by placing regions of interest (ROIs) on the whole lesion. The relationships between the ADC parameters and Ki-67 were analysed using Spearman's correlation analysis. Results Of the 189 breast lesions included, there were significant differences in patient age (P < 0.001) and lesion size (P = 0.006) between the benign and malignant lesions. The results also demonstrated significant differences in all ADC histogram parameters between benign and malignant lesions (all P < 0.001). The median and mean ADC histogram parameters performed better than the other ADC histogram parameters (AUCs were 0.943 and 0.930, respectively). The receiver operating characteristic (ROC) analysis revealed that the 10th percentile ADC value and entropy could determine the human epidermal growth factor receptor 2 (HER-2) status (both P = 0.001) and estrogen receptor (ER)/progesterone receptor (PR) status (P = 0.020 and P = 0.041, respectively). Among all breast cancer lesions, 35 tumours in the low-proliferation group (Ki-67 < 14%) and 70 tumours in the high-proliferation group (Ki-67 ≥ 14%) were analysed with ROC curves and correlation analyses. The ROC analysis revealed that entropy and skewness could determine the Ki-67 status (P = 0.007 and P < 0.001, respectively), and there were weak correlations between ADC entropy (r = 0.383) and skewness (r = 0.209) and the Ki-67 index.
Conclusion The volumetric ADC histogram could serve as an imaging marker to determine breast lesion characteristics and may be a supplemental method in predicting tumour proliferation in breast cancer.
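For orientation, the histogram parameters named above (mean, median, 10th percentile, skewness, entropy) can be derived from the pooled voxel-wise ADC values of a lesion. A hedged Python sketch; the 32-bin discretization for entropy is an assumption of this example, not a detail taken from the study:

```python
import math

def histogram_features(adc_values, bins=32):
    """Illustrative whole-lesion ADC histogram features:
    mean, median, 10th percentile, skewness, and Shannon entropy."""
    xs = sorted(adc_values)
    n = len(xs)
    mean = sum(xs) / n
    median = xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2
    p10 = xs[max(0, int(0.10 * (n - 1)))]  # nearest-rank 10th percentile
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum((x - mean) ** 3 for x in xs) / n / sd ** 3 if sd else 0.0
    # Entropy of the normalised histogram over `bins` equal-width bins
    lo, hi = xs[0], xs[-1]
    width = (hi - lo) / bins or 1.0  # guard against a constant lesion
    counts = [0] * bins
    for x in xs:
        counts[min(bins - 1, int((x - lo) / width))] += 1
    probs = [c / n for c in counts if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "median": median, "p10": p10,
            "skewness": skew, "entropy": entropy}

# Hypothetical pooled ADC values (×10⁻³ mm²/s), purely illustrative
features = histogram_features([0.61, 0.65, 0.70, 0.72, 0.75, 0.80, 0.95, 1.02])
```

The bin count materially affects entropy values, so any real analysis should fix it in advance and report it alongside the results.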
16
Huo L, Hu X, Xiao Q, Gu Y, Chu X, Jiang L. Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images. Magn Reson Imaging 2021; 82:31-41. [PMID: 34147598 DOI: 10.1016/j.mri.2021.06.017] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 05/14/2021] [Accepted: 06/15/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE Segmentation of the whole breast and fibroglandular tissue (FGT) is important for quantitative analysis of breast cancer risk in dynamic contrast-enhanced magnetic resonance (DCE-MR) images. The purpose of this study was to improve the accuracy and efficiency of segmentation of the whole breast and FGT in 3-D fat-suppressed DCE-MR images with a versatile deep learning (DL) framework. METHODS We randomly collected 100 breast DCE-MR scans from Shanghai Cancer Hospital of Fudan University. The scans in the dataset differed in both spatial resolution and the MR scanners employed. Furthermore, four breast density categories were assessed by radiologists based on the Breast Imaging Reporting and Data System (BI-RADS) of the American College of Radiology. The dataset was separated into training and testing sets while keeping a balanced distribution of scans with different imaging parameters and density categories. The nnU-Net has recently been proposed to automatically adapt preprocessing strategies and network architectures to a given medical image dataset, showing great potential for the systematic adaptation of DL methods to different datasets. In this study, we applied the nnU-Net to segment the whole breast and FGT in 3-D fat-suppressed DCE-MR images. Five-fold cross-validation was employed to train and validate the segmentation method. RESULTS Segmentation performance was evaluated with volume and surface agreement between the DL-based automatic and the manually delineated masks: the average Dice volume overlap (0.968 ± 0.017 and 0.877 ± 0.081), the average surface distance (0.201 ± 0.080 mm and 0.310 ± 0.043 mm), and the Pearson correlation coefficient of the masks (0.995 and 0.972), calculated for the whole breast and the FGT segmentation, respectively.
The correlation coefficient between the breast densities obtained with the DL-based segmentation and with manual delineation was 0.981, with a positive bias of 0.8% (DL-based relative to manual) on the Bland-Altman plot. The execution time of the DL-based segmentation was approximately 20 s for the whole breast and 15 s for the FGT. CONCLUSIONS Our DL-based segmentation framework using nnU-Net robustly achieved high accuracy and efficiency across variable MR imaging settings without extra pre- or post-processing procedures. It would be useful for developing DCE-MR-based CAD systems to quantify breast cancer risk and could be integrated into the clinical workflow.
Affiliation(s)
- Lu Huo
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; University of Chinese Academy of Sciences, No.19 Yuquan Road, Beijing 100049, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Xiaoxin Hu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Qin Xiao
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Yajia Gu
- Department of Radiology, Shanghai Cancer Hospital of Fudan University, No. 270 DongAn Road, Shanghai 200032, China
- Xu Chu
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China
- Luan Jiang
- Center for Advanced Medical Imaging Technology, Shanghai Advanced Research Institute, Chinese Academy of Sciences, No.99 Haike Road, Shanghai 201200, China; Shanghai United Imaging Healthcare Co., Ltd., No. 2258 Chengbei Road, Shanghai 201807, China

17
Verde F, Romeo V, Stanzione A, Maurea S. Current trends of artificial intelligence in cancer imaging. Artif Intell Med Imaging 2020; 1:87-93. [DOI: 10.35711/aimi.v1.i3.87] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2020] [Revised: 09/22/2020] [Accepted: 09/23/2020] [Indexed: 02/06/2023] Open
Abstract
In this editorial, we discuss the current research status of artificial intelligence (AI) in oncology, reviewing the basics of machine learning (ML) and deep learning (DL) techniques and their emerging applications in the clinical and imaging cancer workflow. The growing amount of available "big data", coupled with increasing computational power, has enabled the development of computer-based systems capable of performing advanced tasks in many areas of clinical care, especially in medical imaging. ML is a branch of data science that allows the creation of computer algorithms that can learn and make predictions without explicit instructions. DL is a subgroup of artificial neural network algorithms configured to automatically extract features and perform high-level tasks; convolutional neural networks are the most common DL models used in medical image analysis. AI methods have been proposed in many areas of oncology, yielding promising results in radiology-based clinical applications. In detail, we explore the emerging applications of AI in oncological risk assessment, lesion detection, characterization, staging, and therapy response. Critical issues such as the lack of reproducibility and generalizability need to be addressed before AI systems can be fully implemented in clinical practice. Nevertheless, the impact of AI on cancer imaging is driving the shift of oncology towards precision diagnostics and personalized cancer treatment.
Affiliation(s)
- Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
- Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy
- Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Napoli 80131, Italy

18
Meyer-Bäse A, Morra L, Meyer-Bäse U, Pinker K. Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging. CONTRAST MEDIA & MOLECULAR IMAGING 2020; 2020:6805710. [PMID: 32934610 PMCID: PMC7474774 DOI: 10.1155/2020/6805710] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2020] [Revised: 04/17/2020] [Accepted: 05/28/2020] [Indexed: 12/12/2022]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) have impacted many scientific fields, including biomedical imaging. Magnetic resonance imaging (MRI) is a well-established method in breast imaging with several indications, including screening, staging, and therapy monitoring. The rapid development and subsequent implementation of AI in clinical breast MRI has the potential to affect clinical decision-making, guide treatment selection, and improve patient outcomes. The goal of this review is to provide a comprehensive picture of the current status and future perspectives of AI in breast MRI. We will review DL applications and compare them to standard data-driven techniques. We will emphasize the importance of developing quantitative imaging biomarkers for precision medicine and the potential of breast MRI and DL in this context. Finally, we will discuss future challenges of DL applications for breast MRI and an AI-augmented clinical decision strategy.
Affiliation(s)
- Anke Meyer-Bäse
- Department of Scientific Computing, Florida State University, Tallahassee, Florida 32310-4120, USA
- Lia Morra
- Dipartimento di Automatica e Informatica, Politecnico di Torino, Torino, Italy
- Uwe Meyer-Bäse
- Department of Electrical and Computer Engineering, Florida A&M University and Florida State University, Tallahassee, Florida 32310-4120, USA
- Katja Pinker
- Department of Biomedical Imaging and Image-Guided Therapy, Division of Molecular and Gender Imaging, Medical University of Vienna, Vienna, Austria
- Department of Radiology, Memorial Sloan-Kettering Cancer Center, New York, New York 10065, USA

19
Double-Step U-Net: A Deep Learning-Based Approach for the Estimation of Wildfire Damage Severity through Sentinel-2 Satellite Data. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10124332] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Wildfire damage severity census is a crucial activity for estimating monetary losses and for planning a prompt restoration of the affected areas. It consists of assigning, after a wildfire, a numerical damage/severity level between 0 and 4 to each sub-area of the hit area. While burned-area identification has been automated by means of machine learning algorithms, the wildfire damage severity census is usually still performed manually and requires a significant effort from domain experts through the analysis of imagery and, sometimes, on-site missions. In this paper, we propose a novel supervised learning approach for automatically estimating the damage/severity level of the hit areas after wildfire extinction. Specifically, the proposed approach, leveraging the combination of a classification algorithm and a regression algorithm, predicts the damage/severity level of the sub-areas under analysis by processing a single post-fire satellite acquisition. Our approach has been validated in five different European countries and on 21 wildfires, and proved robust across several geographical contexts with similar geological characteristics.
20
Adamian N, Naunheim MR, Jowett N. An Open-Source Computer Vision Tool for Automated Vocal Fold Tracking From Videoendoscopy. Laryngoscope 2020; 131:E219-E225. [PMID: 32356903 DOI: 10.1002/lary.28669] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2019] [Revised: 03/03/2020] [Accepted: 03/18/2020] [Indexed: 12/17/2022]
Abstract
OBJECTIVES Contemporary clinical assessment of vocal fold adduction and abduction is qualitative and subjective. Herein is described a novel computer vision tool for automated quantitative tracking of vocal fold motion from videolaryngoscopy. The potential of this software as a diagnostic aid in unilateral vocal fold paralysis is demonstrated. STUDY DESIGN Case-control. METHODS A deep-learning algorithm was trained for vocal fold localization from videoendoscopy for automated frame-wise estimation of glottic opening angles. Algorithm accuracy was compared against manual expert markings. Maximum glottic opening angles between adults with normal movements (N = 20) and those with unilateral vocal fold paralysis (N = 20) were characterized. RESULTS Algorithm angle estimations demonstrated a correlation coefficient of 0.97 (P < .001) and mean absolute difference of 3.72° (standard deviation [SD], 3.49°) in comparison to manual expert markings. In comparison to those with normal movements, patients with unilateral vocal fold paralysis demonstrated significantly lower maximal glottic opening angles (mean 68.75° ± 11.82° vs. 49.44° ± 10.42°; difference, 19.31°; 95% confidence interval [CI] [12.17°-26.44°]; P < .001). Maximum opening angle less than 58.65° predicted unilateral vocal fold paralysis with a sensitivity of 0.85 and specificity of 0.85, with an area under the receiver operating characteristic curve of 0.888 (95% CI [0.784-0.991]; P < .001). CONCLUSION A user-friendly software tool for automated quantification of vocal fold movements from previously recorded videolaryngoscopy examinations is presented, termed automated glottic action tracking by artificial intelligence (AGATI). This tool may prove useful for diagnosis and outcomes tracking of vocal fold movement disorders. LEVEL OF EVIDENCE IV Laryngoscope, 131:E219-E225, 2021.
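The reported cutoff behaviour (a maximum opening angle below a threshold predicting paralysis, with a given sensitivity and specificity) can be reproduced for any candidate threshold from labelled measurements. A small Python sketch with invented toy angles, not the study's measurements:

```python
def sens_spec_at_threshold(angles, labels, threshold):
    """Sensitivity and specificity of the rule
    'maximum glottic opening angle < threshold predicts paralysis'.
    labels: 1 = unilateral vocal fold paralysis, 0 = normal movement."""
    tp = sum(1 for a, y in zip(angles, labels) if y == 1 and a < threshold)
    fn = sum(1 for a, y in zip(angles, labels) if y == 1 and a >= threshold)
    tn = sum(1 for a, y in zip(angles, labels) if y == 0 and a >= threshold)
    fp = sum(1 for a, y in zip(angles, labels) if y == 0 and a < threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy maximum opening angles in degrees (illustrative only)
angles = [45.0, 50.0, 55.0, 62.0, 70.0, 65.0, 72.0, 60.0]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
sens, spec = sens_spec_at_threshold(angles, labels, 58.65)
```

Sweeping the threshold over the observed angles and plotting sensitivity against 1 − specificity yields the ROC curve from which an AUC such as the study's 0.888 is derived.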
Affiliation(s)
- Nat Adamian
- Surgical Photonics & Engineering Laboratory, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A
- Matthew R Naunheim
- Division of Laryngology, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A
- Nate Jowett
- Surgical Photonics & Engineering Laboratory, Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, Massachusetts, U.S.A