1
Fujita S, Fushimi Y, Ito R, Matsui Y, Tatsugami F, Fujioka T, Ueda D, Fujima N, Hirata K, Tsuboyama T, Nozaki T, Yanagawa M, Kamagata K, Kawamura M, Yamada A, Nakaura T, Naganawa S. Advancing clinical MRI exams with artificial intelligence: Japan's contributions and future prospects. Jpn J Radiol 2025; 43:355-364. [PMID: 39548049; PMCID: PMC11868336; DOI: 10.1007/s11604-024-01689-y]
Abstract
In this narrative review, we survey the applications of artificial intelligence (AI) in clinical magnetic resonance imaging (MRI) exams, with a particular focus on Japan's contributions to this field. In the first part of the review, we introduce the various applications of AI in optimizing different aspects of the MRI process, including scan protocols, patient preparation, image acquisition, image reconstruction, and postprocessing techniques. Additionally, we examine AI's growing influence in clinical decision-making, particularly in areas such as segmentation, radiation therapy planning, and reporting assistance. By emphasizing studies conducted in Japan, we highlight the nation's contributions to the advancement of AI in MRI. In the latter part of the review, we discuss the characteristics that make Japan a unique environment for the development and implementation of AI in MRI examinations. Japan's healthcare landscape is distinguished by several key factors that collectively create fertile ground for AI research and development. Notably, Japan has one of the highest densities of MRI scanners per capita globally, ensuring widespread access to the exam. Japan's national health insurance system plays a pivotal role by providing MRI scans to all citizens irrespective of socioeconomic status, which facilitates the collection of inclusive and unbiased imaging data across a diverse population. Japan's extensive health screening programs, coupled with collaborative research initiatives such as the Japan Medical Imaging Database (J-MID), enable the aggregation and sharing of large, high-quality datasets. With its technological expertise and healthcare infrastructure, Japan is well positioned to make meaningful contributions to the MRI-AI domain. The collaborative efforts of researchers, clinicians, and technology experts, including those in Japan, will continue to advance the future of AI in clinical MRI, potentially leading to improvements in patient care and healthcare efficiency.
Affiliation(s)
- Shohei Fujita
  - Department of Radiology, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo, Japan
- Yasutaka Fushimi
  - Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto, Japan
- Rintaro Ito
  - Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Yusuke Matsui
  - Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-Ku, Okayama, Japan
- Fuminari Tatsugami
  - Department of Diagnostic Radiology, Hiroshima University, Minami-Ku, Hiroshima City, Hiroshima, Japan
- Tomoyuki Fujioka
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo, Japan
- Daiju Ueda
  - Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Abeno-Ku, Osaka, Japan
- Noriyuki Fujima
  - Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Kenji Hirata
  - Department of Nuclear Medicine, Hokkaido University Hospital, Sapporo, Hokkaido, Japan
- Takahiro Tsuboyama
  - Department of Radiology, Kobe University Graduate School of Medicine, Chuo-Ku, Kobe, Japan
- Taiki Nozaki
  - Department of Radiology, Keio University School of Medicine, Tokyo, Japan
- Masahiro Yanagawa
  - Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan
- Koji Kamagata
  - Department of Radiology, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Mariko Kawamura
  - Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
- Akira Yamada
  - Medical Data Science Course, Shinshu University School of Medicine, Matsumoto, Nagano, Japan
- Takeshi Nakaura
  - Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Kumamoto, Kumamoto, Japan
- Shinji Naganawa
  - Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi, Japan
2
Nakamoto I, Chen H, Wang R, Guo Y, Chen W, Feng J, Wu J. WDRIV-Net: a weighted ensemble transfer learning to improve automatic type stratification of lumbar intervertebral disc bulge, prolapse, and herniation. Biomed Eng Online 2025; 24:11. [PMID: 39915867; PMCID: PMC11800529; DOI: 10.1186/s12938-025-01341-4]
Abstract
Degeneration of the intervertebral discs in the lumbar spine is a common cause of neurological and physical dysfunction and chronic disability, and can be stratified into single-type (e.g., disc herniation, prolapse, or bulge) and comorbidity-type degeneration (i.e., the simultaneous presence of two or more conditions). A sample of lumbar magnetic resonance imaging (MRI) images from multiple clinical hospitals in China was collected and used to assess the proposed approach. We devised a weighted transfer learning framework, WDRIV-Net, by ensembling four pre-trained models: DenseNet169, ResNet101, InceptionV3, and VGG19. Applied to the clinical data, the proposed approach achieved 96.25% accuracy, surpassing the benchmark ResNet101 (87.5%), DenseNet169 (82.5%), VGG19 (88.75%), InceptionV3 (93.75%), and other state-of-the-art (SOTA) ensemble deep learning models. Improved performance was also observed for the area under the curve (AUC), with a ≥ 7% increase over other SOTA ensemble learning models, a ≥ 6% increase over the most-studied models, and a ≥ 2% increase over the baselines. WDRIV-Net can serve as a guide for initial, efficient type screening of complex degeneration of lumbar intervertebral discs (LID) and assist in the early-stage selection of clinically differentiated treatment options.
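The abstract describes probability-level ensembling of four ImageNet-pretrained backbones with learned weights. The sketch below illustrates that general pattern in PyTorch; the 4-class head, the shared 299 x 299 input size, and the softmax-weighted combination are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed: bulge, prolapse, herniation, comorbidity

def make_backbone(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classifier
    with a NUM_CLASSES-way head."""
    if name == "densenet169":
        m = models.densenet169(weights="IMAGENET1K_V1")
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    elif name == "resnet101":
        m = models.resnet101(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "inception_v3":
        m = models.inception_v3(weights="IMAGENET1K_V1")
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "vgg19":
        m = models.vgg19(weights="IMAGENET1K_V1")
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    else:
        raise ValueError(name)
    return m

class WeightedEnsemble(nn.Module):
    """Softmax-weighted average of the members' class probabilities;
    the combination weights themselves are learnable."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)
        self.weight_logits = nn.Parameter(torch.zeros(len(members)))

    def forward(self, x):
        w = torch.softmax(self.weight_logits, dim=0)
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return sum(wi * p for wi, p in zip(w, probs))

names = ["densenet169", "resnet101", "inception_v3", "vgg19"]
ensemble = WeightedEnsemble([make_backbone(n) for n in names]).eval()
with torch.no_grad():                             # Inception needs >= 75x75 inputs;
    out = ensemble(torch.randn(2, 3, 299, 299))   # 299x299 satisfies all four
```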
Affiliation(s)
- Ichiro Nakamoto
  - School of Internet Economics and Business, Fujian University of Technology, Fuzhou, China
- Hua Chen
  - Department of Radiology, Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
  - Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, China
- Rui Wang
  - Department of Neurosurgery, Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
  - Department of Neurosurgery, Fujian Medical University Union Hospital, Fuzhou, China
- Yan Guo
  - School of Internet Economics and Business, Fujian University of Technology, Fuzhou, China
- Wei Chen
  - School of Internet Economics and Business, Fujian University of Technology, Fuzhou, China
- Jie Feng
  - Department of Radiology, Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
  - Department of Radiology, Fujian Medical University Union Hospital, Fuzhou, China
- Jianfeng Wu
  - Department of Neurosurgery, Pingtan Comprehensive Experimentation Area Hospital, Pingtan, China
  - Department of Neurosurgery, Fujian Medical University Union Hospital, Fuzhou, China
3
Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. [PMID: 38265911; DOI: 10.1109/rbme.2024.3357877]
Abstract
Breast cancer has had the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcomes of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawing on the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.
4
Al Mansour AGM, Alshomrani F, Alfahaid A, Almutairi ATM. MammoViT: A Custom Vision Transformer Architecture for Accurate BIRADS Classification in Mammogram Analysis. Diagnostics (Basel) 2025; 15:285. [PMID: 39941215; PMCID: PMC11817779; DOI: 10.3390/diagnostics15030285]
Abstract
Background: Breast cancer screening through mammography interpretation is crucial for early detection and improved patient outcomes. However, the manual classification of mammograms using the BIRADS (Breast Imaging-Reporting and Data System) remains challenging due to subtle imaging features, inter-reader variability, and increasing radiologist workload. Traditional computer-aided detection systems often struggle with complex feature extraction and contextual understanding of mammographic abnormalities. To address these limitations, this study proposes MammoViT, a novel hybrid deep learning framework that leverages both ResNet50's hierarchical feature extraction capabilities and Vision Transformer's ability to capture long-range dependencies in images. Methods: We implemented a multi-stage approach utilizing a pre-trained ResNet50 model for initial feature extraction from mammogram images. To address the significant class imbalance in our four-class BIRADS dataset, we applied SMOTE (Synthetic Minority Over-sampling Technique) to generate synthetic samples for minority classes. The extracted feature arrays were transformed into non-overlapping patches with positional encodings for Vision Transformer processing. The Vision Transformer employs multi-head self-attention mechanisms to capture both local and global relationships between image patches, with each attention head learning different aspects of spatial dependencies. The model was optimized using Keras Tuner and trained using 5-fold cross-validation with early stopping to prevent overfitting. Results: MammoViT achieved 97.4% accuracy in classifying mammogram images across different BIRADS categories. The model's effectiveness was validated through comprehensive evaluation metrics, including a classification report, confusion matrix, probability distribution, and comparison with existing studies. Conclusions: MammoViT effectively combines ResNet50 and Vision Transformer architectures while addressing the challenge of imbalanced medical imaging datasets. The high accuracy and robust performance demonstrate its potential as a reliable tool for supporting clinical decision-making in breast cancer screening.
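As a rough illustration of the pipeline this abstract outlines (ResNet50 features, SMOTE rebalancing, patch tokens with positional encodings, multi-head self-attention), here is a hedged PyTorch/imbalanced-learn sketch; the patch count, model width, depth, and head count are assumptions, not the published MammoViT hyperparameters.

```python
import torch
import torch.nn as nn
from torchvision import models
from imblearn.over_sampling import SMOTE

# (1) ResNet50 truncated before its fc layer -> one 2048-d vector per image
resnet = models.resnet50(weights="IMAGENET1K_V1")
extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

@torch.no_grad()
def extract_features(images):            # images: (N, 3, 224, 224)
    return extractor(images).flatten(1)  # -> (N, 2048)

# (2) SMOTE on the extracted feature arrays to balance the four BIRADS classes
def balance(features_np, labels_np):
    """features_np: (N, 2048) NumPy array; oversamples minority classes."""
    return SMOTE(random_state=0).fit_resample(features_np, labels_np)

# (3)+(4) feature vector -> non-overlapping patch tokens with positional
# encodings, then a small multi-head self-attention encoder and a 4-way head
class FeatureViT(nn.Module):
    def __init__(self, feat_dim=2048, n_patches=16, d_model=128,
                 n_heads=4, depth=2, n_classes=4):
        super().__init__()
        assert feat_dim % n_patches == 0
        self.n_patches, self.patch_dim = n_patches, feat_dim // n_patches
        self.proj = nn.Linear(self.patch_dim, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):                  # feats: (N, 2048)
        x = feats.view(-1, self.n_patches, self.patch_dim)
        x = self.proj(x) + self.pos            # tokens + positional encodings
        x = self.encoder(x)                    # multi-head self-attention
        return self.head(x.mean(dim=1))        # mean-pool -> 4 BIRADS logits
```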
Affiliation(s)
- Abdullah G. M. Al Mansour
  - Radiology and Medical Imaging Department, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Alkharj 11942, Saudi Arabia
- Faisal Alshomrani
  - Department of Diagnostic Radiology Technology, College of Applied Medical Science, Taibah University, Medinah 42353, Saudi Arabia
- Abdullah Alfahaid
  - College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
- Abdulaziz T. M. Almutairi
  - Department of Computer, College of Science and Humanities, Shaqra University, Shaqra 11961, Saudi Arabia
5
Wang W, Li J, Wang Z, Liu Y, Yang F, Cui S. Study on the classification of benign and malignant breast lesions using a multi-sequence breast MRI fusion radiomics and deep learning model. Eur J Radiol Open 2024; 13:100607. [PMID: 39502650; PMCID: PMC11536030; DOI: 10.1016/j.ejro.2024.100607]
Abstract
Purpose: To develop a multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning for the classification of benign and malignant breast lesions, to assist clinicians in better selecting treatment plans. Methods: A total of 314 patients who underwent breast MRI examinations were included. They were randomly divided into training, validation, and test sets in a ratio of 7:1:2. Features of T1-weighted images (T1WI), T2-weighted images (T2WI), and dynamic contrast-enhanced MRI (DCE-MRI) were extracted using the convolutional neural network ResNet50 for fusion and then combined with radiomic features from the three sequences. The following models were established: a T1 model, a T2 model, a DCE model, a DCE_T1_T2 model, and a DCE_T1_T2_rad model. Model performance was evaluated by the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. Differences between the DCE_T1_T2_rad model and the other four models were compared using the DeLong test, with a P-value < 0.05 considered statistically significant. Results: The five models established in this study yielded AUC values of 0.53 for the T1 model, 0.62 for the T2 model, 0.79 for the DCE model, 0.94 for the DCE_T1_T2 model, and 0.98 for the DCE_T1_T2_rad model. The DCE_T1_T2_rad model showed statistically significant differences (P < 0.05) compared with the other four models. Conclusion: A multi-modal model combining multi-sequence breast MRI fusion radiomics and deep learning can effectively improve the diagnostic performance of breast lesion classification.
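A minimal sketch of the fusion idea described in the Methods, assuming fusion by simple concatenation and an arbitrary radiomics feature dimension; the authors' exact fusion strategy and head design may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

def resnet50_trunk() -> nn.Module:
    """ResNet50 up to global average pooling: (N, 3, H, W) -> (N, 2048, 1, 1)."""
    m = models.resnet50(weights="IMAGENET1K_V1")
    return nn.Sequential(*list(m.children())[:-1])

class FusionClassifier(nn.Module):
    """Concatenate ResNet50 features of T1WI, T2WI, and DCE-MRI with
    radiomic features from the same sequences before a benign/malignant head."""
    def __init__(self, n_radiomics=300):  # radiomics dimension is assumed
        super().__init__()
        self.t1, self.t2, self.dce = (resnet50_trunk() for _ in range(3))
        self.head = nn.Sequential(
            nn.Linear(3 * 2048 + n_radiomics, 256), nn.ReLU(),
            nn.Dropout(0.5), nn.Linear(256, 2))

    def forward(self, t1, t2, dce, radiomics):
        deep = torch.cat([self.t1(t1).flatten(1),
                          self.t2(t2).flatten(1),
                          self.dce(dce).flatten(1)], dim=1)
        return self.head(torch.cat([deep, radiomics], dim=1))
```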
Affiliation(s)
- Wenjiang Wang
  - Graduate Faculty, Hebei North University, Zhangjiakou, Hebei, China
- Jiaojiao Li
  - Department of Medical Imaging, The First Affiliated Hospital of Hebei North University, Zhangjiakou, Hebei, China
- Zimeng Wang
  - Graduate Faculty, Hebei North University, Zhangjiakou, Hebei, China
- Yanjun Liu
  - Graduate Faculty, Hebei North University, Zhangjiakou, Hebei, China
- Fei Yang
  - Department of Medical Imaging, The First Affiliated Hospital of Hebei North University, Zhangjiakou, Hebei, China
- Shujun Cui
  - Department of Medical Imaging, The First Affiliated Hospital of Hebei North University, Zhangjiakou, Hebei, China
6
Gullo RL, Brunekreef J, Marcus E, Han LK, Eskreis-Winkler S, Thakur SB, Mann R, Lipman KG, Teuwen J, Pinker K. AI Applications to Breast MRI: Today and Tomorrow. J Magn Reson Imaging 2024; 60:2290-2308. [PMID: 38581127; PMCID: PMC11452568; DOI: 10.1002/jmri.29358]
Abstract
In breast imaging, there is an unrelenting increase in the demand for breast imaging services, partly explained by continuously expanding imaging indications in breast diagnosis and treatment. As the human workforce providing these services is not growing at the same rate, the implementation of artificial intelligence (AI) in breast imaging has gained significant momentum to maximize workflow efficiency and increase productivity while concurrently improving diagnostic accuracy and patient outcomes. Thus far, the implementation of AI in breast imaging is at the most advanced stage with mammography and digital breast tomosynthesis techniques, followed by ultrasound, whereas the implementation of AI in breast magnetic resonance imaging (MRI) is not moving along as rapidly due to the complexity of MRI examinations and fewer available datasets. Nevertheless, there is persisting interest in AI-enhanced breast MRI applications, even as the use of and indications for breast MRI continue to expand. This review presents an overview of the basic concepts of AI imaging analysis and subsequently reviews the use cases for AI-enhanced MRI interpretation, that is, breast MRI triaging and lesion detection, lesion classification, prediction of treatment response, risk assessment, and image quality. Finally, it provides an outlook on the barriers and facilitators for the adoption of AI in breast MRI. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 6.
Affiliation(s)
- Roberto Lo Gullo
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Joren Brunekreef
  - AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Eric Marcus
  - AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
- Lynn K Han
  - Weill Cornell Medical College, New York-Presbyterian Hospital, New York, NY, USA
- Sarah Eskreis-Winkler
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Sunitha B Thakur
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Ritse Mann
  - AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Kevin Groot Lipman
  - AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Jonas Teuwen
  - AI for Oncology, Netherlands Cancer Institute, Amsterdam, the Netherlands
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, the Netherlands
- Katja Pinker
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
7
Hussain MA, LaMay D, Grant E, Ou Y. Deep learning of structural MRI predicts fluid, crystallized, and general intelligence. Sci Rep 2024; 14:27935. [PMID: 39537706; PMCID: PMC11561325; DOI: 10.1038/s41598-024-78157-0]
Abstract
Can brain structure predict human intelligence? T1-weighted structural brain magnetic resonance images (sMRI) have been correlated with intelligence. However, population-level associations do not fully account for individual variability in intelligence. To address this, studies have recently emerged that predict an individual subject's intelligence or neurocognitive scores. However, they have mostly focused on predicting fluid intelligence (the ability to solve new problems); studies predicting crystallized intelligence (the ability to accumulate knowledge) or general intelligence (fluid and crystallized intelligence combined) are lacking. This study tests whether deep learning of sMRI can predict an individual subject's verbal, performance, and full-scale intelligence quotients (VIQ, PIQ, and FSIQ), which reflect fluid and crystallized intelligence. We performed a comprehensive set of 432 experiments, using different input image channels, six deep learning models, and two outcome settings, in 850 healthy and autistic subjects aged 6-64 years. Our findings indicate a statistically significant potential of T1-weighted sMRI for predicting intelligence, with a Pearson correlation exceeding 0.21 (p < 0.001). Interestingly, we observed that increasing the complexity of deep learning models does not necessarily translate into higher accuracy in intelligence prediction. The interpretations of our 2D and 3D CNNs, based on GradCAM, align well with the Parieto-Frontal Integration Theory (P-FIT), reinforcing the theory's suggestion that human intelligence results from interactions among various brain regions, including the occipital, temporal, parietal, and frontal lobes. These promising results invite further studies and open new questions in the field.
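The headline metric here is the Pearson correlation between predicted and observed IQ scores. The snippet below shows how such a check is typically computed with SciPy; the arrays are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
observed = rng.normal(100, 15, size=170)            # e.g., FSIQ of a test split
predicted = observed + rng.normal(0, 30, size=170)  # stand-in model predictions

r, p = pearsonr(predicted, observed)
print(f"Pearson r = {r:.2f}, p = {p:.2g}")  # the study reports r > 0.21, p < 0.001
```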
Affiliation(s)
- Mohammad Arafat Hussain
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
- Danielle LaMay
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
  - Khoury College of Computer and Information Science, Northeastern University, 360 Huntington Ave, Boston, MA, 02115, USA
- Ellen Grant
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
  - Department of Radiology, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
- Yangming Ou
  - Department of Pediatrics, Boston Children's Hospital, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
  - Department of Radiology, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
  - Computational Health Informatics Program, Boston Children's Hospital, Harvard Medical School, 401 Park Drive, Boston, MA, 02115, USA
8
Arslan M, Asim M, Sattar H, Khan A, Thoppil Ali F, Zehra M, Talluri K. Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review. Cureus 2024; 16:e70097. [PMID: 39449897; PMCID: PMC11500669; DOI: 10.7759/cureus.70097]
Abstract
Breast cancer remains a leading cause of morbidity and mortality among women worldwide. Early detection and precise diagnosis are critical for effective treatment and improved patient outcomes. This review explores the evolving role of radiology in the diagnosis and treatment of breast cancer, highlighting advancements in imaging technologies and the integration of artificial intelligence (AI). Traditional imaging modalities such as mammography, ultrasound, and magnetic resonance imaging have been the cornerstone of breast cancer diagnostics, with each modality offering unique advantages. The advent of radiomics, which involves extracting quantitative data from medical images, has further augmented the diagnostic capabilities of these modalities. AI, particularly deep learning algorithms, has shown potential in improving diagnostic accuracy and reducing observer variability across imaging modalities. AI-driven tools are increasingly being integrated into clinical workflows to assist in image interpretation, lesion classification, and treatment planning. Additionally, radiology plays a crucial role in guiding treatment decisions, particularly in the context of image-guided radiotherapy and monitoring response to neoadjuvant chemotherapy. The review also discusses the emerging field of theranostics, where diagnostic imaging is combined with therapeutic interventions to provide personalized cancer care. Despite these advancements, challenges such as the need for large annotated datasets and the integration of AI into clinical practice remain. The review concludes that while the role of radiology in breast cancer management is rapidly evolving, further research is required to fully realize the potential of these technologies in improving patient outcomes.
Affiliation(s)
- Muhammad Asim
  - Emergency Medicine, Royal Free Hospital, London, GBR
- Hina Sattar
  - Medicine, Dow University of Health Sciences, Karachi, PAK
- Anita Khan
  - Medicine, Khyber Girls Medical College, Peshawar, PAK
- Muneeza Zehra
  - Internal Medicine, Karachi Medical and Dental College, Karachi, PAK
- Keerthi Talluri
  - General Medicine, GSL (Ganni Subba Lakshmi garu) Medical College, Rajahmundry, IND
9
Lin Z, Chen L, Wang Y, Zhang T, Huang P. Improving ultrasound diagnostic precision for breast cancer and adenosis with modality-specific enhancement (MSE)-Breast Net. Cancer Lett 2024; 596:216977. [PMID: 38795759; DOI: 10.1016/j.canlet.2024.216977]
Abstract
Adenosis is a benign breast condition whose lesions can mimic breast carcinoma and is evaluated for malignancy with the Breast Imaging-Reporting and Data System (BI-RADS). We constructed and validated modality-specific enhancement (MSE)-Breast Net based on multimodal ultrasound images and compared its performance to the BI-RADS in differentiating adenosis from breast cancer. A total of 179 patients with breast carcinoma and 229 patients with adenosis were included in this retrospective, two-institution study and divided into a training cohort (institution I, n = 292) and a validation cohort (institution II, n = 116). In the training cohort, the final model had a significantly greater AUC (0.82; P < 0.05) than the B-mode-based model (0.69, 95% CI [0.49-0.90]). In the validation cohort, the AUC of the final model was 0.81, greater than that of the BI-RADS (0.75, P < 0.05). The multimodal model outperformed the individual and bimodal models, reaching a significantly greater AUC of 0.87 (95% CI = 0.69-1.0) (P < 0.05). MSE-Breast Net, based on multimodal ultrasound images, exhibited better diagnostic performance than the BI-RADS in differentiating adenosis from breast cancer and may contribute to clinical diagnosis and treatment.
Affiliation(s)
- Zimei Lin
  - Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Libin Chen
  - Department of Ultrasound in Medicine, The First Affiliated Hospital of Ningbo University, Ningbo, 315201, China
- Yunzhong Wang
  - Department of Ultrasound in Medicine, The First Affiliated Hospital of Ningbo University, Ningbo, 315201, China
- Tao Zhang
  - Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Pintong Huang
  - Department of Ultrasound in Medicine, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
  - Research Center of Ultrasound in Medicine and Biomedical Engineering, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
  - Research Center for Life Science and Human Health, Binjiang Institute of Zhejiang University, Hangzhou, 310053, China
Collapse
|
10
Fujioka T, Kubota K, Hsu JF, Chang RF, Sawada T, Ide Y, Taruno K, Hankyo M, Kurita T, Nakamura S, Tateishi U, Takei H. Examining the effectiveness of a deep learning-based computer-aided breast cancer detection system for breast ultrasound. J Med Ultrason (2001) 2023; 50:511-520. [PMID: 37400724; PMCID: PMC10556122; DOI: 10.1007/s10396-023-01332-9]
Abstract
PURPOSE: This study aimed to evaluate the clinical usefulness of a deep learning-based computer-aided detection (CADe) system for breast ultrasound. METHODS: The set of 88 training images was expanded to 14,000 positive images and 50,000 negative images. The CADe system was trained to detect lesions in real time using deep learning with an improved model of YOLOv3-tiny. Eighteen readers evaluated 52 test image sets with and without CADe. Jackknife alternative free-response receiver operating characteristic analysis was used to estimate the effectiveness of this system in improving lesion detection. RESULTS: The area under the curve (AUC) for image sets was 0.7726 with CADe and 0.6304 without CADe, a difference of 0.1422, indicating that the AUC with CADe was significantly higher than that without (p < 0.0001). Sensitivity per case was higher with CADe (95.4%) than without (83.7%). The specificity for suspected breast cancer cases was higher with CADe (86.6%) than without (65.7%). The number of false positives per case (FPC) was lower with CADe (0.22) than without (0.43). CONCLUSION: Use of the deep learning-based CADe system for breast ultrasound significantly improved readers' detection performance. This system is expected to contribute to highly accurate breast cancer screening and diagnosis.
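The reader study reports per-case sensitivity, specificity, and false positives per case (FPC). The helper below shows one straightforward way to compute those three numbers from per-case reading results; the example arrays are invented for illustration.

```python
import numpy as np

def per_case_metrics(is_cancer, detected, false_positives):
    """is_cancer, detected: per-case booleans; false_positives: per-case counts."""
    is_cancer = np.asarray(is_cancer, dtype=bool)
    detected = np.asarray(detected, dtype=bool)
    sensitivity = detected[is_cancer].mean()       # cancers that were flagged
    specificity = (~detected[~is_cancer]).mean()   # non-cancers left unflagged
    fpc = np.mean(false_positives)                 # false positives per case
    return sensitivity, specificity, fpc

# toy reading session: 3 cancer cases, 2 benign cases
sens, spec, fpc = per_case_metrics(
    is_cancer=[1, 1, 1, 0, 0],
    detected=[1, 1, 0, 1, 0],
    false_positives=[0, 1, 0, 1, 0])
```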
Affiliation(s)
- Tomoyuki Fujioka
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Kazunori Kubota
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
  - Department of Radiology, Dokkyo Medical University Saitama Medical Center, 2-1-50 Minami-Koshigaya, Koshigaya, Saitama, 343-8555, Japan
- Jen Feng Hsu
  - Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan, ROC
- Ruey Feng Chang
  - Department of Computer Science and Information Engineering, National Taiwan University, No. 1, Sec. 4, Roosevelt Rd, Taipei, 10617, Taiwan, ROC
- Terumasa Sawada
  - Department of Breast Surgery, NTT Medical Center Tokyo, 5-9-22 Higashi-Gotanda, Shinagawa-ku, Tokyo, 141-8625, Japan
  - Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
- Yoshimi Ide
  - Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
  - Department of Breast Oncology, Kikuna Memorial Hospital, 4-4-27 Kikuna, Kohoku-ku, Yokohama, 222-0011, Japan
- Kanae Taruno
  - Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
- Meishi Hankyo
  - Department of Breast Surgical Oncology, Nippon Medical School, 1-1-5 Sendagi, Bunkyo-ku, Tokyo, 113-8602, Japan
- Tomoko Kurita
  - Department of Breast Surgical Oncology, Nippon Medical School, 1-1-5 Sendagi, Bunkyo-ku, Tokyo, 113-8602, Japan
- Seigo Nakamura
  - Department of Breast Surgical Oncology, Department of Surgery, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-ku, Tokyo, 142-8666, Japan
- Ukihide Tateishi
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Hiroyuki Takei
  - Department of Breast Surgical Oncology, Nippon Medical School, 1-1-5 Sendagi, Bunkyo-ku, Tokyo, 113-8602, Japan
11
Adam R, Dell'Aquila K, Hodges L, Maldjian T, Duong TQ. Deep learning applications to breast cancer detection by magnetic resonance imaging: a literature review. Breast Cancer Res 2023; 25:87. [PMID: 37488621; PMCID: PMC10367400; DOI: 10.1186/s13058-023-01687-4]
Abstract
Deep learning analysis of radiological images has the potential to improve the diagnostic accuracy of breast cancer, ultimately leading to better patient outcomes. This paper systematically reviewed the current literature on deep learning detection of breast cancer based on magnetic resonance imaging (MRI). The literature search covered 2015 to December 31, 2022, using PubMed; other databases included Semantic Scholar, ACM Digital Library, Google search, Google Scholar, and preprint repositories (such as Research Square). Articles that did not use deep learning (such as texture analysis) were excluded. PRISMA guidelines for reporting were followed. We analyzed the deep learning algorithms, methods of analysis, experimental designs, MRI image types, types of ground truth, sample sizes, numbers of benign and malignant lesions, and performance reported in the literature. We discuss lessons learned, challenges to broad deployment in clinical practice, and suggested future research directions.
Affiliation(s)
- Richard Adam
  - Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Kevin Dell'Aquila
  - Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Laura Hodges
  - Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Takouhie Maldjian
  - Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
- Tim Q Duong
  - Department of Radiology, Albert Einstein College of Medicine and the Montefiore Medical Center, 1300 Morris Park Avenue, Bronx, NY, 10461, USA
12
Vainio T, Mäkelä T, Arkko A, Savolainen S, Kangasniemi M. Leveraging open dataset and transfer learning for accurate recognition of chronic pulmonary embolism from CT angiogram maximum intensity projection images. Eur Radiol Exp 2023; 7:33. [PMID: 37340248; DOI: 10.1186/s41747-023-00346-9]
Abstract
BACKGROUND: Early diagnosis of the potentially fatal but curable chronic pulmonary embolism (CPE) is challenging. We developed and investigated a novel convolutional neural network (CNN) model to recognise CPE from CT pulmonary angiograms (CTPA) based on the general vascular morphology in two-dimensional (2D) maximum intensity projection images. METHODS: A CNN model was trained on a curated subset of a public pulmonary embolism CT dataset (RSPECT) with 755 CTPA studies, including patient-level labels of CPE, acute pulmonary embolism (APE), or no pulmonary embolism. CPE patients with a right-to-left ventricular ratio (RV/LV) < 1 and APE patients with RV/LV ≥ 1 were excluded from training. Additional CNN model selection and testing were done on local data from 78 patients without the RV/LV-based exclusion. We calculated areas under the receiver operating characteristic curve (AUC) and balanced accuracies to evaluate CNN performance. RESULTS: We achieved a very high CPE-versus-no-CPE classification AUC of 0.94 and balanced accuracy of 0.89 on the local dataset using an ensemble model and considering CPE to be present in either one or both lungs. CONCLUSIONS: We propose a novel CNN model with excellent predictive accuracy to differentiate chronic pulmonary embolism with RV/LV ≥ 1 from acute pulmonary embolism and non-embolic cases on 2D maximum intensity projection reconstructions of CTPA. RELEVANCE STATEMENT: A deep learning CNN model identifies chronic pulmonary embolism from CTPA with excellent predictive accuracy. KEY POINTS: Automatic recognition of CPE from CT pulmonary angiography was developed. Deep learning was applied to two-dimensional maximum intensity projection images. A large public dataset was used to train the deep learning model. The proposed model showed excellent predictive accuracy.
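The model operates on 2D maximum intensity projections (MIPs) of CTPA volumes. Producing a MIP is a single reduction along one axis, as the NumPy sketch below shows; the window values and axis choice are assumptions for illustration.

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    """Collapse a 3D CT volume (z, y, x) to a 2D image by keeping the
    brightest voxel along the chosen axis."""
    return volume.max(axis=axis)

ct = np.random.randint(-1000, 1000, (200, 512, 512)).astype(np.int16)  # toy volume
projection = mip(ct, axis=1)               # coronal-like (200, 512) image
windowed = np.clip(projection, -100, 500)  # crude vascular window (assumed values)
```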
Affiliation(s)
- Tuomas Vainio
  - Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
- Teemu Mäkelä
  - Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
  - Department of Physics, University of Helsinki, Helsinki, Finland
- Anssi Arkko
  - Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
- Sauli Savolainen
  - Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
  - Department of Physics, University of Helsinki, Helsinki, Finland
- Marko Kangasniemi
  - Radiology, HUS Medical Imaging Center, University of Helsinki and Helsinki University Hospital, 00290, Helsinki, Finland
13
Sujatha R, Chatterjee JM, Angelopoulou A, Kapetanios E, Srinivasu PN, Hemanth DJ. A transfer learning-based system for grading breast invasive ductal carcinoma. IET Image Processing 2023; 17:1979-1990. [DOI: 10.1049/ipr2.12660]
Abstract
Breast carcinoma is a type of malignancy that begins in the breast. Breast cancer cells generally form a tumour that can often be seen on an X-ray or felt as a lump. Despite advances in screening, treatment, and surveillance that have improved patient survival rates, breast carcinoma is the most frequently diagnosed malignancy and the second leading cause of cancer mortality among women. Invasive ductal carcinoma is the most widespread breast malignancy, accounting for about 80% of all diagnosed cases. Research has repeatedly shown that artificial intelligence has tremendous capabilities, which is why it is used in various sectors, especially the healthcare domain. Mammography is used for initial diagnosis, but finding cancer in a dense breast is challenging; advances in deep learning and its application to these findings help with earlier detection and treatment. In the present work, the authors apply deep learning to grading breast invasive ductal carcinoma using transfer learning. Five transfer learning approaches were used, namely VGG16, VGG19, InceptionResNetV2, DenseNet121, and DenseNet201, each trained for 50 epochs on the Google Colab platform, which provides a single 12 GB NVIDIA Tesla K80 graphics processing unit (GPU) that can be used for up to 12 h continuously. The dataset used for this work can be openly accessed from http://databiox.com. The accuracies obtained were: VGG16, 92.5%; VGG19, 89.77%; InceptionResNetV2, 84.46%; DenseNet121, 92.64%; DenseNet201, 85.22%. From these results, it is clear that DenseNet121 gives the maximum accuracy for cancer grading, whereas InceptionResNetV2 has the minimum.
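For readers unfamiliar with the setup, the following Keras sketch shows the standard transfer-learning pattern the study applies, using VGG16 as one of the five backbones; the frozen base, head width, and three-class grading output are assumptions (the same pattern applies to VGG19, InceptionResNetV2, DenseNet121, and DenseNet201).

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep the ImageNet features, train only the new head

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # IDC grades I-III (assumed classes)
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=50)  # 50 epochs per the abstract
```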
Affiliation(s)
- Epaminondas Kapetanios
  - School of Physics, Engineering and Computer Science, University of Hertfordshire, Hertfordshire, UK
14
Fujioka T, Satoh Y, Imokawa T, Mori M, Yamaga E, Takahashi K, Kubota K, Onishi H, Tateishi U. Proposal to Improve the Image Quality of Short-Acquisition Time-Dedicated Breast Positron Emission Tomography Using the Pix2pix Generative Adversarial Network. Diagnostics (Basel) 2022; 12:3114. [PMID: 36553120; PMCID: PMC9777139; DOI: 10.3390/diagnostics12123114]
Abstract
This study aimed to evaluate the ability of the pix2pix generative adversarial network (GAN) to improve the image quality of low-count dedicated breast positron emission tomography (dbPET). Pairs of full- and low-count dbPET images were collected from 49 breasts. An image synthesis model was constructed using the pix2pix GAN for each acquisition time, with training (3776 pairs from 16 breasts) and validation data (1652 pairs from 7 breasts). Test data included dbPET images synthesized by our model from 26 breasts with short acquisition times. Two breast radiologists visually compared the overall image quality of the original and synthesized images derived from the short-acquisition-time data (scores of 1-5). Further quantitative evaluation was performed using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the visual evaluation, both readers gave an average score of >3 for all images. The quantitative evaluation revealed significantly higher SSIM (p < 0.01) and PSNR (p < 0.01) for 26 s synthetic images, and higher PSNR for 52 s images (p < 0.01), than for the original images. Our model improved the quality of low-count dbPET images, with a more significant effect on images with lower counts.
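The quantitative comparison relies on PSNR and SSIM between synthesized and original images; both are available in scikit-image, as sketched below with placeholder arrays standing in for the dbPET slices.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
full_count = rng.random((128, 128)).astype(np.float32)  # stand-in full-count slice
synthetic = np.clip(full_count + rng.normal(0, 0.05, (128, 128)),
                    0, 1).astype(np.float32)            # stand-in GAN output

psnr = peak_signal_noise_ratio(full_count, synthetic, data_range=1.0)
ssim = structural_similarity(full_count, synthetic, data_range=1.0)
print(f"PSNR = {psnr:.1f} dB, SSIM = {ssim:.3f}")
```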
Affiliation(s)
- Tomoyuki Fujioka
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Yoko Satoh
  - Yamanashi PET Imaging Clinic, Chuo City 409-3821, Japan
  - Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Tomoki Imokawa
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Mio Mori
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Emi Yamaga
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kanae Takahashi
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kazunori Kubota
  - Department of Radiology, Dokkyo Medical University Saitama Medical Center, Koshigaya 343-8555, Japan
- Hiroshi Onishi
  - Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Ukihide Tateishi
  - Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
15
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753; PMCID: PMC9655692; DOI: 10.3390/cancers14215334]
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent cure has been discovered. Thus, early detection is a crucial step to control and cure breast cancer, and it can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which leads to an increased risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely utilized to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing the survival chances of patients. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report available datasets for breast cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review paper aims to provide a comprehensive resource to help researchers working in breast cancer imaging analysis.
Affiliation(s)
- Mohammad Madani
  - Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi
  - Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi
  - Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
16
Influence of Percutaneous Drainage Surgery and the Interval to Perform Laparoscopic Cholecystectomy on Acute Cholecystitis through Genetic Algorithm-Based Contrast-Enhanced Ultrasound Imaging. Comput Intell Neurosci 2022; 2022:3602811. [PMID: 35942459; PMCID: PMC9356791; DOI: 10.1155/2022/3602811]
Abstract
To determine the optimal interval between genetic algorithm-based ultrasound imaging-guided percutaneous drainage surgery (PTGD) and laparoscopic cholecystectomy (LC), 64 cholecystitis patients were selected as study subjects and evenly divided into an experimental group (an intelligent algorithm was adopted to recognize patients' ultrasonic images) and a control group (diagnosis performed by professional physicians). In addition, 92 acute cholecystitis patients undergoing PTGD were divided into three groups: 30 received LC within 2 months (early group), 32 underwent LC within 2 to 4 months (metaphase group), and 28 underwent LC after more than 4 months (late-stage group). The average operation time, the rate of conversion from LC to laparotomy, the average postoperative hospital stay, and the incidence of complications of the three groups were compared. The results revealed that the differences in diagnostic accuracy and comprehensive effectiveness between the experimental group and the control group were statistically significant. For these four measures, the early group's values were 88.5 minutes, 16.67%, 8.13 days, and 13.75%; the metaphase group's were 49.91 minutes, 3.13%, 4.97 days, and 9.52%; and the late-stage group's were 68.78 minutes, 10.71%, 7.09 days, and 11.96%. In summary, the diagnostic accuracy and comprehensive effectiveness of the intelligent algorithm were higher than those of conventional ultrasound, and the optimal interval for performing LC after PTGD was 2 to 4 months.
17
Bhowmik A, Eskreis-Winkler S. Deep learning in breast imaging. BJR Open 2022; 4:20210060. [PMID: 36105427; PMCID: PMC9459862; DOI: 10.1259/bjro.20210060]
Abstract
Millions of breast imaging exams are performed each year in an effort to reduce the morbidity and mortality of breast cancer. Breast imaging exams are performed for cancer screening, diagnostic work-up of suspicious findings, evaluating extent of disease in recently diagnosed breast cancer patients, and determining treatment response. Yet, the interpretation of breast imaging can be subjective, tedious, time-consuming, and prone to human error. Retrospective and small reader studies suggest that deep learning (DL) has great potential to perform medical imaging tasks at or above human-level performance, and may be used to automate aspects of the breast cancer screening process, improve cancer detection rates, decrease unnecessary callbacks and biopsies, optimize patient risk assessment, and open up new possibilities for disease prognostication. Prospective trials are urgently needed to validate these proposed tools, paving the way for real-world clinical use. New regulatory frameworks must also be developed to address the unique ethical, medicolegal, and quality control issues that DL algorithms present. In this article, we review the basics of DL, describe recent DL breast imaging applications including cancer detection and risk prediction, and discuss the challenges and future directions of artificial intelligence-based systems in the field of breast cancer.
Affiliation(s)
- Arka Bhowmik
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Sarah Eskreis-Winkler
  - Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
18
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259; DOI: 10.1053/j.semnuclmed.2022.02.003]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for detection and classification of axillary lymph nodes and for distant staging in breast cancer imaging. All of these techniques are now digitized, enabling the implementation of deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and prediction and assessment of therapy response. Studies show similar or even better performance of DL algorithms compared with radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine exactly the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende
  - Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
  - Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann
  - Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
  - Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
19
Wang L, Chang L, Luo R, Cui X, Liu H, Wu H, Chen Y, Zhang Y, Wu C, Li F, Liu H, Guan W, Wang D. An artificial intelligence system using maximum intensity projection MR images facilitates classification of non-mass enhancement breast lesions. Eur Radiol 2022; 32:4857-4867. [PMID: 35258676; DOI: 10.1007/s00330-022-08553-5]
Abstract
OBJECTIVES: To build an artificial intelligence (AI) system to classify benign and malignant non-mass enhancement (NME) lesions using maximum intensity projection (MIP) of early post-contrast subtracted breast MR images. METHODS: This retrospective study collected 965 pure NME lesions (539 benign and 426 malignant) confirmed by histopathology or follow-up in 903 women. The 754 NME lesions acquired by one MR scanner were randomly split into a training set, a validation set, and test set A (482/121/151 lesions). The 211 NME lesions acquired by another MR scanner were used as test set B. The AI system was developed using ResNet-50 with the axial and sagittal MIP images. One senior and one junior radiologist independently reviewed the MIP images of each case and rated its Breast Imaging Reporting and Data System category. The performance of the AI system and the radiologists was evaluated using the area under the receiver operating characteristic curve (AUC). RESULTS: The AI system yielded AUCs of 0.859 and 0.816 in test sets A and B, respectively. The AI system achieved performance comparable to that of the senior radiologist (p = 0.558, p = 0.041) and outperformed the junior radiologist (p < 0.001, p = 0.009) in both test sets A and B. With AI assistance, the AUC of the junior radiologist increased from 0.740 to 0.862 in test set A (p < 0.001) and from 0.732 to 0.843 in test set B (p < 0.001). CONCLUSION: Our MIP-based AI system yielded good applicability in classifying NME lesions in breast MRI and can assist the junior radiologist in achieving better performance. KEY POINTS: Our MIP-based AI system yielded good applicability on data both from the same and from a different MR scanner in predicting malignant NME lesions. The AI system achieved diagnostic performance comparable to the senior radiologist and outperformed the junior radiologist. The AI system can assist the junior radiologist in achieving better classification of NME lesions in MRI.
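Model and reader performance here are compared by ROC AUC; for readers as raters, BI-RADS categories can be scored as ordinal values. A minimal scikit-learn sketch with invented placeholder labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])             # 0 = benign, 1 = malignant NME
ai_scores = np.array([.2, .4, .9, .7, .6, .1, .8, .3])  # model malignancy probabilities
reader_birads = np.array([2, 4, 5, 4, 3, 2, 4, 3])      # BI-RADS as ordinal scores

print("AI AUC:    ", roc_auc_score(y_true, ai_scores))
print("Reader AUC:", roc_auc_score(y_true, reader_birads))
```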
Affiliation(s)
- Lijun Wang: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Lufan Chang: Department of Research & Development, Yizhun Medical AI Co. Ltd., Beijing, China
- Ran Luo: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Xuee Cui: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Huanhuan Liu: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Haoting Wu: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Yanhong Chen: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Yuzhen Zhang: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Chenqing Wu: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Fangzhen Li: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
- Hao Liu: Department of Research & Development, Yizhun Medical AI Co. Ltd., Beijing, China
- Wenbin Guan: Department of Pathology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200092, China
- Dengbin Wang: Department of Radiology, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, No. 1665 Kongjiang Road, Shanghai, 200092, China
20
Deep Learning Using Multiple Degrees of Maximum-Intensity Projection for PET/CT Image Classification in Breast Cancer. Tomography 2022; 8:131-141. [PMID: 35076612] [PMCID: PMC8788419] [DOI: 10.3390/tomography8010011] [Citation(s) in RCA: 4]
Abstract
Deep learning (DL) has recently become a remarkably powerful tool for image processing. However, the usefulness of DL in positron emission tomography (PET)/computed tomography (CT) for breast cancer (BC) has been insufficiently studied. This study investigated whether a DL model using PET maximum-intensity projection (MIP) images at multiple angles increases diagnostic accuracy for PET/CT image classification in BC. We retrospectively gathered 400 images of 200 BC and 200 non-BC patients as training data. For each case, we obtained PET MIP images at four different angles (0°, 30°, 60°, 90°) and built two DL models using Xception. One DL model diagnosed BC using only the 0-degree MIP, and the other used all four angles. After the training phase, our DL models analyzed test data comprising 50 BC and 50 non-BC patients. Five radiologists interpreted these test data. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. Our 4-degree model, 0-degree model, and the radiologists had sensitivities of 96%, 82%, and 80–98% and specificities of 80%, 88%, and 76–92%, respectively. Our 4-degree model had diagnostic performance equal to or better than that of the radiologists (AUC = 0.936 vs. 0.872–0.967, p = 0.036–0.405). A DL model similar to our 4-degree model may help radiologists in their diagnostic work in the future.
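Generating MIP images at several projection angles from a PET volume, as this study does, can be sketched as a rotate-then-project loop. The NumPy/SciPy snippet below is an illustrative sketch only; the axis convention, interpolation order, and any intensity preprocessing are assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

def mip_at_angles(volume: np.ndarray, angles=(0, 30, 60, 90)):
    """Rotate a 3D volume (z, y, x) about the z-axis and take the
    maximum-intensity projection along x at each angle.

    Illustrative only: axis convention, interpolation order, and
    preprocessing are assumptions, not the paper's pipeline.
    """
    mips = []
    for angle in angles:
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        mips.append(rotated.max(axis=2))  # project along the x-axis
    return mips

# Toy example: a random array stands in for a PET volume.
vol = np.random.rand(64, 128, 128)
for angle, mip in zip((0, 30, 60, 90), mip_at_angles(vol)):
    print(f"{angle:>2} degrees -> MIP shape {mip.shape}")
```

Each resulting 2D MIP could then be fed to a 2D classifier such as Xception, either individually or as a multi-view input.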
21
Kataoka M, Honda M, Ohashi A, Yamaguchi K, Mori N, Goto M, Fujioka T, Mori M, Kato Y, Satake H, Iima M, Kubota K. Ultrafast Dynamic Contrast-enhanced MRI of the Breast: How Is It Used? Magn Reson Med Sci 2022; 21:83-94. [PMID: 35228489] [PMCID: PMC9199976] [DOI: 10.2463/mrms.rev.2021-0157] [Citation(s) in RCA: 17]
Abstract
Ultrafast dynamic contrast-enhanced (UF-DCE) MRI is a new approach that captures kinetic information in the very early post-contrast period with high temporal resolution while maintaining reasonable spatial resolution. The detailed timing and shape of the upslope of the time–intensity curve are analyzed. New kinetic parameters obtained from UF-DCE MRI are useful for differentiating malignant from benign lesions and for evaluating prognostic markers of breast cancer. Clinically, UF-DCE MRI helps identify hypervascular lesions when background parenchymal enhancement (BPE) is marked on conventional dynamic MRI. This review starts with the technical aspects of accelerated acquisition. Practical aspects of UF-DCE MRI include identification of target hypervascular lesions against marked BPE and the diagnosis of malignant and benign lesions based on new kinetic parameters derived from UF-DCE MRI: maximum slope (MS), time to enhance (TTE), bolus arrival time (BAT), time interval between arterial and venous visualization (AVI), and the empirical mathematical model (EMM). These parameters are compared in terms of their diagnostic performance and association with prognostic markers. Pitfalls of UF-DCE MRI in clinical practice are also covered. Since UF-DCE MRI is an evolving technique, its future prospects are discussed in detail, citing recent evidence. Topics include prediction of treatment response, a multiparametric approach using DWI-derived parameters, evaluation of tumor-related vessels, and the application of artificial intelligence to UF-DCE MRI. Along with a comprehensive literature review, illustrative clinical cases are presented to convey the value of UF-DCE MRI.
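Two of the kinetic parameters named above, maximum slope (MS) and time to enhance (TTE), can be estimated directly from a sampled time–intensity curve. The sketch below uses one plausible set of definitions (largest finite-difference slope of relative enhancement, and the first time point exceeding a 10% enhancement threshold); published definitions vary, so treat the thresholds and function names as assumptions.

```python
import numpy as np

def ms_and_tte(times_s: np.ndarray, signal: np.ndarray, enhance_frac: float = 0.1):
    """Estimate maximum slope (MS) and time to enhance (TTE) from a sampled
    time-intensity curve.

    Assumed definitions (published ones vary): MS is the largest finite-
    difference slope of relative enhancement per second; TTE is the first
    time point where relative enhancement exceeds `enhance_frac`.
    """
    baseline = signal[0]
    rel_enh = (signal - baseline) / baseline        # relative enhancement
    slopes = np.diff(rel_enh) / np.diff(times_s)    # fractional change per second
    above = np.nonzero(rel_enh > enhance_frac)[0]
    tte = times_s[above[0]] if above.size else np.nan
    return slopes.max(), tte

t = np.arange(0, 60, 4.0)                    # e.g., 4-s temporal resolution
s = 100 + 80 / (1 + np.exp(-(t - 20) / 4))   # synthetic enhancement curve
print(ms_and_tte(t, s))
```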
Affiliation(s)
- Masako Kataoka: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine
- Maya Honda: Department of Diagnostic Radiology, Kansai Electric Power Hospital
- Akane Ohashi: Department of Translational Medicine, Diagnostic Radiology, Lund University, Skåne University Hospital
- Ken Yamaguchi: Department of Radiology, Faculty of Medicine, Saga University
- Naoko Mori: Department of Diagnostic Radiology, Tohoku University Graduate School of Medicine
- Mariko Goto: Department of Radiology, Graduate School of Medical Science, Kyoto Prefectural University of Medicine
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University
- Mio Mori: Department of Diagnostic Radiology, Tokyo Medical and Dental University
- Yutaka Kato: Department of Radiological Technology, Nagoya University Hospital
- Hiroko Satake: Department of Radiology, Nagoya University Graduate School of Medicine
- Mami Iima: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine
- Kazunori Kubota: Department of Radiology, Dokkyo Medical University Saitama Medical Center
22
Mridha MF, Hamid MA, Monowar MM, Keya AJ, Ohi AQ, Islam MR, Kim JM. A Comprehensive Survey on Deep-Learning-Based Breast Cancer Diagnosis. Cancers (Basel) 2021; 13:6116. [PMID: 34885225] [PMCID: PMC8656730] [DOI: 10.3390/cancers13236116] [Citation(s) in RCA: 19]
Abstract
Breast cancer is now the most frequently diagnosed cancer in women, and its incidence is gradually increasing. Encouragingly, there is a good chance of recovery if it is identified and treated at an early stage. Several researchers have therefore developed deep-learning-based automated methods, valued for their efficiency and accuracy in predicting the growth of cancer cells from medical imaging modalities. To date, the few available review studies on breast cancer diagnosis summarize only a subset of existing work and do not address emerging architectures and modalities. This review focuses on the evolving architectures of deep learning for breast cancer detection. In what follows, this survey presents existing deep-learning-based architectures, analyzes the strengths and limitations of existing studies, examines the datasets used, and reviews image pre-processing techniques. Furthermore, it offers a focused review of the various imaging modalities, performance metrics and results, challenges, and research directions for future researchers.
Affiliation(s)
- Muhammad Firoz Mridha: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid: Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Muhammad Mostafa Monowar: Department of Information Technology, Faculty of Computing & Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Ashfia Jannat Keya: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Abu Quwsar Ohi: Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam: Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Jong-Myon Kim: Department of Electrical, Electronics, and Computer Engineering, University of Ulsan, Ulsan 680-749, Korea
23
Can Deep Learning-Based Volumetric Analysis Predict Oxygen Demand Increase in Patients with COVID-19 Pneumonia? Medicina (Kaunas) 2021; 57:1148. [PMID: 34833366] [PMCID: PMC8619125] [DOI: 10.3390/medicina57111148] [Citation(s) in RCA: 2]
Abstract
Background and Objectives: This study aimed to investigate whether predictive indicators of deterioration in respiratory status can be derived from deep learning analysis of initial chest computed tomography (CT) scans of patients with coronavirus disease 2019 (COVID-19). Materials and Methods: Of 117 CT scans of 75 patients with COVID-19 admitted to our hospital between April and June 2020, we retrospectively analyzed 79 CT scans that had a definite time of onset and were performed prior to any medication intervention. Patients were grouped according to the presence or absence of increased oxygen demand after the CT scan. Quantitative volume data of lung opacity were measured automatically using a deep learning-based image analysis system. The sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) of the opacity volume data were calculated to evaluate the accuracy of the system in predicting deterioration of respiratory status. Results: All 79 CT scans were included (median age, 62 years; interquartile range, 46–77 years); 56 (70.9%) were male. The volume of opacity was significantly higher in the increased oxygen demand group than in the non-increased oxygen demand group (585.3 vs. 132.8 mL, p < 0.001). The sensitivity, specificity, and AUC for predicting increased oxygen demand were 76.5%, 68.2%, and 0.737, respectively. Conclusion: Deep learning-based quantitative analysis of the affected lung volume in initial CT scans of patients with COVID-19 can predict deterioration of respiratory status, supporting treatment and resource management.
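The evaluation step, turning a per-patient opacity volume into sensitivity, specificity, and AUC, is a one-dimensional ROC analysis. A hedged scikit-learn sketch follows; the synthetic volumes, group sizes, and Youden-index cutoff selection are illustrative assumptions, not the study's data or exact method.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic opacity volumes (mL); group sizes and distributions are
# illustrative assumptions, not the study's data.
rng = np.random.default_rng(0)
volumes = np.concatenate([
    rng.normal(150, 60, 40).clip(min=0),   # no increase in oxygen demand
    rng.normal(580, 150, 39).clip(min=0),  # increased oxygen demand
])
labels = np.concatenate([np.zeros(40), np.ones(39)])

auc = roc_auc_score(labels, volumes)
fpr, tpr, thresholds = roc_curve(labels, volumes)
best = (tpr - fpr).argmax()  # Youden's J picks one operating point
print(f"AUC={auc:.3f}, cutoff={thresholds[best]:.1f} mL, "
      f"sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")
```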
24
Convolutional Neural Network-Processed MRI Images in the Diagnosis of Plastic Bronchitis in Children. Contrast Media Mol Imaging 2021; 2021:2748830. [PMID: 34621144] [PMCID: PMC8457940] [DOI: 10.1155/2021/2748830] [Citation(s) in RCA: 0]
Abstract
Objective The study examined the features of convolutional neural network- (CNN-) processed magnetic resonance imaging (MRI) images in plastic bronchitis (PB) in children. Methods Thirty PB children (19 boys and 11 girls) were selected as subjects; all underwent chest MRI examination. A CNN-based segmentation algorithm was then constructed and compared with an Active Appearance Model (AAM) algorithm on the MRI images of the 30 PB children, using the OST, Dice, and Jaccard coefficients as metrics. Results The maximum Dice coefficient of the CNN algorithm reached 0.946, versus 0.843 for AAM, and the Jaccard coefficient of the CNN algorithm was also higher (0.894 vs. 0.758, P < 0.05). The MRI images showed pulmonary inflammation in all subjects. Of the 30 patients, 14 (46.66%) were complicated by pulmonary atelectasis, 9 (30%) by pleural effusion, 3 (10%) by pneumothorax, 2 (6.67%) by mediastinal emphysema, and 2 (6.67%) by pneumopericardium. Also, 19 of the 30 patients (63.33%) had lung consolidation and atelectasis in a single lung lobe and 11 (36.67%) in two lung lobes. Conclusion The CNN-based algorithm can significantly improve the segmentation accuracy of MRI images for plastic bronchitis in children. Pleural effusion was a risk factor for the occurrence and development of PB.
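The Dice and Jaccard coefficients used to compare the CNN and AAM segmentations have standard set-overlap definitions, sketched below for binary masks; the toy masks are illustrative only.

```python
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.

    Standard overlap definitions for binary segmentation masks.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(jaccard)

# Toy masks: two overlapping squares.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice_and_jaccard(a, b))  # (0.5625, ~0.391)
```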
25
Fujioka T, Mori M, Kubota K, Oyama J, Yamaga E, Yashima Y, Katsuta L, Nomura K, Nara M, Oda G, Nakagawa T, Kitazume Y, Tateishi U. The Utility of Deep Learning in Breast Ultrasonic Imaging: A Review. Diagnostics (Basel) 2020; 10:1055. [PMID: 33291266] [PMCID: PMC7762151] [DOI: 10.3390/diagnostics10121055] [Citation(s) in RCA: 49]
Abstract
Breast cancer is the most frequently diagnosed cancer in women; it poses a serious threat to women's health. Thus, early detection and proper treatment can improve patient prognosis. Breast ultrasound is one of the most commonly used modalities for diagnosing and detecting breast cancer in clinical practice. Deep learning technology has made significant progress in data extraction and analysis for medical images in recent years. Therefore, the use of deep learning for breast ultrasonic imaging in clinical practice is extremely important, as it saves time, reduces radiologist fatigue, and compensates for a lack of experience and skills in some cases. This review article discusses the basic technical knowledge and algorithms of deep learning for breast ultrasound and the application of deep learning technology in image classification, object detection, segmentation, and image synthesis. Finally, we discuss the current issues and future perspectives of deep learning technology in breast ultrasound.
Affiliation(s)
- Tomoyuki Fujioka: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Mio Mori: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan (Correspondence: Tel.: +81-3-5803-5311; Fax: +81-3-5803-0147)
- Kazunori Kubota: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan; Department of Radiology, Dokkyo Medical University, Tochigi 321-0293, Japan
- Jun Oyama: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Emi Yamaga: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Yuka Yashima: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Leona Katsuta: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kyoko Nomura: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Miyako Nara: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan; Department of Breast Surgery, Tokyo Metropolitan Cancer and Infectious Diseases Center Komagome Hospital, Tokyo 113-8677, Japan
- Goshi Oda: Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Tsuyoshi Nakagawa: Department of Surgery, Breast Surgery, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Yoshio Kitazume: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Ukihide Tateishi: Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan