1.
Han B, Sun L, Li C, Yu Z, Jiang W, Liu W, Tao D, Liu B. Deep Location Soft-Embedding-Based Network With Regional Scoring for Mammogram Classification. IEEE Trans Med Imaging 2024; 43:3137-3148. [PMID: 38625766] [DOI: 10.1109/tmi.2024.3389661]
Abstract
Early detection and treatment of breast cancer can significantly reduce patient mortality, and mammography is an effective method for early screening. Computer-aided diagnosis (CAD) of mammography based on deep learning can assist radiologists in making more objective and accurate judgments. However, existing methods often depend on datasets with manual segmentation annotations. In addition, because the images are large and lesions occupy only a small fraction of them, many methods that forgo region-of-interest (ROI) annotations rely on multi-scale, multi-feature fusion models. These shortcomings increase the labor, monetary, and computational costs of applying such models. Therefore, a deep location soft-embedding-based network with regional scoring (DLSEN-RS) is proposed. DLSEN-RS is an end-to-end mammogram classification method containing only one feature extractor; it relies on positional embedding (PE) and aggregation pooling (AP) modules to locate lesion areas without bounding boxes, transfer learning, or multi-stage training. The PE and AP modules are applicable across various CNN backbones and improve tumor localization and diagnostic accuracy on mammography images. Experiments on the public INbreast and CBIS-DDSM datasets show that DLSEN-RS performs competitively against previous state-of-the-art mammogram classification methods.
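The abstract does not specify how the AP module scores regions; purely as an illustration (the function name, `k`, and shapes below are assumptions, not the paper's design), a top-k pooling sketch shows why region-level scoring can preserve a small lesion's signal that global average pooling dilutes:

```python
import numpy as np

def aggregation_pool(feature_map, k=4):
    """Average the k highest responses of a 2-D activation map.

    Global average pooling dilutes a small lesion's signal across the
    whole image; averaging only the top-k responses keeps the score
    driven by the most suspicious local regions.
    """
    flat = np.sort(feature_map.ravel())[::-1]
    return float(flat[:k].mean())

# A mostly flat activation map with one small "lesion" hot spot.
fm = np.zeros((32, 32))
fm[10:12, 20:22] = 5.0

gap = float(fm.mean())          # global average pooling: signal diluted
ap = aggregation_pool(fm, k=4)  # top-k pooling: signal preserved
```

With only 4 of 1024 pixels active, global averaging yields a near-zero score while top-k pooling returns the full activation, which is the intuition behind scoring candidate regions rather than the whole image.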
2.
Oza U, Gohel B, Kumar P, Oza P. Presegmenter Cascaded Framework for Mammogram Mass Segmentation. Int J Biomed Imaging 2024; 2024:9422083. [PMID: 39155940] [PMCID: PMC11329304] [DOI: 10.1155/2024/9422083]
Abstract
Accurate segmentation of breast masses in mammogram images is essential for early cancer diagnosis and treatment planning. Several deep learning (DL) models have been proposed for whole mammogram segmentation and mass patch/crop segmentation. However, current DL models for breast mammogram mass segmentation face several limitations, including false positives (FPs), false negatives (FNs), and challenges with the end-to-end approach. This paper presents a novel two-stage end-to-end cascaded breast mass segmentation framework that incorporates a saliency map of potential mass regions to guide the DL models for breast mass segmentation. The first-stage segmentation model of the cascade framework is used to generate a saliency map to establish a coarse region of interest (ROI), effectively narrowing the focus to probable mass regions. The proposed presegmenter attention (PSA) blocks are introduced in the second-stage segmentation model to enable dynamic adaptation to the most informative regions within the mammogram images based on the generated saliency map. Comparative analysis of the Attention U-net model with and without the cascade framework is provided in terms of dice scores, precision, recall, FP rates (FPRs), and FN outcomes. Experimental results consistently demonstrate enhanced breast mass segmentation performance by the proposed cascade framework across all three datasets: INbreast, CSAW-S, and DMID. The cascade framework shows superior segmentation performance by improving the dice score by about 6% for the INbreast dataset, 3% for the CSAW-S dataset, and 2% for the DMID dataset. Similarly, the FN outcomes were reduced by 10% for the INbreast dataset, 19% for the CSAW-S dataset, and 4% for the DMID dataset. Moreover, the proposed cascade framework's performance is validated with varying state-of-the-art segmentation models such as DeepLabV3+ and Swin transformer U-net. 
The presegmenter cascade framework has the potential to improve segmentation performance and mitigate FNs when integrated with any medical image segmentation framework, irrespective of the choice of the model.
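The internals of the PSA blocks are not given in the abstract; the sketch below, with a hypothetical `saliency_gate` helper and `floor` parameter, illustrates only the general idea of letting a first-stage saliency map re-weight the input before second-stage segmentation:

```python
import numpy as np

def saliency_gate(image, saliency, floor=0.2):
    """Re-weight pixels by a stage-1 saliency map before stage-2 segmentation.

    Probable-mass regions (high saliency) keep full intensity while the
    rest of the breast is attenuated toward `floor` rather than zeroed,
    so the second model can still recover stage-1 false negatives.
    """
    weights = floor + (1.0 - floor) * np.clip(saliency, 0.0, 1.0)
    return image * weights

image = np.ones((8, 8))
saliency = np.zeros((8, 8))
saliency[2:4, 2:4] = 1.0  # coarse ROI produced by the first-stage model

gated = saliency_gate(image, saliency)
```

Keeping a nonzero floor instead of hard-masking is one plausible way to narrow focus without committing to the coarse ROI, which matches the cascade's goal of reducing false negatives.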
Affiliation(s)
- Urvi Oza
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Bakul Gohel
- Computer Science, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, Gujarat, India
- Pankaj Kumar
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
- Parita Oza
- Computer Science & Engineering, Nirma University, Ahmedabad, Gujarat, India
3.
Zhong Y, Piao Y, Tan B, Liu J. A multi-task fusion model based on a residual-Multi-layer perceptron network for mammographic breast cancer screening. Comput Methods Programs Biomed 2024; 247:108101. [PMID: 38432087] [DOI: 10.1016/j.cmpb.2024.108101]
Abstract
BACKGROUND AND OBJECTIVE: Deep learning approaches are increasingly applied in medical computer-aided diagnosis (CAD). However, these methods generally target only a single image-processing task, such as lesion segmentation or benignity prediction. For breast cancer screening, single feature-extraction models are typically used, which extract from the input mammogram only those features relevant to the target task. This can neglect other important morphological features of the lesion as well as auxiliary information from the surrounding breast tissue. To obtain more comprehensive and objective diagnostic results, we developed a multi-task fusion model that combines multiple task-specific models for CAD of mammograms. METHODS: We first trained a set of separate, task-specific models (a density classification model, a mass segmentation model, and a lesion benign-malignant classification model) and then developed a multi-task fusion model that incorporates the mammographic features from these tasks to yield comprehensive, refined predictions for breast cancer diagnosis. RESULTS: The proposed multi-task fusion model outperformed related state-of-the-art models on breast cancer screening on the publicly available CBIS-DDSM and INbreast datasets, achieving competitive area-under-the-curve scores of 0.92 and 0.95, respectively. CONCLUSIONS: Our model not only provides an overall assessment of lesion types in mammography but also returns intermediate results related to radiological features and potential cancer risk factors, indicating its potential to offer comprehensive workflow support to radiologists.
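The paper's exact fusion mechanism is not described in the abstract; one plausible design, sketched with entirely hypothetical names and shapes, is to summarize each task's output and concatenate them into a single feature vector for a fusion head:

```python
import numpy as np

def fuse_task_outputs(density_probs, mass_mask, lesion_probs):
    """Concatenate per-task outputs into one feature vector for a fusion head.

    density_probs : softmax over breast-density categories
    mass_mask     : binary segmentation mask, summarized by its area ratio
    lesion_probs  : softmax over benign/malignant
    """
    mask_area = np.asarray([mass_mask.mean()])  # fraction of pixels flagged
    return np.concatenate([density_probs, mask_area, lesion_probs])

density = np.array([0.1, 0.2, 0.6, 0.1])       # e.g. four density categories
mask = np.zeros((16, 16)); mask[4:8, 4:8] = 1.0
lesion = np.array([0.3, 0.7])                  # benign vs malignant

features = fuse_task_outputs(density, mask, lesion)
```

A downstream classifier trained on such fused vectors sees density, lesion morphology, and malignancy evidence jointly, which is the motivation the abstract gives for fusing tasks.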
Affiliation(s)
- Yutong Zhong
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Yan Piao
- School of Electronic Information Engineering, Changchun University of Science and Technology, Changchun 130022, PR China
- Baolin Tan
- Technology Co. LTD, Shenzhen 518000, PR China
- Jingxin Liu
- Department of Radiology, China-Japan Union Hospital, Jilin University, Changchun 130033, PR China
4.
Ma Y, Peng Y. Mammogram mass segmentation and classification based on cross-view VAE and spatial hidden factor disentanglement. Phys Eng Sci Med 2024; 47:223-238. [PMID: 38150059] [DOI: 10.1007/s13246-023-01359-9]
Abstract
Breast masses are among the most important clinical findings of breast carcinoma. Mass segmentation and classification in mammograms remain crucial yet challenging tasks for computer-aided diagnosis systems because masses vary widely in shape, size, and texture. In this paper, we propose a new framework for mammogram mass classification and segmentation. Specifically, to exploit the complementary information between the two mammographic views, craniocaudal and mediolateral oblique, we present a cross-view variational autoencoder (CV-VAE) combined with a spatial hidden-factor disentanglement module, in which each view can be reconstructed from the other through two explicitly disentangled hidden factors: a class-related (specified) factor and a background-common (unspecified) factor. The specified factor is then both classified into benign or malignant by a newly introduced feature pyramid network-based mass classifier and used to predict the mass mask label via a U-Net-like decoder. By integrating these two complementary modules, the model learns more discriminative morphological and semantic features and solves mass classification and segmentation simultaneously. Evaluated on the two most widely used public mammography datasets, CBIS-DDSM and INbreast, the proposed method achieves Dice similarity coefficients (DSC) of 92.46% and 93.70% for segmentation and areas under the receiver operating characteristic curve (AUC) of 93.20% and 95.01% for classification, respectively, giving competitive results against other state-of-the-art approaches.
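The Dice similarity coefficient reported above has a standard definition, DSC = 2|P ∩ T| / (|P| + |T|); a small reference implementation:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1      # 16-pixel prediction
target = np.zeros((10, 10)); target[3:7, 3:7] = 1  # 16-pixel target, 9 overlap
score = dice_score(pred, target)  # 2 * 9 / 32 = 0.5625
```

The small `eps` keeps the score defined when both masks are empty, a common convention in segmentation evaluation code.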
Affiliation(s)
- Yingran Ma
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
- Yanjun Peng
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao, 266590, China
- Shandong Province Key Laboratory of Wisdom Mining Information Technology, Shandong University of Science and Technology, Qingdao, 266590, China
5.
Oza P, Oza U, Oza R, Sharma P, Patel S, Kumar P, Gohel B. Digital mammography dataset for breast cancer diagnosis research (DMID) with breast mass segmentation analysis. Biomed Eng Lett 2024; 14:317-330. [PMID: 38374902] [PMCID: PMC10874363] [DOI: 10.1007/s13534-023-00339-y]
Abstract
Purpose: Over the last two decades, computer-aided detection and diagnosis (CAD) systems have been developed to help radiologists find and diagnose lesions on breast imaging, serving as a second-opinion tool. Developing algorithms for identifying and diagnosing breast lesions, however, relies heavily on mammographic datasets, and many existing databases omit material needed for research, such as mammographic masks, radiology reports, and breast composition. This paper introduces and describes a new mammographic database. Methods: The proposed dataset comprises mammograms with several lesion types, including masses, calcifications, architectural distortions, and asymmetries. Each mammogram is accompanied by a radiologist's report detailing the breast, including breast density, a description of any abnormality present, and the condition of the skin, nipple, and pectoral muscle. Results: We present results of a commonly used segmentation framework trained on the proposed dataset, using the abnormality class (benign or malignant) and breast tissue density provided with each mammogram to analyze the segmentation model's performance with respect to these parameters. Conclusion: The presented dataset provides diverse mammogram images for developing and training models for breast cancer diagnosis applications.
Affiliation(s)
- Urvi Oza
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
- Rajiv Oza
- Rad Imaging, X-Ray and Sonography Clinic, Ahmedabad, India
- Paawan Sharma
- Pandit Deendayal Energy University, Gandhinagar, India
- Samir Patel
- Pandit Deendayal Energy University, Gandhinagar, India
- Bakul Gohel
- Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
6.
Wang L. Mammography with deep learning for breast cancer detection. Front Oncol 2024; 14:1281922. [PMID: 38410114] [PMCID: PMC10894909] [DOI: 10.3389/fonc.2024.1281922]
Abstract
X-ray mammography is currently considered the gold-standard method for breast cancer screening; however, it has limitations in sensitivity and specificity. With rapid advances in deep learning techniques, mammography can be tailored to each patient, providing more accurate information for risk assessment, prognosis, and treatment planning. This paper reviews recent achievements of deep learning-based mammography for breast cancer detection and classification, highlighting the potential of deep learning-assisted X-ray mammography to improve the accuracy of breast cancer screening. While the potential benefits are clear, the challenges of implementing this technology in clinical settings must be addressed. Future research should focus on refining deep learning algorithms, ensuring data privacy, improving model interpretability, and establishing generalizability in order to integrate deep learning-assisted mammography into routine breast cancer screening programs. It is hoped that these findings will assist investigators, engineers, and clinicians in developing more effective breast imaging tools that provide accurate diagnosis with high sensitivity and specificity.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen, China
7.
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509] [DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissue, so detecting it at an early stage is essential. Medical images currently play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automated decision-making process is thus essential for cancer detection and diagnosis. This paper presents a comprehensive survey of automated cancer detection in several human organs (breast, lung, liver, prostate, brain, skin, and colon) using convolutional neural networks (CNNs) and medical imaging techniques. It includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used, and concludes with a description of the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain. The goal of this paper is to provide comprehensive, insightful information to researchers interested in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
8.
Zhang J, Wu J, Zhou XS, Shi F, Shen D. Recent advancements in artificial intelligence for breast cancer: Image augmentation, segmentation, diagnosis, and prognosis approaches. Semin Cancer Biol 2023; 96:11-25. [PMID: 37704183] [DOI: 10.1016/j.semcancer.2023.09.001]
Abstract
Breast cancer is a significant global health burden, with increasing morbidity and mortality worldwide; early screening and accurate diagnosis are crucial for improving prognosis. Radiographic imaging modalities such as digital mammography (DM), digital breast tomosynthesis (DBT), magnetic resonance imaging (MRI), ultrasound (US), and nuclear medicine techniques are commonly used for breast cancer assessment, while histopathology (HP) serves as the gold standard for confirming malignancy. Artificial intelligence (AI) technologies show great potential for quantitative representation of medical images to assist in the segmentation, diagnosis, and prognosis of breast cancer. In this review, we survey recent advances in AI technologies for breast cancer, including (1) improving image quality through data augmentation, (2) fast detection and segmentation of breast lesions and diagnosis of malignancy, (3) biological characterization of the cancer, such as staging and subtyping, via AI-based classification, and (4) prediction of clinical outcomes such as metastasis, treatment response, and survival by integrating multi-omics data. We then summarize large-scale databases available for training robust, generalizable, and reproducible deep learning models, and conclude with the challenges AI faces in real-world applications, including data curation, model interpretability, and practice regulations. We expect that clinical implementation of AI will provide important guidance for patient-tailored management.
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Jiaojiao Wu
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Xiang Sean Zhou
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China
9.
Alruily M, Said W, Mostafa AM, Ezz M, Elmezain M. Breast Ultrasound Images Augmentation and Segmentation Using GAN with Identity Block and Modified U-Net 3+. Sensors (Basel) 2023; 23:8599. [PMID: 37896692] [PMCID: PMC10610596] [DOI: 10.3390/s23208599]
Abstract
Breast cancer is one of the most prevalent diseases affecting women in recent years, and early detection can facilitate treatment and improve outcomes. This paper presents a hybrid approach for augmenting and segmenting breast ultrasound images. The framework contains two main stages: augmentation and segmentation. Augmentation is performed with a generative adversarial network (GAN) equipped with a nonlinear identity block, label smoothing, and a new loss function; segmentation uses a modified U-Net 3+. The hybrid approach achieves efficient results in both steps compared with other available methods for the same task. In the augmentation stage, the GAN with the nonlinear identity block outperforms other modified GANs for ultrasound, such as speckle GAN, UltraGAN, and deep convolutional GAN, achieving an inception score of 14.32 and a Fréchet inception distance (FID) of 41.86; its lower FID and higher inception score demonstrate the model's efficiency relative to other GAN variants. In the segmentation stage, the modified U-Net 3+ outperforms other U-Net architectures, achieving a Dice score of 95.49% and an accuracy of 95.67%.
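Label smoothing, one of the GAN modifications named above, has a standard formulation, y' = (1 - eps) * y + eps / K for K classes; a minimal sketch (the paper's actual eps value is not stated in the abstract):

```python
import numpy as np

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing for classifier/discriminator targets:
    y' = (1 - eps) * y + eps / K, softening hard 0/1 targets so the
    model is not pushed toward extreme, overconfident logits."""
    k = one_hot.shape[-1]
    return (1.0 - eps) * one_hot + eps / k

y = np.array([[1.0, 0.0]])       # hard one-hot target
ys = smooth_labels(y, eps=0.1)   # [[0.95, 0.05]]
```

In GAN training this is commonly applied to the discriminator's "real" labels to stabilize adversarial optimization.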
Affiliation(s)
- Meshrif Alruily
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Wael Said
- Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ayman Mohamed Mostafa
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mohamed Ezz
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mahmoud Elmezain
- Computer Science Department, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
10.
Gao Y, Lin J, Zhou Y, Lin R. The application of traditional machine learning and deep learning techniques in mammography: a review. Front Oncol 2023; 13:1213045. [PMID: 37637035] [PMCID: PMC10453798] [DOI: 10.3389/fonc.2023.1213045]
Abstract
Breast cancer, the most prevalent malignant tumor among women, poses a significant threat to patients' physical and mental well-being. Recent advances in early screening technology have facilitated the early detection of an increasing number of breast cancers, resulting in a substantial improvement in patients' overall survival. The primary techniques used for early breast cancer diagnosis include mammography, breast ultrasound, breast MRI, and pathological examination. However, clinical interpretation and analysis of the images produced by these technologies involve significant labor and rely heavily on the expertise of clinicians, introducing inherent variability. Consequently, artificial intelligence (AI), encompassing machine learning (ML) and deep learning (DL), has emerged as a valuable technology in breast cancer diagnosis. By learning from and processing data, ML and DL aid in lesion localization, reduce misdiagnosis rates, and improve accuracy. This narrative review comprehensively surveys the current research status of mammography using traditional ML and DL algorithms, particularly highlighting the latest advances in DL methods for mammogram image analysis, and offers insights into future directions.
Affiliation(s)
- Ying’e Gao
- School of Nursing, Fujian Medical University, Fuzhou, China
- Jingjing Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Yuzhuo Zhou
- Department of Surgery, Hannover Medical School, Hannover, Germany
- Rongjin Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Department of Nursing, the First Affiliated Hospital of Fujian Medical University, Fuzhou, China
11.
Goceri E. Medical image data augmentation: techniques, comparisons and interpretations. Artif Intell Rev 2023; 56:1-45. [PMID: 37362888] [PMCID: PMC10027281] [DOI: 10.1007/s10462-023-10453-z]
Abstract
Designing deep learning-based methods for medical images has long been an attractive research area for assisting clinicians with rapid examination and accurate diagnosis. Such methods need large datasets that cover all relevant variation during training. Medical images, however, are chronically scarce for several reasons: too few patients for some diseases, patients declining to allow their images to be used, a lack of medical equipment, or the inability to obtain images that meet the desired criteria. This scarcity leads to biased datasets, overfitting, and inaccurate results. Data augmentation is a common remedy, and various augmentation techniques have been applied to different image types in the literature. However, it remains unclear which augmentation technique works best for which image type, since published studies address different diseases, use different network architectures, and train and test on different amounts of data. In this work, therefore, the augmentation techniques used to improve deep learning-based diagnosis of diseases in different organs (brain, lung, breast, and eye) across different imaging modalities (MR, CT, mammography, and fundoscopy) are examined. The most commonly used augmentation methods are also implemented, and their effectiveness in classification with a deep network is discussed based on quantitative performance evaluations. The experiments indicate that augmentation techniques should be chosen carefully according to image type.
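The specific augmentations implemented in the study are not enumerated in the abstract; as context, a sketch of the classic label-preserving transforms such comparisons typically cover (the helper name and noise scale are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Yield simple geometric/intensity augmentations of a 2-D image.

    These are the standard label-preserving transforms compared in
    augmentation studies: flips, a 90-degree rotation, and additive noise.
    """
    yield np.fliplr(image)                           # horizontal flip
    yield np.flipud(image)                           # vertical flip
    yield np.rot90(image)                            # 90-degree rotation
    yield image + rng.normal(0, 0.01, image.shape)   # Gaussian noise

image = np.arange(16, dtype=float).reshape(4, 4)
variants = list(augment(image))
```

Whether a given transform is actually label-preserving depends on the modality (e.g. left/right flips are usually safe for mammograms, while large rotations may not be for fundoscopy), which is precisely the paper's point that augmentation must be chosen per image type.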
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Engineering, Engineering Faculty, Akdeniz University, Antalya, Turkey
12.
A hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme for breast cancer segmentation based on DCE-MRI. Med Image Anal 2022; 82:102572. [PMID: 36055051] [DOI: 10.1016/j.media.2022.102572]
Abstract
Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method for evaluating tumor vasculature based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. It remains challenging, however, due to the varying sizes, shapes, appearances, and densities of tumors caused by the high heterogeneity of breast cancer, and due to the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetic prior with feature refinement to generate sufficiently rich features for breast cancer segmentation in DCE-MRI. The pharmacokinetic prior, expressed by the time-intensity curve (TIC), is incorporated through an objective function called the dynamic contrast-enhanced prior (DCP) loss, which encodes prior knowledge of contrast-agent kinetic heterogeneity and is important for optimizing the model parameters. We also design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information in the spatial-kinetic space. Furthermore, because low spatial resolution often degrades image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. Extensive experiments validate the proposed method and show that it outperforms recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. To explore generalization to other dynamic-imaging segmentation tasks, we also extend the method to brain segmentation in DSC-MRI sequences.
Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.
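The DCP loss itself is defined in the paper, not here; as background, the hand-crafted TIC descriptors below illustrate the kind of kinetic information (wash-in, washout) such a prior draws on. The helper name and example curve are hypothetical, and the paper's actual prior is a learned objective rather than these two numbers:

```python
import numpy as np

def tic_features(intensity):
    """Summarize a DCE-MRI time-intensity curve (TIC).

    wash_in : peak enhancement relative to the pre-contrast baseline.
    washout : signed change from the peak to the final time point
              (negative suggests washout, near zero a plateau,
              positive persistent enhancement).
    """
    base = intensity[0]
    peak = int(intensity.argmax())
    wash_in = (intensity[peak] - base) / max(base, 1e-7)
    washout = float(intensity[-1] - intensity[peak])
    return wash_in, washout

# A malignant-like curve: rapid wash-in followed by washout.
tic = np.array([100.0, 180.0, 220.0, 210.0, 190.0, 170.0])
wash_in, washout = tic_features(tic)
```

Rapid wash-in with subsequent washout is the classic kinetic signature that makes TIC shape informative for tumor vasculature, motivating its use as a segmentation prior.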
13.
An integrated framework for breast mass classification and diagnosis using stacked ensemble of residual neural networks. Sci Rep 2022; 12:12259. [PMID: 35851592] [PMCID: PMC9293883] [DOI: 10.1038/s41598-022-15632-6]
Abstract
A computer-aided diagnosis (CAD) system requires automated stages of tumor detection, segmentation, and classification that are integrated sequentially into one framework to assist the radiologists with a final diagnosis decision. In this paper, we introduce the final step of breast mass classification and diagnosis using a stacked ensemble of residual neural network (ResNet) models (i.e. ResNet50V2, ResNet101V2, and ResNet152V2). The work presents the task of classifying the detected and segmented breast masses into malignant or benign, and diagnosing the Breast Imaging Reporting and Data System (BI-RADS) assessment category with a score from 2 to 6 and the shape as oval, round, lobulated, or irregular. The proposed methodology was evaluated on two publicly available datasets, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Comparative experiments were conducted on the individual models and an average ensemble of models with an XGBoost classifier. Qualitative and quantitative results show that the proposed model achieved better performance for (1) Pathology classification with an accuracy of 95.13%, 99.20%, and 95.88%; (2) BI-RADS category classification with an accuracy of 85.38%, 99%, and 96.08% respectively on CBIS-DDSM, INbreast, and the private dataset; and (3) shape classification with 90.02% on the CBIS-DDSM dataset. Our results demonstrate that our proposed integrated framework could benefit from all automated stages to outperform the latest deep learning methodologies.
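The abstract compares an XGBoost meta-classifier against an average ensemble of the three ResNet variants; the averaging baseline is simple enough to sketch (the probability values below are made up for illustration):

```python
import numpy as np

def average_ensemble(prob_list):
    """Average the class probabilities of several base classifiers.

    A simple alternative to a learned meta-classifier: each base model
    votes with its full probability distribution, and the ensemble
    prediction is the argmax of the mean.
    """
    return np.mean(np.stack(prob_list), axis=0)

# Hypothetical benign/malignant probabilities from three ResNet variants.
p50 = np.array([0.2, 0.8])
p101 = np.array([0.3, 0.7])
p152 = np.array([0.1, 0.9])

avg = average_ensemble([p50, p101, p152])
pred = int(avg.argmax())  # index 1, i.e. the malignant class here
```

A stacked ensemble replaces the fixed mean with a trained meta-learner (here XGBoost) that can weight the base models' outputs non-uniformly and per-class.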
Collapse
|
14
|
Yao MMS, Du H, Hartman M, Chan WP, Feng M. End-to-End Calcification Distribution Pattern Recognition for Mammograms: An Interpretable Approach with GNN. Diagnostics (Basel) 2022; 12:1376. [PMID: 35741186 PMCID: PMC9222096 DOI: 10.3390/diagnostics12061376] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2022] [Revised: 05/21/2022] [Accepted: 05/30/2022] [Indexed: 12/09/2022] Open
Abstract
Purpose: We aimed to develop a novel interpretable artificial intelligence (AI) model algorithm focusing on automatic detection and classification of various patterns of calcification distribution in mammographic images using a unique graph convolution approach. Materials and methods: Images from 292 patients, which showed calcifications according to the mammographic reports and diagnosed breast cancers, were collected. The calcification distributions were classified as diffuse, segmental, regional, grouped, or linear. Excluded were mammograms with (1) breast cancer with multiple lexicons such as mass, asymmetry, or architectural distortion without calcifications; (2) hidden calcifications that were difficult to mark; or (3) incomplete medical records. Results: A graph-convolutional-network-based model was developed. A total of 581 mammographic images from 292 cases of breast cancer were divided based on the calcification distribution pattern: diffuse (n = 67), regional (n = 115), grouped (n = 337), linear (n = 8), or segmental (n = 54). The classification performances were measured using metrics including precision, recall, F1 score, accuracy, and multi-class area under the receiver operating characteristic curve. The proposed model achieved a precision of 0.522 ± 0.028, sensitivity of 0.643 ± 0.017, specificity of 0.847 ± 0.009, F1 score of 0.559 ± 0.018, accuracy of 64.325 ± 1.694%, and area under the curve of 0.745 ± 0.030; thus, the method was found to be superior to all baseline models. The predicted linear and diffuse classifications were highly similar to the ground truth, and the predicted grouped and regional classifications were also superior to those of the baseline models. The prediction results are interpretable using visualization methods to highlight the important calcification nodes in graphs.
Conclusions: The proposed deep neural network framework is an AI solution that automatically detects and classifies calcification distribution patterns on mammographic images highly suspected of showing breast cancers. Further study of the AI model in an actual clinical setting and additional data collection will improve its performance.
Collapse
Affiliation(s)
- Melissa Min-Szu Yao
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; (M.M.-S.Y.); (M.F.)
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
| | - Hao Du
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore 117549, Singapore;
- National University Health System, Singapore 119228, Singapore
| | - Mikael Hartman
- Saw Swee Hock School of Public Health, National University of Singapore, Singapore 117549, Singapore;
- National University Health System, Singapore 119228, Singapore
- Department of Surgery, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117549, Singapore
| | - Wing P. Chan
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; (M.M.-S.Y.); (M.F.)
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei 110, Taiwan
- Medical Innovation Development Center, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan
| | - Mengling Feng
- Department of Radiology, Wan Fang Hospital, Taipei Medical University, Taipei 116, Taiwan; (M.M.-S.Y.); (M.F.)
- National University Health System, Singapore 119228, Singapore
- Institute of Data Science, National University of Singapore, Singapore 117602, Singapore
| |
Collapse
|
15
|
Satoh Y, Imokawa T, Fujioka T, Mori M, Yamaga E, Takahashi K, Takahashi K, Kawase T, Kubota K, Tateishi U, Onishi H. Deep learning for image classification in dedicated breast positron emission tomography (dbPET). Ann Nucl Med 2022; 36:401-410. [PMID: 35084712 DOI: 10.1007/s12149-022-01719-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 01/13/2022] [Indexed: 11/27/2022]
Abstract
OBJECTIVE This study aimed to investigate and determine the best deep learning (DL) model to predict breast cancer (BC) with dedicated breast positron emission tomography (dbPET) images. METHODS Of the 1598 women who underwent dbPET examination between April 2015 and August 2020, a total of 618 breasts on 309 examinations for 284 women who were diagnosed with BC or non-BC were analyzed in this retrospective study. The Xception-based DL model was trained to predict BC or non-BC using dbPET images from 458 breasts of 109 BCs and 349 non-BCs, which consisted of mediolateral and craniocaudal maximum intensity projection images. It was tested using dbPET images from 160 breasts of 43 BCs and 117 non-BCs. Two expert radiologists and two radiology residents also interpreted them. Sensitivity, specificity, and area under the receiver operating characteristic curves (AUCs) were calculated. RESULTS Our DL model had a sensitivity and specificity of 93% and 93%, respectively, while radiologists had a sensitivity and specificity of 77-89% and 79-100%, respectively. Diagnostic performance of our model (AUC = 0.937) tended to be superior to that of the residents (AUC = 0.876 and 0.868, p = 0.073 and 0.073), although not significantly different. Moreover, no significant differences were found between the model and the experts (AUC = 0.983 and 0.941, p = 0.095 and 0.907). CONCLUSIONS Our DL model could be applied to dbPET and achieve the same diagnostic ability as that of experts.
Collapse
Affiliation(s)
- Yoko Satoh
- Yamanashi PET Imaging Clinic, Chuo City, Yamanashi Prefecture, Japan
- Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
| | - Tomoki Imokawa
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan.
| | - Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Kanae Takahashi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Keiko Takahashi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Takahiro Kawase
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Kazunori Kubota
- Department of Radiology, Dokkyo Medical University Saitama Medical Center, Koshigaya City, Saitama Prefecture, Japan
| | - Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo Ku, Tokyo, Japan
| | - Hiroshi Onishi
- Department of Radiology, University of Yamanashi, Chuo City, Yamanashi Prefecture, Japan
| |
Collapse
|