1. Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024; 14:848. PMID: 38667493; PMCID: PMC11048882; DOI: 10.3390/diagnostics14080848.
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff
- Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy
2. Zheng TL, Sha JC, Deng Q, Geng S, Xiao SY, Yang WJ, Byrne CD, Targher G, Li YY, Wang XX, Wu D, Zheng MH. Object detection: A novel AI technology for the diagnosis of hepatocyte ballooning. Liver Int 2024; 44:330-343. PMID: 38014574; DOI: 10.1111/liv.15799.
Abstract
Metabolic dysfunction-associated fatty liver disease (MAFLD) has reached epidemic proportions worldwide and is the most frequent cause of chronic liver disease in developed countries. Within the spectrum of liver disease in MAFLD, steatohepatitis is a progressive form of liver disease and hepatocyte ballooning (HB) is a cardinal pathological feature of steatohepatitis. The accurate and reproducible diagnosis of HB is therefore critical for the early detection and treatment of steatohepatitis. Currently, a diagnosis of HB relies on pathological examination by expert pathologists, which may be a time-consuming and subjective process. Hence, there has been interest in developing automated methods for diagnosing HB. This narrative review briefly discusses the development of artificial intelligence (AI) technology for diagnosing fatty liver disease pathology over the last 30 years and provides an overview of the current research status of AI algorithms for the identification of HB, including published articles on traditional machine learning algorithms and deep learning algorithms. This narrative review also provides a summary of object detection algorithms, including their principles, historical development, and applications in medical image analysis. The potential benefits of object detection algorithms for HB diagnosis (specifically those combined with a transformer architecture) are discussed, along with the future directions of object detection algorithms in HB diagnosis and the potential applications of generative AI built on transformer architectures in this field. In conclusion, object detection algorithms have huge potential for the identification of HB and could make the diagnosis of MAFLD more accurate and efficient in the near future.
Affiliation(s)
- Tian-Lei Zheng
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Jun-Cheng Sha
- Department of Interventional Radiology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Qian Deng
- Department of Histopathology, Ningbo Clinical Pathology Diagnosis Center, Ningbo, China
- Shi Geng
- Artificial Intelligence Unit, Department of Medical Equipment Management, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Shu-Yuan Xiao
- Department of Pathology, University of Chicago Medicine, Chicago, Illinois, USA
- Wen-Jun Yang
- Department of Pathology, the Affiliated Hospital of Hangzhou Normal University, Hangzhou, China
- Christopher D Byrne
- Southampton National Institute for Health and Care Research Biomedical Research Centre, University Hospital Southampton, Southampton General Hospital, and University of Southampton, Southampton, UK
- Giovanni Targher
- Department of Medicine, University of Verona, Verona, Italy
- IRCCS Sacro Cuore - Don Calabria Hospital, Negrar di Valpolicella, Italy
- Yang-Yang Li
- Department of Pathology, the First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Xiang-Xue Wang
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Di Wu
- Department of Pathology, Xuzhou Central Hospital, Xuzhou, China
- Ming-Hua Zheng
- MAFLD Research Center, Department of Hepatology, the First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
- Institute of Hepatology, Wenzhou Medical University, Wenzhou, China
- Key Laboratory of Diagnosis and Treatment for the Development of Chronic Liver Disease in Zhejiang Province, Wenzhou, China
3. Wang S, Zhao J, Cai Y, Li Y, Qi X, Qiu X, Yao X, Tian Y, Zhu Y, Cao W, Zhang X. A method for small-sized wheat seedlings detection: from annotation mode to model construction. Plant Methods 2024; 20:15. PMID: 38287423; PMCID: PMC10826033; DOI: 10.1186/s13007-024-01147-w.
Abstract
The number of seedlings is an important indicator of the size of the wheat population during the seedling stage. Researchers increasingly use deep learning to detect and count wheat seedlings from unmanned aerial vehicle (UAV) images. However, due to the small size and diverse postures of wheat seedlings, it can be challenging to estimate their numbers accurately during the seedling stage. Most related works on wheat seedling detection label the whole plant, often resulting in a higher proportion of soil background within the annotated bounding boxes. This imbalance between wheat seedlings and soil background in the annotated bounding boxes decreases detection performance. This study proposes a wheat seedling detection method based on local annotation instead of global annotation. Moreover, the detection model is improved by replacing convolutional and pooling layers with a space-to-depth convolution module and by adding a micro-scale detection layer to the YOLOv5 head network to better extract small-scale features from these small annotation boxes. These optimizations reduce the number of erroneous detections caused by leaf occlusion between wheat seedlings and by the small size of the seedlings. The results show that the proposed method achieves a detection accuracy of 90.1%, outperforming other state-of-the-art detection methods. The proposed method provides a reference for future wheat seedling detection and yield prediction.
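The local-annotation idea pairs naturally with space-to-depth: instead of discarding activations through strided convolution or pooling, the feature map is rearranged so that spatial detail moves into the channel dimension. A minimal NumPy sketch of that rearrangement (SPD-style modules in practice also follow it with a non-strided convolution, omitted here):

```python
import numpy as np

def space_to_depth(x: np.ndarray, block: int = 2) -> np.ndarray:
    """Rearrange an (H, W, C) feature map into (H/block, W/block, C*block^2).

    No activations are discarded, which is why SPD-style modules help
    with very small objects such as wheat seedlings.
    """
    h, w, c = x.shape
    assert h % block == 0 and w % block == 0
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)          # (H/b, W/b, b, b, C)
    return x.reshape(h // block, w // block, block * block * c)

fm = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(np.float32)
out = space_to_depth(fm, block=2)
print(out.shape)  # (2, 2, 12)
```

Every input value survives the rearrangement, unlike a stride-2 convolution, which would halve the spatial resolution by skipping positions.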
Affiliation(s)
- Suwan Wang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Jianqing Zhao
- College of Geography, Jiangsu Second Normal University, Nanjing, 211200, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Yucheng Cai
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Yan Li
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xuerui Qi
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaolei Qiu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, 210095, China
- Xia Yao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Key Laboratory for Information Agriculture, Nanjing, 210095, China
- Yongchao Tian
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China
- Yan Zhu
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Weixing Cao
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Xiaohu Zhang
- National Engineering and Technology Center for Information Agriculture, Nanjing Agricultural University, Nanjing, 210095, China
- Key Laboratory for Crop System Analysis and Decision Making, Ministry of Agriculture and Rural Affairs, Nanjing, 210095, China
- Jiangsu Collaborative Innovation Center for Modern Crop Production, Nanjing, 210095, China
4. Aliniya P, Nicolescu M, Nicolescu M, Bebis G. Improved Loss Function for Mass Segmentation in Mammography Images Using Density and Mass Size. J Imaging 2024; 10:20. PMID: 38249005; PMCID: PMC10816853; DOI: 10.3390/jimaging10010020.
Abstract
Mass segmentation is one of the fundamental tasks used when identifying breast cancer due to the comprehensive information it provides, including the location, size, and border of the masses. Despite significant improvements in performance on this task, certain properties of the data, such as pixel class imbalance and the diverse appearance and sizes of masses, remain challenging. Recently, there has been a surge in articles proposing to address pixel class imbalance through the formulation of the loss function. While these demonstrate enhanced performance, most fail to address the problem comprehensively. In this paper, we propose a new perspective on the calculation of the loss that enables the binary segmentation loss to incorporate sample-level information and region-level losses in a hybrid loss setting. We propose two variations of the loss that include mass size and density in the loss calculation, and we also introduce a single-loss variant that uses mass size and density to enhance the focal loss. We tested the proposed method on two benchmark datasets, CBIS-DDSM and INbreast, where our approach outperformed the baseline and state-of-the-art methods on both.
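The abstract does not give the exact formulation, so the following is only an illustrative sketch: a standard binary focal loss with a hypothetical per-sample weight that grows as the mass occupies fewer pixels, standing in for the size and density terms the authors describe.

```python
import numpy as np

def size_weighted_focal_loss(p, y, mass_fraction, gamma=2.0, alpha=0.25):
    """Binary focal loss with a hypothetical sample-level weight.

    p: predicted foreground probabilities; y: binary ground-truth mask;
    mass_fraction: fraction of pixels belonging to the mass in this
    sample -- smaller masses receive a larger weight, a stand-in for
    the size/density terms described in the abstract.
    """
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)            # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    focal = -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
    sample_w = 1.0 / max(mass_fraction, eps)    # up-weight small masses
    return sample_w * focal.mean()

y = np.array([0, 0, 0, 1])
p = np.array([0.1, 0.2, 0.1, 0.9])
print(size_weighted_focal_loss(p, y, mass_fraction=0.25))
```

The `(1 - p_t)**gamma` factor is what makes focal loss down-weight easy pixels; the sample-level weight is the assumed hook for incorporating mass size.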
Affiliation(s)
- Parvaneh Aliniya
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
- Mircea Nicolescu
- Computer Science and Engineering Department, College of Engineering, University of Nevada, Reno, NV 89557, USA
5. Pinto-Coelho L. How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering (Basel) 2023; 10:1435. PMID: 38136026; PMCID: PMC10740686; DOI: 10.3390/bioengineering10121435.
Abstract
The integration of artificial intelligence (AI) into medical imaging has ushered in an era of transformation in healthcare. This literature review explores the latest innovations and applications of AI in the field, highlighting its profound impact on medical diagnosis and patient care. The innovation segment explores cutting-edge developments in AI, such as deep learning algorithms, convolutional neural networks, and generative adversarial networks, which have significantly improved the accuracy and efficiency of medical image analysis. These innovations have enabled the rapid and accurate detection of abnormalities, from identifying tumors during radiological examinations to detecting early signs of eye disease in retinal images. The article also covers various applications of AI in medical imaging, including radiology, pathology, cardiology, and more. AI-based diagnostic tools not only speed up the interpretation of complex images but also improve the early detection of disease, ultimately delivering better outcomes for patients. Additionally, AI-based image processing facilitates personalized treatment plans, thereby optimizing healthcare delivery. This literature review underscores the paradigm shift that AI has brought to medical imaging and its role in revolutionizing diagnosis and patient care. By combining cutting-edge AI techniques and their practical applications, it is clear that AI will continue to shape the future of healthcare in profound and positive ways.
Affiliation(s)
- Luís Pinto-Coelho
- ISEP—School of Engineering, Polytechnic Institute of Porto, 4200-465 Porto, Portugal
- INESCTEC, Campus of the Engineering Faculty of the University of Porto, 4200-465 Porto, Portugal
6. Sun L, Zhang Y, Liu T, Ge H, Tian J, Qi X, Sun J, Zhao Y. A collaborative multi-task learning method for BI-RADS category 4 breast lesion segmentation and classification of MRI images. Comput Methods Programs Biomed 2023; 240:107705. PMID: 37454498; DOI: 10.1016/j.cmpb.2023.107705.
Abstract
BACKGROUND AND OBJECTIVE: The diagnosis of BI-RADS category 4 breast lesions is difficult because their probability of malignancy ranges from 2% to 95%. For BI-RADS category 4 breast lesions, MRI is one of the prominent noninvasive imaging techniques. In this paper, we investigate computer algorithms to segment lesions and classify them as benign or malignant in MRI images. This task is challenging because BI-RADS category 4 lesions are characterized by irregular shape, imbalanced class distribution, and low contrast. METHODS: We fully utilize the intrinsic correlation between the segmentation and classification tasks: accurate segmentation yields accurate classification results, and classification results promote better segmentation. We therefore propose a collaborative multi-task algorithm (CMTL-SC). Specifically, a preliminary segmentation subnet is designed to identify the boundaries, locations, and segmentation masks of lesions; a classification subnet, which combines the information provided by the preliminary segmentation, is designed to classify lesions as benign or malignant; and a repartition segmentation subnet, which aggregates the benign or malignant results, is designed to refine the lesion segmentation. The three subnets work cooperatively so that the CMTL-SC identifies lesions better, addressing the three challenges above. RESULTS AND CONCLUSION: We collected MRI data from 248 patients at the Second Hospital of Dalian Medical University. The results show that the lesion boundaries delineated by the CMTL-SC are close to those delineated by physicians. Moreover, the CMTL-SC yields better results than single-task and multi-task state-of-the-art algorithms. CMTL-SC can therefore help doctors make precise diagnoses and refine treatments for patients.
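As a rough sketch of how such a collaborative objective can be wired, the following combines a Dice segmentation loss with a cross-entropy classification loss under hypothetical equal weights; the authors' actual subnets and weighting scheme are not specified in the abstract.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 0 when prediction matches the mask exactly."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for a single classification probability."""
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def multitask_loss(seg_pred, seg_mask, cls_prob, cls_label,
                   w_seg=1.0, w_cls=1.0):
    # Joint objective: accurate segmentation supports classification and
    # vice versa; the weights here are hypothetical, not the paper's.
    return w_seg * dice_loss(seg_pred, seg_mask) + w_cls * bce(cls_prob, cls_label)

mask = np.array([[0., 1.], [1., 0.]])
print(multitask_loss(mask, mask, cls_prob=0.99, cls_label=1))
```

Because both tasks share one objective, gradients from the classification head also shape the shared features used for segmentation, which is the mechanism multi-task learning relies on.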
Affiliation(s)
- Liang Sun
- College of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Yunling Zhang
- College of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Tang Liu
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
- Hongwei Ge
- College of Computer Science and Technology, Dalian University of Technology, Dalian, China
- Juan Tian
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
- Xin Qi
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
- Jian Sun
- Health Management Center, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
- Yiping Zhao
- Department of Radiology, The Second Affiliated Hospital of Dalian Medical University, Dalian, China
7. Zhu F, Wang G, Zhao C, Malhotra S, Zhao M, He Z, Shi J, Jiang Z, Zhou W. Automatic reorientation by deep learning to generate short-axis SPECT myocardial perfusion images. J Nucl Cardiol 2023; 30:1825-1835. PMID: 36859594; DOI: 10.1007/s12350-023-03226-2.
Abstract
BACKGROUND: Single photon emission computed tomography (SPECT) myocardial perfusion images (MPI) can be displayed both in traditional short-axis (SA) cardiac planes and in polar maps for interpretation and quantification. It is essential to reorient the reconstructed transaxial SPECT MPI into standard SA slices. This study aimed to develop a deep-learning-based approach for the automatic reorientation of MPI. METHODS: A total of 254 patients were enrolled, including 226 stress SPECT MPIs and 247 rest SPECT MPIs. Fivefold cross-validation with 180 stress and 201 rest MPIs was used for training and internal validation; the remaining images were used for testing. The rigid transformation parameters (translation and rotation) from manual reorientation were annotated by an experienced nuclear cardiologist and used as the reference standard. A convolutional neural network (CNN) was designed to predict the transformation parameters. The derived transform was then applied to the grid generator and sampler of a spatial transformer network (STN) to generate the reoriented image. A loss function containing mean absolute errors for translation and mean squared errors for rotation was employed. A three-stage optimization strategy was adopted: (1) optimize the translation parameters while fixing the rotation parameters; (2) optimize the rotation parameters while fixing the translation parameters; (3) optimize both translation and rotation parameters together. RESULTS: In the test set, the Spearman determination coefficients between the model prediction and the reference standard were 0.993, 0.992, and 0.994 for translation along the X, Y, and Z axes, and 0.987, 0.990, and 0.996 for rotation about the X, Y, and Z axes, respectively. For the 46 stress MPIs in the test set, the Spearman determination coefficients were 0.858 in percentage of perfusion defect (PPD) and 0.858 in summed stress score (SSS); for the 46 rest MPIs, they were 0.9 in PPD and 0.9 in summed rest score (SRS). CONCLUSIONS: Our deep-learning-based LV reorientation method accurately generates SA images. Technical validations and subsequent evaluations of measured clinical parameters show that it has great promise for clinical use.
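The loss described above (mean absolute error on the three translation parameters, mean squared error on the three rotation angles) can be sketched as follows; the equal weighting of the two terms is an assumption, as the abstract does not state one.

```python
import numpy as np

def reorientation_loss(pred, ref):
    """MAE on translation plus MSE on rotation, as the abstract
    describes. `pred`/`ref` are 6-vectors [tx, ty, tz, rx, ry, rz]
    (translations in voxels, rotation angles in radians)."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    mae_t = np.abs(pred[:3] - ref[:3]).mean()   # translation error
    mse_r = ((pred[3:] - ref[3:]) ** 2).mean()  # rotation error
    return mae_t + mse_r

ref = np.array([2.0, -1.0, 0.5, 0.1, 0.0, -0.2])
pred = np.array([2.5, -1.0, 0.5, 0.1, 0.1, -0.2])
print(reorientation_loss(pred, ref))
```

Using MSE only on the angles penalizes large rotation errors more sharply, which matches the staged strategy of refining rotation separately from translation.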
Affiliation(s)
- Fubao Zhu
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, 450000, Henan, China
- Guojie Wang
- School of Computer and Communication Engineering, Zhengzhou University of Light Industry, Zhengzhou, 450000, Henan, China
- Chen Zhao
- Department of Applied Computing, Michigan Technological University, Houghton, MI, 49931, USA
- Saurabh Malhotra
- Division of Cardiology, Cook County Health and Hospitals System, Chicago, IL, 60612, USA
- Division of Cardiology, Rush Medical College, Chicago, IL, 60612, USA
- Min Zhao
- Department of Nuclear Medicine, Xiangya Hospital, Central South University, Changsha, 410008, China
- Zhuo He
- Department of Applied Computing, Michigan Technological University, Houghton, MI, 49931, USA
- Jianzhou Shi
- Department of Cardiology, The First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210029, Jiangsu, China
- Zhixin Jiang
- Department of Cardiology, The First Affiliated Hospital of Nanjing Medical University, Guangzhou Road 300, Nanjing, 210029, Jiangsu, China
- Weihua Zhou
- Department of Applied Computing, Michigan Technological University, Houghton, MI, 49931, USA
- Center for Biocomputing and Digital Health, Institute of Computing and Cybersystems, and Health Research Institute, Michigan Technological University, 1400 Townsend Drive, Houghton, MI, 49931, USA
8. Xu Z, Zhang X, Zhang H, Liu Y, Zhan Y, Lukasiewicz T. EFPN: Effective medical image detection using feature pyramid fusion enhancement. Comput Biol Med 2023; 163:107149. PMID: 37348265; DOI: 10.1016/j.compbiomed.2023.107149.
Abstract
Feature pyramid networks (FPNs) are widely used in existing deep detection models to help them utilize multi-scale features. However, FPN-based deep detection models face two multi-scale feature fusion problems in medical image detection tasks: insufficient multi-scale feature fusion and equal importance assigned to all multi-scale features. In this work, we therefore propose a new enhanced backbone model, EFPN, to overcome these problems and help existing FPN-based detection models achieve much better medical image detection performance. We first introduce an additional top-down pyramid to help the detection networks fuse deeper multi-scale information; then, a scale enhancement module is developed that uses kernels of different sizes to generate more diverse multi-scale features. Finally, we propose a feature fusion attention module to estimate and assign importance weights to features with different depths and scales. Extensive experiments were conducted on two public lesion detection datasets of different medical image modalities (X-ray and MRI). On the mAP and mR evaluation metrics, EFPN-based Faster R-CNNs improved by 1.55% and 4.3% on the PenD (X-ray) dataset, and by 2.74% and 3.1% on the BraTS (MRI) dataset, respectively, achieving much better performance than the state-of-the-art baselines in medical image detection tasks. All three proposed improvements are essential and effective for EFPN to achieve superior performance; and besides Faster R-CNNs, EFPN can easily be applied to other deep models to significantly enhance their performance in medical image detection tasks.
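The feature fusion attention module assigns scale-dependent importance weights rather than treating every pyramid level equally. A toy sketch of that idea, with the learned scoring module replaced by given scalar scores (an assumption; the paper's module computes them from the features):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, scores):
    """Fuse same-shaped multi-scale features with importance weights.

    `features`: list of feature maps already resized to a common shape;
    `scores`: one raw importance score per scale. Softmax turns the
    scores into weights, so different scales contribute unequally --
    the behavior the attention module is after.
    """
    w = softmax(np.asarray(scores, float))
    return sum(wi * f for wi, f in zip(w, features))

f1 = np.ones((4, 4))          # stand-in for a shallow, fine scale
f2 = np.full((4, 4), 3.0)     # stand-in for a deep, coarse scale
fused = attention_fuse([f1, f2], scores=[0.0, 0.0])
print(fused[0, 0])  # equal scores -> plain average = 2.0
```

With unequal scores the fusion smoothly shifts toward the more relevant scale instead of averaging, which is the advantage over vanilla FPN addition.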
Affiliation(s)
- Zhenghua Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Xudong Zhang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Hexiang Zhang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Yunxin Liu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, China
- Yuefu Zhan
- Department of Radiology, Hainan Women and Children's Medical Center, Haikou, China
- Thomas Lukasiewicz
- Institute of Logic and Computation, TU Wien, Vienna, Austria
- Department of Computer Science, University of Oxford, Oxford, United Kingdom
9. Gao Y, Lin J, Zhou Y, Lin R. The application of traditional machine learning and deep learning techniques in mammography: a review. Front Oncol 2023; 13:1213045. PMID: 37637035; PMCID: PMC10453798; DOI: 10.3389/fonc.2023.1213045.
Abstract
Breast cancer, the most prevalent malignant tumor among women, poses a significant threat to patients' physical and mental well-being. Recent advances in early screening technology have facilitated the early detection of an increasing number of breast cancers, resulting in a substantial improvement in patients' overall survival rates. The primary techniques used for early breast cancer diagnosis include mammography, breast ultrasound, breast MRI, and pathological examination. However, the clinical interpretation and analysis of the images produced by these technologies often involve significant labor costs and rely heavily on the expertise of clinicians, leading to inherent deviations. Consequently, artificial intelligence (AI) has emerged as a valuable technology in breast cancer diagnosis. Artificial intelligence includes machine learning (ML) and deep learning (DL). By simulating human behavior to learn from and process data, ML and DL aid in lesion localization, reduce misdiagnosis rates, and improve accuracy. This narrative review provides a comprehensive overview of the current research status of mammography using traditional ML and DL algorithms. It particularly highlights the latest advancements in DL methods for mammogram image analysis and offers insights into future development directions.
Affiliation(s)
- Ying’e Gao
- School of Nursing, Fujian Medical University, Fuzhou, China
- Jingjing Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Yuzhuo Zhou
- Department of Surgery, Hannover Medical School, Hannover, Germany
- Rongjin Lin
- School of Nursing, Fujian Medical University, Fuzhou, China
- Department of Nursing, the First Affiliated Hospital of Fujian Medical University, Fuzhou, China
10. Ribeiro RF, Torres HR, Oliveira B, Morais P, Vilaca JL. Comparative analysis of deep learning methods for lesion detection on full screening mammography. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082575; DOI: 10.1109/embc40787.2023.10340501.
Abstract
Breast cancer is the most prevalent type of cancer in women. Although mammography is used as the main imaging modality for diagnosis, robust lesion detection in mammography images is a challenging task, due to the poor contrast of lesion boundaries and the widely diverse sizes and shapes of the lesions. Deep learning techniques have been explored to facilitate automatic diagnosis and have produced outstanding outcomes when used for different medical challenges. This study provides a benchmark for breast lesion detection in mammography images. Five state-of-the-art methods were evaluated on 1592 mammograms from a publicly available dataset (CBIS-DDSM) and compared on the following six metrics: i) mean Average Precision (mAP); ii) intersection over union; iii) precision; iv) recall; v) true positive rate (TPR); and vi) false positives per image. The CenterNet, YOLOv5, Faster R-CNN, EfficientDet, and RetinaNet architectures were trained with a combination of the L1 and L2 localization losses. Although all evaluated networks achieved mAP ratings greater than 60%, two stood out. The results demonstrate the efficiency of CenterNet with an Hourglass-104 backbone and of YOLOv5, which achieved mAP scores of 70.71% and 69.36% and TPR scores of 96.10% and 92.19%, respectively, outperforming the other evaluated models. Clinical Relevance - This study demonstrates the effectiveness of deep learning algorithms for breast lesion detection in mammography, potentially improving the accuracy and efficiency of breast cancer diagnosis.
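The detection metrics above all build on the intersection over union between a predicted and a ground-truth box; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2).
    A detection is typically counted as a true positive when its IoU
    with a ground-truth lesion exceeds a threshold such as 0.5."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 -> ~0.143
```

Precision, recall, TPR, and false positives per image then follow from counting matches at the chosen IoU threshold, and mAP averages precision over recall levels.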
11. Han Z, Huang H, Lu D, Fan Q, Ma C, Chen X, Gu Q, Chen Q. One-stage and lightweight CNN detection approach with attention: Application to WBC detection of microscopic images. Comput Biol Med 2023; 154:106606. PMID: 36706565; DOI: 10.1016/j.compbiomed.2023.106606.
Abstract
White blood cell (WBC) detection in microscopic images is indispensable in medical diagnostics; however, this work, based on manual checking, is time-consuming, labor-intensive, and easily results in errors. Using object detectors for WBCs with deep convolutional neural networks can be regarded as a feasible solution. In this paper, to improve the examination precision and efficiency, a one-stage and lightweight CNN detector with an attention mechanism for detecting microscopic WBC images, and a white blood cell detection vision system are proposed. The method integrates different optimizing strategies to strengthen the feature extraction capability through the combination of an improved residual convolution module, hybrid spatial pyramid pooling module, improved coordinate attention mechanism, efficient intersection over union (EIOU) loss and Mish activation function. Extensive ablation and contrast experiments on the latest public Raabin-WBC dataset verify the effectiveness and robustness of the proposed detector for achieving a better overall detection performance. It is also more efficient than other existing studies for blood cell detection on two additional classic public BCCD and LISC datasets. The novel detection approach is significant and flexible for medical technicians to use for blood cell microscopic examination in clinical practice.
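The EIOU loss mentioned above penalizes, beyond the IoU term, the center distance and the width/height gaps, each normalized by the smallest enclosing box. The following sketch follows the published EIoU formulation; it is not the authors' code.

```python
def eiou_loss(a, b):
    """Efficient IoU (EIoU) loss between boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    inter = (max(0, min(ax2, bx2) - max(ax1, bx1))
             * max(0, min(ay2, by2) - max(ay1, by1)))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union if union else 0.0
    # smallest enclosing box of the two boxes
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2
    # squared distance between box centers
    d2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4
    # squared width and height gaps
    dw2 = ((ax2 - ax1) - (bx2 - bx1)) ** 2
    dh2 = ((ay2 - ay1) - (by2 - by1)) ** 2
    return 1 - iou + d2 / c2 + dw2 / cw ** 2 + dh2 / ch ** 2

print(eiou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> 0.0
```

Splitting the aspect-ratio penalty into separate width and height terms gives a more direct regression signal than CIoU's combined ratio term, which is the usual argument for EIoU in small-object settings such as blood cells.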
Collapse
Affiliation(s)
- Zhenggong Han
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| | - Haisong Huang
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China; Information Engineering Institute, Chongqing Vocational and Technical University of Mechatronics, Chongqing, 402760, China.
| | - Dan Lu
- Guizhou University of Traditional Chinese Medicine, Guiyang, Guizhou, 550025, China
| | - Qingsong Fan
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| | - Chi Ma
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| | - Xingran Chen
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| | - Qiang Gu
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| | - Qipeng Chen
- Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang, Guizhou, 550025, China
| |
Collapse
|
12
|
Ayana G, Dese K, Dereje Y, Kebede Y, Barki H, Amdissa D, Husen N, Mulugeta F, Habtamu B, Choe SW. Vision-Transformer-Based Transfer Learning for Mammogram Classification. Diagnostics (Basel) 2023; 13:diagnostics13020178. [PMID: 36672988 PMCID: PMC9857963 DOI: 10.3390/diagnostics13020178] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 12/27/2022] [Accepted: 12/27/2022] [Indexed: 01/06/2023] Open
Abstract
Breast mass identification is a crucial procedure during mammogram-based early breast cancer diagnosis. However, it is difficult to determine whether a breast lump is benign or cancerous at early stages. Convolutional neural networks (CNNs) have been used to address this problem and have provided useful advancements. However, CNNs attend only to a certain portion of the mammogram while ignoring the rest, and they incur computational complexity because of their multiple convolutions. Vision transformers have recently been developed to overcome such limitations of CNNs, ensuring better or comparable performance in natural image classification, but their utility has not been thoroughly investigated in the medical image domain. In this study, we developed a transfer learning technique based on vision transformers to classify breast mass mammograms. The area under the receiver operating characteristic curve of the new model was estimated as 1 ± 0, outperforming both the CNN-based transfer-learning models and vision transformer models trained from scratch. The technique can hence be applied in a clinical setting to improve the early diagnosis of breast cancer.
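The vision-transformer front end works by cutting an image into non-overlapping patches that become the token sequence fed to self-attention. A toy sketch of this patchify step with made-up sizes (not the paper's configuration, which would use real mammogram resolutions and a learned linear projection):

```python
def patchify(image, patch):
    """Split an H x W image (list of rows) into flattened patch*patch tokens."""
    h, w = len(image), len(image[0])
    tokens = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens.append([image[i + di][j + dj]
                           for di in range(patch) for dj in range(patch)])
    return tokens

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy "image"
tokens = patchify(img, 2)
print(len(tokens), len(tokens[0]))  # 4 tokens of 4 pixels each
```

Because attention relates every token to every other, the model can weigh all regions of the mammogram at once, which is the contrast with the local receptive fields of CNNs drawn in the abstract.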
Collapse
Affiliation(s)
- Gelan Ayana
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Kokeb Dese
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Yisak Dereje
- Department of Information Engineering, Marche Polytechnic University, 60121 Ancona, Italy
| | - Yonas Kebede
- Biomedical Engineering Unit, Black Lion Hospital, Addis Ababa University, Addis Ababa 1000, Ethiopia
| | - Hika Barki
- Department of Artificial Intelligence Convergence, Pukyong National University, Busan 48513, Republic of Korea
| | - Dechassa Amdissa
- Department of Basic and Applied Science for Engineering, Sapienza University of Rome, 00161 Roma, Italy
| | - Nahimiya Husen
- Department of Bioengineering and Robotics, Campus Bio-Medico University of Rome, 00128 Roma, Italy
| | - Fikadu Mulugeta
- Center of Biomedical Engineering, Addis Ababa Institute of Technology, Addis Ababa University, Addis Ababa 1000, Ethiopia
| | - Bontu Habtamu
- School of Biomedical Engineering, Jimma University, Jimma 378, Ethiopia
| | - Se-Woon Choe
- Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39253, Republic of Korea
- Correspondence: ; Tel.: +82-54-478-7781; Fax: +82-54-462-1049
| |
Collapse
|
13
|
Das HS, Das A, Neog A, Mallik S, Bora K, Zhao Z. Breast cancer detection: Shallow convolutional neural network against deep convolutional neural networks based approach. Front Genet 2023; 13:1097207. [PMID: 36685963 PMCID: PMC9846574 DOI: 10.3389/fgene.2022.1097207] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2022] [Accepted: 12/15/2022] [Indexed: 01/06/2023] Open
Abstract
Introduction: Breast cancer (BC) is the most common cancer affecting women globally and has the second-highest mortality rate among cancers that afflict women. Breast tumors are of two types: benign (less harmful and unlikely to become cancerous) and malignant (dangerous, with aberrant cells that can result in cancer). Methods: To find breast abnormalities such as masses and micro-calcifications, trained radiologists routinely examine mammographic images. This study focuses on computer-aided diagnosis to help radiologists make more precise diagnoses of breast cancer. It compares the performance of a proposed shallow convolutional neural network architecture, in several configurations, against pre-trained deep convolutional neural network architectures on mammography images. In the first approach, mammogram images are pre-processed for automatic identification of BC, and the resulting data are fed to three shallow convolutional neural networks with different representations. In the second approach, transfer learning via fine-tuning is used to feed the same collection of images into the pre-trained convolutional neural networks VGG19, ResNet50, MobileNet-v2, Inception-v3, Xception, and Inception-ResNet-v2. Results: In our experiments, the accuracies on the CBIS-DDSM and INbreast datasets are 80.4% and 89.2%, and 87.8% and 95.1%, respectively. Discussion: The experimental findings indicate that the deep network-based approach with precise fine-tuning outperforms all other state-of-the-art techniques on both datasets.
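The building block that "shallow" and "deep" CNNs share is the 2-D convolution; depth is a matter of how many such layers are stacked. A plain-Python sketch of a single "valid" convolution (really cross-correlation, as in most DL frameworks), with toy sizes rather than the study's architecture:

```python
def conv2d_valid(image, kernel):
    """2-D valid cross-correlation of a single-channel image with one kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
k = [[1, 0], [0, 1]]  # sums each pixel with its lower-right diagonal neighbour
print(conv2d_valid(img, k))  # [[6, 8], [12, 14]]
```

A shallow network learns a handful of such kernels directly; transfer learning instead reuses the many kernels a deep network already learned on natural images and only fine-tunes them.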
Collapse
Affiliation(s)
- Himanish Shekhar Das
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
| | - Akalpita Das
- Department of Computer Science and Engineering, GIMT Guwahati, Guwahati, India
| | - Anupal Neog
- Department of AI and Machine Learning COE, IQVIA, Bengaluru, Karnataka, India
| | - Saurav Mallik
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, United States
- Department of Pharmacology and Toxicology, University of Arizona, Tucson, AZ, United States
| | - Kangkana Bora
- Department of Computer Science and Information Technology, Cotton University, Guwahati, India
| | - Zhongming Zhao
- Center for Precision Health, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Pathology and Laboratory Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
| |
Collapse
|
14
|
Huang ML, Wu YS. GCS-YOLOV4-Tiny: A lightweight group convolution network for multi-stage fruit detection. Math Biosci Eng 2023; 20:241-268. [PMID: 36650764 DOI: 10.3934/mbe.2023011] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Fruits require different planting techniques at different growth stages. Traditionally, the maturity stage of fruit is judged visually, which is time-consuming and labor-intensive. Fruits differ in size and color, and leaves or branches sometimes occlude some of the fruits, limiting automatic detection of growth stages in a real environment. Based on YOLOV4-Tiny, this study proposes a GCS-YOLOV4-Tiny model by (1) adding squeeze-and-excitation (SE) and spatial pyramid pooling (SPP) modules to improve the accuracy of the model and (2) using group convolution to reduce the size of the model and thereby achieve faster detection. The proposed GCS-YOLOV4-Tiny model was evaluated on three public fruit datasets. Results show that GCS-YOLOV4-Tiny performs favorably on mAP, Recall, F1-score and Average IoU on the Mango YOLO and Rpi-Tomato datasets. In addition, with the smallest model size of 20.70 MB, the mAP, Recall, F1-score, Precision and Average IoU of GCS-YOLOV4-Tiny reach 93.42 ± 0.44, 91.00 ± 1.87, 90.80 ± 2.59, 90.80 ± 2.77 and 76.94 ± 1.35%, respectively, on the F. margarita dataset. These detection results outperform the state-of-the-art YOLOV4-Tiny model with a 17.45% increase in mAP and a 13.80% increase in F1-score. The proposed model detects different growth stages of fruits effectively and efficiently and can be extended to other fruits and crops for object or disease detection.
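The "lightweight" claim rests on standard group-convolution arithmetic: splitting channels into g groups divides the weight count by g, since each filter only sees C_in/g input channels. A back-of-envelope sketch with illustrative values (not GCS-YOLOV4-Tiny's actual layer shapes):

```python
def conv_params(c_in, c_out, k, groups=1):
    """Weight count of a k x k convolution; bias terms omitted."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * c_out * k * k

standard = conv_params(64, 64, 3)             # 64 * 64 * 9  = 36864 weights
grouped  = conv_params(64, 64, 3, groups=4)   # 16 * 64 * 9  =  9216 weights
print(standard, grouped)  # grouped layer is 4x smaller
```

The trade-off is that groups cannot exchange information within the layer, which is why such designs typically follow grouped layers with channel mixing or, as here, attention modules like SE.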
Collapse
Affiliation(s)
- Mei-Ling Huang
- Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
| | - Yi-Shan Wu
- Department of Industrial Engineering & Management, National Chin-Yi University of Technology, Taichung, Taiwan
| |
Collapse
|
15
|
Kotei E, Thirunavukarasu R. Ensemble Technique Coupled with Deep Transfer Learning Framework for Automatic Detection of Tuberculosis from Chest X-ray Radiographs. Healthcare (Basel) 2022; 10:2335. [PMID: 36421659 PMCID: PMC9690876 DOI: 10.3390/healthcare10112335] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/14/2022] [Accepted: 11/17/2022] [Indexed: 01/28/2024] Open
Abstract
Tuberculosis (TB) is an infectious disease that affects the lungs and is currently ranked the 13th leading cause of death globally. Due to advancements in technology and the availability of medical datasets, automatic analysis and classification of chest X-rays (CXRs) into TB and non-TB can be a reliable alternative for early TB screening. We propose an automatic TB detection system using advanced deep learning (DL) models. A substantial part of a CXR image is dark, containing no information relevant to diagnosis and potentially confusing DL models. In this work, a U-Net model extracts the region of interest from CXRs, and the segmented images are fed to the DL models for feature extraction. Eight convolutional neural network (CNN) models are employed in our experiments, and their classification performance is compared on three publicly available CXR datasets. The U-Net model achieves a segmentation accuracy of 98.58%, an intersection over union (IoU) of 93.10%, and a Dice coefficient of 96.50%. Our proposed stacked ensemble algorithm performs better still, achieving accuracy, sensitivity, and specificity of 98.38%, 98.89%, and 98.70%, respectively. Experimental results confirm that segmented lung CXR images with ensemble learning produce better results than un-segmented images.
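The simplest form such an ensemble can take is soft voting: per-class probabilities from several CNNs are averaged and the top class wins. A minimal sketch of that idea (the three "model outputs" below are made-up numbers, and the paper's stacked ensemble would add a trained meta-learner on top):

```python
def ensemble_predict(prob_lists):
    """Average class probabilities across models; return the argmax class index."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__)

# Two classes: 0 = non-TB, 1 = TB. Two of three models lean towards TB.
outputs = [[0.40, 0.60], [0.30, 0.70], [0.55, 0.45]]
print(ensemble_predict(outputs))  # 1 — averaged probabilities favour TB
```

Averaging probabilities rather than hard labels lets a confident minority model outvote a hesitant majority, which is one reason ensembles tend to beat their individual members.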
Collapse
Affiliation(s)
| | - Ramkumar Thirunavukarasu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore 632014, India
| |
Collapse
|
16
|
Yu X, Wang SH, Zhang YD. Multiple-level thresholding for breast mass detection. J King Saud Univ Comput Inf Sci 2022; 35:115-130. [DOI: 10.1016/j.jksuci.2022.11.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|