1
Feng K, Yi Z, Xu B. Artificial Intelligence and Breast Cancer Management: From Data to the Clinic. Cancer Innovation 2025; 4:e159. [PMID: 39981497] [PMCID: PMC11840326] [DOI: 10.1002/cai2.159]
Abstract
Breast cancer (BC) remains a significant threat to women's health worldwide. Oncology has seen exponential growth in the abundance of medical images, clinical information, and genomic data. With continuous advancement and refinement, artificial intelligence (AI) has demonstrated exceptional capabilities in processing intricate, multidimensional BC-related data. AI has proven advantageous in various facets of BC management, encompassing efficient screening and diagnosis, precise prognosis assessment, and personalized treatment planning. However, implementing AI in precision medicine and clinical practice presents ongoing challenges that necessitate enhanced regulation, transparency, fairness, and integration across clinical pathways. In this review, we provide a comprehensive overview of current research on AI in BC, highlighting its extensive applications throughout whole-cycle BC management and its potential for innovative impact. Furthermore, this article emphasizes the significance of constructing patient-oriented AI algorithms. Finally, we explore the opportunities and potential research directions within this burgeoning field.
Affiliation(s)
- Kaixiang Feng
- Department of Breast and Thyroid Surgery, Hubei Key Laboratory of Tumor Biological Behaviors, Hubei Cancer Clinical Study Center, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Department of Radiation and Medical Oncology, Hubei Key Laboratory of Tumor Biological Behaviors, Hubei Cancer Clinical Study Center, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Zongbi Yi
- Department of Radiation and Medical Oncology, Hubei Key Laboratory of Tumor Biological Behaviors, Hubei Cancer Clinical Study Center, Zhongnan Hospital of Wuhan University, Wuhan, Hubei, China
- Binghe Xu
- Department of Medical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
2
Jeny AA, Hamzehei S, Jin A, Baker SA, Van Rathe T, Bai J, Yang C, Nabavi S. Hybrid transformer-based model for mammogram classification by integrating prior and current images. Med Phys 2025. [PMID: 39887755] [DOI: 10.1002/mp.17650]
Abstract
BACKGROUND Breast cancer screening via mammography plays a crucial role in early detection, significantly impacting women's health outcomes worldwide. However, manual analysis of mammographic images is time-consuming and requires specialized expertise, presenting substantial challenges in medical practice. PURPOSE To address these challenges, we introduce a CNN-Transformer model tailored for breast cancer classification through mammographic analysis. The model leverages both prior and current images to monitor temporal changes, aiming to enhance the efficiency and accuracy (ACC) of computer-aided diagnosis systems by mimicking the detailed examination process of radiologists. METHODS The proposed model incorporates a novel integration of a position-wise feedforward network and multi-head self-attention, enabling it to detect abnormal or cancerous changes in mammograms over time. It also employs positional encoding and channel attention to accurately highlight critical spatial features, precisely differentiating between normal and cancerous tissues. Focal loss (FL) is used to address challenging instances that are difficult to classify, reducing false negatives and false positives to improve diagnostic ACC. RESULTS We compared our model with eight baseline models in terms of accuracy (ACC), sensitivity (SEN), precision (PRE), specificity (SPE), F1 score, and area under the curve (AUC); the single-image baseline ResNet50 used only current images, while the remaining models used both prior and current images. The results demonstrate that the proposed model outperforms the baselines, achieving an ACC of 90.80%, SEN of 90.80%, PRE of 90.80%, SPE of 90.88%, an F1 score of 90.95%, and an AUC of 92.58%. The code and related information are available at https://github.com/NabaviLab/PCTM.
CONCLUSIONS Our proposed CNN-Transformer model integrates both prior and current images, captures long-range dependencies, and enhances the capacity for nuanced classification. The application of FL reduces the false positive rate (FPR) and false negative rate (FNR), improving both SEN and SPE. Furthermore, the model achieves the lowest false discovery rate and FNR across various abnormalities, including masses, calcifications, and architectural distortions (ADs). These low error rates highlight the model's reliability and underscore its potential to improve early breast cancer detection in clinical practice.
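The focal loss mentioned in this abstract has a standard closed form (Lin et al.'s FL(p_t) = -alpha_t(1-p_t)^gamma log(p_t)); the sketch below is a generic binary NumPy version for illustration, not the authors' exact implementation, with `alpha` and `gamma` as conventional default values:

```python
import numpy as np

def focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy, well-classified examples so
    training focuses on hard cases, reducing both FP and FN pressure."""
    p = np.clip(p_pred, eps, 1 - eps)
    # p_t is the predicted probability assigned to the true class
    p_t = np.where(y_true == 1, p, 1 - p)
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

With `gamma=0` and `alpha=0.5` this reduces to half the standard cross-entropy; increasing `gamma` suppresses the contribution of confident predictions.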
Affiliation(s)
- Afsana Ahsan Jeny
- School of Computing, University of Connecticut, Storrs, Connecticut, USA
- Sahand Hamzehei
- School of Computing, University of Connecticut, Storrs, Connecticut, USA
- Annie Jin
- Department of Radiology, UConn Health, Farmington, Connecticut, USA
- Tucker Van Rathe
- Department of Radiology, UConn Health, Farmington, Connecticut, USA
- Jun Bai
- School of Computing, University of Connecticut, Storrs, Connecticut, USA
- Department of Computer Science, University of Cincinnati, Cincinnati, Ohio, USA
- Clifford Yang
- Department of Radiology, UConn Health, Farmington, Connecticut, USA
- Sheida Nabavi
- School of Computing, University of Connecticut, Storrs, Connecticut, USA
3
Ungureanu AM, Matei SC, Malita D. Controversies in the Application of AI in Radiology-Is There Medico-Legal Support? Aspects from Romanian Practice. Diagnostics (Basel) 2025; 15:230. [PMID: 39857113] [PMCID: PMC11765423] [DOI: 10.3390/diagnostics15020230]
Abstract
Background/Objectives: Artificial intelligence (AI) is gaining increasing influence in various fields, including medicine. In radiology, where diagnoses arise from collaboration between diagnostic devices and the professional experience of radiologists, AI intervention seems easier than in other fields, but this is often not the case. Patients frequently take their cues from the doctor, which is not applicable in the case of AI. Another limitation rests in the controversies regarding medico-legal liability. In Romanian radio-imaging, the implementation of AI systems in diagnosis is at its beginning. An important aspect is raising awareness among the population about these assistive AI systems, as well as awareness of the technological evolution of AI among medical staff. This narrative review analyzes the existing literature on the medico-legal aspects of AI application in radiology, highlighting the controversial aspects and the lack of statutory legislative regulation in Romania. Methods: A detailed search was conducted across three electronic databases, MEDLINE/PubMed, Scopus, and Web of Science, with 53 papers serving as the literature corpus of our review. Results: General requirements for artificial intelligence systems used in radiology have been established. In the radiological diagnostic process, there are five levels of AI system involvement; to date, completely autonomous AI systems have not been used. Regarding liability in the case of malpractice, under currently accepted legislation the radiologist is usually liable for their own fault or for non-compliant use of diagnostic AI systems. Controversies arise in radio-imaging diagnoses in which AI systems act autonomously. Conclusions: For AI diagnostic radio-imaging systems to be implemented, they must meet certain quality standards and be approved. The radiologist must know these systems, accept them, know their limits, and validate them in accordance with their degree of involvement in radiological diagnosis. Given the evolution of technology in the Romanian medical system, including radiology, future alignment with the legal standards established or proposed at the European level is desirable.
Affiliation(s)
- Ana-Maria Ungureanu
- Department XV, Clinic of Radiology and Medical Imaging, "Victor Babes" University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timisoara, Romania
- Department of Radiology and Medical Imaging, "Pius Brinzeu" Emergency County Hospital, 300723 Timisoara, Romania
- Sergiu-Ciprian Matei
- Abdominal Surgery and Phlebology Research Center, "Victor Babes" University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Daniel Malita
- Department XV, Clinic of Radiology and Medical Imaging, "Victor Babes" University of Medicine and Pharmacy, Eftimie Murgu Square, No. 2, 300041 Timisoara, Romania
4
Chia JLL, He GS, Ngiam KY, Hartman M, Ng QX, Goh SSN. Harnessing Artificial Intelligence to Enhance Global Breast Cancer Care: A Scoping Review of Applications, Outcomes, and Challenges. Cancers (Basel) 2025; 17:197. [PMID: 39857979] [PMCID: PMC11764353] [DOI: 10.3390/cancers17020197]
Abstract
BACKGROUND In recent years, artificial intelligence (AI) has shown transformative potential in advancing breast cancer care globally. This scoping review provides a comprehensive overview of AI applications in breast cancer care, examining how they could reshape diagnosis, treatment, and management on a worldwide scale, and discussing both the benefits and challenges associated with their adoption. METHODS In accordance with PRISMA-ScR guidelines for scoping reviews, PubMed, Web of Science, Cochrane Library, and Embase were systematically searched from inception to the end of May 2024. Keywords included "Artificial Intelligence" and "Breast Cancer". Original studies were included based on their focus on AI applications in breast cancer care; narrative synthesis was employed for data extraction and interpretation, with the findings organized into coherent themes. RESULTS In total, 84 articles were included, most conducted in developed countries (n = 54) and published within the last 10 years (n = 83). The six main themes for AI applications were AI for breast cancer screening (n = 32), AI for image detection of nodal status (n = 7), AI-assisted histopathology (n = 8), AI in assessing post-neoadjuvant chemotherapy (NACT) response (n = 23), AI in breast cancer margin assessment (n = 5), and AI as a clinical decision support tool (n = 9). AI has been used as a clinical decision support tool to augment treatment decisions for breast cancer and in multidisciplinary tumor board settings. Overall, AI applications demonstrated improved accuracy and efficiency; however, most articles did not report patient-centric clinical outcomes. CONCLUSIONS AI applications in breast cancer care show promise in enhancing diagnostic accuracy and treatment planning. However, persistent challenges in AI adoption, such as data quality, algorithm transparency, and resource disparities, must be addressed to advance the field.
Affiliation(s)
- Jolene Li Ling Chia
- NUS Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr. S117597, Singapore 119077, Singapore
- George Shiyao He
- NUS Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Dr. S117597, Singapore 119077, Singapore
- Kee Yuen Ngiam
- Department of Surgery, National University Hospital, Singapore 119074, Singapore
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
- Mikael Hartman
- Department of Surgery, National University Hospital, Singapore 119074, Singapore
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
- Qin Xiang Ng
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
- SingHealth Duke-NUS Global Health Institute, Singapore 169857, Singapore
- Serene Si Ning Goh
- Department of Surgery, National University Hospital, Singapore 119074, Singapore
- Saw Swee Hock School of Public Health, National University of Singapore and National University Health System, 12 Science Drive 2, #10-01, Singapore 117549, Singapore
5
Dafni MF, Shih M, Manoel AZ, Yousif MYE, Spathi S, Harshal C, Bhatt G, Chodnekar SY, Chune NS, Rasool W, Umar TP, Moustakas DC, Achkar R, Kumar H, Naz S, Acuña-Chavez LM, Evgenikos K, Gulraiz S, Ali ESM, Elaagib A, Uggh IHP. Empowering cancer prevention with AI: unlocking new frontiers in prediction, diagnosis, and intervention. Cancer Causes Control 2024. [PMID: 39672997] [DOI: 10.1007/s10552-024-01942-9]
Abstract
Artificial intelligence is rapidly changing our world, and its transformative power has reached important sectors such as healthcare. In the fight against cancer, AI has proven to be a novel and powerful tool, offering new hope for prevention and early detection. In this review, we comprehensively explore the medical applications of AI, including early cancer detection through pathological and imaging analysis, risk stratification, patient triage, and the development of personalized prevention approaches. Despite these successes, we also discuss the many challenges that have so far stood in the way of optimal AI implementation. Obtaining high-quality data that can be readily understood, and sharing it responsibly, remains one of the most significant concerns across all applications. Another challenge is the interpretability of AI models: they are often too complex for users to follow, which can undermine trust, especially among medical professionals. Other considerations, such as data privacy, algorithmic bias, and equitable access to AI tools, have also arisen. Finally, we evaluate possible future directions for this promising field, highlighting AI's capacity to transform preventive cancer care.
Affiliation(s)
- Marianna-Foteini Dafni
- School of Medicine, Laboratory of Forensic Medicine and Toxicology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Mohamed Shih
- School of Medicine, Newgiza University, Giza, Egypt
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Agnes Zanotto Manoel
- Faculty of Medicine, Federal University of Rio Grande, Rio Grande do Sul, Brazil
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Mohamed Yousif Elamin Yousif
- Faculty of Medicine, University of Khartoum, Khartoum, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Stavroula Spathi
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Chorya Harshal
- Faculty of Medicine, Medical College Baroda, Vadodara, India
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Gaurang Bhatt
- All India Institute of Medical Sciences, Rishikesh, India
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Swarali Yatin Chodnekar
- Faculty of Medicine, Teaching University Geomedi LLC, Tbilisi, Georgia
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Nicholas Stam Chune
- Faculty of Medicine, University of Nairobi, Nairobi, Kenya
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Warda Rasool
- Faculty of Medicine, King Edward Medical University, Lahore, Pakistan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Tungki Pratama Umar
- Division of Surgery and Interventional Science, Faculty of Medical Sciences, University College London, London, UK
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Dimitrios C Moustakas
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Robert Achkar
- Faculty of Medicine, Poznan University of Medical Sciences, Poznan, Poland
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Harendra Kumar
- Dow University of Health Sciences, Karachi, Pakistan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Suhaila Naz
- Tbilisi State Medical University, Tbilisi, Georgia
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Luis M Acuña-Chavez
- Facultad de Medicina de la Universidad Nacional de Trujillo, Trujillo, Peru
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Konstantinos Evgenikos
- Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Shaina Gulraiz
- Royal Bournemouth Hospital (University Hospitals Dorset), Bournemouth, UK
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Eslam Salih Musa Ali
- University of Dongola Faculty of Medicine and Health Science, Dongola, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Amna Elaagib
- Faculty of Medicine, AlMughtaribeen University, Khartoum, Sudan
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
- Innocent H Peter Uggh
- Kilimanjaro Clinical Research Institute, Kilimanjaro, Tanzania
- Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
6
Singh S, Healy NA. The top 100 most-cited articles on artificial intelligence in breast radiology: a bibliometric analysis. Insights Imaging 2024; 15:297. [PMID: 39666106] [PMCID: PMC11638451] [DOI: 10.1186/s13244-024-01869-4]
Abstract
INTRODUCTION Artificial intelligence (AI) in radiology is a rapidly evolving field. In breast imaging, AI has already been applied in a real-world setting and multiple studies have been conducted in the area. The aim of this analysis is to identify the most influential publications on the topic of artificial intelligence in breast imaging. METHODS A retrospective bibliometric analysis was conducted on artificial intelligence in breast radiology using the Web of Science database. The search strategy involved searching for the keywords 'breast radiology' or 'breast imaging' and the various keywords associated with AI such as 'deep learning', 'machine learning,' and 'neural networks'. RESULTS From the top 100 list, the number of citations per article ranged from 30 to 346 (average 85). The highest cited article titled 'Artificial Neural Networks In Mammography-Application To Decision-Making In The Diagnosis Of Breast-Cancer' was published in Radiology in 1993. Eighty-three of the articles were published in the last 10 years. The journal with the greatest number of articles was Radiology (n = 22). The most common country of origin was the United States (n = 51). Commonly occurring topics published were the use of deep learning models for breast cancer detection in mammography or ultrasound, radiomics in breast cancer, and the use of AI for breast cancer risk prediction. CONCLUSION This study provides a comprehensive analysis of the top 100 most-cited papers on the subject of artificial intelligence in breast radiology and discusses the current most influential papers in the field. CLINICAL RELEVANCE STATEMENT This article provides a concise summary of the top 100 most-cited articles in the field of artificial intelligence in breast radiology. It discusses the most impactful articles and explores the recent trends and topics of research in the field. KEY POINTS Multiple studies have been conducted on AI in breast radiology. 
The most-cited article was published in the journal Radiology in 1993. This study highlights influential articles and topics on AI in breast radiology.
Affiliation(s)
- Sneha Singh
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Nuala A Healy
- Department of Radiology, Royal College of Surgeons in Ireland, Dublin, Ireland
- Beaumont Breast Centre, Beaumont Hospital, Dublin, Ireland
- Department of Radiology, University of Cambridge, Cambridge, United Kingdom
7
Bouni M, Hssina B, Douzi K, Douzi S. Synergistic use of handcrafted and deep learning features for tomato leaf disease classification. Sci Rep 2024; 14:26822. [PMID: 39500934] [PMCID: PMC11538303] [DOI: 10.1038/s41598-024-71225-5]
Abstract
This research introduces a computer-aided diagnosis system aimed at the automated detection and classification of tomato leaf diseases, combining traditional handcrafted features with advanced deep learning techniques. The system's pipeline encompasses preprocessing, feature extraction, feature fusion, and classification. It utilizes enhancement filters and segmentation algorithms to isolate regions of interest (ROIs) in tomato leaf images. Handcrafted features based on the ABCD rule (Asymmetry, Borders, Colors, and Diameter) are integrated with outputs from a Convolutional Neural Network (CNN) pretrained on ImageNet. To address data imbalance, we introduced a novel evaluation method that improved classification accuracy by 15% compared with traditional methods, achieving an overall accuracy of 92% in field tests. By merging classical feature engineering with modern machine learning techniques through mutual information-based feature fusion, our system sets a new standard for precision in agricultural diagnostics. Specific performance metrics showcase the effectiveness of our approach in the automated detection and classification of tomato leaf disease.
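The handcrafted-plus-deep fusion idea described above can be illustrated with a minimal concatenation-style sketch. The abstract does not specify the exact descriptors or the mutual-information fusion step, so the toy ABCD-style features and the z-score normalization below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

def handcrafted_features(img):
    """Toy ABCD-style descriptors for an RGB leaf image of shape (H, W, 3):
    left-right asymmetry, an edge-strength border proxy, mean color, and
    a bright-ROI size proxy. Hypothetical stand-ins for the real features."""
    gray = img.mean(axis=2)
    asymmetry = np.abs(gray - np.flip(gray, axis=1)).mean()
    border = np.abs(np.diff(gray, axis=0)).mean()
    color = img.mean(axis=(0, 1))              # mean R, G, B
    size = (gray > gray.mean()).mean()         # fraction of bright pixels
    return np.concatenate([[asymmetry, border], color, [size]])

def fuse(handcrafted, deep):
    """Early fusion by concatenation; z-score each block first so neither
    the handcrafted nor the CNN features dominate the classifier."""
    def z(v):
        return (v - v.mean()) / (v.std() + 1e-8)
    return np.concatenate([z(handcrafted), z(deep)])
```

The fused vector would then feed a conventional classifier; the paper's actual system ranks and selects features by mutual information before this step.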
Affiliation(s)
- Mohamed Bouni
- Laboratory LIM, IT Department FST Mohammedia, Hassan II University, Casablanca, Morocco
- Badr Hssina
- Laboratory LIM, IT Department FST Mohammedia, Hassan II University, Casablanca, Morocco
- Khadija Douzi
- Laboratory LIM, IT Department FST Mohammedia, Hassan II University, Casablanca, Morocco
- Samira Douzi
- FMPR, Mohammed V University in Rabat, Rabat, Morocco
8
Naeem OB, Saleem Y. CSA-Net: Channel and Spatial Attention-Based Network for Mammogram and Ultrasound Image Classification. J Imaging 2024; 10:256. [PMID: 39452419] [PMCID: PMC11508210] [DOI: 10.3390/jimaging10100256]
Abstract
Breast cancer persists as a critical global health concern, underscoring the need for reliable diagnostic strategies to improve patient survival rates. To address this challenge, a computer-aided diagnostic methodology for breast cancer classification is proposed. The architecture incorporates a pre-trained EfficientNet-B0 model along with channel and spatial attention mechanisms, and the efficiency of leveraging attention for breast cancer classification is investigated. The proposed model demonstrates commendable classification performance, showing significant improvements upon integrating the attention mechanisms. It is also versatile across imaging modalities, robustly classifying breast lesions not only in mammograms but also in ultrasound images during cross-modality evaluation. It achieved an accuracy of 99.9% for binary classification on the mammogram dataset and 92.3% on the cross-modality multi-class dataset. The experimental results emphasize the superiority of the proposed method over current state-of-the-art approaches to breast cancer classification.
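The abstract does not detail CSA-Net's attention design, but channel and spatial attention commonly follow the squeeze-and-excitation / CBAM pattern: pool the feature map, pass it through a small gating network, and rescale. A minimal NumPy sketch under that assumption (the weights `w1`, `w2` are illustrative and untrained):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map:
    global-average-pool, two-layer MLP, sigmoid gate per channel."""
    squeeze = fmap.mean(axis=(1, 2))                       # (C,) global context
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) gates in (0, 1)
    return fmap * excite[:, None, None]                    # rescale each channel

def spatial_attention(fmap):
    """Spatial attention: gate each location using pooled channel statistics
    (a crude additive fusion of the average- and max-pooled maps)."""
    avg = fmap.mean(axis=0)                                # (H, W)
    mx = fmap.max(axis=0)                                  # (H, W)
    gate = sigmoid(avg + mx)
    return fmap * gate[None, :, :]
```

In a full network these two modules are applied in sequence to the backbone's feature maps before the classification head.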
Affiliation(s)
- Osama Bin Naeem
- Department of Electrical Engineering, University of Engineering and Technology, Lahore-Narowal Campus, Narowal 51600, Pakistan
- Department of Computer Engineering, University of Engineering and Technology, Lahore 39161, Pakistan
- Yasir Saleem
- Department of Computer Engineering, University of Engineering and Technology, Lahore 39161, Pakistan
9
Liu W, Qu A, Yuan J, Wang L, Chen J, Zhang X, Wang H, Han Z, Li Y. Colorectal cancer histopathology image analysis: A comparative study of prognostic values of automatically extracted morphometric nuclear features in multispectral and red-blue-green imagery. Histol Histopathol 2024; 39:1303-1316. [PMID: 38343355] [DOI: 10.14670/hh-18-715]
Abstract
OBJECTIVES Multispectral imaging (MSI) has been used to predict the prognosis of colorectal cancer (CRC) patients; however, the prognostic value of nuclear morphological parameters from bright-field MSI in CRC remains poorly understood. This study was designed to compare the efficiency of MSI and standard red-green-blue (RGB) images in predicting CRC prognosis. METHODS We compared the efficiency of MS and conventional RGB images in the quantitative assessment of hematoxylin-eosin (HE) stained histopathology images. A pipeline was developed using a pixel-wise support vector machine (SVM) classifier for gland-stroma segmentation and a marker-controlled watershed algorithm for nuclei segmentation. The correlation between the extracted morphological parameters and five-year disease-free survival (5-DFS) was analyzed. RESULTS Forty-seven nuclear morphological parameters were extracted in total. Based on Kaplan-Meier analysis, eight features derived from MS images and seven features derived from RGB images were significantly associated with 5-DFS. Compared with RGB images, MSI showed higher accuracy, precision, and Dice index in nuclei segmentation. Multivariate analysis indicated that both integrated parameter 1 (factors negatively correlated with CRC prognosis: nuclear number, circularity, eccentricity, major axis length) and integrated parameter 2 (factors positively correlated with CRC prognosis: nuclear average area, average perimeter, total area/total perimeter ratio, average area/perimeter ratio) in MS images were independent prognostic factors of 5-DFS, in contrast with only integrated parameter 1 (P<0.001) in RGB images. More importantly, the quantification of HE-stained MS images displayed higher accuracy in predicting 5-DFS than RGB images (76.9% vs. 70.9%).
CONCLUSIONS Quantitative evaluation of HE-stained MS images could yield more information and better predictive performance for CRC prognosis than conventional RGB images, thereby contributing to precision oncology.
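The nuclear parameters named in the results (area, perimeter, circularity, eccentricity, axis-related measures) can all be computed from a segmented binary nucleus mask. The sketch below is a generic NumPy version of such measurements, not the authors' pipeline; in particular, its boundary-pixel perimeter is a crude estimate:

```python
import numpy as np

def nuclear_morphometrics(mask):
    """Morphometric descriptors of one segmented nucleus, given a boolean
    mask: area, perimeter (boundary-pixel count), circularity 4*pi*A/P^2,
    and eccentricity from the coordinate-covariance eigenvalues."""
    ys, xs = np.nonzero(mask)
    area = ys.size
    # boundary pixels: foreground with at least one 4-neighbour in background
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int((mask & ~interior).sum())
    circularity = 4 * np.pi * area / max(perimeter, 1) ** 2
    # eccentricity from the spread of pixel coordinates (0 = circle)
    cov = np.cov(np.stack([ys, xs]).astype(float))
    lam = np.sort(np.linalg.eigvalsh(cov))
    eccentricity = float(np.sqrt(1 - lam[0] / lam[1])) if lam[1] > 0 else 0.0
    return {"area": area, "perimeter": perimeter,
            "circularity": circularity, "eccentricity": eccentricity}
```

A round nucleus yields eccentricity near 0 and higher circularity than an elongated one, which is the behavior the prognostic parameters above rely on.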
Affiliation(s)
- Wenlou Liu
- Department of Oncology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Aiping Qu
- School of Computer, University of South China, Hengyang, China
- Jingping Yuan
- Department of Pathology, Renmin Hospital of Wuhan University, Wuhan, China
- Linwei Wang
- Department of Oncology, Zhongnan Hospital of Wuhan University, Wuhan, China
- Jiamei Chen
- Department of Oncology, Renmin Hospital of Wuhan University, Wuhan, China
- Xiuli Zhang
- Department of Radiology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Hongmei Wang
- Department of Oncology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Zhengxiang Han
- Department of Oncology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, China
- Yan Li
- Department of Cancer Surgery, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
10
Du Y, Ma J, Wu T, Li F, Pan J, Du L, Zhang M, Diao X, Wu R. Downgrading Breast Imaging Reporting and Data System categories in ultrasound using strain elastography and computer-aided diagnosis system: a multicenter, prospective study. Br J Radiol 2024; 97:1653-1660. [PMID: 39102827] [PMCID: PMC11417380] [DOI: 10.1093/bjr/tqae136]
Abstract
OBJECTIVE To determine whether adding the elastography strain ratio (SR) and a deep learning-based computer-aided diagnosis (CAD) system to breast ultrasound (US) can help reclassify Breast Imaging Reporting and Data System (BI-RADS) 3 and 4a-c categories and avoid unnecessary biopsies. METHODS This prospective, multicentre study included 1049 masses (691 benign, 358 malignant) assigned BI-RADS 3 and 4a-c between 2020 and 2022. CAD results were dichotomized as possibly malignant vs. benign. All patients underwent SR and CAD examinations, with histopathological findings as the standard of reference. The outcome measures were the reduction in unnecessary biopsies (biopsies of benign lesions) and the malignancies missed after reclassification (new BI-RADS 3) with SR and CAD. RESULTS Following routine conventional breast US assessment, 48.6% (336 of 691) of benign masses underwent unnecessary biopsy. After reclassifying BI-RADS 4a masses (SR cut-off <2.90, CAD dichotomized as possibly benign), 25.62% (177 of 691) underwent unnecessary biopsy, corresponding to a 50.14% (177 vs. 355) reduction in unnecessary biopsies. After reclassification, only 1.72% (9 of 523) of masses in the new BI-RADS 3 group were missed malignancies. CONCLUSION Adding SR and CAD to clinical practice may offer optimal performance in reclassifying BI-RADS 4a to category 3; 50.14% of masses would benefit while the rate of undetected malignancies is kept at an acceptable 1.72%. ADVANCES IN KNOWLEDGE Leveraging SR in conjunction with CAD holds promise for substantially reducing the biopsy frequency associated with BI-RADS 3 and 4a lesions, thereby conferring substantial advantages on these patients.
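The downgrading rule evaluated in this study can be paraphrased as a simple decision function. The sketch below assumes, as the abstract implies but does not state outright, that both conditions (SR below the 2.90 cut-off and a benign CAD call) must hold jointly before a 4a mass is downgraded:

```python
def reclassify(birads: str, strain_ratio: float, cad_benign: bool,
               sr_cutoff: float = 2.90) -> str:
    """Downgrade a BI-RADS 4a mass to category 3 (follow-up rather than
    biopsy) only when both tests agree the mass looks benign."""
    if birads == "4a" and strain_ratio < sr_cutoff and cad_benign:
        return "3"
    return birads  # 3, 4b, 4c, and non-qualifying 4a stay unchanged
```

For example, `reclassify("4a", 2.1, True)` is downgraded to `"3"`, while a 4a mass with SR above the cut-off or a malignant CAD call keeps its category.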
Affiliation(s)
- Yu Du
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China
- Ji Ma
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China
- Tingting Wu
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China
- Fang Li
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China
- Jiazhen Pan
- Department of Ultrasound, Jiangsu Cancer Hospital & Jiangsu Institute of Cancer Research & The Affiliated Cancer Hospital of Nanjing Medical University, Nanjing 210009, China
- Liwen Du
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Manqi Zhang
- Department of Ultrasound, The First Affiliated Hospital of Nanjing Medical University, Nanjing 210029, China
- Xuehong Diao
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China
- Rong Wu
- Department of Ultrasound, Shanghai General Hospital, Shanghai Jiaotong University School of Medicine, Shanghai 200080, China

11
Vogel R, Mück B. Artificial Intelligence-What to Expect From Machine Learning and Deep Learning in Hernia Surgery. JOURNAL OF ABDOMINAL WALL SURGERY : JAWS 2024; 3:13059. [PMID: 39310669 PMCID: PMC11412881 DOI: 10.3389/jaws.2024.13059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Accepted: 07/26/2024] [Indexed: 09/25/2024]
Abstract
This mini-review explores the integration of Artificial Intelligence (AI) within hernia surgery, highlighting the role of Machine Learning (ML) and Deep Learning (DL). The term AI incorporates various technologies, including ML, Neural Networks (NN), and DL. Classical ML algorithms depend on structured, labeled data for predictions, requiring significant human oversight. In contrast, DL, a subset of ML, generally leverages unlabeled, raw data such as images and videos to autonomously identify patterns and make intricate deductions. This process is enabled by the neural networks used in DL, where hidden layers between the input and output capture complex data patterns. The configuration and weighting of these layers are pivotal in developing effective models for various applications, such as image and speech recognition, natural language processing, and, more specifically, surgical procedures and outcomes in hernia surgery. Significant advancements have been achieved with DL models in surgical settings, particularly in predicting the complexity of abdominal wall reconstruction (AWR) and other postoperative outcomes, which are elaborated in detail within this mini-review. The review method involved analyzing relevant literature from databases such as PubMed and Google Scholar, focusing on studies related to preoperative planning, intraoperative techniques, and postoperative management within hernia surgery. Only recent, peer-reviewed publications in English that directly relate to the topic were included. The review highlights the latest advancements in the field to depict the potential benefits and current limitations of AI technologies in hernia surgery, advocating further research and application in this evolving field.
Affiliation(s)
- Robert Vogel
- Klinikum Kempten - Klinikverbund Allgäu, Kempten, Germany
- Björn Mück
- Klinikum Kempten - Klinikverbund Allgäu, Kempten, Germany

12
Lamb LR, Lehman CD, Do S, Kim K, Langarica S, Bahl M. Artificial Intelligence (AI)-Based Computer-Assisted Detection and Diagnosis for Mammography: An Evidence-Based Review of Food and Drug Administration (FDA)-Cleared Tools for Screening Digital Breast Tomosynthesis (DBT). AI IN PRECISION ONCOLOGY 2024; 1:195-206. [PMID: 40182614 PMCID: PMC11963389 DOI: 10.1089/aipo.2024.0022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2025]
Abstract
In recent years, the emergence of new-generation deep learning-based artificial intelligence (AI) tools has reignited enthusiasm about the potential of computer-assisted detection (CADe) and diagnosis (CADx) for screening mammography. For screening mammography, digital breast tomosynthesis (DBT) combined with acquired digital 2D mammography or synthetic 2D mammography is widely used throughout the United States. As of this writing in July 2024, there are six Food and Drug Administration (FDA)-cleared AI-based CADe/x tools for DBT. These tools detect suspicious lesions on DBT and provide corresponding scores at the lesion and examination levels that reflect likelihood of malignancy. In this article, we review the evidence supporting the use of AI-based CADe/x for DBT. The published literature on this topic consists of multireader, multicase studies, retrospective analyses, and two "real-world" evaluations. These studies suggest that AI-based CADe/x could lead to improvements in sensitivity without compromising specificity and to improvements in efficiency. However, the overall published evidence is limited and includes only two small postimplementation clinical studies. Prospective studies and careful postimplementation clinical evaluation will be necessary to fully understand the impact of AI-based CADe/x on screening DBT outcomes.
Affiliation(s)
- Leslie R. Lamb
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Constance D. Lehman
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Synho Do
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Kyungsu Kim
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Saul Langarica
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Manisha Bahl
- Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA

13
Pan H, Shi C, Zhang Y, Zhong Z. Artificial intelligence-based classification of breast nodules: a quantitative morphological analysis of ultrasound images. Quant Imaging Med Surg 2024; 14:3381-3392. [PMID: 38720871 PMCID: PMC11074741 DOI: 10.21037/qims-23-1652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2023] [Accepted: 03/25/2024] [Indexed: 05/12/2024]
Abstract
Background Accurate classification of breast nodules into benign and malignant types is critical for the successful treatment of breast cancer. Traditional methods rely on subjective interpretation, which can potentially lead to diagnostic errors. Artificial intelligence (AI)-based methods using quantitative morphological analysis of ultrasound images have been explored for the automated and reliable classification of breast cancer. This study aimed to investigate the effectiveness of AI-based approaches for improving diagnostic accuracy and patient outcomes. Methods In this study, a quantitative analysis approach was adopted, with a focus on critical features for evaluation, including the degree of boundary regularity, clarity of boundaries, echo intensity, and uniformity of echoes. Furthermore, the classification results were assessed using five machine learning methods: logistic regression (LR), support vector machine (SVM), decision tree (DT), naive Bayes, and K-nearest neighbor (KNN). Based on these assessments, a multifeature combined prediction model was established. Results We evaluated the performance of our classification model by quantifying various features of the ultrasound images and using the area under the receiver operating characteristic (ROC) curve (AUC). The moment of inertia achieved an AUC value of 0.793, while the variance and mean of breast nodule areas achieved AUC values of 0.725 and 0.772, respectively. The convexity and concavity achieved AUC values of 0.988 and 0.987, respectively. Additionally, we conducted a joint analysis of multiple features after normalization, achieving a recall of 0.98, which surpasses most medical evaluation indices in current use. To ensure experimental rigor, we conducted cross-validation experiments, which yielded no significant differences among the classifiers under 5-, 8-, and 10-fold cross-validation (P>0.05).
Conclusions The quantitative analysis can accurately differentiate between benign and malignant breast nodules.
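The classifier comparison described above can be sketched with scikit-learn; the synthetic feature matrix below is an illustrative stand-in for the study's normalized morphological features (the values and class shift are assumptions, not the authors' data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
# Synthetic stand-ins for morphological features of nodules (e.g., boundary
# regularity, convexity): "malignant" cases are shifted upward for illustration.
y = np.array([0] * 60 + [1] * 60)
X = rng.normal(size=(n, 4)) + y[:, None] * 1.5

# The five machine learning methods named in the abstract.
classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
}

for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)  # normalization before fitting
    for k in (5, 8, 10):  # the three cross-validation settings used in the study
        acc = cross_val_score(model, X, y, cv=k).mean()
        print(f"{name} {k}-fold accuracy: {acc:.2f}")
```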
Affiliation(s)
- Hao Pan
- School of Electronic Information, Xijing University, Xi’an, China
- Changbei Shi
- Department of Nuclear Medicine, Shaanxi Provincial Cancer Hospital, Xi’an, China
- Yuxing Zhang
- School of Electronic Information, Xijing University, Xi’an, China
- School of Medicine, Xijing University, Xi’an, China
- Zijian Zhong
- School of Electronic Information, Xijing University, Xi’an, China

14
Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. [PMID: 37937827 DOI: 10.1002/mp.16812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 10/10/2023] [Accepted: 10/12/2023] [Indexed: 11/09/2023] Open
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists to detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results to specify benign and malignant cases, and BI-RADS categories to indicate the likelihood of malignancy. In addition, standardized partitions of training, validation, and test sets promote reproducibility and fair comparisons between different approaches. Thus, we present a publicly available BUS dataset whose novelty is the substantial increment of cases with the above-mentioned annotations and the inclusion of standardized partitions to objectively assess and compare CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed a BI-RADS assessment in categories 2 to 5. Additionally, the ultrasonographer manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets to evaluate and reproduce CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems.
BUS images and reference segmentations are saved in Portable Network Graphics (PNG) files, and the dataset information is stored in separate comma-separated values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image-processing methods, such as despeckle filtering and contrast enhancement, to improve image quality, and feature engineering for image description.
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico

15
Lee SE, Hong H, Kim EK. Positive Predictive Values of Abnormality Scores From a Commercial Artificial Intelligence-Based Computer-Aided Diagnosis for Mammography. Korean J Radiol 2024; 25:343-350. [PMID: 38528692 PMCID: PMC10973732 DOI: 10.3348/kjr.2023.0907] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 11/17/2023] [Accepted: 12/05/2023] [Indexed: 03/27/2024] Open
Abstract
OBJECTIVE Artificial intelligence-based computer-aided diagnosis (AI-CAD) is increasingly used in mammography. While the continuous scores of AI-CAD have been related to malignancy risk, the understanding of how to interpret and apply these scores remains limited. We investigated the positive predictive values (PPVs) of the abnormality scores generated by a deep learning-based commercial AI-CAD system and analyzed them in relation to clinical and radiological findings. MATERIALS AND METHODS From March 2020 to May 2022, 656 breasts from 599 women (mean age 52.6 ± 11.5 years, including 0.6% [4/599] high-risk women) who underwent mammography and received positive AI-CAD results (Lunit Insight MMG, abnormality score ≥ 10) were retrospectively included in this study. Univariable and multivariable analyses were performed to evaluate the associations between the AI-CAD abnormality scores and clinical and radiological factors. The breasts were subdivided according to the abnormality scores into groups 1 (10-49), 2 (50-69), 3 (70-89), and 4 (90-100) using the optimal binning method. The PPVs were calculated for all breasts and subgroups. RESULTS Diagnostic indications and positive imaging findings by radiologists were associated with higher abnormality scores in the multivariable regression analysis. The overall PPV of AI-CAD was 32.5% (213/656) for all breasts, including 213 breast cancers, 129 breasts with benign biopsy results, and 314 breasts with benign outcomes in the follow-up or diagnostic studies. In the screening mammography subgroup, the PPVs were 18.6% (58/312) overall and 5.1% (12/235), 29.0% (9/31), 57.9% (11/19), and 96.3% (26/27) for score groups 1, 2, 3, and 4, respectively. The PPVs were significantly higher in women with diagnostic indications (45.1% [155/344]), palpability (51.9% [149/287]), fatty breasts (61.2% [60/98]), and certain imaging findings (masses with or without calcifications and distortion). 
CONCLUSION PPV increased with increasing AI-CAD abnormality scores. The PPVs of AI-CAD satisfied the acceptable PPV range according to Breast Imaging-Reporting and Data System for screening mammography and were higher for diagnostic mammography.
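The bin-wise positive predictive values quoted above follow directly from the reported counts (cancers per positives in each abnormality-score group of the screening subgroup); a quick sketch reproducing them:

```python
# PPV = breast cancers / all AI-CAD-positive breasts, per abnormality-score bin,
# using the screening-subgroup counts reported in the abstract.
bins = {
    "1 (score 10-49)":  (12, 235),
    "2 (score 50-69)":  (9, 31),
    "3 (score 70-89)":  (11, 19),
    "4 (score 90-100)": (26, 27),
}
for name, (cancers, positives) in bins.items():
    print(f"group {name}: PPV = {100 * cancers / positives:.1f}%")

# Overall screening-subgroup PPV:
total_c = sum(c for c, _ in bins.values())   # 58
total_p = sum(p for _, p in bins.values())   # 312
print(f"overall: {100 * total_c / total_p:.1f}%")  # 18.6%
```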
Affiliation(s)
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
- Hanpyo Hong
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea

16
Chafai N, Bonizzi L, Botti S, Badaoui B. Emerging applications of machine learning in genomic medicine and healthcare. Crit Rev Clin Lab Sci 2024; 61:140-163. [PMID: 37815417 DOI: 10.1080/10408363.2023.2259466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 09/12/2023] [Indexed: 10/11/2023]
Abstract
The integration of artificial intelligence (AI) technologies has propelled the progress of clinical and genomic medicine in recent years. The significant increase in computing power has enabled AI models to analyze and extract features from extensive medical data and images, thereby contributing to the advancement of intelligent diagnostic tools. AI models have been utilized in personalized medicine to integrate patients' clinical data and genomic information. This integration allows for the identification of customized treatment recommendations, ultimately leading to improved patient outcomes. Notwithstanding these notable advancements, the application of AI in medicine is impeded by various obstacles, such as the limited availability of clinical and genomic data, the diversity of datasets, ethical implications, and the inconclusive interpretation of AI models' results. In this review, a comprehensive evaluation of multiple machine learning algorithms utilized in clinical and genomic medicine is conducted. Furthermore, we present an overview of the implementation of AI in clinical medicine, drug discovery, and genomic medicine. Finally, a number of constraints pertaining to the implementation of AI within the healthcare industry are examined.
Affiliation(s)
- Narjice Chafai
- Laboratory of Biodiversity, Ecology, and Genome, Faculty of Sciences, Department of Biology, Mohammed V University in Rabat, Rabat, Morocco
- Luigi Bonizzi
- Department of Biomedical, Surgical and Dental Science, University of Milan, Milan, Italy
- Sara Botti
- PTP Science Park, Via Einstein - Loc. Cascina Codazza, Lodi, Italy
- Bouabid Badaoui
- Laboratory of Biodiversity, Ecology, and Genome, Faculty of Sciences, Department of Biology, Mohammed V University in Rabat, Rabat, Morocco
- African Sustainable Agriculture Research Institute (ASARI), Mohammed VI Polytechnic University (UM6P), Laâyoune, Morocco

17
Chhillar I, Singh A. A feature engineering-based machine learning technique to detect and classify lung and colon cancer from histopathological images. Med Biol Eng Comput 2024; 62:913-924. [PMID: 38091162 DOI: 10.1007/s11517-023-02984-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 11/29/2023] [Indexed: 02/22/2024]
Abstract
Globally, lung and colon cancers are among the most prevalent and lethal tumors. Early cancer identification is essential to increase the likelihood of survival. Histopathological images are considered an appropriate tool for diagnosing cancer, but their manual interpretation is tedious and error-prone. Recently, machine learning methods based on feature engineering have gained prominence in automatic histopathological image classification. Furthermore, these methods are more interpretable than deep learning, which operates in a "black box" manner. In the medical profession, the interpretability of a technique is critical to gaining the trust of end users. In view of the above, this work aims to create an accurate and interpretable machine learning technique for the automated classification of lung and colon cancers from histopathology images. In the proposed approach, following the preprocessing steps, texture and color features are retrieved using the Haralick and color histogram feature extraction algorithms, respectively. The obtained features are concatenated to form a single feature set. The three feature sets (texture, color, and combined features) are passed to the Light Gradient Boosting Machine (LightGBM) classifier, and its performance is evaluated on the LC25000 dataset using hold-out and stratified 10-fold cross-validation (Stratified 10-FCV) techniques. With the test/hold-out set, LightGBM with texture, color, and combined features classifies the lung and colon cancer images with 97.72%, 99.92%, and 100% accuracy, respectively. In addition, stratified 10-fold cross-validation also showed that LightGBM's combined and color features performed well, with an excellent mean auc_mu score and a low mean multi_logloss value. Thus, the proposed technique can help histologists detect and classify lung and colon histopathology images more efficiently, effectively, and economically, resulting in greater productivity.
Affiliation(s)
- Indu Chhillar
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India
- Ajmer Singh
- Department of Computer Science and Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Murthal, Haryana, India

18
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024; 34:2096-2109. [PMID: 37658895 PMCID: PMC10873444 DOI: 10.1007/s00330-023-10181-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 06/07/2023] [Accepted: 07/10/2023] [Indexed: 09/05/2023]
Abstract
OBJECTIVE Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators to highlight key considerations for developing and implementing AI solutions in breast cancer imaging. METHOD A literature search was conducted from 2012 to 2022 in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and arXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in clinical breast imaging. We excluded research focusing only on performance, or with data not acquired in a clinical radiology setting or not involving real patients. RESULTS A total of 107 articles were included. We identified six major barriers related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5). CONCLUSION This scoping review highlighted the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders, to yield improvements in healthcare. CLINICAL RELEVANCE STATEMENT The identification of barriers and facilitators, with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice. KEY POINTS • Six major identified barriers were related to data; black box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education.
• Five major identified facilitators were related to data, clinical impact, algorithms and conception, evaluation and validation, and education.
• Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel
- Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis
- Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland

19
Shen X, He Z, Shi Y, Liu T, Yang Y, Luo J, Tang X, Chen B, Xu S, Zhou Y, Xiao J, Qin Y. Development and Validation of an Automated Classification System for Osteonecrosis of the Femoral Head Using Deep Learning Approach: A Multicenter Study. J Arthroplasty 2024; 39:379-386.e2. [PMID: 37572719 DOI: 10.1016/j.arth.2023.08.018] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/14/2023] [Revised: 08/01/2023] [Accepted: 08/03/2023] [Indexed: 08/14/2023] Open
Abstract
BACKGROUND Accurate classification can facilitate the selection of appropriate interventions to delay the progression of osteonecrosis of the femoral head (ONFH). This study aimed to perform the classification of ONFH through a deep learning approach. METHODS We retrospectively sampled 1,806 midcoronal magnetic resonance images (MRIs) of 1,337 hips from 4 institutions. Of these, 1,472 midcoronal MRIs of 1,155 hips were divided into training, validation, and test datasets at a ratio of 7:1:2 to develop a convolutional neural network (CNN) model. An additional 334 midcoronal MRIs of 182 hips were used for external validation. The predictive performance of the CNN was also compared with that of a review panel. RESULTS A multiclass CNN model was successfully developed. In internal validation, the overall accuracy of the CNN for predicting the severity of ONFH based on the Japanese Investigation Committee classification was 87.8%. The macroaverage values of the area under the curve (AUC), precision, recall, and F-value were 0.90, 84.8%, 84.8%, and 84.6%, respectively. In external validation, the overall accuracy of the CNN was 83.8%. The macroaverage values of AUC, precision, recall, and F-value were 0.87, 79.5%, 80.5%, and 79.9%, respectively. In a human-machine comparison study, the CNN outperformed or was comparable to the deputy chief orthopaedic surgeons. CONCLUSION The CNN is feasible and robust for classifying ONFH and correctly locating the necrotic area. These findings suggest that classifying ONFH using deep learning, with high accuracy and generalizability, may aid in predicting femoral head collapse and clinical decision-making.
Affiliation(s)
- Xianyue Shen
- Department of Orthopedics, The Second Hospital of Jilin University, Changchun, Jilin province, PR China
- Ziling He
- College of Computer Science and Technology, Jilin University, Changchun, Jilin province, PR China
- Yi Shi
- Department of Orthopedics, The Second Hospital of Anhui Medical University, Hefei, Anhui province, PR China
- Tong Liu
- Department of Orthopedics, China-Japan Union Hospital of Jilin University, Changchun, Jilin province, PR China
- Yuhui Yang
- Department of Orthopedics, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong province, PR China
- Jia Luo
- College of Computer Science and Technology, Jilin University, Changchun, Jilin province, PR China
- Xiongfeng Tang
- Department of Orthopedics, The Second Hospital of Jilin University, Changchun, Jilin province, PR China
- Bo Chen
- Department of Orthopedics, The Second Hospital of Jilin University, Changchun, Jilin province, PR China
- Shenghao Xu
- Department of Orthopedics, China-Japan Union Hospital of Jilin University, Changchun, Jilin province, PR China
- You Zhou
- College of Software, Jilin University, Changchun, Jilin province, PR China
- Jianlin Xiao
- Department of Orthopedics, China-Japan Union Hospital of Jilin University, Changchun, Jilin province, PR China
- Yanguo Qin
- Department of Orthopedics, The Second Hospital of Jilin University, Changchun, Jilin province, PR China

20
Wu D, Ni J, Fan W, Jiang Q, Wang L, Sun L, Cai Z. Opportunities and challenges of computer aided diagnosis in new millennium: A bibliometric analysis from 2000 to 2023. Medicine (Baltimore) 2023; 102:e36703. [PMID: 38134105 PMCID: PMC10735127 DOI: 10.1097/md.0000000000036703] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/27/2023] [Indexed: 12/24/2023] Open
Abstract
BACKGROUND After entering the new millennium, computer-aided diagnosis (CAD) has been developing rapidly as an emerging technology worldwide. Expanding the spectrum of CAD-related diseases is a possible future research trend. Nevertheless, bibliometric studies in this area have not yet been reported. This study aimed to explore the hotspots and frontiers of research on CAD from 2000 to 2023, which may provide a reference for researchers in this field. METHODS In this paper, we use bibliometrics to analyze CAD-related literature in the Web of Science database between 2000 and 2023. The scientometric software tools VOSviewer and CiteSpace were used to visually analyze the countries, institutions, authors, journals, references, and keywords involved in the literature. Keyword burst analysis was used to further explore the current state and development trends of research on CAD. RESULTS A total of 13,970 publications were included in this study, with a noticeably rising annual publication trend. China and the United States are the major contributors, with the United States holding the dominant position in CAD research. American research institutions, led by the University of Chicago, are pioneers of CAD. Acharya UR, Zheng B, and Chan HP are the most prolific authors. IEEE Transactions on Medical Imaging focuses on CAD and publishes the most articles. New computer technologies related to CAD are at the forefront of attention. Currently, CAD is used extensively for breast diseases, pulmonary diseases, and brain diseases. CONCLUSION Expanding the spectrum of CAD-related diseases is a possible future research trend. How to overcome the lack of large-sample datasets and how to establish a universally accepted standard for evaluating CAD system performance are urgent issues for CAD development and validation.
In conclusion, this paper provides valuable information on the current state of CAD research and future developments.
Collapse
Affiliation(s)
- Di Wu
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Jiachun Ni
- Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Wenbin Fan
- Department of Proctology, Bishan Hospital of Traditional Chinese Medicine, Chongqing, China
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Qiong Jiang
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Ling Wang
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Zengjin Cai
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
21
Gómez-Flores W, Pereira WCDA. Gray-to-color image conversion in the classification of breast lesions on ultrasound using pre-trained deep neural networks. Med Biol Eng Comput 2023; 61:3193-3207. [PMID: 37713158 DOI: 10.1007/s11517-023-02928-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Accepted: 08/29/2023] [Indexed: 09/16/2023]
Abstract
Breast ultrasound (BUS) image classification into benign and malignant classes is often based on pre-trained convolutional neural networks (CNNs) to cope with small-sized training data. Nevertheless, BUS images are single-channel gray-level images, whereas pre-trained CNNs learned from color images with red, green, and blue (RGB) components. Thus, a gray-to-color conversion method is applied to fit the BUS image to the CNN's input layer. This paper evaluates 13 gray-to-color conversion methods proposed in the literature that follow three strategies: replicating the gray-level image to all RGB channels, decomposing the image to enhance inherent information such as the lesion's texture and morphology, and learning a matching layer. In addition, we introduce an image decomposition method based on the lesion's structural information to describe its inner and outer complexity. These gray-to-color conversion methods are evaluated under the same experimental framework using a pre-trained CNN architecture, ResNet-18, and a BUS dataset with more than 3,000 images. The Matthews correlation coefficient (MCC), sensitivity (SEN), and specificity (SPE) measure the classification performance. The experimental results show that decomposition methods outperform replication and learning-based methods when using information from the lesion's binary mask (obtained from a segmentation method), reaching an MCC value greater than 0.70 and specificity up to 0.92, although sensitivity is about 0.80. The proposed method, in contrast, offers a better-balanced trade-off between sensitivity and specificity, obtaining about 0.88 for both indices and an MCC of 0.73. This study contributes to the objective assessment of different gray-to-color conversion approaches in classifying breast lesions, revealing that mask-based decomposition methods improve classification performance. Moreover, the proposed method based on structural information improves sensitivity, yielding more reliable classification results on malignant cases and potentially benefiting clinical practice.
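The simplest of the three conversion strategies, replicating the gray channel to all RGB channels, together with the MCC metric used for evaluation, can be sketched in a few lines (a minimal illustration, not the authors' implementation):

```python
import numpy as np

def gray_to_rgb(img):
    """Replicate a single-channel image across the three RGB channels,
    the simplest of the gray-to-color conversion strategies compared above."""
    img = np.asarray(img)
    return np.stack([img, img, img], axis=-1)

def matthews_cc(y_true, y_pred):
    """Matthews correlation coefficient from binary labels (+1 perfect,
    0 random, -1 total disagreement)."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

The decomposition and matching-layer strategies replace `gray_to_rgb` with channel contents derived from texture, morphology, or a learned mapping; the evaluation side stays the same.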
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del IPN, Unidad Tamaulipas, Ciudad Victoria, 87138, Tamaulipas, Mexico.
22
Hossain S, Azam S, Montaha S, Karim A, Chowa SS, Mondol C, Zahid Hasan M, Jonkman M. Automated breast tumor ultrasound image segmentation with hybrid UNet and classification using fine-tuned CNN model. Heliyon 2023; 9:e21369. [PMID: 37885728 PMCID: PMC10598544 DOI: 10.1016/j.heliyon.2023.e21369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 10/11/2023] [Accepted: 10/20/2023] [Indexed: 10/28/2023] Open
Abstract
Introduction Breast cancer stands as the second most deadly form of cancer among women worldwide. Early diagnosis and treatment can significantly reduce mortality rates. Purpose The study aims to classify breast ultrasound images into benign and malignant tumors by segmenting the breast region of interest (ROI) with an optimized UNet architecture and classifying the ROIs with an optimized shallow CNN model developed through an ablation study. Method Several image processing techniques are used to improve image quality by removing text, artifacts, and speckle noise, and statistical analysis is performed to verify that the enhanced image quality is satisfactory. With the processed dataset, segmentation of the breast tumor ROI is carried out, optimizing the UNet model through an ablation study in which the architectural configuration and hyperparameters are altered. After obtaining the tumor ROIs from the fine-tuned UNet model (RKO-UNet), an optimized CNN model is employed to classify the tumors into benign and malignant classes. To enhance the CNN model's performance, an ablation study is conducted, coupled with the integration of an attention unit. The model's performance is further assessed by classifying breast cancer in mammogram images. Result The proposed classification model (RKONet-13) achieves an accuracy of 98.41%. Its performance is further compared with five transfer learning models on both pre-segmented and post-segmented datasets. K-fold cross-validation is performed to assess the stability of the proposed RKONet-13 model. Furthermore, the proposed model is compared with previous literature, where it outperforms existing methods, demonstrating its effectiveness in breast cancer diagnosis. Lastly, the model demonstrates its robustness for breast cancer classification, delivering an exceptional performance of 96.21% on a mammogram dataset. Conclusion The efficacy of this study rests on image pre-processing, segmentation with a hybrid attention UNet, and classification with a fine-tuned robust CNN model. This comprehensive approach aims to determine an effective technique for detecting breast cancer in ultrasound images.
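The hand-off between the segmentation and classification stages, cropping the tumor ROI from the UNet's binary mask before feeding it to the classifier, can be sketched as follows (a minimal illustration with a hypothetical `crop_roi` helper, not the authors' code):

```python
import numpy as np

def crop_roi(image, mask, margin=0):
    """Crop the tumor region of interest given a binary segmentation mask.

    The bounding box of the mask's foreground pixels is taken, optionally
    padded by `margin` pixels and clipped to the image borders.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        raise ValueError("empty mask: no ROI to crop")
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    y0 = max(y0 - margin, 0)
    x0 = max(x0 - margin, 0)
    y1 = min(y1 + margin, image.shape[0])
    x1 = min(x1 + margin, image.shape[1])
    return image[y0:y1, x0:x1]
```

In a pipeline like the one described, the crop would then be resized to the classifier's input resolution before prediction.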
Affiliation(s)
- Shahed Hossain
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, AB, T2N 1N4, Canada
- Asif Karim
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
- Sadia Sultana Chowa
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Chaity Mondol
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Md Zahid Hasan
- Health Informatics Research Laboratory (HIRL), Department of Computer Science and Engineering, Daffodil International University, Dhaka, 1341, Bangladesh
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, 0909, NT, Australia
23
Wang L, Ni J, Hou C, Wu D, Sun L, Jiang Q, Cai Z, Fan W. Time to change? Present and prospects of hemorrhoidal classification. Front Med (Lausanne) 2023; 10:1252468. [PMID: 37901411 PMCID: PMC10603225 DOI: 10.3389/fmed.2023.1252468] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2023] [Accepted: 09/01/2023] [Indexed: 10/31/2023] Open
Abstract
As a common benign anal condition, hemorrhoids pose challenges for both patients and doctors because of their high incidence and recurrence. The classification of hemorrhoids plays a crucial role in assessing, diagnosing, and treating the condition. By using appropriate classification and corresponding treatment strategies, we can achieve higher cure rates and lower recurrence rates of hemorrhoids. Since the introduction of the Miles classification in 1919, various classifications have been developed, including objective classifications based on anatomical or instrumental assessment and subjective classifications based on symptoms and patient sensations. These classifications aim to accurately evaluate the condition. In this study, we discuss the value of each classification in terms of its advantages, disadvantages, treatment relevance, reproducibility, practicality, and assessment value. We also analyze the significant and essential factors, principles of use, and components of the assessment indicators of hemorrhoidal classification. This study proposes several strategies to address the limitations of current hemorrhoidal assessment methods. Together, these will provide a reference for the development of hemorrhoid assessment and classification and for the clinical diagnosis and management of hemorrhoids.
Affiliation(s)
- Ling Wang
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Chongqing Medical University, Chongqing, China
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Jiachun Ni
- Department of Coloproctology, Yueyang Hospital of Integrated Traditional Chinese and Western Medicine, Shanghai University of Traditional Chinese Medicine, Shanghai, China
- Changcheng Hou
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Di Wu
- Chongqing Medical University, Chongqing, China
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Li Sun
- Chongqing Medical University, Chongqing, China
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Qiong Jiang
- Chongqing Medical University, Chongqing, China
- Zengjin Cai
- Department of Proctology, Yongchuan Hospital of Traditional Chinese Medicine, Chongqing Medical University, Chongqing, China
- Wenbin Fan
- Chongqing College of Traditional Chinese Medicine, Chongqing, China
- Chongqing Medical University, Chongqing, China
24
Youk JH, Han K, Lee SE, Kim EK. Consistency of Artificial Intelligence (AI)-based Diagnostic Support Software in Short-term Digital Mammography Reimaging After Core Needle Biopsy. J Digit Imaging 2023; 36:1965-1973. [PMID: 37326891 PMCID: PMC10501993 DOI: 10.1007/s10278-023-00863-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 05/08/2023] [Accepted: 05/26/2023] [Indexed: 06/17/2023] Open
Abstract
To evaluate the consistency in the performance of artificial intelligence (AI)-based diagnostic support software in short-term digital mammography reimaging after core needle biopsy. Of 276 women who underwent short-term (<3 mo) serial digital mammograms followed by breast cancer surgery from Jan. to Dec. 2017, 550 breasts were included. All core needle biopsies for breast lesions were performed between the serial exams. All mammography images were analyzed using commercially available AI-based software providing an abnormality score (0-100). Demographic data on age, interval between serial exams, biopsy, and final diagnosis were compiled. Mammograms were reviewed for mammographic density and findings. Statistical analysis was performed to evaluate the distribution of variables according to biopsy status and to test the interaction effects of variables with the difference in AI-based score. The AI-based scores of the 550 exams (benign or normal in 263 and malignant in 287) showed a significant difference between malignant and benign/normal exams (0.48 vs. 91.97 in the first exam and 0.62 vs. 87.13 in the second exam, P<0.0001). In the comparison of serial exams, no significant difference was found in AI-based score. The AI-based score difference between serial exams differed significantly according to whether biopsy was performed (-0.25 vs. 0.07, P = 0.035). In linear regression analysis, there was no significant interaction effect of any clinical or mammographic characteristic with whether mammographic examinations were performed after biopsy. The results from the AI-based diagnostic support software for digital mammography were relatively consistent on short-term reimaging, even after core needle biopsy.
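The core consistency analysis, comparing the per-breast change in abnormality score between serial exams for biopsied versus non-biopsied breasts, can be sketched with synthetic scores (a minimal illustration, not the study's statistical code):

```python
import numpy as np

def mean_score_shift(first, second, biopsied):
    """Mean AI-score difference (second exam minus first exam) for
    biopsied vs. non-biopsied breasts.

    Returns a (biopsied_mean, non_biopsied_mean) pair; in the study these
    group means were -0.25 and 0.07, respectively.
    """
    first = np.asarray(first, dtype=float)
    second = np.asarray(second, dtype=float)
    biopsied = np.asarray(biopsied, dtype=bool)
    diff = second - first
    return diff[biopsied].mean(), diff[~biopsied].mean()
```

A significance test on these paired differences (as in the paper) would follow, but the grouping logic above is the whole of the data preparation.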
Affiliation(s)
- Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea.
25
Balaji K. Image Augmentation based on Variational Autoencoder for Breast Tumor Segmentation. Acad Radiol 2023; 30 Suppl 2:S172-S183. [PMID: 36804294 DOI: 10.1016/j.acra.2022.12.035] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Revised: 12/18/2022] [Accepted: 12/21/2022] [Indexed: 02/18/2023]
Abstract
RATIONALE AND OBJECTIVES Breast tumor segmentation on dynamic contrast-enhanced magnetic resonance imaging is a significant step for quantitative radiomics analysis of breast cancer. Manual tumor annotation is a time-consuming process that requires medical expertise and is subjective, error-prone, and subject to inter-user variability. A number of recent studies have demonstrated the capability of deep learning representations in image segmentation. MATERIALS AND METHODS Here, we describe a 3D Connected-UNets model for tumor segmentation from 3D magnetic resonance images based on an encoder-decoder architecture. Because of the restricted training dataset size, a variational autoencoder branch is added to reconstruct the input image itself, regularizing the shared decoder and imposing additional constraints on its layers. Based on the initial segmentation of the Connected-UNets, a fully connected 3D conditional random field is used to enhance the segmentation results by exploiting 2D neighboring regions and 3D volume statistics. Moreover, 3D connected-components analysis is used to retain only large components and reduce segmentation noise. RESULTS The proposed method has been assessed on two publicly available datasets, namely INbreast and the curated breast imaging subset of the digital database for screening mammography. The proposed model has also been evaluated on a private dataset. CONCLUSION The experimental results show that the proposed model outperforms state-of-the-art methods for breast tumor segmentation.
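The connected-components cleanup step, retaining only large components to reduce segmentation noise, can be sketched as follows (a pure-Python 2D illustration of the idea, whereas the paper operates on 3D volumes):

```python
import numpy as np
from collections import deque

def keep_large_components(mask, min_size):
    """Suppress small 4-connected components in a binary mask, mirroring the
    connected-components cleanup used to reduce segmentation noise."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # Flood-fill one component with breadth-first search.
                comp, queue = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                # Keep the component only if it is large enough.
                if len(comp) >= min_size:
                    for y, x in comp:
                        out[y, x] = True
    return out
```

In 3D the same logic applies with a 6-connected neighborhood over voxels; production code would use a library routine such as `scipy.ndimage.label` instead of an explicit BFS.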
Affiliation(s)
- K Balaji
- School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamilnadu, 632014 India.
26
Zakareya S, Izadkhah H, Karimpour J. A New Deep-Learning-Based Model for Breast Cancer Diagnosis from Medical Images. Diagnostics (Basel) 2023; 13:diagnostics13111944. [PMID: 37296796 DOI: 10.3390/diagnostics13111944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2023] [Revised: 05/15/2023] [Accepted: 05/28/2023] [Indexed: 06/12/2023] Open
Abstract
Breast cancer is one of the most prevalent cancers among women worldwide, and early detection of the disease can be lifesaving. Detecting breast cancer early allows treatment to begin sooner, increasing the chances of a successful outcome. Machine learning helps in the early detection of breast cancer even in places without access to a specialist doctor. The rapid advancement of machine learning, and particularly deep learning, has increased the medical imaging community's interest in applying these techniques to improve the accuracy of cancer screening. However, data related to diseases are often scarce, whereas deep-learning models need large amounts of data to learn well. For this reason, existing deep-learning models do not perform as well on medical images as on natural images. To overcome this limitation and improve breast cancer classification, this paper proposes a new deep model inspired by two state-of-the-art designs, GoogLeNet and residual blocks, and incorporating several new features. Utilizing granular computing, shortcut connections, two learnable activation functions instead of traditional activation functions, and an attention mechanism is expected to improve diagnostic accuracy and consequently decrease the load on doctors. Granular computing can improve diagnosis accuracy by capturing more detailed and fine-grained information about cancer images. The proposed model's superiority is demonstrated by comparing it to several state-of-the-art deep models and existing works using two case studies. The proposed model achieved an accuracy of 93% and 95% on ultrasound images and breast histopathology images, respectively.
Affiliation(s)
- Salman Zakareya
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
- Habib Izadkhah
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
- Research Department of Computational Algorithms and Mathematical Models, University of Tabriz, Tabriz 5166616471, Iran
- Jaber Karimpour
- Department of Computer Science, University of Tabriz, Tabriz 5166616471, Iran
27
Isosalo A, Inkinen SI, Turunen T, Ipatti PS, Reponen J, Nieminen MT. Independent evaluation of a multi-view multi-task convolutional neural network breast cancer classification model using Finnish mammography screening data. Comput Biol Med 2023; 161:107023. [PMID: 37230016 DOI: 10.1016/j.compbiomed.2023.107023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 04/30/2023] [Accepted: 05/09/2023] [Indexed: 05/27/2023]
Abstract
BACKGROUND Development of deep convolutional neural networks for breast cancer classification has taken significant steps towards clinical adoption. It is unclear, though, how such models perform on unseen data and what is required to adapt them to different demographic populations. In this retrospective study, we adopt an openly available pre-trained mammography breast cancer multi-view classification model and evaluate it on an independent Finnish dataset. METHODS Transfer learning was used: the pre-trained model was finetuned with 8,829 examinations from the Finnish dataset (4,321 normal, 362 malignant, and 4,146 benign examinations). A holdout set of 2,208 examinations from the Finnish dataset (1,082 normal, 70 malignant, and 1,056 benign examinations) was used in the evaluation. Performance was also evaluated on a manually annotated malignant-suspect subset. Receiver Operating Characteristic (ROC) and precision-recall curves were used as performance measures. RESULTS The area under the ROC curve [95% CI] values for malignancy classification obtained with the finetuned model on the entire holdout set were 0.82 [0.76, 0.87], 0.84 [0.77, 0.89], 0.85 [0.79, 0.90], and 0.83 [0.76, 0.89] for the R-MLO, L-MLO, R-CC, and L-CC views, respectively. Performance on the malignant-suspect subset was slightly better. On the auxiliary benign classification task, performance remained low. CONCLUSIONS The results indicate that the model also performs well in an out-of-distribution setting. Finetuning allowed the model to adapt to some of the underlying local demographics. Future research should concentrate on identifying breast cancer subgroups that adversely affect performance, a requirement for raising the model's readiness level for clinical settings.
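The AUC and its bootstrap 95% confidence interval, the headline metric above, can be computed as sketched below (a minimal illustration using the rank formulation; not the study's evaluation code):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of (positive, negative) pairs ranked correctly, ties 1/2."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def auc_ci(scores, labels, n_boot=1000, alpha=0.05, seed=0):
    """Point estimate plus a percentile-bootstrap (1 - alpha) CI for the AUC."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    n = len(scores)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        # A resample must contain both classes for the AUC to be defined.
        if labels[idx].any() and (~labels[idx]).any():
            stats.append(auc(scores[idx], labels[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc(scores, labels), lo, hi
```

The study reports per-view CIs (R-MLO, L-MLO, R-CC, L-CC), which simply means running this computation once per view's score list.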
Affiliation(s)
- A Isosalo
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland.
- S I Inkinen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; HUS Diagnostic Center, Clinical Physiology and Nuclear Medicine, Helsinki University and Helsinki University Hospital, Helsinki, Finland
- T Turunen
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- P S Ipatti
- Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland
- J Reponen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Centre Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
- M T Nieminen
- Research Unit of Health Sciences and Technology, Faculty of Medicine, University of Oulu, Oulu, Finland; Department of Diagnostic Radiology, Oulu University Hospital, Oulu, Finland; Medical Research Centre Oulu, Oulu University Hospital and University of Oulu, Oulu, Finland
28
Trepanier C, Huang A, Liu M, Ha R. Emerging uses of artificial intelligence in breast and axillary ultrasound. Clin Imaging 2023; 100:64-68. [PMID: 37243994 DOI: 10.1016/j.clinimag.2023.05.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Accepted: 05/02/2023] [Indexed: 05/29/2023]
Abstract
Breast ultrasound is a valuable adjunctive tool to mammography in detecting breast cancer, especially in women with dense breasts. Ultrasound also plays an important role in staging breast cancer by assessing axillary lymph nodes. However, its utility is limited by operator dependence, high recall rate, low positive predictive value and low specificity. These limitations present an opportunity for artificial intelligence (AI) to improve diagnostic performance and pioneer novel uses of ultrasound. Research in developing AI for radiology has flourished over the past few years. A subset of AI, deep learning, uses interconnected computational nodes to form a neural network, which extracts complex visual features from image data to train itself into a predictive model. This review summarizes several key studies evaluating AI programs' performance in predicting breast cancer and demonstrates that AI can assist radiologists and address limitations of ultrasound by acting as a decision support tool. This review also touches on how AI programs allow for novel predictive uses of ultrasound, particularly predicting molecular subtypes of breast cancer and response to neoadjuvant chemotherapy, which have the potential to change how breast cancer is managed by providing non-invasive prognostic and treatment data from ultrasound images. Lastly, this review explores how AI programs demonstrate improved diagnostic accuracy in predicting axillary lymph node metastasis. The limitations and future challenges in developing and implementing AI for breast and axillary ultrasound will also be discussed.
Affiliation(s)
- Christopher Trepanier
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America.
- Alice Huang
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America.
- Michael Liu
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America.
- Richard Ha
- Columbia University Irving Medical Center, 622 W 168th St, New York, NY 10032, United States of America.
29
Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:163. [PMID: 37092415 PMCID: PMC10123690 DOI: 10.3390/biomimetics8020163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 04/12/2023] [Accepted: 04/14/2023] [Indexed: 04/25/2023] Open
Abstract
According to the American Cancer Society, breast cancer is the second largest cause of mortality among women after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Due to the lengthy duration of manual breast cancer diagnosis, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The structure of the proposed methodology consists of five stages, namely, data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of dipper throated and particle swarm optimization algorithms, and classification of the selected features using CNN optimized using the proposed optimization algorithm. To prove the effectiveness of the proposed approach, a set of experiments were conducted on a breast cancer dataset, freely available on Kaggle, to evaluate the performance of the proposed feature selection method and the performance of the optimized CNN. In addition, statistical tests were established to study the stability and difference of the proposed approach compared to state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach with a classification accuracy of 98.1%, which is better than the other approaches considered in the conducted experiments.
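The feature selection stage can be illustrated with a generic binary particle swarm optimizer (a simplified stand-in for the paper's hybrid dipper-throated/PSO algorithm; the fitness function is supplied by the caller and would typically score a classifier trained on the selected features):

```python
import numpy as np

def binary_pso_select(fitness, n_features, n_particles=8, iters=30, seed=0):
    """Generic binary PSO feature selection: each particle is a 0/1 mask over
    features; velocities are squashed through a sigmoid to decide bit flips.

    This is a textbook binary PSO sketch, not the hybrid dipper-throated/PSO
    optimizer proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_features)).astype(float)
    vel = rng.normal(0.0, 0.1, (n_particles, n_features))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p.astype(bool)) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Standard velocity update: inertia + cognitive + social terms.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # Sigmoid of the velocity gives the probability that a bit is set.
        pos = (rng.random(pos.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(float)
        fits = np.array([fitness(p.astype(bool)) for p in pos])
        improved = fits > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)
```

In the paper's setting the features are GoogleNet activations and the fitness wraps the downstream CNN classifier's validation accuracy.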
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek
- Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
30
Alsharif WM. The utilization of artificial intelligence applications to improve breast cancer detection and prognosis. Saudi Med J 2023; 44:119-127. [PMID: 36773967 PMCID: PMC9987701 DOI: 10.15537/smj.2023.44.2.20220611] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/13/2023] Open
Abstract
Breast imaging faces challenges from the current increase in medical imaging requests and from lesions that breast screening programs can miss. Solutions are being sought through the recent advancement and adoption of artificial intelligence (AI)-based applications to enhance workflow efficiency as well as patient-healthcare outcomes. AI tools have been proposed and used to analyze different modes of breast imaging, in most published studies mainly for the detection and classification of breast lesions, breast lesion segmentation, breast density evaluation, and breast cancer risk assessment. This article reviews the background of conventional computer-aided detection systems and AI, and AI-based applications in breast medical imaging for the identification, segmentation, and categorization of lesions, breast density evaluation, and cancer risk evaluation. In addition, the challenges and limitations of AI-based applications in breast imaging are discussed.
Affiliation(s)
- Walaa M. Alsharif
- From the Diagnostic Radiology Technology Department, College of Applied Medical Sciences, Taibah University, Al Madinah Al Munawwarah; and from the Society of Artificial Intelligence in Healthcare, Riyadh, Kingdom of Saudi Arabia.
31
Tan T, Rodriguez-Ruiz A, Zhang T, Xu L, Beets-Tan RGH, Shen Y, Karssemeijer N, Xu J, Mann RM, Bao L. Multi-modal artificial intelligence for the combination of automated 3D breast ultrasound and mammograms in a population of women with predominantly dense breasts. Insights Imaging 2023; 14:10. [PMID: 36645507 PMCID: PMC9842825 DOI: 10.1186/s13244-022-01352-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2022] [Accepted: 12/09/2022] [Indexed: 01/17/2023] Open
Abstract
OBJECTIVES To assess the stand-alone and combined performance of artificial intelligence (AI) detection systems for digital mammography (DM) and automated 3D breast ultrasound (ABUS) in detecting breast cancer in women with dense breasts. METHODS 430 paired cases of DM and ABUS examinations from an Asian population with dense breasts were retrospectively collected. All cases were analyzed by two AI systems, one for DM exams and one for ABUS exams. A selected subset (n = 152) was read by four radiologists. The performance of the AI systems was assessed by analysis of the area under the receiver operating characteristic curve (AUC). The maximum Youden's index and its associated sensitivity and specificity were also reported for each AI system. The detection performance of human readers in the reader-study subcohort was measured in terms of sensitivity and specificity. RESULTS The performance of the AI systems in a multi-modal setting, with weights of 0.25 for AI-DM and 0.75 for AI-ABUS, was significantly better than that of each system individually in a single-modal setting (AUC-AI-Multimodal = 0.865; AUC-AI-DM = 0.832, p = 0.026; AUC-AI-ABUS = 0.841, p = 0.041). The maximum Youden's index for AI-Multimodal was 0.707 (sensitivity = 79.4%, specificity = 91.2%). In the subcohort that underwent human reading, the panel of four readers achieved a sensitivity of 93.2% and a specificity of 32.7%. AI-Multimodal achieved sensitivity superior or equal to that of single human readers at the same specificity operating points on the ROC curve. CONCLUSION Multimodal (ABUS + DM) AI systems for detecting breast cancer in women with dense breasts are a potential solution for breast screening in radiologist-scarce regions.
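The late-fusion scheme and the Youden's index computation reported above can be sketched as follows (a minimal illustration assuming per-case scores on a common scale, not the authors' implementation):

```python
import numpy as np

def fuse_scores(dm_scores, abus_scores, w_dm=0.25, w_abus=0.75):
    """Weighted late fusion of the two single-modality AI scores, using the
    weighting reported in the paper (0.25 for DM, 0.75 for ABUS)."""
    return w_dm * np.asarray(dm_scores, dtype=float) + w_abus * np.asarray(abus_scores, dtype=float)

def max_youden(scores, labels):
    """Maximum Youden's J = sensitivity + specificity - 1 over all thresholds,
    returning (best_J, best_threshold)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_j, best_t
```

Sweeping the (w_dm, w_abus) pair on a validation split is how a weighting such as 0.25/0.75 would be chosen in practice.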
Affiliation(s)
- Tao Tan
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Faculty of Applied Science, Macao Polytechnic University, Macao, 999078 China
- Tianyu Zhang
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD Maastricht, The Netherlands
- Lin Xu
- School of Information Science and Technology, ShanghaiTech University, Shanghai, 201210 China
- Regina G. H. Beets-Tan
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- GROW School for Oncology and Development Biology, Maastricht University, P. O. Box 616, 6200 MD Maastricht, The Netherlands
- Yingzhao Shen
- Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Xueshi Road, Hubin Street, Shangcheng District, Hangzhou, 310006 Zhejiang China
- Nico Karssemeijer
- Department of Diagnostic Imaging, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- Jun Xu
- Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, 210044 China
- Ritse M. Mann
- Department of Radiology, Netherlands Cancer Institute (NKI), Plesmanlaan 121, 1066 CX Amsterdam, The Netherlands
- Department of Diagnostic Imaging, Radboud University Medical Center, PO Box 9101, 6500 HB Nijmegen, The Netherlands
- Lingyun Bao
- Affiliated Hangzhou First People’s Hospital, Zhejiang University School of Medicine, Xueshi Road, Hubin Street, Shangcheng District, Hangzhou, 310006 Zhejiang China
| |
|
32
|
Castro E, Costa Pereira J, Cardoso JS. Symmetry-based regularization in deep breast cancer screening. Med Image Anal 2023; 83:102690. [PMID: 36446314 DOI: 10.1016/j.media.2022.102690] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 10/28/2022] [Accepted: 11/09/2022] [Indexed: 11/23/2022]
Abstract
Breast cancer is the most common and lethal form of cancer in women. Recent efforts have focused on developing accurate neural network-based computer-aided diagnosis systems for screening to help anticipate this disease. The ultimate goal is to reduce mortality and improve quality of life after treatment. Due to the difficulty in collecting and annotating data in this domain, data scarcity is, and will continue to be, a limiting factor. In this work, we present a unified view of different regularization methods that incorporate domain-known symmetries in the model. Three general strategies were followed: (i) data augmentation, (ii) invariance promotion in the loss function, and (iii) the use of equivariant architectures. Each of these strategies encodes different priors on the functions learned by the model and can be readily introduced in most settings. Empirically, we show that the proposed symmetry-based regularization procedures improve generalization to unseen examples. This advantage is verified across different scenarios, datasets, and model architectures. We hope that both the principle of symmetry-based regularization and the concrete methods presented can guide development towards more data-efficient methods for breast cancer screening as well as other medical imaging domains.
Affiliation(s)
- Eduardo Castro
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal.
| | - Jose Costa Pereira
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Huawei Technologies R&D, Noah's Ark Lab, Gridiron building, 1 Pancras Square, 5th floor, London N1C 4AG, United Kingdom
| | - Jaime S Cardoso
- INESC TEC, Campus da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal; Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal
| |
|
33
|
Magni V, Cozzi A, Schiaffino S, Colarieti A, Sardanelli F. Artificial intelligence for digital breast tomosynthesis: Impact on diagnostic performance, reading times, and workload in the era of personalized screening. Eur J Radiol 2023; 158:110631. [PMID: 36481480 DOI: 10.1016/j.ejrad.2022.110631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 11/24/2022] [Indexed: 12/05/2022]
Abstract
The ultimate goals of the application of artificial intelligence (AI) to digital breast tomosynthesis (DBT) are the reduction of reading times, the increase of diagnostic performance, and the reduction of interval cancer rates. In this review, after outlining the journey from computer-aided detection/diagnosis systems to AI applied to digital mammography (DM), we summarize the results of studies where AI was applied to DBT, noting that the long-term advantages of DBT screening and its crucial ability to decrease the interval cancer rate are still under scrutiny. AI has shown the capability to overcome some shortcomings of DBT in the screening setting by improving diagnostic performance and by reducing recall rates (from -2% to -27%) and reading times (up to -53%, with an average 20% reduction), but the ability of AI to reduce interval cancer rates has not yet been clearly investigated. Prospective validation is needed to assess the cost-effectiveness and real-world impact of AI models assisting DBT interpretation, especially in large-scale studies with low breast cancer prevalence. Finally, we focus on the coming era of personalized, risk-stratified screening, which will first see the application of contrast-enhanced breast imaging to screen women with extremely dense breasts. As the diagnostic advantage of DBT over DM was concentrated in this category, we consider whether the application of AI to DM in the remaining cohorts of women with heterogeneously dense or non-dense breasts could close the gap in diagnostic performance between DM and DBT, thus offsetting the benefit of applying AI to DBT.
Affiliation(s)
- Veronica Magni
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy.
| | - Andrea Cozzi
- Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy
| | - Simone Schiaffino
- Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy
| | - Anna Colarieti
- Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy
| | - Francesco Sardanelli
- Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Luigi Mangiagalli 31, 20133 Milano, Italy; Unit of Radiology, IRCCS Policlinico San Donato, Via Rodolfo Morandi 30, 20097 San Donato Milanese, Italy.
| |
|
34
|
Wang L, Ding N, Zuo P, Wang X, Rai BK. Application and Challenges of Artificial Intelligence in Medical Imaging. 2022 INTERNATIONAL CONFERENCE ON KNOWLEDGE ENGINEERING AND COMMUNICATION SYSTEMS (ICKES) 2022:1-6. [DOI: 10.1109/ickecs56523.2022.10059898] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
Affiliation(s)
- Lingyu Wang
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
| | - Ning Ding
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
| | - Pengfei Zuo
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
| | - Xuenan Wang
- School of Health Care Technology, Dalian Neusoft University of Information, Dalian, Liaoning, China
| | - B Karunakara Rai
- Department of Electronics and Communication Engineering, Nitte Meenakshi Institute of Technology, Bengaluru, India
| |
|
35
|
Zhang S, Liao M, Wang J, Zhu Y, Zhang Y, Zhang J, Zheng R, Lv L, Zhu D, Chen H, Wang W. Fully automatic tumor segmentation of breast ultrasound images with deep learning. J Appl Clin Med Phys 2022; 24:e13863. [PMID: 36495018 PMCID: PMC9859996 DOI: 10.1002/acm2.13863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Revised: 09/28/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND Breast ultrasound (BUS) imaging is one of the most prevalent approaches for the detection of breast cancers. Tumor segmentation of BUS images can help doctors localize tumors and is a necessary step for computer-aided diagnosis systems. While the majority of clinical BUS scans are normal ones without tumors, segmentation approaches such as U-Net often predict mass regions for these images. This false-positive problem becomes serious if a fully automatic artificial intelligence system is used for routine screening. METHODS In this study, we proposed a novel model that is better suited for routine BUS screening. The model contains a classification branch that determines whether an image is normal or contains tumors, and a segmentation branch that outlines tumors. The two branches share the same encoder network. We also built a new dataset containing 1600 BUS images from 625 patients for training and a testing dataset with 130 images from 120 patients. The dataset is the largest one with pixel-wise masks manually segmented by experienced radiologists. Our code is available at https://github.com/szhangNJU/BUS_segmentation. RESULTS The area under the receiver operating characteristic curve (AUC) for classifying images into normal/abnormal categories was 0.991. The dice similarity coefficient (DSC) for segmentation of mass regions was 0.898, better than that of state-of-the-art models. Testing on an external dataset gave similar performance, demonstrating the good transferability of our model. Moreover, we simulated the use of the model in actual clinical practice by processing videos recorded during BUS scans; the model gave very low false-positive predictions on normal images without sacrificing sensitivity for images with tumors. CONCLUSIONS Our model achieved better segmentation performance than state-of-the-art models and showed good transferability on an external test set. The proposed deep learning architecture holds potential for use in fully automatic BUS health screening.
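The dice similarity coefficient (DSC) reported above is a standard overlap metric for segmentation masks; a minimal sketch (illustrative only, not taken from the linked repository):

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) between two binary masks.
    eps makes the score 1.0 when both masks are empty
    (a normal image with no predicted mass)."""
    a = np.asarray(pred_mask, bool)
    b = np.asarray(true_mask, bool)
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)
```

On a tumor-free image, any predicted mass pixels drive this score toward zero, so gating the segmentation output with a classification branch directly targets the false-positive problem the abstract describes.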
Affiliation(s)
- Shuai Zhang
- Collaborative Innovation Center of Advanced Microstructures, School of Physics, Nanjing University, Nanjing, China
| | - Mei Liao
- Department of Ultrasound, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jing Wang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
| | - Yongyi Zhu
- Department of Ultrasound, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yanling Zhang
- Department of Ultrasound, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jian Zhang
- Collaborative Innovation Center of Advanced Microstructures, School of Physics, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University, Nanjing, China
| | - Rongqin Zheng
- Department of Ultrasound, Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | | | | | - Hao Chen
- Precision Care Technology, Hangzhou, China
| | - Wei Wang
- Collaborative Innovation Center of Advanced Microstructures, School of Physics, Nanjing University, Nanjing, China
- Institute for Brain Sciences, Nanjing University, Nanjing, China
| |
|
36
|
Li Z, Koban KC, Schenck TL, Giunta RE, Li Q, Sun Y. Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends. J Clin Med 2022; 11:jcm11226826. [PMID: 36431301 PMCID: PMC9693628 DOI: 10.3390/jcm11226826] [Citation(s) in RCA: 49] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Revised: 10/24/2022] [Accepted: 10/28/2022] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Thanks to the rapid development of computer-based systems and deep-learning-based algorithms, artificial intelligence (AI) has long been integrated into the healthcare field. AI is particularly helpful in image recognition, surgical assistance, and basic research. Due to the unique nature of dermatology, AI-aided dermatological diagnosis based on image recognition has become a modern focus and future trend. KEY SCIENTIFIC CONCEPTS OF REVIEW The use of 3D imaging systems allows clinicians to screen and label pigmented skin lesions and distributed disorders, providing an objective assessment and image documentation of lesion sites. Dermatoscopes combined with intelligent software help the dermatologist easily correlate each close-up image with the corresponding marked lesion in the 3D body map. In addition, AI in the field of prosthetics can assist in the rehabilitation of patients and help restore limb function after amputation in patients with skin tumors. THE AIM OF THE STUDY For the benefit of patients, dermatologists have an obligation to explore the opportunities, risks, and limitations of AI applications. This study focuses on the application of emerging AI in dermatology to aid clinical diagnosis and treatment, analyzes the current state of the field, and summarizes its future trends and prospects, helping dermatologists appreciate the impact of new technological innovations on traditional practices so that they can embrace and use AI-based medical approaches more quickly.
Affiliation(s)
- Zhouxiao Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
| | | | - Thilo Ludwig Schenck
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
| | - Riccardo Enzo Giunta
- Division of Hand, Plastic and Aesthetic Surgery, University Hospital, LMU Munich, 80339 Munich, Germany
| | - Qingfeng Li
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
| | - Yangbai Sun
- Department of Plastic and Reconstructive Surgery, Shanghai 9th People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200023, China
- Correspondence: (Q.L.); (Y.S.)
| |
|
37
|
Depiction of breast cancers on digital mammograms by artificial intelligence-based computer-assisted diagnosis according to cancer characteristics. Eur Radiol 2022; 32:7400-7408. [PMID: 35499564 DOI: 10.1007/s00330-022-08718-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 02/03/2022] [Accepted: 03/02/2022] [Indexed: 01/03/2023]
Abstract
OBJECTIVE To evaluate how breast cancers are depicted by artificial intelligence-based computer-assisted diagnosis (AI-CAD) according to clinical, radiological, and pathological factors. MATERIALS AND METHODS From January 2017 to December 2017, 896 patients diagnosed with 930 breast cancers were enrolled in this retrospective study. Commercial AI-CAD was applied to digital mammograms and abnormality scores were obtained. We evaluated the abnormality score according to clinical, radiological, and pathological characteristics. False-negative results were defined by abnormality scores less than 10. RESULTS The median abnormality score of 930 breasts was 87.4 (range 0-99). The false-negative rate of AI-CAD was 19.4% (180/930). Cancers with an abnormality score of more than 90 showed a high proportion of palpable lesions, BI-RADS 4c and 5 lesions, cancers presenting as mass with or without microcalcifications and invasive cancers compared with low-scored cancers (all p < 0.001). False-negative cancers were more likely to develop in asymptomatic patients and extremely dense breasts and to be diagnosed as occult breast cancers and DCIS compared to detected cancers. CONCLUSION Breast cancers depicted with high abnormality scores by AI-CAD are associated with higher BI-RADS category, invasive pathology, and higher cancer stage. KEY POINTS • High-scored cancers by AI-CAD included a high proportion of BI-RADS 4c and 5 lesions, masses with or without microcalcifications, and cancers with invasive pathology. • Among invasive cancers, cancers with higher T and N stage and HER2-enriched subtype were depicted with higher abnormality scores by AI-CAD. • Cancers missed by AI-CAD tended to be in asymptomatic patients and extremely dense breasts and to be diagnosed as occult breast cancers by radiologists.
|
38
|
Retson TA, Watanabe AT, Vu H, Chim CY. Multicenter, Multivendor Validation of an FDA-approved Algorithm for Mammography Triage. JOURNAL OF BREAST IMAGING 2022; 4:488-495. [PMID: 38416951 DOI: 10.1093/jbi/wbac046] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Indexed: 03/01/2024]
Abstract
OBJECTIVE Artificial intelligence (AI)-based triage algorithms may improve cancer detection and expedite radiologist workflow. To this end, the performance of a commercial AI-based triage algorithm on screening mammograms was evaluated across breast densities and lesion types. METHODS This retrospective, IRB-exempt, multicenter, multivendor study examined 1255 screening 4-view mammograms (400 positive and 855 negative studies). Images were anonymized by providing institutions and analyzed by a commercially available AI algorithm (cmTriage, CureMetrix, La Jolla, CA) that performed retrospective triage at the study level by flagging exams as "suspicious" or not. Sensitivities and specificities with confidence intervals were derived from area under the curve (AUC) calculations. RESULTS The algorithm demonstrated an AUC of 0.95 (95% CI: 0.94-0.96) for case identification. Area under the curve held across densities (0.95) and lesion types (masses: 0.94 [95% CI: 0.92-0.96] or microcalcifications: 0.97 [95% CI: 0.96-0.99]). The algorithm has a default sensitivity of 93% (95% CI: 90.5%-95.6%) with specificity of 76.3% (95% CI: 73.4%-79.2%). To evaluate real-world performance, a sensitivity of 86.9% (95% CI: 83.6%-90.2%) was tested, as observed for practicing radiologists by the Breast Cancer Surveillance Consortium (BCSC) study. The resulting specificity was 88.5% (95% CI: 86.4%-90.7%), similar to the BCSC specificity of 88.9%, indicating performance comparable to real-world results. CONCLUSION When tested for lesion detection, an AI-based triage software can perform at the level of practicing radiologists. Drawing attention to suspicious exams may improve reader specificity and help streamline radiologist workflow, enabling faster turnaround times and improving care.
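The evaluation above fixes a target sensitivity (the BCSC radiologist benchmark of 86.9%) and reads off the specificity at that operating point. A minimal sketch of that procedure, with a hypothetical helper name (this is not the cmTriage API):

```python
import numpy as np

def specificity_at_sensitivity(y_true, scores, target_sens=0.869):
    """Lower the decision threshold until sensitivity reaches the target,
    then report the specificity at that operating point."""
    y = np.asarray(y_true, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[y], scores[~y]
    for t in np.sort(np.unique(scores))[::-1]:  # descending thresholds
        if (pos >= t).mean() >= target_sens:    # sensitivity at threshold t
            return (neg < t).mean()             # specificity at threshold t
    return 0.0  # target sensitivity unreachable even at the lowest threshold
```

Matching operating points this way is what makes the comparison between an algorithm and human readers meaningful: sensitivity and specificity trade off along the ROC curve, so only one of them can be fixed at a time.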
Affiliation(s)
- Tara A Retson
- University of California School of Medicine, Department of Radiology, La Jolla, CA, USA
| | - Alyssa T Watanabe
- University of Southern California Keck School of Medicine, Department of Radiology, Los Angeles, CA, USA
- CureMetrix, Inc., La Jolla, CA, USA
| | - Hoanh Vu
- CureMetrix, Inc., La Jolla, CA, USA
| | | |
|
39
|
Zimmerer D, Full PM, Isensee F, Jager P, Adler T, Petersen J, Kohler G, Ross T, Reinke A, Kascenas A, Jensen BS, O'Neil AQ, Tan J, Hou B, Batten J, Qiu H, Kainz B, Shvetsova N, Fedulova I, Dylov DV, Yu B, Zhai J, Hu J, Si R, Zhou S, Wang S, Li X, Chen X, Zhao Y, Marimont SN, Tarroni G, Saase V, Maier-Hein L, Maier-Hein K. MOOD 2020: A Public Benchmark for Out-of-Distribution Detection and Localization on Medical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2728-2738. [PMID: 35468060 DOI: 10.1109/tmi.2022.3170077] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Detecting Out-of-Distribution (OoD) data is one of the greatest challenges in safe and robust deployment of machine learning algorithms in medicine. When the algorithms encounter cases that deviate from the distribution of the training data, they often produce incorrect and over-confident predictions. OoD detection algorithms aim to catch erroneous predictions in advance by analysing the data distribution and detecting potential instances of failure. Moreover, flagging OoD cases may support human readers in identifying incidental findings. Due to the increased interest in OoD algorithms, benchmarks for different domains have recently been established. In the medical imaging domain, for which reliable predictions are often essential, an open benchmark has been missing. We introduce the Medical-Out-Of-Distribution-Analysis-Challenge (MOOD) as an open, fair, and unbiased benchmark for OoD methods in the medical imaging domain. The analysis of the submitted algorithms shows that performance has a strong positive correlation with the perceived difficulty, and that all algorithms show a high variance across different anomalies, making it difficult, as yet, to recommend them for clinical practice. We also see a strong correlation between challenge ranking and performance on a simple toy test set, indicating that this might be a valuable addition as a proxy dataset during anomaly detection algorithm development.
|
40
|
Hendrix N, Lowry KP, Elmore JG, Lotter W, Sorensen G, Hsu W, Liao GJ, Parsian S, Kolb S, Naeim A, Lee CI. Radiologist Preferences for Artificial Intelligence-Based Decision Support During Screening Mammography Interpretation. J Am Coll Radiol 2022; 19:1098-1110. [PMID: 35970474 PMCID: PMC9840464 DOI: 10.1016/j.jacr.2022.06.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/03/2022] [Accepted: 06/07/2022] [Indexed: 01/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) may improve cancer detection and risk prediction during mammography screening, but radiologists' preferences regarding its characteristics and implementation are unknown. PURPOSE To quantify how different attributes of AI-based cancer detection and risk prediction tools affect radiologists' intentions to use AI during screening mammography interpretation. MATERIALS AND METHODS Through qualitative interviews with radiologists, we identified five primary attributes for AI-based breast cancer detection and four for breast cancer risk prediction. We developed a discrete choice experiment based on these attributes and invited 150 US-based radiologists to participate. Each respondent made eight choices for each tool between three alternatives: two hypothetical AI-based tools versus screening without AI. We analyzed samplewide preferences using random parameters logit models and identified subgroups with latent class models. RESULTS Respondents (n = 66; 44% response rate) were from six diverse practice settings across eight states. Radiologists were more interested in AI for cancer detection when sensitivity and specificity were balanced (94% sensitivity with <25% of examinations marked) and AI markup appeared at the end of the hanging protocol after radiologists complete their independent review. For AI-based risk prediction, radiologists preferred AI models using both mammography images and clinical data. Overall, 46% to 60% intended to adopt any of the AI tools presented in the study; 26% to 33% approached AI enthusiastically but were deterred if the features did not align with their preferences. CONCLUSION Although most radiologists want to use AI-based decision support, short-term uptake may be maximized by implementing tools that meet the preferences of dissuadable users.
Affiliation(s)
- Nathaniel Hendrix
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
| | - Kathryn P Lowry
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington.
| | - Joann G Elmore
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California
| | - William Lotter
- Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
| | - Gregory Sorensen
- Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
| | - William Hsu
- Department of Radiological Sciences, Data Integration, Architecture, and Analytics Group, University of California, Los Angeles, California; American Medical Informatics Association: Member, Governance Committee; RSNA: Deputy Editor, Radiology: Artificial Intelligence
| | - Geraldine J Liao
- Department of Radiology, Virginia Mason Medical Center, Seattle, Washington
| | - Sana Parsian
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Radiology, Kaiser Permanente Washington, Seattle, Washington
| | - Suzanne Kolb
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington
| | - Arash Naeim
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California; Chief Medical Officer for Clinical Research, UCLA Health; Codirector: Clinical and Translational Science Institute and Center for SMART Health; Associate Director: Institute for Precision Health, Jonsson Comprehensive Cancer Center, Garrick Institute for Risk Sciences
| | - Christoph I Lee
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Health Services, School of Public Health, University of Washington, Seattle, Washington; and Deputy Editor, JACR
| |
|
41
|
Bao C, Shen J, Zhang Y, Zhang Y, Wei W, Wang Z, Ding J, Han L. Evaluation of an artificial intelligence support system for breast cancer screening in Chinese people based on mammogram. Cancer Med 2022; 12:3718-3726. [PMID: 36082949 PMCID: PMC9939225 DOI: 10.1002/cam4.5231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2022] [Revised: 08/16/2022] [Accepted: 08/30/2022] [Indexed: 11/05/2022] Open
Abstract
BACKGROUND To evaluate the diagnostic performance of radiologists on breast cancer with or without artificial intelligence (AI) support. METHODS A retrospective study was performed. In total, 643 mammograms (average age: 54 years; female: 100%; cancer: 62.05%) were randomly allocated into two groups. Seventy-five percent of mammograms in each group were randomly selected for assessment by two independent radiologists, and the rest were read once. Half of the 71 radiologists could read mammograms with AI support, and the other half could not. Sensitivity, specificity, Youden's index, agreement rate, Kappa value, the area under the receiver operating characteristic curve (AUC) and the reading time of radiologists in each group were analyzed. RESULTS The average AUC was higher when the AI support system was used (unaided: 0.84; with AI support: 0.91; p < 0.01). The average sensitivity increased from 84.77% to 95.07% with AI support (p < 0.01), but the average specificity decreased (p = 0.07). Youden's index, agreement rate and Kappa value were larger in the group with AI support, and the average reading time was shorter (p < 0.01). CONCLUSIONS The AI support system might contribute to enhancing the diagnostic performance (e.g., higher sensitivity and AUC) of radiologists. In the future, the AI algorithm should be improved, and prospective studies should be conducted.
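The agreement statistics reported here (agreement rate and Kappa value between binary readings) can be computed as in this minimal sketch; it is illustrative only, not the study's analysis code, and the function name is an assumption:

```python
def cohens_kappa(reads_a, reads_b):
    """Cohen's kappa for two binary readings (0/1) of the same cases:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(reads_a)
    po = sum(a == b for a, b in zip(reads_a, reads_b)) / n  # observed agreement rate
    pa = sum(reads_a) / n                                   # reading A positive rate
    pb = sum(reads_b) / n                                   # reading B positive rate
    pe = pa * pb + (1 - pa) * (1 - pb)                      # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Unlike the raw agreement rate, kappa discounts the agreement two readings would reach by chance alone, which matters in a cohort where most cases fall into one class.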
Affiliation(s)
- Chengzhen Bao
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Jie Shen
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Yue Zhang
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Yan Zhang
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| | - Wei Wei
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| | | | | | - Lili Han
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University; Beijing Maternal and Child Health Care Hospital, Beijing, China
| |
|
42
|
Lamb LR, Lehman CD, Gastounioti A, Conant EF, Bahl M. Artificial Intelligence (AI) for Screening Mammography, From the AJR Special Series on AI Applications. AJR Am J Roentgenol 2022; 219:369-380. [PMID: 35018795 DOI: 10.2214/ajr.21.27071] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Artificial intelligence (AI) applications for screening mammography are being marketed for clinical use in the interpretative domains of lesion detection and diagnosis, triage, and breast density assessment and in the noninterpretive domains of breast cancer risk assessment, image quality control, image acquisition, and dose reduction. Evidence in support of these nascent applications, particularly for lesion detection and diagnosis, is largely based on multireader studies with cancer-enriched datasets rather than rigorous clinical evaluation aligned with the application's specific intended clinical use. This article reviews commercial AI algorithms for screening mammography that are currently available for clinical practice, their use, and evidence supporting their performance. Clinical implementation considerations, such as workflow integration, governance, and ethical issues, are also described. In addition, the future of AI for screening mammography is discussed, including the development of interpretive and noninterpretive AI applications and strategic priorities for research and development.
Affiliation(s)
- Leslie R Lamb
- Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
| | - Constance D Lehman
- Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
| | - Aimilia Gastounioti
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
- Present affiliation: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO
| | - Emily F Conant
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA
| | - Manisha Bahl
- Department of Radiology, Massachusetts General Hospital, 55 Fruit St, WAC 240, Boston, MA 02114
| |
|
43
|
Siviengphanom S, Gandomkar Z, Lewis SJ, Brennan PC. Mammography-based Radiomics in Breast Cancer: A Scoping Review of Current Knowledge and Future Needs. Acad Radiol 2022; 29:1228-1247. [PMID: 34799256 DOI: 10.1016/j.acra.2021.09.025] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/14/2021] [Accepted: 09/26/2021] [Indexed: 12/19/2022]
Abstract
RATIONALE AND OBJECTIVES Breast cancer is a highly complex, heterogeneous disease. Current validated prognostic factors (e.g., histological grade, lymph node involvement, receptor status, and proliferation index), as well as multigene tests (e.g., Oncotype DX and PAM50), help describe breast cancer characteristics and predict recurrence risk and survival. Nevertheless, they are invasive and cannot capture the complete heterogeneity of the entire breast tumor, with the result that up to 30% of patients are either over- or under-treated for breast cancer. Furthermore, multigene testing is time-consuming and expensive. Radiomics is emerging as a reliable, accurate, non-invasive, and cost-effective approach that uses quantitative image features to classify breast cancer characteristics and predict patient outcomes. Several recent radiomics reviews have been conducted in breast cancer; however, mammography-based radiomics studies specifically have not been well discussed. This scoping review aims to assess and summarize the current evidence on the potential usefulness of mammography-based (i.e., digital mammography, digital breast tomosynthesis, and contrast-enhanced mammography) radiomics in predicting factors that describe breast cancer characteristics, recurrence, and survival. MATERIALS AND METHODS The PubMed database and eligible text references were searched using relevant keywords to identify studies published between 2015 and December 19, 2020. Collected studies were screened and assessed against the inclusion and exclusion criteria. RESULTS Eighteen eligible studies were included and organized into three main sections: radiomics predicting breast cancer characteristics, radiomics predicting breast cancer recurrence and survival, and radiomics integrated with clinical data. The majority of publications reported retrospective studies, while three examined prospective cohorts. Encouraging results were reported, suggesting the potential clinical value of mammography-based radiomics. Further efforts are required to standardize radiomics approaches and to catalogue reproducible and relevant mammographic radiomic features. The role of integrating radiomics with other information is also discussed. CONCLUSION The potential role of mammography-based radiomics appears promising, but more effort is required to evaluate its reliability as a routine clinical tool.
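Where this abstract refers to quantitative image features, a minimal sketch may help fix ideas. The function below is illustrative only and is not taken from any of the reviewed studies: it computes three common first-order radiomic features (mean, variance, and histogram entropy) from a hypothetical flat list of ROI pixel intensities, with an arbitrarily chosen 8-bin histogram.

```python
import math

def first_order_features(roi):
    """Compute a few first-order radiomic features from a flat list
    of pixel intensities inside a segmented region of interest."""
    n = len(roi)
    mean = sum(roi) / n
    variance = sum((v - mean) ** 2 for v in roi) / n
    # Shannon entropy over a coarse 8-bin intensity histogram
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / 8 or 1  # guard against a constant-intensity ROI
    counts = [0] * 8
    for v in roi:
        counts[min(int((v - lo) / width), 7)] += 1
    entropy = -sum(c / n * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": variance, "entropy": entropy}

features = first_order_features([10, 12, 11, 50, 52, 49, 90, 88])
```

In practice, texture (e.g., GLCM-based) and shape features are computed alongside such first-order statistics, typically with a dedicated library rather than hand-rolled code.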
Affiliation(s)
- Somphone Siviengphanom
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Sarah J Lewis
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
- Patrick C Brennan
- Discipline of Medical Imaging Sciences, Sydney School of Health Sciences, Faculty of Medicine and Health, University of Sydney, Level 7, Susan Wakil Health Building D18, Sydney, NSW 2006, Australia
|
44
|
Chen X, Zhang K, Abdoli N, Gilley PW, Wang X, Liu H, Zheng B, Qiu Y. Transformers Improve Breast Cancer Diagnosis from Unregistered Multi-View Mammograms. Diagnostics (Basel) 2022; 12:diagnostics12071549. [PMID: 35885455 PMCID: PMC9320758 DOI: 10.3390/diagnostics12071549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/21/2022] [Accepted: 06/24/2022] [Indexed: 11/16/2022] Open
Abstract
Deep convolutional neural networks (CNNs) have been widely used in various medical imaging tasks. However, due to the intrinsic locality of convolution operations, CNNs generally cannot model long-range dependencies well, which are important for accurately identifying or mapping corresponding breast lesion features computed from unregistered multiple mammograms. This motivated us to leverage the architecture of Multi-view Vision Transformers to capture long-range relationships among the multiple mammograms acquired from the same patient in one examination. For this purpose, we employed local transformer blocks to separately learn patch relationships within each of the four mammograms, covering two views (CC/MLO) of the two breasts (right/left). The outputs from the different views and sides were concatenated and fed into global transformer blocks to jointly learn patch relationships across the four images representing the two views of the left and right breasts. To evaluate the proposed model, we retrospectively assembled a dataset of 949 sets of mammograms, comprising 470 malignant cases and 479 normal or benign cases. We trained and evaluated the model using five-fold cross-validation. Without any arduous preprocessing steps (e.g., optimal window cropping, chest wall or pectoral muscle removal, or two-view image registration), our four-image (two-view-two-side) transformer-based model achieves case classification performance with an area under the ROC curve (AUC) of 0.818 ± 0.039, which significantly outperforms the AUC of 0.784 ± 0.016 achieved by state-of-the-art multi-view CNNs (p = 0.009). It also outperforms two one-view-two-side models that achieve AUCs of 0.724 ± 0.013 (CC view) and 0.769 ± 0.036 (MLO view), respectively. The study demonstrates the potential of using transformers to develop high-performing computer-aided diagnosis schemes that combine four mammograms.
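The local-then-global attention scheme described in this abstract can be sketched in miniature. This is not the authors' implementation: it is a toy, single-head self-attention with identity projections, applied first within each of four hypothetical views and then across their concatenated patch tokens.

```python
import math

def attention(tokens):
    """Single-head scaled dot-product self-attention over a list of
    d-dimensional tokens (identity Q/K/V projections for brevity)."""
    d = len(tokens[0])
    scale = math.sqrt(d)
    out = []
    for q in tokens:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / scale for k in tokens]
        m = max(scores)                       # stabilize the softmax
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        out.append([sum(w * k[j] for w, k in zip(weights, tokens)) / z
                    for j in range(d)])
    return out

def four_view_transformer(views):
    """Local blocks model patch relations within each of the four
    mammograms (R-CC, R-MLO, L-CC, L-MLO); a global block then models
    relations across the concatenated token set."""
    local = [attention(v) for v in views]       # per-view patch relations
    merged = [tok for v in local for tok in v]  # concatenate all views
    return attention(merged)                    # cross-view relations

views = [[[1.0, 0.0], [0.0, 1.0]] for _ in range(4)]  # 4 views, 2 tokens each
out = four_view_transformer(views)
```

A production model would add learned projections, multiple heads, feed-forward layers, and residual connections; the point here is only the two-stage (within-view, then cross-view) attention flow.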
Affiliation(s)
- Xuxin Chen
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: (X.C.); (Y.Q.)
- Ke Zhang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Stephenson School of Biomedical Engineering, University of Oklahoma, Norman, OK 73019, USA
- Neman Abdoli
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Patrik W. Gilley
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Correspondence: (X.C.); (Y.Q.)
|
45
|
Smetherman D, Golding L, Moy L, Rubin E. The Economic Impact of AI on Breast Imaging. JOURNAL OF BREAST IMAGING 2022; 4:302-308. [PMID: 38416968 DOI: 10.1093/jbi/wbac012] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2021] [Indexed: 03/01/2024]
Abstract
This article explores the development of computer-aided detection (CAD) and artificial or augmented intelligence (AI) for breast radiology examinations and describes the current applications of AI in breast imaging. Although radiologists in other subspecialties may be less familiar with the use of these technologies in their practices, CAD has been used in breast imaging for more than two decades, as mammography CAD programs have been commercially available in the United States since the late 1990s. Likewise, breast radiologists have seen payment for CAD in mammography and breast MRI evolve over time. With the rapid expansion of AI products in radiology in recent years, many new applications for these technologies in breast imaging have emerged. This article outlines the current state of reimbursement for breast radiology AI algorithms within the traditional fee-for-service model used by Medicare and commercial insurers as well as potential future payment pathways. In addition, the inherent challenges of employing the existing payment framework in the United States to AI programs in radiology are detailed for the reader. This article aims to give breast radiologists a better understanding of how AI will be reimbursed as they seek to further incorporate these emerging technologies into their practices to advance patient care and improve workflow efficiency.
Affiliation(s)
- Dana Smetherman
- Ochsner Health, Department of Radiology, New Orleans, LA, USA
- Lauren Golding
- Triad Radiology Associates, PLLC, Winston-Salem, NC, USA
- Linda Moy
- NYU Langone Health, New York, NY, USA
- Eric Rubin
- Southeast Radiology Limited, Philadelphia, PA, USA
|
46
|
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. SENSORS 2022; 22:s22030807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807] [Citation(s) in RCA: 78] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 12/11/2022]
Abstract
After lung cancer, breast cancer is the second leading cause of cancer death in women. If breast cancer is detected early, mortality can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of convolutional neural network (CNN) models; (ii) a pre-trained DarkNet-53 model is adopted and its output layer is modified to match the classes of the augmented dataset; (iii) the modified model is trained using transfer learning, and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms, reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the selected features are fused using a new probability-based serial approach and classified with machine learning algorithms. Experiments were conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy achieved was 99.1%, outperforming recent comparable techniques.
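Step (v), the probability-based serial fusion, can be illustrated schematically. The function and the 0.5 threshold below are assumptions for illustration, not the paper's RDE/RGW implementation: features surviving a per-set selection-probability threshold are concatenated serially into one fused vector.

```python
def probability_serial_fusion(feats_a, feats_b, prob_a, prob_b, threshold=0.5):
    """Hypothetical sketch of probability-based serial feature fusion:
    keep features whose selection probability meets the threshold in
    each optimizer's output, then fuse the survivors by concatenation."""
    kept_a = [f for f, p in zip(feats_a, prob_a) if p >= threshold]
    kept_b = [f for f, p in zip(feats_b, prob_b) if p >= threshold]
    return kept_a + kept_b  # serial (concatenated) fused feature vector

fused = probability_serial_fusion(
    [0.1, 0.2, 0.3], [0.4, 0.5],   # features from the two selectors
    [0.9, 0.2, 0.8], [0.7, 0.3])   # their assumed selection probabilities
```

The fused vector would then be passed to a conventional classifier (SVM, k-NN, etc.), as in the framework's final step.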
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Correspondence:
|
47
|
Connected-UNets: a deep learning architecture for breast mass segmentation. NPJ Breast Cancer 2021; 7:151. [PMID: 34857755 PMCID: PMC8640011 DOI: 10.1038/s41523-021-00358-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 11/01/2021] [Indexed: 12/19/2022] Open
Abstract
Breast cancer analysis requires radiologists to inspect mammograms for suspicious breast lesions and to identify mass tumors. Artificial intelligence techniques offer automatic breast mass segmentation systems to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) into the two standard UNets to emphasize contextual information within the encoder-decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with additional synthetic data generated by a cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
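The Dice and IoU scores reported in this abstract can be computed from binary masks as follows; this utility is a generic sketch, not code from the paper, and assumes at least one mask is non-empty.

```python
def dice_and_iou(pred, truth):
    """Dice and IoU overlap between two equal-length binary masks (0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)        # assumes a non-empty mask pair
    iou = inter / (p_sum + t_sum - inter)
    return dice, iou

dice, iou = dice_and_iou([1, 1, 0, 0], [1, 0, 1, 0])
```

The two scores are monotonically related (Dice = 2·IoU / (1 + IoU)), so a ranking of segmentation models by one metric matches the ranking by the other on a per-mask basis.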
|
48
|
Keshavamurthy KN, Eickhoff C, Juluru K. Weakly supervised pneumonia localization in chest X-rays using generative adversarial networks. Med Phys 2021; 48:7154-7171. [PMID: 34459001 PMCID: PMC10997001 DOI: 10.1002/mp.15185] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2020] [Revised: 07/12/2021] [Accepted: 07/27/2021] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Automatic localization of pneumonia on chest X-rays (CXRs) is highly desirable both as an interpretive aid to the radiologist and for timely diagnosis of the disease. However, pneumonia's amorphous appearance on CXRs and complexity of normal anatomy in the chest present key challenges that hinder accurate localization. Existing studies in this area are either not optimized to preserve spatial information of abnormality or depend on expensive expert-annotated bounding boxes. We present a novel generative adversarial network (GAN)-based machine learning approach for this problem, which is weakly supervised (does not require any location annotations), was trained to retain spatial information, and can produce pixel-wise abnormality maps highlighting regions of abnormality (as opposed to bounding boxes around abnormality). METHODS Our method is based on the Wasserstein GAN framework and, to the best of our knowledge, the first application of GANs to this problem. Specifically, from an abnormal CXR as input, we generated the corresponding pseudo normal CXR image as output. The pseudo normal CXR is the "hypothetical" normal, if the same abnormal CXR were not to have any abnormalities. We surmise that the difference between the pseudo normal and the abnormal CXR highlights the pixels suspected to have pneumonia and hence is our output abnormality map. We trained our algorithm on an "unpaired" data set of abnormal and normal CXRs and did not require any location annotations such as bounding boxes/segmentations of abnormal regions. Furthermore, we incorporated additional prior knowledge/constraints into the model and showed that they help improve localization performance. We validated the model on a data set consisting of 14 184 CXRs from the Radiological Society of North America pneumonia detection challenge. 
RESULTS We evaluated our methods by comparing the generated abnormality maps with radiologist-annotated bounding boxes using receiver operating characteristic (ROC) analysis, image similarity metrics such as normalized cross-correlation/mutual information, and abnormality detection rate. We also present visual examples of the abnormality maps, covering various scenarios of abnormality occurrence. Results demonstrate the ability to highlight regions of abnormality, with the best method achieving an ROC area under the curve (AUC) of 0.77 and a detection rate of 85%. The GAN tended to perform better as prior knowledge/constraints were incorporated into the model. CONCLUSIONS We presented a novel GAN-based approach for localizing pneumonia on CXRs that (1) does not require expensive hand-annotated location ground truth; and (2) was trained to produce abnormality maps at the pixel level as opposed to bounding boxes. We demonstrated the efficacy of our methods via quantitative and qualitative results.
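The pseudo-normal subtraction idea at the heart of this method can be sketched in a few lines. This toy function (flat pixel lists, max-normalization) is an assumption-laden illustration, not the authors' pipeline, which operates on full images produced by a trained Wasserstein GAN.

```python
def abnormality_map(abnormal, pseudo_normal):
    """Pixel-wise abnormality map: the normalized absolute difference
    between an abnormal CXR and its generated pseudo-normal twin."""
    diff = [abs(a - p) for a, p in zip(abnormal, pseudo_normal)]
    peak = max(diff) or 1  # guard against an all-zero difference
    return [d / peak for d in diff]

amap = abnormality_map([0.2, 0.9, 0.3], [0.2, 0.1, 0.3])
```

The map responds most strongly where the generator "removed" a suspected opacity, i.e., where abnormal and pseudo-normal intensities diverge the most.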
Affiliation(s)
- Krishna Nand Keshavamurthy
- Brown University, Providence, RI 02912, USA
- Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
- Krishna Juluru
- Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY 10065, USA
|
49
|
Gandomkar Z, Siviengphanom S, Ekpo EU, Suleiman M, Taba ST, Li T, Xu D, Evans KK, Lewis SJ, Wolfe JM, Brennan PC. Global processing provides malignancy evidence complementary to the information captured by humans or machines following detailed mammogram inspection. Sci Rep 2021; 11:20122. [PMID: 34635726 PMCID: PMC8505651 DOI: 10.1038/s41598-021-99582-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Accepted: 09/13/2021] [Indexed: 12/15/2022] Open
Abstract
The information captured by the gist signal, which refers to radiologists’ first impression arising from initial global image processing, is poorly understood. We examined whether the gist signal can provide information complementary to the data captured by radiologists (experiment 1) or computer algorithms (experiment 2) based on detailed mammogram inspection. In the first experiment, 19 radiologists assessed a case set twice, once based on a half-second image presentation (i.e., the gist signal) and once in the usual viewing condition. Their performances in the two viewing conditions were compared using repeated-measures correlation (rm-corr). The cancer cases (19 cases × 19 readers) exhibited a non-significant trend with rm-corr = 0.012 (p = 0.82, CI: −0.09, 0.12). For normal cases (41 cases × 19 readers), a weak correlation of rm-corr = 0.238 (p < 0.001, CI: 0.17, 0.30) was found. In the second experiment, we combined the abnormality score from a state-of-the-art deep learning-based tool (DL) with the radiological gist signal using a support vector machine (SVM). To obtain the gist signal, 53 radiologists assessed images based on a half-second image presentation. The SVM performance for each radiologist, and for an average reader whose gist responses were the mean abnormality scores given by all 53 readers to each image, was assessed using leave-one-out cross-validation. For the average reader, the AUCs for gist, DL, and the SVM were 0.76 (CI: 0.62–0.86), 0.79 (CI: 0.63–0.89), and 0.88 (CI: 0.79–0.94), respectively. For all readers with a gist AUC significantly better than chance level, the SVM outperformed DL. The gist signal provided malignancy evidence with no or only weak association with the information captured by humans during normal radiologic reporting, which involves detailed mammogram inspection. Adding the gist signal to a state-of-the-art deep learning-based tool improved its performance for breast cancer detection.
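The AUC values reported in this abstract can be estimated without any curve plotting via the Mann-Whitney formulation; the helper below is a generic sketch, not the study's code, and the score lists are made-up examples.

```python
def auc(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case outscores a randomly chosen negative
    case, with ties counted as one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

combined_auc = auc([0.9, 0.8, 0.6], [0.7, 0.4])  # e.g., fused SVM scores
```

An AUC of 0.5 corresponds to chance-level discrimination, which is the baseline against which the readers' gist AUCs were judged.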
Affiliation(s)
- Ziba Gandomkar
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Somphone Siviengphanom
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Ernest U Ekpo
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Mo'ayyad Suleiman
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Seyedamir Tavakoli Taba
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Tong Li
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Dong Xu
- School of Electrical and Information Engineering, Faculty of Engineering, University of Sydney, Sydney, NSW, 2006, Australia
- Karla K Evans
- Department of Psychology, University of York, York, UK
- Sarah J Lewis
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA
- Brigham and Women's Hospital, Boston, MA, USA
- Patrick C Brennan
- Discipline of Medical Imaging Sciences, Faculty of Medicine and Health, University of Sydney, 512/Block M, Cumberland Campus, Sydney, NSW, 2006, Australia
|
50
|
Laishram R, Rabidas R. WDO optimized detection for mammographic masses and its diagnosis: A unified CAD system. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107620] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|