1. Wang J, Liu G, Zhou C, Cui X, Wang W, Wang J, Huang Y, Jiang J, Wang Z, Tang Z, Zhang A, Cui D. Application of artificial intelligence in cancer diagnosis and tumor nanomedicine. Nanoscale 2024; 16:14213-14246. [PMID: 39021117] [DOI: 10.1039/d4nr01832j]
Abstract
Cancer is a major health concern due to its high incidence and mortality rates. Advances in cancer research, particularly in artificial intelligence (AI) and deep learning, have shown significant progress. The swift evolution of AI in healthcare, especially in tools like computer-aided diagnosis, has the potential to revolutionize early cancer detection. This technology offers improved speed, accuracy, and sensitivity, bringing a transformative impact on cancer diagnosis, treatment, and management. This paper provides a concise overview of the application of artificial intelligence in the realms of medicine and nanomedicine, with a specific emphasis on the significance and challenges associated with cancer diagnosis. It explores the pivotal role of AI in cancer diagnosis, leveraging structured, unstructured, and multimodal fusion data. Additionally, the article delves into the applications of AI in nanomedicine sensors and nano-oncology drugs. The fundamentals of deep learning and convolutional neural networks are clarified, underscoring their relevance to AI-driven cancer diagnosis. A comparative analysis is presented, highlighting the accuracy and efficiency of traditional methods juxtaposed with AI-based approaches. The discussion not only assesses the current state of AI in cancer diagnosis but also delves into the challenges faced by AI in this context. Furthermore, the article envisions the future development direction and potential application of artificial intelligence in cancer diagnosis, offering a hopeful prospect for enhanced cancer detection and improved patient prognosis.
Affiliation(s)
- Junhao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Guan Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Cheng Zhou
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Xinyuan Cui
- Imaging Department of Rui Jin Hospital, Medical School of Shanghai Jiao Tong University, Shanghai, China.
- Wei Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Jiulin Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Yixin Huang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Jinlei Jiang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Zhitao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Zengyi Tang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Amin Zhang
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai, China.
- Daxiang Cui
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- School of Medicine, Henan University, Henan, China.
2. Squires S, Mackenzie A, Evans DG, Howell SJ, Astley SM. Capability and reliability of deep learning models to make density predictions on low-dose mammograms. J Med Imaging (Bellingham) 2024; 11:044506. [PMID: 39114539] [PMCID: PMC11301609] [DOI: 10.1117/1.jmi.11.4.044506]
Abstract
Purpose Breast density is associated with the risk of developing cancer and can be automatically estimated using deep learning models from digital mammograms. Our aim is to evaluate the capacity and reliability of such models to predict density from low-dose mammograms taken to enable risk estimates for younger women. Approach We trained deep learning models on standard-dose and simulated low-dose mammograms. The models were then tested on a mammography dataset with paired standard- and low-dose images. The effect of different factors (including age, density, and dose ratio) on the differences between predictions on standard and low doses is analyzed. Methods to improve performance are assessed, and factors that reduce the model quality are demonstrated. Results We showed that, although many factors have no significant effect on the quality of low-dose density prediction, both density and breast area have an impact. The correlation between density predictions on low- and standard-dose images of breasts with the largest breast area is 0.985 (0.949 to 0.995), whereas that with the smallest is 0.882 (0.697 to 0.961). We also demonstrated that averaging across craniocaudal-mediolateral oblique (CC-MLO) images and across repeatedly trained models can improve predictive performance. Conclusions Low-dose mammography can be used to produce density and risk estimates that are comparable to standard-dose images. Averaging across CC-MLO and model predictions should improve this performance. The model quality is reduced when making predictions on denser and smaller breasts.
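The averaging strategy recommended in the conclusions (across CC-MLO views and across repeatedly trained models) can be sketched as follows; all array names and prediction values are illustrative placeholders, not data from the study.

```python
import numpy as np

# Hypothetical per-breast density predictions (%) from two independently
# trained models, for the CC and MLO views of the same three breasts.
preds = {
    "model_a": {"cc": np.array([12.1, 35.4, 60.2]), "mlo": np.array([13.0, 33.9, 58.7])},
    "model_b": {"cc": np.array([11.5, 36.1, 61.0]), "mlo": np.array([12.4, 34.5, 59.3])},
}

# Average across CC/MLO views within each model, then across models.
per_model = [(m["cc"] + m["mlo"]) / 2 for m in preds.values()]
combined = np.mean(per_model, axis=0)

# Agreement between low-dose and standard-dose estimates can then be
# summarised by a correlation on the paired predictions.
standard_dose = np.array([12.0, 35.0, 60.0])  # illustrative paired values
r = np.corrcoef(combined, standard_dose)[0, 1]
print(round(r, 3))
```

With real data, the correlation would be computed over the full paired low-dose/standard-dose cohort rather than three illustrative values.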
Affiliation(s)
- Steven Squires
- University of Manchester, School of Health Sciences, Division of Imaging, Informatics and Data Sciences, Faculty of Biology, Medicine and Health, Manchester, United Kingdom
- Dafydd Gareth Evans
- University of Manchester, School of Biological Sciences, Division of Evolution, Infection and Genomics, Manchester, Greater Manchester, United Kingdom
- Sacha J. Howell
- University of Manchester, Division of Cancer Sciences, Manchester, United Kingdom
- Susan M. Astley
- University of Manchester, School of Health Sciences, Division of Imaging, Informatics and Data Sciences, Faculty of Biology, Medicine and Health, Manchester, United Kingdom
3. Biroš M, Kvak D, Dandár J, Hrubý R, Janů E, Atakhanova A, Al-antari MA. Enhancing Accuracy in Breast Density Assessment Using Deep Learning: A Multicentric, Multi-Reader Study. Diagnostics (Basel) 2024; 14:1117. [PMID: 38893643] [PMCID: PMC11172127] [DOI: 10.3390/diagnostics14111117]
Abstract
The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's Kappa (κ) of 0.708 (0.562-0.841). The algorithm achieved robust performance that matches and in four cases exceeds that of individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
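As a sketch of the agreement metrics quoted above, Cohen's kappa corrects observed agreement for agreement expected by chance; the class labels below are invented for the example (0=A, 1=B, 2=C, 3=D) and are not the study's data.

```python
from collections import Counter

# Made-up BI-RADS density classes for ten studies: ground truth vs. algorithm.
ground_truth = [0, 1, 1, 2, 3, 1, 0, 2, 1, 1]
algorithm    = [0, 1, 1, 2, 2, 1, 0, 2, 1, 0]

n = len(ground_truth)
# Observed agreement: fraction of cases where the two ratings coincide.
p_o = sum(g == a for g, a in zip(ground_truth, algorithm)) / n

# Expected chance agreement, from each rater's marginal class frequencies.
gt_freq, alg_freq = Counter(ground_truth), Counter(algorithm)
p_e = sum(gt_freq[c] * alg_freq[c] for c in gt_freq) / n ** 2

kappa = (p_o - p_e) / (1 - p_e)
print(p_o, round(kappa, 3))  # → 0.8 0.706
```

Here p_o is also the overall accuracy; a kappa around 0.7 is conventionally read as substantial agreement, consistent with the study's reported value.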
Affiliation(s)
- Marek Biroš
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Daniel Kvak
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Department of Simulation Medicine, Faculty of Medicine, Masaryk University, 625 00 Brno, Czech Republic
- Jakub Dandár
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Robert Hrubý
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Eva Janů
- Department of Radiology, Masaryk Memorial Cancer Institute, 602 00 Brno, Czech Republic
- Anora Atakhanova
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Mugahed A. Al-antari
- Department of Artificial Intelligence and Data Science, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
4. Palomba G, Fernicola A, Corte MD, Capuano M, De Palma GD, Aprea G. Artificial intelligence in screening and diagnosis of surgical diseases: A narrative review. AIMS Public Health 2024; 11:557-576. [PMID: 39027395] [PMCID: PMC11252578] [DOI: 10.3934/publichealth.2024028]
Abstract
Artificial intelligence (AI) is playing an increasing role in several fields of medicine. It is also gaining popularity among surgeons as a valuable screening and diagnostic tool for many conditions such as benign and malignant colorectal, gastric, thyroid, parathyroid, and breast disorders. In the literature, there is no review that groups together the various application domains of AI when it comes to the screening and diagnosis of the main surgical diseases. The aim of this review is to describe the use of AI in these settings. We performed a literature review by searching PubMed, Web of Science, Scopus, and Embase for all studies investigating the role of AI in the surgical setting, published between January 01, 2000, and June 30, 2023. Our focus was on randomized controlled trials (RCTs), meta-analyses, systematic reviews, and observational studies dealing with large cohorts of patients. We then gathered further relevant studies from the reference lists of the selected publications. Based on the studies reviewed, it emerges that AI could strongly enhance screening efficiency, clinical ability, and diagnostic accuracy for several surgical conditions. Some of the future advantages of this technology include implementing, speeding up, and improving the automaticity with which AI recognizes, differentiates, and classifies the various conditions.
Affiliation(s)
- Giuseppe Palomba
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Agostino Fernicola
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Marcello Della Corte
- Azienda Ospedaliera Universitaria San Giovanni di Dio e Ruggi d'Aragona - OO. RR. Scuola Medica Salernitana, Salerno, Italy
- Marianna Capuano
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Giovanni Domenico De Palma
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
- Giovanni Aprea
- Department of Clinical Medicine and Surgery, University of Naples, “Federico II”, Sergio Pansini 5, 80131, Naples, Italy
5. Tandon R, Agrawal S, Rathore NPS, Mishra AK, Jain SK. A systematic review on deep learning-based automated cancer diagnosis models. J Cell Mol Med 2024; 28:e18144. [PMID: 38426930] [PMCID: PMC10906380] [DOI: 10.1111/jcmm.18144]
Abstract
Deep learning is gaining importance due to its wide range of applications. Many researchers have utilized deep learning (DL) models for the automated diagnosis of cancer patients. This paper provides a systematic review of DL models for the automated diagnosis of cancer patients. Initially, various DL models for cancer diagnosis are presented. Five major categories of cancer are considered: breast, lung, liver, brain, and cervical, as these categories have very high incidence and mortality rates. A comparative analysis of different types of DL models for diagnosing cancer at early stages is drawn, considering the latest research articles from 2016 to 2022. After this comprehensive comparative analysis, it is found that most researchers achieved appreciable accuracy by implementing convolutional neural network models, and that these works utilized pretrained models for the automated diagnosis of cancer patients. Various shortcomings of the existing DL-based automated cancer diagnosis models are also presented. Finally, future directions are discussed to facilitate further research on the automated diagnosis of cancer patients.
Affiliation(s)
| | | | | | - Abhinava K. Mishra
- Molecular, Cellular and Developmental Biology DepartmentUniversity of California Santa BarbaraSanta BarbaraCaliforniaUSA
| | | |
6. Tassoker M, Öziç MÜ, Yuce F. Performance evaluation of a deep learning model for automatic detection and localization of idiopathic osteosclerosis on dental panoramic radiographs. Sci Rep 2024; 14:4437. [PMID: 38396289] [PMCID: PMC10891049] [DOI: 10.1038/s41598-024-55109-2]
Abstract
Idiopathic osteosclerosis (IO) lesions are focal radiopacities of unknown etiology observed in the jaws. These radiopacities are incidentally detected on dental panoramic radiographs taken for other reasons. In this study, we investigated the performance of a deep learning model in detecting IO using a small dataset of dental panoramic radiographs with varying contrasts and features. Two radiologists collected 175 IO-diagnosed dental panoramic radiographs from the dental school database. The dataset size is limited due to the rarity of IO, whose incidence in the Turkish population has been reported as 2.7%. To overcome this limitation, data augmentation was performed by horizontally flipping the images, resulting in an augmented dataset of 350 panoramic radiographs. The images were annotated by two radiologists and divided into approximately 70% for training (245 radiographs), 15% for validation (53 radiographs), and 15% for testing (52 radiographs). The study employed the YOLOv5 deep learning model and evaluated the results using precision, recall, F1-score, mAP (mean average precision), and average inference time as metrics. The training and testing processes were conducted on a Google Colab Pro virtual machine. On the test set, the model achieved a precision of 0.981, a recall of 0.929, an F1-score of 0.954, and an average inference time of 25.4 ms. Although IO-diagnosed radiographs constitute a small dataset and exhibit different contrasts and features, the deep learning model was observed to provide high detection speed, accuracy, and localization performance. The automatic identification of IO lesions using artificial intelligence algorithms, with high success rates, can contribute to the clinical workflow of dentists by preventing unnecessary biopsy procedures.
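The horizontal-flip augmentation used above to double the dataset can be sketched minimally in NumPy; a real detection pipeline would also mirror the bounding-box annotations, and the tiny array here merely stands in for a radiograph.

```python
import numpy as np

def augment_with_hflip(images: list[np.ndarray]) -> list[np.ndarray]:
    """Double a dataset by appending a horizontally mirrored copy of
    each image (flip along the width axis)."""
    flipped = [np.fliplr(img) for img in images]
    return images + flipped

# Tiny illustrative "radiograph": a 2x3 grayscale array.
panoramics = [np.array([[1, 2, 3], [4, 5, 6]])]
augmented = augment_with_hflip(panoramics)
print(len(augmented))         # → 2 (dataset size doubles)
print(augmented[1].tolist())  # → [[3, 2, 1], [6, 5, 4]]
```

Horizontal flipping is a natural choice for panoramic radiographs because left-right mirrored jaws remain anatomically plausible, so labels stay valid after the transform.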
Affiliation(s)
- Melek Tassoker
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Necmettin Erbakan University, Bağlarbaşı Street, 42090, Meram, Konya, Turkey.
- Muhammet Üsame Öziç
- Faculty of Technology, Department of Biomedical Engineering, Pamukkale University, Denizli, Turkey
- Fatma Yuce
- Faculty of Dentistry, Department of Oral and Maxillofacial Radiology, Istanbul Okan University, Istanbul, Turkey
7. Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. [PMID: 37914431] [DOI: 10.1016/j.ejmp.2023.103138]
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection has a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is one basic yet efficient test for screening breast cancer. Very few comprehensive surveys have been presented to briefly analyze methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML) and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also mentioned. RESULTS After deliberate investigation, we find most of the works classify breast tumors either as normal-abnormal or malignant-benign rather than classifying them into three classes. Furthermore, DL-based features are more significant than hand-crafted features. However, transfer learning is preferred over others as it yields better performance in small datasets, unlike classical DL techniques. SIGNIFICANCE AND CONCLUSION In this article, we have made an attempt to give recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are mentioned, which will help researchers in further scopes of research in this field.
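The preference for transfer learning on small datasets noted above can be illustrated schematically: freeze a pretrained feature extractor and train only a small classification head. In this sketch a fixed random projection stands in for pretrained CNN features, and the data, dimensions, and learning rate are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": maps a flattened 64-pixel image to 16 ReLU features.
# In real transfer learning these weights come from pretraining, not randomness.
W_backbone = rng.standard_normal((64, 16))
extract = lambda x: np.maximum(x @ W_backbone, 0.0)  # never updated

# Tiny labeled dataset (e.g. benign=0 / malignant=1), illustrative only.
X = rng.standard_normal((20, 64))
y = (X[:, 0] > 0).astype(float)

# Train only the head (logistic regression) by gradient descent.
w, b = np.zeros(16), 0.0
feats = extract(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.5 * feats.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5) == y)
print(acc >= 0.5)
```

Because only 17 parameters are trained, the head can be fitted from a handful of labeled images, which is the practical appeal of transfer learning that the review highlights.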
Affiliation(s)
- Adyasha Sahu
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
- Pradeep Kumar Das
- School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India.
- Sukadev Meher
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
8. Seth I, Bulloch G, Joseph K, Hunter-Smith DJ, Rozen WM. Use of Artificial Intelligence in the Advancement of Breast Surgery and Implications for Breast Reconstruction: A Narrative Review. J Clin Med 2023; 12:5143. [PMID: 37568545] [PMCID: PMC10419723] [DOI: 10.3390/jcm12155143]
Abstract
BACKGROUND Breast reconstruction is a pivotal part of the recuperation process following a mastectomy and aims to restore both the physical aesthetic and emotional well-being of breast cancer survivors. In recent years, artificial intelligence (AI) has emerged as a revolutionary technology across numerous medical disciplines. This narrative review of the current literature and evidence analysis explores the role of AI in the domain of breast reconstruction, outlining its potential to refine surgical procedures, enhance outcomes, and streamline decision making. METHODS A systematic search on Medline (via PubMed), Cochrane Library, Web of Science, Google Scholar, Clinical Trials, and Embase databases from January 1901 to June 2023 was conducted. RESULTS By meticulously evaluating a selection of recent studies and engaging with inherent challenges and prospective trajectories, this review spotlights the promising role AI plays in advancing the techniques of breast reconstruction. However, issues concerning data quality, privacy, and ethical considerations pose hurdles to the seamless integration of AI in the medical field. CONCLUSION The future research agenda comprises dataset standardization, AI algorithm refinement, and the implementation of prospective clinical trials and fosters cross-disciplinary partnerships. The fusion of AI with other emergent technologies like augmented reality and 3D printing could further propel progress in breast surgery.
Affiliation(s)
- Ishith Seth
- Department of Plastic Surgery, Peninsula Health, Melbourne, VIC 3199, Australia
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
- Gabriella Bulloch
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
- Konrad Joseph
- Faculty of Medicine, The University of Wollongong, Wollongong, NSW 2500, Australia
- Warren Matthew Rozen
- Department of Plastic Surgery, Peninsula Health, Melbourne, VIC 3199, Australia
- Faculty of Medicine, The University of Melbourne, Melbourne, VIC 3053, Australia
9. Lauritzen AD, von Euler-Chelpin MC, Lynge E, Vejborg I, Nielsen M, Karssemeijer N, Lillholm M. Assessing Breast Cancer Risk by Combining AI for Lesion Detection and Mammographic Texture. Radiology 2023; 308:e230227. [PMID: 37642571] [DOI: 10.1148/radiol.230227]
Abstract
Background Recent mammography-based risk models can estimate short-term or long-term breast cancer risk, but whether risk assessment may improve by combining these models has not been evaluated. Purpose To determine whether breast cancer risk assessment improves when combining a diagnostic artificial intelligence (AI) system for lesion detection and a mammographic texture model. Materials and Methods This retrospective study included Danish women consecutively screened for breast cancer at mammography from November 2012 to December 2015 who had at least 5 years of follow-up data. Examinations were evaluated for short-term risk using a commercially available diagnostic AI system for lesion detection, which produced a score to indicate the probability of cancer. A mammographic texture model, trained on a separate data set, assessed textures associated with long-term cancer risk. Area under the receiver operating characteristic curve (AUC) analysis was used to evaluate both the individual and combined performance of the AI and texture models for the prediction of future cancers in women with a negative screening mammogram, including those with interval cancers diagnosed within 2 years of screening and long-term cancers diagnosed 2 years or more after screening. AUCs were compared using the DeLong test. Results The Danish screening cohort included 119 650 women (median age, 59 years [IQR, 53-64 years]), of whom 320 developed interval cancers and 1401 developed long-term cancers. The combination model achieved a higher AUC for interval and long-term cancers grouped together than either the diagnostic AI (AUC, 0.73 vs 0.70; P < .001) or the texture risk (AUC, 0.73 vs 0.66; P < .001) models. The 10% of women with the highest combined risk identified by the combination model accounted for 44.1% (141 of 320) of interval cancers and 33.7% (472 of 1401) of long-term cancers. 
Conclusion Combining a diagnostic AI system and mammographic texture model resulted in improved risk assessment for interval cancers and long-term cancers and enabled identification of women at high risk. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Poynton and Slanetz in this issue.
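A minimal sketch of such score fusion and AUC evaluation follows; the synthetic scores and the simple additive combination are illustrative assumptions, not the study's actual models or fusion rule.

```python
import numpy as np

# Synthetic screening cohort: outcome plus two risk scores, each carrying
# its own independent signal on top of noise.
rng = np.random.default_rng(1)
n = 2000
cancer = rng.random(n) < 0.05                 # future-cancer indicator
ai_score = rng.random(n) + 0.4 * cancer       # short-term lesion-detection signal
texture_score = rng.random(n) + 0.4 * cancer  # long-term texture signal

def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case outscores a random negative case."""
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

combined = ai_score + texture_score           # simplest possible fusion
auc_ai, auc_combined = auc(ai_score, cancer), auc(combined, cancer)
print(round(auc_ai, 2), round(auc_combined, 2))
```

When the two scores carry complementary signal, the combined AUC typically exceeds either component's, which is the effect the study quantifies; formal comparison of correlated AUCs requires a method such as the DeLong test rather than a single-sample contrast.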
Affiliation(s)
- Andreas D Lauritzen
- From the Departments of Computer Science (A.D.L., M.N., M.L.) and Public Health (M.C.v.E.C., E.L.), University of Copenhagen, Universitetsparken 1, 2100 Copenhagen Ø, Denmark; Department of Breast Examinations, Gentofte Hospital, Gentofte, Denmark (I.V.); and Department of Radiology and Nuclear Medicine, Radboud University Medical Centre and ScreenPoint Medical, Nijmegen, the Netherlands (N.K.)
- My C von Euler-Chelpin
- Elsebeth Lynge
- Ilse Vejborg
- Mads Nielsen
- Nico Karssemeijer
- Martin Lillholm
10. Siddique M, Liu M, Duong P, Jambawalikar S, Ha R. Deep Learning Approaches with Digital Mammography for Evaluating Breast Cancer Risk, a Narrative Review. Tomography 2023; 9:1110-1119. [PMID: 37368543] [DOI: 10.3390/tomography9030091]
Abstract
Breast cancer remains the leading cause of cancer-related deaths in women worldwide. Current screening regimens and clinical breast cancer risk assessment models use risk factors such as demographics and patient history to guide policy and assess risk. Applications of artificial intelligence (AI) methods such as deep learning (DL) and convolutional neural networks (CNNs) to evaluate individual patient information and imaging have shown promise as personalized risk models. We reviewed the current literature for studies related to deep learning and convolutional neural networks with digital mammography for assessing breast cancer risk. We discussed the literature and examined the ongoing and future applications of deep learning techniques in breast cancer risk modeling.
Affiliation(s)
- Maham Siddique
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Michael Liu
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Phuong Duong
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Sachin Jambawalikar
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
- Richard Ha
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
11. Squires S, Harkness EF, Evans DG, Astley SM. The effect of variable labels on deep learning models trained to predict breast density. Biomed Phys Eng Express 2023; 9:035030. [PMID: 37023727] [PMCID: PMC10114494] [DOI: 10.1088/2057-1976/accaea]
Abstract
Purpose. High breast density is associated with reduced efficacy of mammographic screening and increased risk of developing breast cancer. Accurate and reliable automated density estimates can be used for direct risk prediction and passing density related information to further predictive models. Expert reader assessments of density show a strong relationship to cancer risk but also inter-reader variation. The effect of label variability on model performance is important when considering how to utilise automated methods for both research and clinical purposes. Methods. We utilise subsets of images with density labels from the same 13 readers and 12 reader pairs, and train a deep transfer learning model which is used to assess how label variability affects the mapping from representation to prediction. We then create two end-to-end models: one that is trained on averaged labels across the reader pairs and the second that is trained using individual reader scores, with a novel alteration to the objective function. The combination of these two end-to-end models allows us to investigate the effect of label variability on the model representation formed. Results. We show that the trained mappings from representations to labels are altered considerably by the variability of reader scores. Training on labels with distribution variation removed causes the Spearman rank correlation coefficients to rise from 0.751 ± 0.002 to either 0.815 ± 0.026 when averaging across readers or 0.844 ± 0.002 when averaging across images. However, when we train different models to investigate the representation effect we see little difference, with Spearman rank correlation coefficients of 0.846 ± 0.006 and 0.850 ± 0.006 showing no statistically significant difference in the quality of the model representation with regard to density prediction. Conclusions. We show that the mapping between representation and mammographic density prediction is significantly affected by label variability.
However, the effect of the label variability on the model representation is limited.
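As a pure-Python illustration of the agreement metric used throughout these results, Spearman's rank correlation between two readers' density scores can be computed by rank-transforming both score lists and taking their Pearson correlation (a minimal sketch; ties are not handled):

```python
def rank(values):
    # Assign ranks 1..n by sorted order (no tie handling in this sketch).
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = float(r)
    return ranks

def spearman(x, y):
    # Spearman's rho is the Pearson correlation of the rank-transformed data.
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Perfectly monotonic score pairs give rho = 1.0 regardless of scale, which is why the metric suits ordinal density labels.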
12
Rezaei SR, Ahmadi A. A hierarchical GAN method with ensemble CNN for accurate nodule detection. Int J Comput Assist Radiol Surg 2023; 18:695-705. [PMID: 36522545] [PMCID: PMC9754998] [DOI: 10.1007/s11548-022-02807-9]
Abstract
PURPOSE Lung cancer is one of the deadliest diseases, and early detection is one of the major survival factors. However, early detection is a challenging task due to the unclear structure, shape, and size of nodules. Hence, radiologists need automated tools to make accurate decisions.
METHODS This paper develops a new approach based on the generative adversarial network (GAN) architecture for nodule detection, proposing a two-step GAN model comprising lung segmentation and nodule localization. The first generator is a U-Net network, while the second utilizes a Mask R-CNN. Lung segmentation involves a two-class classification of the pixels in each image, categorizing lung pixels in one class and the rest in the other. The classifier becomes imbalanced due to the numerous non-lung pixels, decreasing model performance; this problem is resolved by training the generator with the focal loss function. Moreover, a new loss function is developed for the nodule localization generator to enhance diagnosis quality. The discriminators in the GANs are implemented as ensembles of convolutional neural networks (ECNNs), using multiple CNNs and combining their outputs to make a final decision.
RESULTS Several experiments are designed to assess the model on the well-known LUNA dataset. The experiments indicate that the proposed model can reduce the error of the state-of-the-art models on the IoU criterion by about 35% and 16% for lung segmentation and nodule localization, respectively.
CONCLUSION Unlike recent studies, the proposed method considers two loss functions for the generators, further promoting the goal achievements. Moreover, the discriminator networks are regarded as ECNNs, generating rich features for decisions.
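The focal loss used here to counter the lung/non-lung pixel imbalance down-weights well-classified pixels so that the rare lung-boundary pixels dominate training. A minimal single-pixel binary sketch (the gamma and alpha defaults are the values commonly used in the focal-loss literature, not necessarily this paper's):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss for one pixel.
    p: predicted probability of the positive (lung) class; y: 0/1 label.
    The (1 - p_t)^gamma factor shrinks the loss of easy, confident pixels."""
    p_t = p if y == 1 else 1.0 - p          # probability assigned to the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balance weight
    p_t = min(max(p_t, eps), 1.0 - eps)     # numerical safety
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 1 this reduces to ordinary cross-entropy; increasing gamma progressively mutes the contribution of already well-classified pixels.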
Affiliation(s)
- Seyed Reza Rezaei
- Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran
- Abbas Ahmadi
- Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran.
13
Acciavatti RJ, Lee SH, Reig B, Moy L, Conant EF, Kontos D, Moon WK. Beyond Breast Density: Risk Measures for Breast Cancer in Multiple Imaging Modalities. Radiology 2023; 306:e222575. [PMID: 36749212] [PMCID: PMC9968778] [DOI: 10.1148/radiol.222575]
Abstract
Breast density is an independent risk factor for breast cancer. In digital mammography and digital breast tomosynthesis, breast density is assessed visually using the four-category scale developed by the American College of Radiology Breast Imaging Reporting and Data System (5th edition as of November 2022). Epidemiologically based risk models, such as the Tyrer-Cuzick model (version 8), demonstrate superior modeling performance when mammographic density is incorporated. Beyond just density, a separate mammographic measure of breast cancer risk is parenchymal textural complexity. With advancements in radiomics and deep learning, mammographic textural patterns can be assessed quantitatively and incorporated into risk models. Other supplemental screening modalities, such as breast US and MRI, offer independent risk measures complementary to those derived from mammography. Breast US allows the two components of fibroglandular tissue (stromal and glandular) to be visualized separately in a manner that is not possible with mammography. A higher glandular component at screening breast US is associated with higher risk. With MRI, a higher background parenchymal enhancement of the fibroglandular tissue has also emerged as an imaging marker for risk assessment. Imaging markers observed at mammography, US, and MRI are powerful tools in refining breast cancer risk prediction, beyond mammographic density alone.
Affiliation(s)
- Beatriu Reig
- From the Department of Radiology, University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (R.J.A., E.F.C., D.K.); Department of Radiology, Seoul National University Hospital, Seoul, South Korea (S.H.L., W.K.M.); and Department of Radiology, NYU Langone Health, New York, NY (B.R., L.M.)
- Linda Moy
- From the Department of Radiology, University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (R.J.A., E.F.C., D.K.); Department of Radiology, Seoul National University Hospital, Seoul, South Korea (S.H.L., W.K.M.); and Department of Radiology, NYU Langone Health, New York, NY (B.R., L.M.)
- Emily F. Conant
- From the Department of Radiology, University of Pennsylvania, 3400 Spruce St, Philadelphia, PA 19104 (R.J.A., E.F.C., D.K.); Department of Radiology, Seoul National University Hospital, Seoul, South Korea (S.H.L., W.K.M.); and Department of Radiology, NYU Langone Health, New York, NY (B.R., L.M.)
14
Edmonds CE, O'Brien SR, Conant EF. Mammographic Breast Density: Current Assessment Methods, Clinical Implications, and Future Directions. Semin Ultrasound CT MR 2023; 44:35-45. [PMID: 36792272] [DOI: 10.1053/j.sult.2022.11.001]
Abstract
Mammographic breast density is widely accepted as an independent risk factor for the development of breast cancer. In addition, because dense breast tissue may mask breast malignancies, breast density is inversely related to the sensitivity of screening mammography. Given the risks associated with breast density, as well as ongoing efforts to stratify individual risk and personalize breast cancer screening and prevention, numerous studies have sought to better understand the factors that impact breast density, and to develop and implement reproducible, quantitative methods to assess mammographic density. Breast density assessments have been incorporated into risk assessment models to improve risk stratification. Recently, novel techniques for analyzing mammographic parenchymal complexity, or texture, have been explored as potential means of refining mammographic tissue-based risk assessment beyond breast density.
Affiliation(s)
- Christine E Edmonds
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA.
- Sophia R O'Brien
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA
15
Qureshi I, Yan J, Abbas Q, Shaheed K, Riaz AB, Wahid A, Khan MWJ, Szczuko P. Medical image segmentation using deep semantic-based methods: A review of techniques, applications and emerging trends. INFORMATION FUSION 2023. [DOI: 10.1016/j.inffus.2022.09.031]
16
Vedantham S, Shazeeb MS, Chiang A, Vijayaraghavan GR. Artificial Intelligence in Breast X-Ray Imaging. Semin Ultrasound CT MR 2023; 44:2-7. [PMID: 36792270] [PMCID: PMC9932302] [DOI: 10.1053/j.sult.2022.12.002]
Abstract
This topical review focuses on the clinical breast x-ray imaging applications of the rapidly evolving field of artificial intelligence (AI). The range of AI applications is broad. AI can be used for breast cancer risk estimation, allowing the screening interval and protocol to be tailored to the individual woman and screening exams to be triaged. It can also aid detection and diagnosis, improving sensitivity and specificity, and reduce radiologists' reading time. AI can further serve as a potential second 'reader' during screening interpretation. During the last decade, numerous studies have shown the potential of AI-assisted interpretation of mammography and, to a lesser extent, digital breast tomosynthesis; however, most of these studies are retrospective in nature. Prospective clinical studies are needed to evaluate these technologies and better understand their real-world efficacy. Further, there are ethical, medicolegal, and liability concerns that need to be considered prior to the routine use of AI in the breast imaging clinic.
Affiliation(s)
- Alan Chiang
- Department of Medical Imaging, University of Arizona, Tucson, AZ
17
Sundell VM, Mäkelä T, Vitikainen AM, Kaasalainen T. Convolutional neural network-based phantom image scoring for mammography quality control. BMC Med Imaging 2022; 22:216. [PMID: 36476319] [PMCID: PMC9727908] [DOI: 10.1186/s12880-022-00944-w]
Abstract
BACKGROUND Visual evaluation of phantom images is an important but time-consuming part of mammography quality control (QC). Consistent scoring of phantom images over a device's lifetime is highly desirable. Recently, convolutional neural networks (CNNs) have been applied to a wide range of image classification problems with high accuracy. The purpose of this study was to automate the mammography QC phantom scoring task by training CNN models to mimic a human reviewer.
METHODS Eight CNN variations consisting of three to ten convolutional layers were trained to detect targets (fibres, microcalcifications, and masses) in American College of Radiology (ACR) accreditation phantom images, and the results were compared with human scoring. Regular and artificially degraded/improved QC phantom images from eight mammography devices were visually evaluated by one reviewer and used to train the CNN models. A separate test set consisted of daily QC images from the eight devices and separately acquired images with varying dose levels. These were scored by four reviewers and considered the ground truth for CNN performance testing.
RESULTS Although the hyper-parameter search space was limited, an optimal network depth was identified, after which additional layers decreased accuracy. The highest scoring accuracy (95%) was achieved with the CNN consisting of six convolutional layers. The largest deviation between the CNN and the reviewers was found at the lowest dose levels. No significant difference emerged between the visual reviews and CNN results except in the case of the smallest masses.
CONCLUSION A CNN-based automatic mammography QC phantom scoring system can score phantom images in good agreement with human reviewers, and can therefore be of benefit in mammography QC.
Affiliation(s)
- Veli-Matti Sundell
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Teemu Mäkelä
- Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki, Finland; HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Anne-Mari Vitikainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
- Touko Kaasalainen
- HUS Diagnostic Center, Radiology, University of Helsinki and Helsinki University Hospital, P.O. Box 340, Haartmaninkatu 4, 00290 Helsinki, Finland
18
Classification of Multiclass Histopathological Breast Images Using Residual Deep Learning. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:9086060. [PMID: 36262625] [PMCID: PMC9576372] [DOI: 10.1155/2022/9086060]
Abstract
Histopathological investigation demands extensive clinical experience and time from pathologists. AI may play a significant role in supporting pathologists, enabling more accurate and efficient histopathological diagnoses. Breast cancer is one of the most diagnosed cancers in women worldwide and may be detected and diagnosed using imaging methods such as histopathological images. Since various tissues make up the breast, there is a wide range of textural intensity, making abnormality detection difficult. As a result, there is an urgent need for improved computer-aided diagnosis (CAD) systems that can serve as a second opinion for radiologists when they use medical images. To overcome the need for a large number of labeled images when training deep learning models for breast cancer histopathology image classification, a self-training learning method employing a deep neural network with residual learning is proposed. The suggested model is built from scratch and trained.
19
Zhang Z, Conant EF, Zuckerman S. Opinions on the Assessment of Breast Density Among Members of the Society of Breast Imaging. JOURNAL OF BREAST IMAGING 2022; 4:480-487. [PMID: 38416952] [DOI: 10.1093/jbi/wbac047]
Abstract
OBJECTIVE Dense breast tissue decreases the sensitivity and specificity of mammography and is associated with an increased risk of breast cancer. We conducted a survey to assess the opinions of Society of Breast Imaging (SBI) members regarding density assessment.
METHODS An online survey was sent to SBI members twice in September 2020. The survey included active members who were practicing radiologists, residents, and fellows. Mammograms from three patients were presented for density assessment based on routine clinical practice and the BI-RADS fourth and fifth editions. Dense breasts were defined as heterogeneously or extremely dense. Frequencies were calculated for each survey response. Pearson's correlation coefficient was used to evaluate the correlation of density assessments under the different definitions.
RESULTS The survey response rate was 12.4% (357/2875). For density assessments, the Pearson correlation coefficients between routine clinical practice and the BI-RADS fourth edition were 0.05, 0.43, and 0.12 for patients 1, 2, and 3, respectively; these increased to 0.65, 0.65, and 0.66 between routine clinical practice and the BI-RADS fifth edition. For future density grading, 79.0% (282/357) of respondents thought it should reflect both the potential for masking and overall dense tissue for risk assessment. Additionally, 47.1% (168/357) of respondents thought quantitative methods would be of use.
CONCLUSION Density assessment varied between routine clinical practice and the BI-RADS fourth and fifth editions. Most breast radiologists agreed that density assessment should capture both masking and overall density. Moreover, almost half of respondents believed computer- or artificial intelligence-assisted quantitative methods may help refine density assessment.
Affiliation(s)
- Zi Zhang
- Einstein Healthcare Network of Jefferson Health, Department of Radiology, Philadelphia, PA, USA
- Emily F Conant
- Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA, USA
- Samantha Zuckerman
- Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA, USA
20
Kim SH, Kim J, Yang S, Oh SH, Lee SP, Yang HJ, Kim TI, Yi WJ. Automatic and quantitative measurement of alveolar bone level in OCT images using deep learning. BIOMEDICAL OPTICS EXPRESS 2022; 13:5468-5482. [PMID: 36425614] [PMCID: PMC9664875] [DOI: 10.1364/boe.468212]
Abstract
We propose a method to automatically segment the periodontal structures of the tooth enamel and the alveolar bone using convolutional neural network (CNN) and to measure quantitatively and automatically the alveolar bone level (ABL) by detecting the cemento-enamel junction and the alveolar bone crest in optical coherence tomography (OCT) images. The tooth enamel and the alveolar bone regions were automatically segmented using U-Net, Dense-UNet, and U2-Net, and the ABL was quantitatively measured as the distance between the cemento-enamel junction and the alveolar bone crest using image processing. The mean distance difference (MDD) measured by our suggested method ranged from 0.19 to 0.22 mm for the alveolar bone crest (ABC) and from 0.18 to 0.32 mm for the cemento-enamel junction (CEJ). All CNN models showed the mean absolute error (MAE) of less than 0.25 mm in the x and y coordinates and greater than 90% successful detection rate (SDR) at 0.5 mm for both the ABC and the CEJ. The CNN models showed high segmentation accuracies in the tooth enamel and the alveolar bone regions, and the ABL measurements at the incisors by detected results from CNN predictions demonstrated high correlation and reliability with the ground truth in OCT images.
Affiliation(s)
- Sul-Hee Kim
- Department of Periodontology, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- These authors contributed equally as the first author
- Jin Kim
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- These authors contributed equally as the first author
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Republic of Korea
- Sung-Hye Oh
- Department of Periodontology, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Seung-Pyo Lee
- Department of Oral Anatomy and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Republic of Korea
- Hoon Joo Yang
- Department of Oral and Maxillofacial Surgery and Dental Research Institute, School of Dentistry, Seoul National University, Seoul 03080, Republic of Korea
- Tae-Il Kim
- Department of Periodontology, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Department of Periodontology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
- Won-Jin Yi
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, 08826, Republic of Korea
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, 03080, Republic of Korea
21
Syed AH, Khan T. Evolution of research trends in artificial intelligence for breast cancer diagnosis and prognosis over the past two decades: A bibliometric analysis. Front Oncol 2022; 12:854927. [PMID: 36267967] [PMCID: PMC9578338] [DOI: 10.3389/fonc.2022.854927]
Abstract
Objective In recent years, the concurrent application of Artificial Intelligence (AI) has improved the diagnostic performance of breast cancer screening. In this context, the present study provides a comprehensive overview of the evolution of AI research for breast cancer diagnosis and prognosis using bibliometric analysis.
Methodology Relevant peer-reviewed research articles published from 2000 to 2021 were downloaded from the Scopus and Web of Science (WOS) databases and then quantitatively analyzed and visualized using Bibliometrix (R package). Finally, open challenge areas were identified for future research work.
Results The study revealed that the number of published studies on AI for breast cancer detection and survival prediction increased from 12 to 546 between 2000 and 2021. The United States of America (USA), China, and India are the most productive publication-wise in this field. Furthermore, the USA leads in total citations, whereas Hungary and the Netherlands lead in average citations per year. Wang J is the most productive author, and Zhan J is the most relevant author in this field. Stanford University in the USA is the most relevant affiliation by number of published articles. The top 10 most relevant sources are Q1 journals, with PLOS ONE and Computers in Biology and Medicine the leading journals in this field. The most trending topics related to the study, transfer learning and deep learning, were identified.
Conclusion The present findings provide insight and research directions for policymakers and academic researchers regarding future collaboration and research on AI for breast cancer patients.
Affiliation(s)
- Asif Hassan Syed
- Department of Computer Science, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
- Tabrej Khan
- Department of Information Systems, Faculty of Computing and Information Technology Rabigh (FCITR), King Abdulaziz University, Jeddah, Saudi Arabia
22
Ngnamsie Njimbouom S, Lee K, Kim JD. MMDCP: Multi-Modal Dental Caries Prediction for Decision Support System Using Deep Learning. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:10928. [PMID: 36078635] [PMCID: PMC9518085] [DOI: 10.3390/ijerph191710928]
Abstract
In recent years, healthcare has gained unprecedented attention from researchers in the field of human health science and technology. Oral health, a subdomain of healthcare described as very complex, is threatened by diseases such as dental caries, gum disease, and oral cancer. Identification mechanisms that protect the population from these diseases are therefore critical. The large amount of online data allows scholars to perform tremendous research on health conditions, specifically oral health. Despite the high-performing dental consultation tools available in current healthcare, computer-based technology has shown the ability to complete some tasks in less time and at lower cost than similar healthcare tools performing the same type of work. Machine learning has displayed a wide variety of advantages in oral healthcare, such as predicting dental caries in the population. Compared to the standard dental caries prediction previously proposed, this work emphasizes the importance of using multiple data sources, referred to as multi-modality, to extract more features and obtain accurate performance. The proposed prediction model constructed using multi-modal data demonstrated promising performance, with an accuracy of 90%, F1-score of 89%, recall of 90%, and precision of 89%.
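The reported accuracy, F1-score, recall, and precision all derive from the binary confusion-matrix counts; a minimal sketch of how they are computed:

```python
def classification_metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts:
    # tp/fp = true/false positives, fn/tn = false/true negatives.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

For example, 8 true positives, 2 false positives, 2 false negatives, and 8 true negatives give 0.8 for all four metrics.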
Affiliation(s)
- Kwonwoo Lee
- Department of Computer and Electronics Convergence Engineering, Sun Moon University, Asan 31460, Korea
- Jeong-Dong Kim
- Department of Computer and Electronics Convergence Engineering, Sun Moon University, Asan 31460, Korea
- Genome-Based BioIT Convergence Institute, Sun Moon University, Asan 31460, Korea
23
Connected-SegNets: A Deep Learning Model for Breast Tumor Segmentation from X-ray Images. Cancers (Basel) 2022; 14:cancers14164030. [PMID: 36011022] [PMCID: PMC9406420] [DOI: 10.3390/cancers14164030]
Abstract
Inspired by Connected-UNets, this study proposes a deep learning model, called Connected-SegNets, for breast tumor segmentation from X-ray images. In the proposed model, two SegNet architectures are connected with skip connections between their layers. Moreover, the cross-entropy loss function of the original SegNet has been replaced by the intersection over union (IoU) loss function in order to make the proposed model more robust against noise during the training process. As part of data preprocessing, a histogram equalization technique, called contrast-limited adaptive histogram equalization (CLAHE), is applied to all datasets to enhance the compressed regions and smooth the distribution of the pixels. Additionally, two image augmentation methods, namely rotation and flipping, are used to increase the amount of training data and to prevent overfitting. The proposed model has been evaluated on two publicly available datasets, specifically INbreast and the curated breast imaging subset of the digital database for screening mammography (CBIS-DDSM). The proposed model has also been evaluated using a private dataset obtained from Cheng Hsin General Hospital in Taiwan. The experimental results show that the proposed Connected-SegNets model outperforms the state-of-the-art methods in terms of Dice score and IoU score. The proposed Connected-SegNets produces a maximum Dice score of 96.34% on the INbreast dataset, 92.86% on the CBIS-DDSM dataset, and 92.25% on the private dataset. Furthermore, the experimental results show that the proposed model achieves the highest IoU score of 91.21%, 87.34%, and 83.71% on INbreast, CBIS-DDSM, and the private dataset, respectively.
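The IoU loss that replaces SegNet's cross-entropy can be sketched as a differentiable "soft" IoU over predicted probabilities (a common formulation, not necessarily the exact loss used in the paper):

```python
def soft_iou_loss(pred, target, eps=1e-7):
    """1 - soft IoU between predicted probabilities and a binary mask.
    pred: flat sequence of per-pixel probabilities in [0, 1];
    target: flat sequence of 0/1 labels of the same length.
    Products replace set intersection so the loss stays differentiable."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(p + t - p * t for p, t in zip(pred, target))
    return 1.0 - inter / (union + eps)
```

Unlike per-pixel cross-entropy, this loss scores the overlap of the whole predicted region, which makes it less sensitive to the overwhelming number of easy background pixels.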
24
A survey on machine learning in array databases. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03979-2]
Abstract
This paper provides an in-depth survey of the integration of machine learning and array databases. First, machine learning support in modern database management systems is introduced. A number of different approaches are reviewed, from straightforward implementations of linear algebra operations in SQL to the machine learning capabilities of specialized database managers designed to process specific types of data. The paper then covers the database features already implemented in current machine learning systems. Features such as rewriting, compression, and caching allow users to implement more efficient machine learning applications. The underlying linear algebra computations in some of the most widely used machine learning algorithms are studied in order to determine which linear algebra operations should be efficiently implemented by array databases. An exhaustive overview of array data and relevant array database managers is also provided. Those database features that have proven especially important for efficient execution of machine learning algorithms are analyzed in detail for each relevant array database management system. Finally, the current state of array database capabilities for machine learning is shown through two example implementations, in Rasdaman and SciDB.
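As an illustration of the "linear algebra in SQL" approach the survey opens with, a matrix product can be expressed as a join-and-aggregate over coordinate-format tables. The sketch below uses Python's built-in sqlite3 purely for illustration; array database managers such as Rasdaman and SciDB provide native array operators instead:

```python
import sqlite3

# Matrices stored in coordinate (i, j, value) form, as a relational or
# sparse-array database would hold them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (i INTEGER, j INTEGER, v REAL);
    CREATE TABLE b (i INTEGER, j INTEGER, v REAL);
""")
# A = [[1, 2], [3, 4]], B = [[5, 6], [7, 8]]
conn.executemany("INSERT INTO a VALUES (?, ?, ?)",
                 [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)])
conn.executemany("INSERT INTO b VALUES (?, ?, ?)",
                 [(0, 0, 5), (0, 1, 6), (1, 0, 7), (1, 1, 8)])

# C = A x B: join on the shared inner index, aggregate per output cell.
rows = conn.execute("""
    SELECT a.i, b.j, SUM(a.v * b.v)
    FROM a JOIN b ON a.j = b.i
    GROUP BY a.i, b.j
    ORDER BY a.i, b.j
""").fetchall()
print(rows)
```

The join matches every `a[i, k]` with `b[k, j]`, and the `SUM` over the group `(i, j)` is exactly the dot product for cell `C[i, j]`.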
25
Vicini S, Bortolotto C, Rengo M, Ballerini D, Bellini D, Carbone I, Preda L, Laghi A, Coppola F, Faggioni L. A narrative review on current imaging applications of artificial intelligence and radiomics in oncology: focus on the three most common cancers. Radiol Med 2022; 127:819-836. [DOI: 10.1007/s11547-022-01512-6]
26
din NMU, Dar RA, Rasool M, Assad A. Breast cancer detection using deep learning: Datasets, methods, and challenges ahead. Comput Biol Med 2022; 149:106073. [DOI: 10.1016/j.compbiomed.2022.106073]
27
Larroza A, Pérez-Benito FJ, Perez-Cortes JC, Román M, Pollán M, Pérez-Gómez B, Salas-Trejo D, Casals M, Llobet R. Breast Dense Tissue Segmentation with Noisy Labels: A Hybrid Threshold-Based and Mask-Based Approach. Diagnostics (Basel) 2022; 12:diagnostics12081822. [PMID: 36010173] [PMCID: PMC9406546] [DOI: 10.3390/diagnostics12081822]
Abstract
Breast density assessed from digital mammograms is a known biomarker related to a higher risk of developing breast cancer. Supervised learning algorithms have been implemented to automate this assessment. However, the performance of these algorithms depends on the quality of the ground-truth information, which expert readers usually provide. These expert labels are noisy approximations to the ground truth, as there is both intra- and inter-observer variability among them. Thus, it is crucial to provide a reliable method to measure breast density from mammograms. This paper presents a fully automated method based on deep learning to estimate breast density, including breast detection, pectoral muscle exclusion, and dense tissue segmentation. We propose CM-YNet, a novel confusion-matrix-based YNet model, for the segmentation step. This architecture includes networks to model each radiologist's noisy labels and outputs the estimated ground-truth segmentation as well as two parameters that allow interaction with a threshold-based labeling tool. We performed a multi-center study involving 1785 women whose "for presentation" mammograms were obtained from 11 different medical facilities. A total of 2496 mammograms were used as the training corpus, and 844 formed the testing corpus. Additionally, we included a totally independent dataset from a different center, composed of 381 women with one image per patient. Each mammogram was labeled independently by two expert radiologists using a threshold-based tool. The CM-YNet model achieved the highest DICE score averaged over both test datasets (0.82±0.14) when compared to the closest dense-tissue segmentation from either radiologist. The level of concordance between the two radiologists corresponded to a DICE score of 0.76±0.17. The automatic breast density estimator based on deep learning thus exhibited higher performance than two experienced radiologists.
This suggests that modeling each radiologist's labels allows for better estimation of the unknown ground-truth segmentation. An advantage of the proposed model is that it also provides the threshold parameters that enable user interaction with a threshold-based tool.
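The DICE scores reported above measure the overlap between a predicted dense-tissue mask and a reference mask. As a minimal illustration of the metric itself (not of the CM-YNet model), the coefficient can be computed from two binary masks as follows; the toy arrays are invented for demonstration:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total

# Toy 4x4 masks: 3 overlapping pixels, 4 predicted, 4 true -> DICE = 2*3/8 = 0.75
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0]])
print(round(dice_score(pred, truth), 2))  # 0.75
```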
Affiliation(s)
- Andrés Larroza
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain; (F.J.P.-B.); (J.-C.P.-C.); (R.L.)
- Correspondence:
- Francisco Javier Pérez-Benito
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain; (F.J.P.-B.); (J.-C.P.-C.); (R.L.)
- Juan-Carlos Perez-Cortes
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain; (F.J.P.-B.); (J.-C.P.-C.); (R.L.)
- Marta Román
- Department of Epidemiology and Evaluation, IMIM (Hospital del Mar Medical Research Institute), Passeig Marítim 25–29, 08003 Barcelona, Spain;
- Marina Pollán
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain; (M.P.); (B.P.-G.)
- Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública—CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Beatriz Pérez-Gómez
- National Center for Epidemiology, Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain; (M.P.); (B.P.-G.)
- Consortium for Biomedical Research in Epidemiology and Public Health (CIBER en Epidemiología y Salud Pública—CIBERESP), Carlos III Institute of Health, Monforte de Lemos, 5, 28029 Madrid, Spain
- Dolores Salas-Trejo
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, 46022 València, Spain; (D.S.-T.); (M.C.)
- Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, 46020 València, Spain
- María Casals
- Valencian Breast Cancer Screening Program, General Directorate of Public Health, 46022 València, Spain; (D.S.-T.); (M.C.)
- Centro Superior de Investigación en Salud Pública, CSISP, FISABIO, 46020 València, Spain
- Rafael Llobet
- Instituto Tecnológico de la Informática, Universitat Politècnica de València, Camino de Vera, 46022 València, Spain; (F.J.P.-B.); (J.-C.P.-C.); (R.L.)
28
Area-based breast percentage density estimation in mammograms using weight-adaptive multitask learning. Sci Rep 2022; 12:12060. [PMID: 35835933 PMCID: PMC9283472 DOI: 10.1038/s41598-022-16141-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 07/05/2022] [Indexed: 12/02/2022] Open
Abstract
Breast density, which is a measure of the relative amount of fibroglandular tissue within the breast area, is one of the most important breast cancer risk factors. Accurate segmentation of the fibroglandular tissue and the breast area is crucial for computing breast density. Semiautomatic and fully automatic computer-aided tools have been developed to estimate the percentage of breast density in mammograms. However, the available approaches are usually limited to specific mammogram views and are inadequate for complete delineation of the pectoral muscle. These tools also perform poorly under data variability and often require an experienced radiologist to adjust the segmentation threshold for fibroglandular tissue within the breast area. This study proposes a new deep learning architecture that automatically estimates the area-based breast percentage density from mammograms using a weight-adaptive multitask learning approach. The proposed approach simultaneously segments the breast and dense tissues and further estimates the breast percentage density. We evaluate the performance of the proposed model in both segmentation and density estimation on an independent evaluation set of 7500 craniocaudal and mediolateral oblique-view mammograms from Kuopio University Hospital, Finland. The proposed multitask segmentation approach achieves average relative improvements of 2.88% and 9.78% in F-score over the multitask U-net and a fully convolutional neural network, respectively. The estimated breast density values from our approach correlate strongly with radiologists' assessments, with a Pearson's correlation of r = 0.90 (95% confidence interval [0.89, 0.91]). We conclude that our approach greatly improves the segmentation accuracy of the breast area and dense tissues; thus, it can play a vital role in accurately computing breast density. Our density estimation model considerably reduces the time and effort radiologists need to estimate density values from mammograms and therefore decreases inter- and intra-reader variability.
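The agreement with radiologists reported here is a Pearson correlation between two sets of paired density values. As a sketch of how such a correlation is computed (the density numbers below are invented, not from the study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between paired measurements."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical percentage-density estimates for four mammograms
model_density = [10.0, 20.0, 30.0, 40.0]
radiologist_density = [12.0, 19.0, 33.0, 41.0]
print(round(pearson_r(model_density, radiologist_density), 2))  # 0.99
```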
29
Nourse R, Cartledge S, Tegegne T, Gurrin C, Maddison R. Now you see it! Using wearable cameras to gain insights into the lived experience of cardiovascular conditions. Eur J Cardiovasc Nurs 2022; 21:750-755. [PMID: 35714119 DOI: 10.1093/eurjcn/zvac053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Revised: 05/27/2022] [Accepted: 06/01/2022] [Indexed: 11/14/2022]
Abstract
Wearable cameras offer an innovative way to discover new insights into the lived experience of people with cardiovascular conditions. Wearable cameras can be used alone or supplement more traditional research methods, such as interviews and participant observations. This paper provides an overview of the benefits of using wearable cameras for data collection and outlines some key considerations for researchers and clinicians interested in this method. We provide a case study describing a study design using wearable cameras and how the data were used.
Affiliation(s)
- Rebecca Nourse
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
- Susie Cartledge
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia.,School of Public Health and Preventive Medicine, Monash University, Melbourne, Australia
- Teketo Tegegne
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
- Cathal Gurrin
- School of Computing, Dublin City University, Dublin, Ireland
- Ralph Maddison
- Institute for Physical Activity and Nutrition, Deakin University, 221 Burwood Highway, Burwood, VIC 3125, Australia
30
Lung’s Segmentation Using Context-Aware Regressive Conditional GAN. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12125768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
After COVID-19 pneumonia was declared a pandemic, researchers promptly advanced to seek solutions for patients fighting this fatal disease. Computed tomography (CT) scans offer valuable insight into how COVID-19 infection affects the lungs. Analysis of CT scans is very significant, especially when physicians are striving for quick solutions. This study successfully segmented lung infection due to COVID-19 and provided physicians with a quantitative analysis of the condition. COVID-19 lesions often occur near and over parenchyma walls, which are denser and exhibit lower contrast than the tissues outside the parenchyma. We applied adaptive Wallis and Gaussian filters alternately to regulate the outlining of the lungs and of lesions near the parenchyma. We propose a context-aware conditional generative adversarial network (CGAN) with gradient penalty and spectral normalization for automatic lung and lesion segmentation. The proposed CGAN captures higher-order statistics compared to traditional deep-learning models. It produced promising results for lung segmentation, with an accuracy of 99.87%, a DSC of 96.77%, and an AJC of 95.59%. Similarly, the CGAN showed outstanding results for COVID-19 lesion segmentation, with an accuracy of 99.91%, a DSC of 92.91%, and an AJC of 92.91%. Additionally, the suggested network attained sensitivities of 100%, 81.02%, 76.45%, and 99.01% for critical, severe, moderate, and mild infection severity levels, respectively. The proposed model outperformed state-of-the-art techniques for COVID-19 segmentation and detection.
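This abstract reports both DSC (Dice) and AJC (Jaccard) values. For a single pair of binary masks the two overlap measures are algebraically related, which is a useful sanity check when reading such tables; note that values averaged over many cases, as here, need not satisfy the identity exactly:

```python
def jaccard_from_dice(d: float) -> float:
    """For one pair of masks: J = |A ∩ B| / |A ∪ B| = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j: float) -> float:
    """Inverse relation: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

# A Dice of 0.50 corresponds to a Jaccard of 1/3
print(round(jaccard_from_dice(0.5), 4))  # 0.3333
```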
31
Dontchos BN, Cavallo-Hom K, Lamb LR, Mercaldo SF, Eklund M, Dang P, Lehman CD. Impact of a Deep Learning Model for Predicting Mammographic Breast Density in Routine Clinical Practice. J Am Coll Radiol 2022; 19:1021-1030. [DOI: 10.1016/j.jacr.2022.04.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2021] [Revised: 03/31/2022] [Accepted: 04/01/2022] [Indexed: 10/18/2022]
32
Automatic Breast Tumor Screening of Mammographic Images with Optimal Convolutional Neural Network. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12084079] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Mammography is a first-line imaging examination approach used for early breast tumor screening. Computational techniques based on deep-learning methods, such as the convolutional neural network (CNN), are routinely used as classifiers for rapid automatic breast tumor screening in mammography examination. Classifying multiple feature maps on two-dimensional (2D) digital images, a multilayer CNN with multiple convolutional-pooling layers and fully connected networks can increase screening accuracy and reduce the error rate. However, this multilayer architecture presents some limitations, such as high computational complexity, large-scale training dataset requirements, and poor suitability for real-time clinical applications. Hence, this study designs an optimal multilayer architecture for a CNN-based classifier for automatic breast tumor screening, consisting of three convolutional layers, two pooling layers, a flattening layer, and a classification layer. In the first convolutional layer, the proposed classifier performs a fractional-order convolutional process to enhance the image and remove unwanted noise in order to obtain the desired object's edges; in the second and third convolutional-pooling layers, two kernel convolution and pooling operations ensure the continuous enhancement and sharpening of the feature patterns for extracting the desired features at different scales and levels, while also reducing the dimensions of the feature patterns. In the classification layer, a multilayer network with an adaptive moment estimation algorithm refines the classifier's network parameters for mammography classification by separating tumor-free feature patterns from tumor feature patterns. Images were selected from the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM), and K-fold cross-validation was performed.
The experimental results indicate promising performance for automatic breast tumor screening in terms of recall (%), precision (%), accuracy (%), F1 score, and Youden's index.
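The evaluation metrics named here are all derived from a binary confusion matrix. A minimal sketch of their definitions (the counts are invented for illustration):

```python
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)              # sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * recall / (precision + recall)
    youden = recall + specificity - 1.0  # Youden's index J
    return {"recall": recall, "precision": precision, "accuracy": accuracy,
            "f1": f1, "youden": youden}

# Hypothetical screening result: 90 true positives, 10 false positives,
# 10 false negatives, 90 true negatives
m = screening_metrics(tp=90, fp=10, fn=10, tn=90)
print(round(m["accuracy"], 2), round(m["f1"], 2), round(m["youden"], 2))  # 0.9 0.9 0.8
```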
33
Evans DGR, van Veen EM, Harkness EF, Brentnall AR, Astley SM, Byers H, Woodward ER, Sampson S, Southworth J, Howell SJ, Maxwell AJ, Newman WG, Cuzick J, Howell A. Breast cancer risk stratification in women of screening age: Incremental effects of adding mammographic density, polygenic risk, and a gene panel. Genet Med 2022; 24:1485-1494. [PMID: 35426792 DOI: 10.1016/j.gim.2022.03.009] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 03/14/2022] [Accepted: 03/15/2022] [Indexed: 12/17/2022] Open
Abstract
PURPOSE There is great promise in breast cancer risk stratification to target screening and prevention. It is unclear whether adding gene panels to other risk tools improves breast cancer risk stratification and adds discriminatory benefit on a population basis. METHODS In total, 10,025 of 57,902 women aged 46 to 73 years in the Predicting Risk of Cancer at Screening study provided DNA samples. A case-control study was used to evaluate breast cancer risk assessment using polygenic risk scores (PRSs), cancer gene panel (n = 33), mammographic density (density residual [DR]), and risk factors collected using a self-completed 2-page questionnaire (Tyrer-Cuzick [TC] model version 8). In total, 525 cases and 1410 controls underwent gene panel testing and PRS calculation (18, 143, and/or 313 single-nucleotide polymorphisms [SNPs]). RESULTS Actionable pathogenic variants (PGVs) in BRCA1/2 were found in 1.7% of cases and 0.55% of controls, and overall PGVs were found in 6.1% of cases and 1.3% of controls. A combined assessment of TC8-DR-SNP313 and gene panel provided the best risk stratification with 26.1% of controls and 9.7% of cases identified at <1.4% 10-year risk and 9.01% of controls and 23.3% of cases at ≥8% 10-year risk. Because actionable PGVs were uncommon, discrimination was identical with/without gene panel (with/without: area under the curve = 0.67, 95% CI = 0.64-0.70). Only 7 of 17 PGVs in cases resulted in actionable risk category change. Extended case (n = 644)-control (n = 1779) series with TC8-DR-SNP143 identified 18.9% of controls and only 6.4% of stage 2+ cases at <1.4% 10-year risk and 20.7% of controls and 47.9% of stage 2+ cases at ≥5% 10-year risk. CONCLUSION Further studies and economic analysis will determine whether adding panels to PRS is a cost-effective strategy for risk stratification.
Affiliation(s)
- D Gareth R Evans
- Division of Evolution, Infection and Genomics, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; The Christie NHS Foundation Trust, Manchester, United Kingdom; Manchester Centre for Genomic Medicine, Manchester University NHS Foundation Trust (Central), Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom.
- Elke M van Veen
- Division of Evolution, Infection and Genomics, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom
- Elaine F Harkness
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom; Division of Informatics, Imaging and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Adam R Brentnall
- Centre for Cancer Prevention, Wolfson Institute of Preventive Medicine, Charterhouse Square, Barts and The London, Queen Mary University of London, London, United Kingdom
- Susan M Astley
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom; Division of Informatics, Imaging and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- Helen Byers
- Division of Evolution, Infection and Genomics, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom
- Emma R Woodward
- Division of Evolution, Infection and Genomics, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom
- Sarah Sampson
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom
- Jake Southworth
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom
- Sacha J Howell
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; The Christie NHS Foundation Trust, Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom
- Anthony J Maxwell
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom; Division of Informatics, Imaging and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Manchester Academic Health Science Centre, The University of Manchester, Manchester, United Kingdom
- William G Newman
- Division of Evolution, Infection and Genomics, School of Biological Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, United Kingdom; Manchester Centre for Genomic Medicine, Manchester University NHS Foundation Trust (Central), Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom
- Jack Cuzick
- Centre for Cancer Prevention, Wolfson Institute of Preventive Medicine, Charterhouse Square, Barts and The London, Queen Mary University of London, London, United Kingdom
- Anthony Howell
- Prevention Breast Cancer Unit and Nightingale Breast Screening Centre, Manchester University NHS Foundation Trust (South), Manchester, United Kingdom; The Christie NHS Foundation Trust, Manchester, United Kingdom; Manchester Breast Centre, Manchester Cancer Research Centre, The University of Manchester, Manchester, United Kingdom; Cancer Prevention Early Detection Theme, NIHR Manchester Biomedical Research Centre, The Christie NHS Foundation Trust, Manchester, United Kingdom
34
Ray A, Bhardwaj A, Malik YK, Singh S, Gupta R. Artificial intelligence and Psychiatry: An overview. Asian J Psychiatr 2022; 70:103021. [PMID: 35219978 PMCID: PMC9760544 DOI: 10.1016/j.ajp.2022.103021] [Citation(s) in RCA: 22] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 01/06/2022] [Accepted: 02/11/2022] [Indexed: 12/14/2022]
Abstract
The burden of mental illness, both worldwide and in India, is increasing at an alarming rate. In addition, mental health challenges grew during the COVID-19 pandemic, with a rise in suicide, loneliness, and substance use. Artificial intelligence is a potential tool to help address the resulting gap between mental health needs and available care. It is increasingly being employed in various fields of mental health, such as affective disorders, psychosis, and geriatric psychiatry. The benefits are varied, including lower costs and wider reach, but the technology also comes with its own disadvantages. This article reviews the current understanding of artificial intelligence, its types, its current use in various mental health disorders, its current status in India, its advantages and disadvantages, and its future potential. As the modern age becomes increasingly digitalized, the use of artificial intelligence in psychiatry will grow, so a detailed understanding is valuable. For this review, we searched the PubMed, Google Scholar, ScienceDirect, China National Knowledge Infrastructure (CNKI), and Global Index Medicus search engines using keywords. Initial searches used each keyword individually, while later searches combined multiple keywords in different permutations.
Affiliation(s)
- Adwitiya Ray
- Department of Psychiatry, Institute of Mental Health, Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences, Rohtak, Haryana, India
- Akansha Bhardwaj
- Department of Psychiatry, Institute of Mental Health, Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences, Rohtak, Haryana, India
- Yogender Kumar Malik
- Department of Psychiatry, Institute of Mental Health, Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences, Rohtak, Haryana, India.
- Shipra Singh
- Department of Psychiatry, Institute of Mental Health, Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences, Rohtak, Haryana, India
- Rajiv Gupta
- Department of Psychiatry, Institute of Mental Health, Pandit Bhagwat Dayal Sharma Post Graduate Institute of Medical Sciences, Rohtak, Haryana, India
35
Wimmer M, Sluiter G, Major D, Lenis D, Berg A, Neubauer T, Buhler K. Multi-Task Fusion for Improving Mammography Screening Data Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:937-950. [PMID: 34788218 DOI: 10.1109/tmi.2021.3129068] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Machine learning and deep learning methods have become essential for computer-assisted prediction in medicine, with a growing number of applications in the field of mammography. Typically, these algorithms are trained for a specific task, e.g., the classification of lesions or the prediction of a mammogram's pathology status. To obtain a comprehensive view of a patient, models that were all trained for the same task(s) are subsequently ensembled or combined. In this work, we propose a pipeline approach in which we first train a set of individual, task-specific models and subsequently investigate their fusion, in contrast to the standard model-ensembling strategy. We fuse model predictions and high-level features from deep learning models with hybrid patient models to build stronger predictors at the patient level. To this end, we propose a multi-branch deep learning model that efficiently fuses features across different tasks and mammograms to obtain a comprehensive patient-level prediction. We train and evaluate our full pipeline on public mammography data, i.e., DDSM and its curated version CBIS-DDSM, and report an AUC score of 0.962 for predicting the presence of any lesion and 0.791 for predicting the presence of malignant lesions at the patient level. Overall, our fusion approaches significantly improve AUC scores, by up to 0.04, compared to standard model ensembling. Moreover, by providing not only global patient-level predictions but also task-specific model results related to radiological features, our pipeline aims to closely support the reading workflow of radiologists.
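The AUC values reported here have a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A small sketch of this rank-based computation (the scores are invented for illustration):

```python
def auc_score(pos_scores, neg_scores):
    """AUC as the Mann-Whitney statistic: P(pos > neg), ties counted as 1/2."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical patient-level malignancy scores
positives = [0.9, 0.8, 0.4]
negatives = [0.5, 0.3, 0.2]
print(round(auc_score(positives, negatives), 4))  # 0.8889
```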
36
Shah SY, Larijani H, Gibson RM, Liarokapis D. Random Neural Network Based Epileptic Seizure Episode Detection Exploiting Electroencephalogram Signals. SENSORS (BASEL, SWITZERLAND) 2022; 22:s22072466. [PMID: 35408080 PMCID: PMC9002775 DOI: 10.3390/s22072466] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Revised: 03/14/2022] [Accepted: 03/17/2022] [Indexed: 06/12/2023]
Abstract
Epileptic seizures are caused by abnormal electrical activity in the brain that manifests itself in a variety of ways, including confusion and loss of awareness. Correct identification of epileptic seizures is critical in the treatment and management of patients with epileptic disorders. One in four patients is resistant to treatment of seizure episodes and is in dire need of continuous monitoring to detect these critical events and manage the disease. Epileptic seizures can be identified by reliably and accurately monitoring a patient's neural and muscle activity, cardiac activity, and oxygen saturation level using state-of-the-art sensing techniques, including electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), and motion or audio/video recording focused on the head and body. EEG analysis provides a prominent way to distinguish the signals associated with epileptic episodes from normal signals; therefore, this work leverages a recent EEG dataset and cutting-edge learning algorithms, namely the random neural network (RNN), convolutional neural network (CNN), extremely randomized trees (ERT), and residual neural network (ResNet), to classify multiple variants of epileptic seizures versus non-seizures. The results highlighted that the RNN outperformed all other algorithms used, providing an overall accuracy of 97%, which was slightly improved after cross-validation.
Affiliation(s)
- Syed Yaseen Shah
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK; (R.M.G.); (D.L.)
- Hadi Larijani
- SMART Technology Research Centre, Glasgow Caledonian University, Cowcaddens Road, Glasgow G4 0BA, UK
- Ryan M. Gibson
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK; (R.M.G.); (D.L.)
- Dimitrios Liarokapis
- School of Computing, Engineering and Built Environment, Glasgow Caledonian University, Glasgow G4 0BA, UK; (R.M.G.); (D.L.)
37
Lee S, Kim H, Lee H, Cho S. Deep-learning-based projection-domain breast thickness estimation for shape-prior iterative image reconstruction in digital breast tomosynthesis. Med Phys 2022; 49:3670-3682. [PMID: 35297075 DOI: 10.1002/mp.15612] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Revised: 03/10/2022] [Accepted: 03/11/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Digital breast tomosynthesis (DBT) is a technique that can overcome the shortcomings of conventional X-ray mammography and can be effective for the early screening of breast cancer. The compression of the breast is essential during the DBT imaging. However, since the periphery of the breast cannot be compressed to a constant value, nonuniformity of thickness and in-plane shape variation happen. These cause inconvenience in diagnosis, scatter correction, and breast density estimation. PURPOSE In this study, we propose a deep-learning-based methodology for projection-domain breast thickness estimation and demonstrate a shape-prior iterative DBT image reconstruction. METHODS We prepared the Euclidean distance map, the thickness map, and the thickness corrected image of the simulated breast projections for thickness and shape estimation. Each pixel of the Euclidean distance map denotes a distance to the closest skin-line. The thickness map is defined as a conceptual projection of ideal breast support that differentiates the inner and outer regions of the breast phantom. The thickness projection map thus represents the x-ray path lengths of a homogeneous breast phantom. We generated the thickness corrected image by dividing the projection image by the thickness map in a pixel-wise manner. We developed a convolutional neural network for thickness estimation and correction. The network utilizes a projection image and a Euclidean distance image together as a dual input. An estimated breast thickness map is then used for constructing the breast shape mask by use of the discrete algebraic reconstruction technique (DART). RESULTS The proposed network effectively corrected the breast thickness in various simulation situations. Low normalized root-mean-squared error (NRMSE; 1.976%) and high structural similarity (SSIM; 99.997%) indicated a good agreement between the network-generated thickness corrected image and the ground-truth image. 
Compared with the existing methods and a simple single-input network, the proposed method performed better in breast thickness estimation and, accordingly, in breast shape recovery for various numerical phantoms, without introducing any significant artifacts. We have also demonstrated that including a shape prior in the iterative DBT reconstruction improves the uniformity of voxel values. CONCLUSIONS We presented a novel deep-learning-based breast thickness correction and shape reconstruction method. This approach to estimating the true thickness map and the shape of the breast undergoing compression can benefit various fields, such as improvement of diagnostic breast images, scatter correction, material decomposition, and breast density estimation.
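The NRMSE figure quoted above normalizes the root-mean-squared error, but normalization conventions differ (by value range, mean, or maximum). The sketch below normalizes by the ground-truth range, which is an illustrative assumption rather than necessarily the paper's exact definition; the toy arrays are invented:

```python
import numpy as np

def nrmse_percent(estimate: np.ndarray, truth: np.ndarray) -> float:
    """RMSE normalized by the ground-truth value range, as a percentage."""
    estimate = np.asarray(estimate, dtype=float)
    truth = np.asarray(truth, dtype=float)
    rmse = np.sqrt(np.mean((estimate - truth) ** 2))
    return float(100.0 * rmse / (truth.max() - truth.min()))

# Toy thickness maps (mm): every pixel off by 1 mm over a 0-10 mm range -> 10%
truth = np.array([[0.0, 10.0], [5.0, 5.0]])
estimate = truth + 1.0
print(nrmse_percent(estimate, truth))  # 10.0
```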
Affiliation(s)
- Seoyoung Lee
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hyeongseok Kim
- KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hoyeon Lee
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, 02114, USA
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea.,KAIST Institute for Artificial Intelligence, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea.,KAIST Institutes for IT Convergence and Health Science and Technology, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
38
DCP: Prediction of Dental Caries Using Machine Learning in Personalized Medicine. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12063043] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Dental caries is an infectious disease that deteriorates the tooth structure, with tooth cavities as the most common result. Because it is one of the most prevalent oral health issues, and because of the pain and cost of treatment, research on dental caries has focused on early detection. Conventional medical research in oral healthcare requires considerable funding and time; artificial intelligence has therefore been used in recent years to develop models that can predict the risk of dental caries. The data used in our study were collected from a children's oral health survey conducted in 2018 by the Korean Center for Disease Control and Prevention. Several machine learning algorithms were applied to these data, and their performances were evaluated using accuracy, F1-score, precision, and recall. Random forest achieved the highest performance among the machine learning methods compared, with an accuracy of 92%, F1-score of 90%, precision of 94%, and recall of 87%. These results suggest that ML can assist dental professionals in decision making for the early detection and treatment of dental caries.
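The four evaluation metrics reported in this abstract all derive from a binary confusion matrix. A minimal sketch follows; the confusion counts here are hypothetical, not the study's data.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many are real
    recall = tp / (tp + fn)             # of real positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only.
acc, prec, rec, f1 = classification_metrics(tp=87, fp=6, fn=13, tn=94)
print(f"acc={acc:.3f} precision={prec:.3f} recall={rec:.3f} f1={f1:.3f}")
```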
39
Breast Histopathological Image Classification Method Based on Autoencoder and Siamese Framework. INFORMATION 2022. [DOI: 10.3390/info13030107] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The automated classification of breast cancer histopathological images is one of the important tasks in computer-aided diagnosis (CAD) systems. Because breast cancer histopathological images exhibit small inter-class and large intra-class variance, extracting features for breast cancer classification is difficult. To address this problem, an improved autoencoder (AE) network using a Siamese framework was designed to learn effective features from histopathological images for CAD breast cancer classification tasks. First, the input image is processed at multiple scales using a Gaussian pyramid to obtain multi-scale features. Second, in the feature extraction stage, a Siamese framework is used to constrain the pre-trained AE so that the extracted features have smaller intra-class variance and larger inter-class variance. Experimental results show that the proposed method achieved a classification accuracy of 97.8% on the BreakHis dataset. Compared with algorithms commonly used in breast cancer histopathological classification, this method is both more accurate and faster.
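The Siamese constraint described here, pulling same-class feature pairs together and pushing different-class pairs apart, is commonly expressed as a contrastive loss. The NumPy sketch below is a generic illustration of that idea, not the paper's exact objective; the embeddings and margin are hypothetical.

```python
import numpy as np

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Contrastive loss for one pair of embedding vectors.

    same_class=True  -> penalize distance (shrinks intra-class variance)
    same_class=False -> penalize closeness within `margin`
                        (enlarges inter-class variance)
    """
    d = np.linalg.norm(z1 - z2)
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

a = np.array([0.1, 0.2])
b = np.array([0.1, 0.25])   # close pair
c = np.array([0.9, 0.8])    # distant pair
print(contrastive_loss(a, b, same_class=True))    # small but nonzero
print(contrastive_loss(a, c, same_class=False))   # ~0: distance >= margin
```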
40
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. [PMID: 35184757 PMCID: PMC8859891 DOI: 10.1186/s13058-022-01509-z] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 02/08/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve better harm-to-benefit ratio based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data has led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor, (b) assessment of a woman's inherent breast cancer risk, and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. 
CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
- Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
41
Yin XX, Hadjiloucas S, Zhang Y, Tian Z. MRI radiogenomics for intelligent diagnosis of breast tumors and accurate prediction of neoadjuvant chemotherapy responses-a review. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 214:106510. [PMID: 34852935 DOI: 10.1016/j.cmpb.2021.106510] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Accepted: 11/01/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE This paper overviews multidimensional mining algorithms in relation to Magnetic Resonance Imaging (MRI) radiogenomics for computer-aided detection and diagnosis of breast tumours. The work also addresses a new problem in radiogenomics mining: how to combine structural radiomics information with non-structural genomics information to improve the accuracy and efficacy of Neoadjuvant Chemotherapy (NAC) response prediction. METHODS This requires the automated extraction of parameters from non-structural breast radiomics data and the identification of feature vectors with diagnostic value, which are then combined with genomics data. To address the problem of weakly labelled tumour images, a Generative Adversarial Network (GAN) based deep learning strategy is proposed for the classification of tumour types; this has significant potential for providing accurate real-time identification of tumorous regions from MRI scans. To efficiently integrate, within a deep learning framework, different features from radiogenomics datasets at multiple spatio-temporal resolutions, pyramid-structured and multi-scale densely connected U-Nets are proposed. A bidirectional gated recurrent unit (BiGRU) combined with an attention-based deep learning approach is also proposed. RESULTS The aim is to accurately predict NAC responses by combining imaging and genomic datasets. The approaches discussed incorporate some of the latest developments in signal processing and artificial intelligence, and have significant potential to advance the field and to provide a development platform for future cutting-edge biomedical radiogenomics analysis. CONCLUSIONS The association of genotypic and phenotypic features is at the core of the emergent field of Precision Medicine. It makes use of advances in biomedical big data analysis, which enable the correlations between disease-associated phenotypic characteristics, genetic polymorphisms, and gene activation to be revealed.
Affiliation(s)
- Xiao-Xia Yin
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China.
- Sillas Hadjiloucas
- Department of Biomedical Engineering, The University of Reading, RG6 6AY, UK
- Yanchun Zhang
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
- Zhihong Tian
- Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou 510006, China
42
Koyuncu B, Melek A, Yilmaz D, Tuzer M, Unlu MB. Chemotherapy response prediction with diffuser elapser network. Sci Rep 2022; 12:1628. [PMID: 35102179 PMCID: PMC8803972 DOI: 10.1038/s41598-022-05460-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 11/10/2021] [Indexed: 12/31/2022] Open
Abstract
In solid tumors, elevated fluid pressure and inadequate blood perfusion resulting from unbalanced angiogenesis are the main reasons for ineffective drug delivery inside tumors. Antiangiogenic treatment is an effective approach to normalizing the heterogeneous and tortuous tumor vessel structure. Additionally, the combined therapy of antiangiogenic agents and chemotherapy drugs has shown promising effects on enhanced drug delivery. However, finding the appropriate scheduling and dosages of the combination therapy remains one of the main problems in anticancer therapy. Our study aims to generate a realistic response to the treatment schedule, making it possible for future works to use these patient-specific responses to decide on the optimal starting time and dosages of cytotoxic drug treatment. Our dataset is based on our previous in-silico model with a framework for the tumor microenvironment, consisting of a tumor layer, vasculature network, interstitial fluid pressure, and drug diffusion maps. In this regard, the chemotherapy response prediction problem is discussed in the study, putting forth a proof of concept for deep learning models that capture tumor growth and drug response behaviors simultaneously. The proposed model utilizes multiple convolutional neural network submodels to predict future tumor microenvironment maps, considering the effects of ongoing treatment. Because the model's task is to predict future tumor microenvironment maps, we use two image quality evaluation metrics, structural similarity and peak signal-to-noise ratio, to evaluate model performance. We track the tumor cell density values of the ground-truth and predicted tumor microenvironments. The model predicts tumor microenvironment maps seven days ahead with an average structural similarity score of 0.973 and an average peak signal-to-noise ratio of 35.41 on the test set. It also predicts tumor cell density at day 7 with a mean absolute percentage error of [Formula: see text].
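The evaluation measures named in this abstract have standard definitions; peak signal-to-noise ratio and mean absolute percentage error are sketched below in NumPy (SSIM is omitted for brevity). The toy arrays are hypothetical, and this is an illustration of the metrics, not the study's evaluation code.

```python
import numpy as np

def psnr(reference, prediction, data_range=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    mse = np.mean((reference - prediction) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mape(reference, prediction):
    """Mean absolute percentage error (reference values must be nonzero)."""
    return 100.0 * np.mean(np.abs((reference - prediction) / reference))

ref = np.array([1.0, 0.5, 0.25])    # hypothetical ground-truth densities
pred = np.array([0.9, 0.5, 0.25])   # hypothetical predicted densities
print(psnr(ref, pred))
print(mape(ref, pred))
```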
Affiliation(s)
- Batuhan Koyuncu
- Department of Computer Engineering, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Ahmet Melek
- Department of Management, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Defne Yilmaz
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mert Tuzer
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Mehmet Burcin Unlu
- Department of Physics, Bogazici University, Istanbul, 34342, Turkey
- Center for Life Sciences and Technologies, Bogazici University, Istanbul, 34342, Turkey
- Hokkaido University, Global Station for Quantum Medical Science and Engineering, Global Institution for Collaborative Research and Education (GI-CoRE), Sapporo, 060-8648, Japan
43
Zhou K, Li W, Zhao D. Deep learning-based breast region extraction of mammographic images combining pre-processing methods and semantic segmentation supported by Deeplab v3+. Technol Health Care 2022; 30:173-190. [PMID: 35124595 PMCID: PMC9028646 DOI: 10.3233/thc-228017] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
Abstract
BACKGROUND: Breast cancer has long been one of the major global life-threatening illnesses among women. Surgery and adjuvant therapy, coupled with early detection, could save many lives. This underscores the importance of mammography, a cost-effective and accurate method for early detection. Because poor contrast, noise, and artifacts make mammograms difficult for radiologists to diagnose, Computer-Aided Diagnosis (CAD) systems have been developed. Extraction of the breast region is a fundamental and crucial preparation step for further development of CAD systems. OBJECTIVE: The proposed method aims to extract the breast region accurately from mammographic images, with noise suppressed, contrast enhanced, and the pectoral muscle region removed. METHODS: This paper presents a new deep learning-based breast region extraction method that combines pre-processing steps, namely noise suppression using a median filter and contrast enhancement using CLAHE, with semantic segmentation using the Deeplab v3+ model. RESULTS: The method is trained and evaluated on the mini-MIAS dataset and has also been evaluated on the INbreast dataset. The results outperform those of other recent studies and indicate the capacity of the model to retain its accuracy and runtime advantage across databases with different image resolutions. CONCLUSIONS: The proposed method shows state-of-the-art performance at extracting the breast region from mammographic images. Extensive evaluation on two commonly used mammography datasets demonstrates the ability and adaptability of the method.
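The first pre-processing step named in this abstract, median-filter noise suppression, can be shown directly in NumPy (CLAHE and Deeplab v3+ are omitted). This is a generic illustration of median filtering, not the paper's code; the 3x3 window and the toy image are assumptions.

```python
import numpy as np

def median_filter_3x3(image):
    """3x3 median filter with edge replication, as a denoising sketch."""
    padded = np.pad(image, 1, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            # Median over the 3x3 neighborhood centered on (i, j).
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out

# A single impulse-noise pixel in a flat region is removed entirely.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0  # salt noise
print(median_filter_3x3(img)[2, 2])  # -> 10.0
```

Median filtering is preferred over mean filtering for this kind of impulse noise because the outlier never enters the output: the median of eight 10s and one 255 is still 10.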
Affiliation(s)
- Kuochen Zhou
- School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning 110819, China (corresponding author)
44
Matsubara K, Ibaraki M, Nemoto M, Watabe H, Kimura Y. A review on AI in PET imaging. Ann Nucl Med 2022; 36:133-143. [PMID: 35029818 DOI: 10.1007/s12149-021-01710-8] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/16/2022]
Abstract
Artificial intelligence (AI) has been applied to various medical imaging tasks, such as computer-aided diagnosis. Specifically, deep learning techniques such as convolutional neural network (CNN) and generative adversarial network (GAN) have been extensively used for medical image generation. Image generation with deep learning has been investigated in studies using positron emission tomography (PET). This article reviews studies that applied deep learning techniques for image generation on PET. We categorized the studies for PET image generation with deep learning into three themes as follows: (1) recovering full PET data from noisy data by denoising with deep learning, (2) PET image reconstruction and attenuation correction with deep learning and (3) PET image translation and synthesis with deep learning. We introduce recent studies based on these three categories. Finally, we mention the limitations of applying deep learning techniques to PET image generation and future prospects for PET image generation.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Mitsutaka Nemoto
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
- Hiroshi Watabe
- Cyclotron and Radioisotope Center (CYRIC), Tohoku University, Miyagi, Japan
- Yuichi Kimura
- Faculty of Biology-Oriented Science and Technology, and Cyber Informatics Research Institute, Kindai University, Wakayama, Japan
45
Hirsch L, Huang Y, Luo S, Rossi Saccarelli C, Lo Gullo R, Daimiel Naranjo I, Bitencourt AGV, Onishi N, Ko ES, Leithner D, Avendano D, Eskreis-Winkler S, Hughes M, Martinez DF, Pinker K, Juluru K, El-Rowmeim AE, Elnajjar P, Morris EA, Makse HA, Parra LC, Sutton EJ. Radiologist-Level Performance by Using Deep Learning for Segmentation of Breast Cancers on MRI Scans. Radiol Artif Intell 2022; 4:e200231. [PMID: 35146431 PMCID: PMC8823456 DOI: 10.1148/ryai.200231] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 10/13/2021] [Accepted: 11/04/2021] [Indexed: 11/17/2022]
Abstract
Purpose To develop a deep network architecture that would achieve fully automated radiologist-level segmentation of cancers at breast MRI. Materials and Methods In this retrospective study, 38,229 examinations (composed of 64,063 individual breast scans from 14,475 patients) were performed in female patients (age range, 12-94 years; mean age, 52 years ± 10 [standard deviation]) who presented between 2002 and 2014 at a single clinical site. A total of 2555 breast cancers were selected that had been segmented on two-dimensional (2D) images by radiologists, as well as 60,108 benign breasts that served as examples of noncancerous tissue; all these were used for model training. For testing, an additional 250 breast cancers were segmented independently on 2D images by four radiologists. The authors selected among several three-dimensional (3D) deep convolutional neural network architectures, input modalities, and harmonization methods. The outcome measure was the Dice score for 2D segmentation, which was compared between the network and radiologists by using the Wilcoxon signed rank test and the two one-sided test procedure. Results The highest-performing network on the training set was a 3D U-Net with dynamic contrast-enhanced MRI as input and with intensity normalized for each examination. In the test set, the median Dice score of this network was 0.77 (interquartile range, 0.26). The performance of the network was equivalent to that of the radiologists (two one-sided test procedures with radiologist performance of 0.69-0.84 as equivalence bounds, P < .001 for both; n = 250). Conclusion When trained on a sufficiently large dataset, the developed 3D U-Net performed as well as fellowship-trained radiologists in detailed 2D segmentation of breast cancers at routine clinical MRI. Keywords: MRI, Breast, Segmentation, Supervised Learning, Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms.
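The outcome measure above, the Dice score between two binary segmentation masks, has the standard form 2|A∩B|/(|A|+|B|). A minimal NumPy sketch follows; the toy masks are hypothetical and this is not the study's evaluation code.

```python
import numpy as np

def dice_score(mask_a, mask_b, eps=1e-8):
    """Dice coefficient between two binary masks (1.0 = perfect overlap)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (a.sum() + b.sum() + eps)

# Hypothetical network vs. radiologist masks for one 2D slice.
network = np.array([[0, 1, 1],
                    [0, 1, 1]])
radiologist = np.array([[0, 1, 1],
                        [0, 1, 0]])
print(dice_score(network, radiologist))  # 2*3 / (4+3) ~ 0.857
```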
Affiliation(s)
- Lukas Hirsch
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Yu Huang
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Shaojun Luo
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Carolina Rossi Saccarelli
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Roberto Lo Gullo
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Isaac Daimiel Naranjo
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Almir G V Bitencourt
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Natsuko Onishi
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Eun Sook Ko
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Doris Leithner
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Daly Avendano
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
| | - Sarah Eskreis-Winkler
- Department of Biomedical Engineering (L.H., Y.H., L.C.P.) and the Benjamin Levich Institute and Department of Physics (S.L., H.A.M.), the City College of the City University of New York, 160 Convent Ave, New York, NY 10031; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065 (Y.H., C.R.S., R.L.G., I.D.N., A.G.V.B., N.O., E.S.K., D.L., D.A., S.E.W., M.H., D.F.M., K.P., K.J., A.E.E., P.E., E.A.M., E.J.S.); Department of Imaging, A.C. Camargo Cancer Center, São Paulo, Brazil (A.G.V.B.); Department of Radiology, University of California, San Francisco, San Francisco, Calif (N.O.); Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea (E.S.K.); and Department of Breast Imaging, Breast Cancer Center TecSalud, ITESM Monterrey, Monterrey, Mexico (D.A.)
- Mary Hughes
- Danny F Martinez
- Katja Pinker
- Krishna Juluru
- Amin E El-Rowmeim
- Pierre Elnajjar
- Elizabeth A Morris
- Hernan A Makse
- Lucas C Parra
- Elizabeth J Sutton
46
Asaturyan H, Villarini B, Sarao K, Chow JS, Afacan O, Kurugol S. Improving Automatic Renal Segmentation in Clinically Normal and Abnormal Paediatric DCE-MRI via Contrast Maximisation and Convolutional Networks for Computing Markers of Kidney Function. SENSORS (BASEL, SWITZERLAND) 2021; 21:7942. [PMID: 34883946 PMCID: PMC8659486 DOI: 10.3390/s21237942] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 11/24/2021] [Accepted: 11/25/2021] [Indexed: 11/29/2022]
Abstract
There is a growing demand for fast, accurate computation of clinical markers to improve assessment of renal function and anatomy in a single study. However, conventional techniques have limitations that lead to overestimation of kidney function or fail to provide sufficient spatial resolution to target the disease location. In contrast, computer-aided analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) can generate significant markers, including the glomerular filtration rate (GFR) and time-intensity curves of the cortex and medulla, for determining obstruction in the urinary tract. This paper presents a dual-stage, fully modular framework for automatic renal compartment segmentation in 4D DCE-MRI volumes. (1) Memory-efficient 3D deep learning is integrated to localise each kidney, harnessing residual convolutional neural networks for improved convergence; segmentation is performed by efficiently learning spatial-temporal information coupled with boundary-preserving fully convolutional dense nets. (2) Renal contextual information is enhanced via non-linear transformation to segment the cortex and medulla. The proposed framework is evaluated on a paediatric dataset containing 60 4D DCE-MRI volumes exhibiting varying conditions that affect kidney function. Our technique outperforms a state-of-the-art approach based on GrabCut and a support vector machine classifier in mean Dice similarity coefficient (DSC) by 3.8% and demonstrates higher statistical stability, with standard deviations lower by 12.4% and 15.7% for cortex and medulla segmentation, respectively.
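The mean Dice similarity coefficient used to evaluate the segmentation above is a standard overlap measure between a predicted mask and a reference mask. A minimal sketch (not the authors' implementation; the toy masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for kidney segmentations
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*3/(4+3) ≈ 0.857
```

A DSC of 1 indicates perfect overlap and 0 indicates none, which is why papers report improvements as absolute percentage-point gains in mean DSC.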
Affiliation(s)
- Hykoush Asaturyan
- School of Computer Science and Engineering, University of Westminster, London W1W 6UW, UK
- Barbara Villarini
- Karen Sarao
- Department of Radiology, Harvard Medical School and Boston Children’s Hospital, Boston, MA 02115, USA
- Jeanne S. Chow
- Onur Afacan
- Sila Kurugol
47
Ryan F, Román KLL, Gerbolés BZ, Rebescher KM, Txurio MS, Ugarte RC, González MJG, Oliver IM. Unsupervised domain adaptation for the segmentation of breast tissue in mammography images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106368. [PMID: 34537490 DOI: 10.1016/j.cmpb.2021.106368] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 08/17/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Breast density refers to the proportion of glandular and fatty tissue in the breast and is recognized as a useful factor in assessing breast cancer risk. Moreover, segmentation of the high-density glandular tissue from mammograms can help medical professionals visualize and localize areas that may require additional attention. Developing robust methods to segment breast tissues is challenging because of variations in mammographic acquisition systems and protocols. Deep learning methods are effective in medical image segmentation, but they often require large quantities of labelled data. Unsupervised domain adaptation is an area of research that employs unlabelled data to improve model performance on samples derived from different sources. METHODS First, a U-Net architecture was used to segment the fatty and glandular tissues with labelled data from a single acquisition device. Then, adversarial unsupervised domain adaptation methods were used to incorporate a single unlabelled target domain, consisting of images from a different machine, into the training. Finally, the domain adaptation model was extended to multiple unlabelled target domains by combining a reconstruction task with adversarial training. RESULTS The adversarial training improved the generalization of the initial model to new domain data, yielding clearly improved segmentation of the breast tissues. For training with multiple unlabelled domains, combining a reconstruction task with adversarial training improved the stability of training and yielded adequate segmentation across all domains with a single model. CONCLUSIONS The results demonstrate the potential of adversarial domain adaptation with U-Net architectures for segmenting breast tissue in mammograms from several devices, and show that domain-adapted models can achieve similar agreement with manual segmentations. Combining adversarial and reconstruction-based methods also provides a simple and effective solution for training with multiple unlabelled target domains.
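Adversarial domain adaptation is commonly implemented with a gradient reversal layer: features pass through unchanged on the forward pass, while on the backward pass the gradient from the domain classifier is sign-flipped, pushing the feature extractor toward domain-invariant representations. A minimal NumPy sketch of that mechanism (one common realisation, not necessarily the exact scheme of this paper; all names are illustrative):

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; scales gradients by -lam on the backward pass."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, features):
        # Features flow to the domain classifier unchanged.
        return features

    def backward(self, grad_from_domain_classifier):
        # The feature extractor receives the reversed gradient, so it learns
        # to *confuse* the domain classifier rather than help it.
        return -self.lam * grad_from_domain_classifier

grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
g = np.array([0.2, 0.4, -0.6])
print(grl.forward(x))   # features unchanged
print(grl.backward(g))  # [-0.1, -0.2, 0.3]
```

In frameworks with automatic differentiation this is typically a single custom-gradient op inserted between the U-Net encoder and the domain classifier.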
48
Hernández A, Miranda DA, Pertuz S. Algorithms and methods for computerized analysis of mammography images in breast cancer risk assessment. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 212:106443. [PMID: 34656014 DOI: 10.1016/j.cmpb.2021.106443] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Accepted: 09/22/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVES The computerized analysis of mammograms for the development of quantitative biomarkers is a growing field with applications in breast cancer risk assessment. Computerized image analysis offers the possibility of using different methods and algorithms to extract additional information from screening and diagnostic images to aid in the assessment of breast cancer risk. In this work, we review the algorithms and methods for automated, computerized analysis of mammography images for this task and discuss the main challenges that the development and improvement of these methods face today. METHODS We review recent progress in two main branches of mammography-based risk assessment: parenchymal analysis and breast density estimation, including performance indicators for most of the studies considered. Parenchymal analysis methods are divided into feature-based and deep learning-based methods; breast density methods are grouped into area-based, volume-based, and breast categorization methods. Additionally, we identify the challenges that these fields currently face. RESULTS Parenchymal analysis using deep learning algorithms is on the rise, with some studies reporting high performance indicators, such as an area under the receiver operating characteristic curve of up to 0.90. Methods for risk assessment using breast density report a wider variety of performance indicators; however, approaches using deep learning methods also yield high performance in each of the subdivisions considered. CONCLUSIONS Both breast density estimation and parenchymal analysis are promising tools for breast cancer risk assessment; deep learning methods have shown performance comparable or superior to the other methods considered. All methods face challenges such as the lack of objective comparison between them and the lack of access to datasets from different populations.
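The area under the ROC curve cited as a performance indicator can be computed directly from raw scores via the rank-sum (Mann-Whitney) formulation. A self-contained sketch with made-up risk scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = probability that a positive case scores higher than a negative one
    (ties count one half), i.e. the normalised Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative risk scores for high-risk (positive) and low-risk (negative) cases
high_risk = [0.9, 0.8, 0.6]
low_risk = [0.7, 0.4, 0.3, 0.2]
print(roc_auc(high_risk, low_risk))  # 11/12 ≈ 0.917
```

The quadratic pairwise loop is fine for illustration; production code would use a sorting-based O(n log n) implementation.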
Affiliation(s)
- Said Pertuz
- Universidad Industrial de Santander, Bucaramanga, Colombia
49
Li H, Mukundan R, Boyd S. Novel Texture Feature Descriptors Based on Multi-Fractal Analysis and LBP for Classifying Breast Density in Mammograms. J Imaging 2021; 7:jimaging7100205. [PMID: 34677291 PMCID: PMC8540831 DOI: 10.3390/jimaging7100205] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2021] [Revised: 09/26/2021] [Accepted: 10/01/2021] [Indexed: 11/16/2022] Open
Abstract
This paper investigates the usefulness of multi-fractal analysis and local binary patterns (LBP) as texture descriptors for classifying mammogram images into different breast density categories. Multi-fractal analysis is also used in the pre-processing step to segment the region of interest (ROI). We use four multi-fractal measures and the LBP method to extract texture features and compare their classification performance in experiments. In addition, a feature descriptor combining multi-fractal features and multi-resolution LBP (MLBP) features is proposed and evaluated to improve classification accuracy. An autoencoder network and principal component analysis (PCA) are used to reduce feature redundancy in the classification model. A full-field digital mammogram (FFDM) dataset, INbreast, which contains 409 mammogram images, is used in our experiments. BI-RADS density labels given by radiologists are used as the ground truth to evaluate the classification results of the proposed methods. Experimental results show that the proposed feature descriptor combining multi-fractal features and LBP results in higher classification accuracy than individual texture feature sets.
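The basic LBP operator thresholds each pixel's eight neighbours against the centre pixel and packs the results into a byte; the histogram of these codes serves as the texture feature vector. A compact NumPy sketch of the plain (non-multi-resolution) operator, assuming a 2D grayscale array (not the authors' code):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2D array."""
    img = np.asarray(img, dtype=float)
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy : img.shape[0] - 1 + dy,
                        1 + dx : img.shape[1] - 1 + dx]
        # Set bit if the neighbour is at least as bright as the centre.
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised histogram of LBP codes: the texture feature vector."""
    codes = lbp_image(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = np.array([[5, 5, 5],
                [5, 4, 3],
                [1, 2, 3]])
print(lbp_image(img))  # single interior pixel: code 1+2+4+128 = 135
```

Multi-resolution LBP repeats this with neighbours sampled on circles of increasing radius and concatenates the resulting histograms.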
Affiliation(s)
- Haipeng Li
- Department of Computer Science and Software Engineering, University of Canterbury, Christchurch 8140, New Zealand
- Ramakrishnan Mukundan
- Shelley Boyd
- Canterbury Breastcare, St. George’s Medical Centre, Christchurch 8014, New Zealand
50
Iqbal A, Sharif M. MDA-Net: Multiscale dual attention-based network for breast lesion segmentation using ultrasound images. JOURNAL OF KING SAUD UNIVERSITY - COMPUTER AND INFORMATION SCIENCES 2021. [DOI: 10.1016/j.jksuci.2021.10.002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]