1
Al-Karawi D, Al-Zaidi S, Helael KA, Obeidat N, Mouhsen AM, Ajam T, Alshalabi BA, Salman M, Ahmed MH. A Review of Artificial Intelligence in Breast Imaging. Tomography 2024; 10:705-726. [PMID: 38787015] [PMCID: PMC11125819] [DOI: 10.3390/tomography10050055]
Abstract
As artificial intelligence (AI) techniques have become increasingly dominant, promising prospects for their application have extended to various medical fields, including in vitro diagnosis, intelligent rehabilitation, medical imaging, and prognosis. Breast cancer is a common malignancy that critically affects women's physical and mental health. Early breast cancer screening, through mammography, ultrasound, or magnetic resonance imaging (MRI), can substantially improve the prognosis for breast cancer patients. AI applications have shown excellent performance in various image recognition tasks, and their use in breast cancer screening has been explored in numerous studies. This paper introduces relevant AI techniques and their applications in medical imaging of the breast (mammography and ultrasound), specifically in identifying, segmenting, and classifying lesions; assessing breast cancer risk; and improving image quality. Focusing on medical imaging for breast cancer, the paper also reviews related challenges and prospects for AI.
Affiliation(s)
- Dhurgham Al-Karawi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Shakir Al-Zaidi: Medical Analytica Ltd., 26a Castle Park Industrial Park, Flint CH6 5XA, UK
- Khaled Ahmad Helael: Royal Medical Services, King Hussein Medical Hospital, King Abdullah II Ben Al-Hussein Street, Amman 11855, Jordan
- Naser Obeidat: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Abdulmajeed Mounzer Mouhsen: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tarek Ajam: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Bashar A. Alshalabi: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohamed Salman: Department of Diagnostic Radiology and Nuclear Medicine, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed H. Ahmed: School of Computing, Coventry University, 3 Gulson Road, Coventry CV1 5FB, UK
2
Seeböck P, Orlando JI, Michl M, Mai J, Schmidt-Erfurth U, Bogunović H. Anomaly guided segmentation: Introducing semantic context for lesion segmentation in retinal OCT using weak context supervision from anomaly detection. Med Image Anal 2024; 93:103104. [PMID: 38350222] [DOI: 10.1016/j.media.2024.103104]
Abstract
Automated lesion detection in retinal optical coherence tomography (OCT) scans has shown promise for several clinical applications, including diagnosis, monitoring, and guidance of treatment decisions. However, segmentation models still struggle to achieve the desired results for some complex lesions or datasets that commonly occur in the real world, e.g. due to variability in lesion phenotypes, image quality, or disease appearance. While several techniques have been proposed to improve them, one line of research that has not yet been investigated is the incorporation of additional semantic context through the application of anomaly detection models. In this study we show experimentally that incorporating weak anomaly labels into standard segmentation models consistently improves lesion segmentation results. This can be done relatively easily by detecting anomalies with a separate model and then adding its output masks as an extra class when training the segmentation model, providing additional semantic context without requiring extra manual labels. We empirically validated this strategy using two in-house and two publicly available retinal OCT datasets for multiple lesion targets, demonstrating the potential of this generic anomaly-guided segmentation approach as an extra tool for improving lesion detection models.
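The core mechanism described above, taking a separate anomaly detector's output masks and adding them as an extra class when building segmentation targets, can be sketched in a few lines of NumPy. The array shapes, class codes, and function name here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def add_anomaly_class(label_map, anomaly_mask, anomaly_class):
    """Overlay weak anomaly detections as an extra training class.

    Pixels flagged by the anomaly detector but not covered by a manual
    lesion label are relabelled as `anomaly_class`, giving the
    segmentation model weak semantic context at no extra annotation cost.
    """
    out = label_map.copy()
    out[(anomaly_mask > 0) & (label_map == 0)] = anomaly_class
    return out

# toy example: 0 = background, 1 = manually labelled lesion
labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [0, 0, 0]])
anom = np.array([[1, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
augmented = add_anomaly_class(labels, anom, anomaly_class=2)
print(augmented)  # anomaly pixels outside manual labels become class 2
```

Manual lesion labels take precedence; the weak anomaly class only fills otherwise unlabelled pixels, so no hand annotation is overwritten.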
Affiliation(s)
- Philipp Seeböck: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Computational Imaging Research Lab, Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Austria
- José Ignacio Orlando: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria; Yatiris Group at PLADEMA Institute, CONICET, Universidad Nacional del Centro de la Provincia de Buenos Aires, Gral. Pinto 399, Tandil, Buenos Aires, Argentina
- Martin Michl: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Julia Mai: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Ursula Schmidt-Erfurth: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
- Hrvoje Bogunović: Lab for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University of Vienna, Austria
3
Romanov S, Howell S, Harkness E, Bydder M, Evans DG, Squires S, Fergie M, Astley S. Artificial Intelligence for Image-Based Breast Cancer Risk Prediction Using Attention. Tomography 2023; 9:2103-2115. [PMID: 38133069] [PMCID: PMC10747439] [DOI: 10.3390/tomography9060165]
Abstract
Accurate prediction of individual breast cancer risk paves the way for personalised prevention and early detection. Incorporating genetic information and breast density has been shown to improve predictions for existing models, but detailed image-based features are yet to be included despite correlating with risk. Complex information can be extracted from mammograms using deep-learning algorithms; however, this is a challenging area of research, partly due to the lack of data within the field and partly due to the computational burden. We propose an attention-based Multiple Instance Learning (MIL) model that can make accurate short-term risk predictions from full-resolution mammograms taken prior to the detection of cancer. During model development, current screen-detected cancers are mixed in with priors to promote the detection both of features associated specifically with risk and of features associated with cancer formation, in addition to alleviating data scarcity. MAI-risk achieves an AUC of 0.747 [0.711, 0.783] in cancer-free screening mammograms of women who went on to develop a screen-detected or interval cancer between 5 and 55 months later, outperforming both IBIS (AUC 0.594 [0.557, 0.633]) and VAS (AUC 0.649 [0.614, 0.683]) alone when accounting for established clinical risk factors.
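As a rough illustration of the attention-based MIL pooling that such models rely on, the NumPy sketch below aggregates patch embeddings from one mammogram (the "bag") into a single bag-level representation using attention weights. The parameters `V` and `w` are random stand-ins for what would be learned; this is a generic attention-MIL sketch in the spirit of Ilse et al., not the published MAI-risk model:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling (simplified).

    instances: (n, d) patch embeddings from one bag;
    V: (k, d) and w: (k,) are attention parameters, learned in practice
    but fixed at random here purely for illustration.
    Returns the bag-level embedding and per-patch attention weights.
    """
    scores = np.tanh(instances @ V.T) @ w   # (n,) unnormalised attention
    alpha = softmax(scores)                 # weights over patches, sum to 1
    return alpha @ instances, alpha         # attention-weighted average

rng = np.random.default_rng(42)
patches = rng.normal(size=(6, 8))           # 6 patches, 8-dim embeddings
V, w = rng.normal(size=(4, 8)), rng.normal(size=4)
bag, alpha = attention_mil_pool(patches, V, w)
print(bag.shape, alpha.round(3))
```

The attention weights `alpha` are also what yields the interpretability mentioned in these papers: high-weight patches indicate which image regions drove the bag-level prediction.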
Affiliation(s)
- Stepan Romanov: Division of Informatics, Imaging and Data Science, University of Manchester, Manchester M13 9PT, UK
- Sacha Howell: Division of Cancer Sciences, University of Manchester, Manchester M20 4GJ, UK; Department of Medical Oncology, The Christie NHS Foundation Trust, Manchester M20 4BX, UK; The Nightingale Centre, Manchester University NHS Foundation Trust, Manchester M23 9LT, UK
- Elaine Harkness: Division of Informatics, Imaging and Data Science, University of Manchester, Manchester M13 9PT, UK
- Megan Bydder: The Nightingale Centre, Manchester University NHS Foundation Trust, Manchester M23 9LT, UK
- D. Gareth Evans: The Nightingale Centre, Manchester University NHS Foundation Trust, Manchester M23 9LT, UK; Division of Evolution, Infection and Genomics, University of Manchester, Manchester M13 9PT, UK
- Steven Squires: Department of Clinical and Biomedical Sciences, University of Exeter, Exeter EX4 4PY, UK
- Martin Fergie: Division of Informatics, Imaging and Data Science, University of Manchester, Manchester M13 9PT, UK
- Sue Astley: Division of Informatics, Imaging and Data Science, University of Manchester, Manchester M13 9PT, UK
4
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. [PMID: 37724586] [PMCID: PMC10613849] [DOI: 10.3348/kjr.2023.0393]
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
Affiliation(s)
- Gil-Sun Hong: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin: Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu: Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
5
Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. [PMID: 37914431] [DOI: 10.1016/j.ejmp.2023.103138]
Abstract
OBJECTIVE: Mammogram-based automatic breast cancer detection has a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for breast cancer screening, yet very few comprehensive surveys have analyzed methods for detecting breast cancer from mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML)- and deep learning (DL)-based breast cancer detection systems.
METHODS: We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also described.
RESULTS: After deliberate investigation, we find that most works classify breast tumors either as normal vs. abnormal or as malignant vs. benign, rather than into three classes. Furthermore, DL-based features are more significant than hand-crafted features. However, transfer learning is preferred over other approaches, as it yields better performance on small datasets, unlike classical DL techniques.
SIGNIFICANCE AND CONCLUSION: This article summarizes recent advancements in artificial intelligence (AI)-based breast cancer detection systems. A number of challenging issues and possible research directions are also mentioned, which will help researchers pursue further work in this field.
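As a minimal illustration of the classical ML side of the pipelines this survey covers (hand-crafted features fed to a conventional classifier), the sketch below extracts simple intensity and edge statistics from toy synthetic "patches" and trains an SVM. The features, data, and the bright-blob abnormality model are invented purely for illustration and do not come from any mammogram dataset:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def handcrafted_features(patch):
    """Toy hand-crafted descriptors of the kind classical CAD pipelines use:
    intensity statistics plus a crude edge-energy measure."""
    gy, gx = np.gradient(patch.astype(float))
    return np.array([patch.mean(), patch.std(),
                     np.abs(gx).mean() + np.abs(gy).mean()])

def make_patch(abnormal):
    """Synthetic 16x16 patch; 'abnormal' ones carry a bright blob."""
    p = rng.normal(0.3, 0.05, (16, 16))
    if abnormal:
        p[6:10, 6:10] += 0.5
    return p

X = np.array([handcrafted_features(make_patch(i % 2 == 1)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])  # 0 = normal, 1 = abnormal

clf = SVC().fit(X[:150], y[:150])          # train on 150 patches
acc = clf.score(X[150:], y[150:])          # evaluate on held-out 50
print(f"held-out accuracy: {acc:.2f}")
```

In real systems the DL-based alternative replaces `handcrafted_features` with a learned (often transfer-learned) feature extractor, which is the comparison the survey's RESULTS section draws.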
Affiliation(s)
- Adyasha Sahu: Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha 769008, India
- Pradeep Kumar Das: School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu 632014, India
- Sukadev Meher: Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha 769008, India
6
You C, Shen Y, Sun S, Zhou J, Li J, Su G, Michalopoulou E, Peng W, Gu Y, Guo W, Cao H. Artificial intelligence in breast imaging: Current situation and clinical challenges. Exploration (Beijing) 2023; 3:20230007. [PMID: 37933287] [PMCID: PMC10582610] [DOI: 10.1002/exp.20230007]
Abstract
Breast cancer ranks among the most prevalent malignant tumours and is the primary contributor to cancer-related deaths in women. Breast imaging is essential for screening, diagnosis, and therapeutic surveillance. With the increasing demand for precision medicine, the heterogeneous nature of breast cancer makes it necessary to deeply mine and rationally utilize the tremendous amount of breast imaging information. With the rapid advancement of computer science, artificial intelligence (AI) has been noted to have great advantages in processing and mining of image information. Therefore, a growing number of scholars have started to focus on and research the utility of AI in breast imaging. Here, an overview of breast imaging databases and recent advances in AI research are provided, the challenges and problems in this field are discussed, and then constructive advice is further provided for ongoing scientific developments from the perspective of the National Natural Science Foundation of China.
Affiliation(s)
- Chao You: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yiyuan Shen: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shiyun Sun: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiayin Zhou: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Jiawei Li: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Guanhua Su: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Department of Breast Surgery, Key Laboratory of Breast Cancer in Shanghai, Fudan University Shanghai Cancer Center, Shanghai, China
- Weijun Peng: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Weisheng Guo: Department of Minimally Invasive Interventional Radiology, Key Laboratory of Molecular Target and Clinical Pharmacology, School of Pharmaceutical Sciences and The Second Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
- Heqi Cao: Department of Health Sciences, National Natural Science Foundation of China, Beijing, China
7
Bobowicz M, Rygusik M, Buler J, Buler R, Ferlin M, Kwasigroch A, Szurowska E, Grochowski M. Attention-Based Deep Learning System for Classification of Breast Lesions: Multimodal, Weakly Supervised Approach. Cancers (Basel) 2023; 15:2704. [PMID: 37345041] [DOI: 10.3390/cancers15102704]
Abstract
Breast cancer is the most frequent female cancer, with a considerable disease burden and high mortality. Early diagnosis with screening mammography might be facilitated by automated systems supported by deep learning artificial intelligence. We propose a model based on a weakly supervised Clustering-constrained Attention Multiple Instance Learning (CLAM) classifier able to train under data scarcity effectively. We used a private dataset with 1174 non-cancer and 794 cancer images labelled at the image level with pathological ground truth confirmation. We used feature extractors (ResNet-18, ResNet-34, ResNet-50 and EfficientNet-B0) pre-trained on ImageNet. The best results were achieved with multimodal-view classification using both CC and MLO images simultaneously, resized by half, with a patch size of 224 px and an overlap of 0.25. It resulted in AUC-ROC = 0.896 ± 0.017, F1-score 81.8 ± 3.2, accuracy 81.6 ± 3.2, precision 82.4 ± 3.3, and recall 81.6 ± 3.2. Evaluation with the Chinese Mammography Database, with 5-fold cross-validation, patient-wise breakdowns, and transfer learning, resulted in AUC-ROC 0.848 ± 0.015, F1-score 78.6 ± 2.0, accuracy 78.4 ± 1.9, precision 78.8 ± 2.0, and recall 78.4 ± 1.9. The CLAM algorithm's attentional maps indicate the features most relevant to the algorithm in the images. Our approach was more effective than in many other studies, allowing for some explainability and identifying erroneous predictions based on the wrong premises.
Affiliation(s)
- Maciej Bobowicz: 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Marlena Rygusik: 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Jakub Buler: Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Rafał Buler: Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Maria Ferlin: Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Arkadiusz Kwasigroch: Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
- Edyta Szurowska: 2nd Department of Radiology, Medical University of Gdansk, 80-214 Gdansk, Poland
- Michał Grochowski: Department of Intelligent Control Systems and Decision Support, Faculty of Electrical and Control Engineering, Gdansk University of Technology, 80-233 Gdansk, Poland
8
A Heuristic Machine Learning-Based Optimization Technique to Predict Lung Cancer Patient Survival. Comput Intell Neurosci 2023; 2023:4506488. [PMID: 36776617] [PMCID: PMC9911240] [DOI: 10.1155/2023/4506488]
Abstract
Cancer has been a significant threat to human health and well-being, posing one of the biggest obstacles in the history of human disease. The high death rate in cancer patients is primarily due to the complexity of the disease and the wide range of clinical outcomes. Increasing the accuracy of prediction is as crucial as predicting the survival rate itself, and this has become a key issue in cancer research. Many models have been suggested to date; however, most of them simply use single genetic data or clinical data to construct prediction models for cancer survival. Present survival studies place much emphasis on determining whether or not a patient will survive five years; the more personal question of how long a lung cancer patient will survive remains unanswered. The proposed technique, based on Naive Bayes and SSA, estimates the overall survival time of lung cancer patients. Two machine learning challenges are derived from a single customized query. The first is the simple binary question of whether a patient will survive for more than five years. The second is to develop a five-year survival model using regression analysis. When forecasting how long a lung cancer patient will survive within five years, the technique's predictions are accurate to within a month in mean absolute error (MAE). Several biomarker genes have been associated with lung cancers. The accuracy, recall, and precision achieved by this algorithm are 98.78%, 98.4%, and 98.6%, respectively.
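The two tasks derived from the single query above (a binary five-year question plus a within-five-years regression) can be sketched on a simulated cohort. The SSA optimization component is omitted, and the features, coefficients, and cohort here are synthetic stand-ins, not the study's data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)

# synthetic cohort: two features loosely driving survival months
X = rng.normal(size=(300, 2))
months = np.clip(36 + 20 * X[:, 0] - 10 * X[:, 1]
                 + rng.normal(0, 5, 300), 1, 120)
five_year = (months > 60).astype(int)

Xtr, Xte = X[:200], X[200:]

# task 1: will the patient survive five years? (binary classification)
nb = GaussianNB().fit(Xtr, five_year[:200])
acc = nb.score(Xte, five_year[200:])

# task 2: how many months of survival? (regression)
reg = LinearRegression().fit(Xtr, months[:200])
mae = mean_absolute_error(months[200:], reg.predict(Xte))
print(f"5-year accuracy: {acc:.2f}, regression MAE: {mae:.1f} months")
```

Splitting the query this way lets the classifier gate the coarse outcome while the regressor answers the finer "how long" question, mirroring the structure the abstract describes.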
9
Fan W, Shangguan W, Bouguila N. Continuous image anomaly detection based on contrastive lifelong learning. Appl Intell 2023. [DOI: 10.1007/s10489-022-04401-7]
10
Anomaly localization in regular textures based on deep convolutional generative adversarial networks. Appl Intell 2022. [DOI: 10.1007/s10489-021-02475-3]
11
Online Learning of Oil Leak Anomalies in Wind Turbines with Block-Based Binary Reservoir. Electronics 2021. [DOI: 10.3390/electronics10222836]
Abstract
The focus of this work is to design a deeply quantized anomaly detector of oil leaks that may happen at the junction between the wind-turbine high-speed shaft and the external bracket of the power generator. We propose a block-based binary shallow echo state network (BBS-ESN) architecture belonging to the reservoir computing (RC) category which, as we believe, also extends the extreme learning machines (ELM) domain. Furthermore, BBS-ESN performs binary block-based online training with fixed and minimal computational complexity, achieving low power consumption and deployability on an off-the-shelf micro-controller (MCU). This has been achieved through binarization of the images and 1-bit quantization of the network weights and activations. 3D rendering has been used to generate a novel, publicly available dataset of photo-realistic images similar to those potentially acquired by image sensors in the field while monitoring the junction, without and with oil leaks. Extensive experimentation conducted using an STM32H743ZI2 MCU running at 480 MHz shows accurate identification of anomalies, with reduced computational cost per image and low memory occupancy. Based on the obtained results, we conclude that BBS-ESN is feasible on off-the-shelf 32-bit MCUs. Moreover, the solution scales with the number of cameras deployed, enabling accurate and fast oil-leak detection from different viewpoints.
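The binarization steps the abstract names (thresholded input images, 1-bit network weights) can be illustrated generically. This is a sketch under assumed conventions (sign-based weights with a per-tensor scale, as in XNOR-Net-style quantization, and a 0.5 threshold for images), not the BBS-ESN implementation:

```python
import numpy as np

def binarize_weights(w):
    """1-bit weight quantization: keep only the sign, plus one per-tensor
    scale so the binary tensor approximates the original magnitudes."""
    scale = np.abs(w).mean()
    return np.sign(w), scale

def binarize_image(img, threshold=0.5):
    """Binarize a [0, 1] grayscale frame before feeding the reservoir,
    shrinking each pixel from a float to a single bit."""
    return (img >= threshold).astype(np.uint8)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
wb, s = binarize_weights(w)        # wb in {-1, +1}, s reconstructs scale
img = rng.random((8, 8))
bi = binarize_image(img)           # bi in {0, 1}

print(np.unique(wb), np.unique(bi), round(s, 3))
```

Restricting weights and activations to single bits is what makes the matrix products reducible to bitwise operations and popcounts, which is the source of the MCU-level efficiency the paper targets.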
12
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. [PMID: 34367938] [PMCID: PMC8339920] [DOI: 10.3389/fonc.2021.600557]
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the common malignant tumors in women and seriously threatens women’s physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Miao Yin: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Mei-Hui Yu: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Jing Yu: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Shu-E Zeng: Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv: Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Jun Li: Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
- Hua-Rong Ye: Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Xin-Wu Cui: Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Christoph F Dietrich: Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
13
Chen Y, Zhang H, Wang Y, Yang Y, Zhou X, Wu QMJ. MAMA Net: Multi-Scale Attention Memory Autoencoder Network for Anomaly Detection. IEEE Trans Med Imaging 2021; 40:1032-1041. [PMID: 33326377] [PMCID: PMC8544938] [DOI: 10.1109/tmi.2020.3045295]
Abstract
Anomaly detection refers to the identification of cases that do not conform to the expected pattern, and it plays a key role in diverse research areas and application domains. Most existing methods can be summarized as anomaly-object-detection-based or reconstruction-error-based techniques. However, due to the difficulty of defining the scope of real-world, high-diversity outliers and the inaccessibility of the inference process, respectively, most of them have not achieved groundbreaking progress. To deal with these imperfections, and motivated by memory-based decision-making and the visual attention mechanism, which acts as a filter selecting environmental information in the human visual perceptual system, in this paper we propose a Multi-scale Attention Memory with hash addressing Autoencoder network (MAMA Net) for anomaly detection. First, to overcome the problems resulting from the restricted stationary receptive field of the convolution operator, we introduce a multi-scale global spatial attention block that can be straightforwardly plugged into any network as a sampling, upsampling, or downsampling function. On account of its efficient feature representation ability, networks can achieve competitive results with only a few such blocks. Second, we observe that a traditional autoencoder can only learn an ambiguous model that also reconstructs anomalies "well", due to the lack of constraints in the training and inference process. To mitigate this challenge, we design a hash-addressing memory module that pushes abnormalities to produce higher reconstruction error for classification. In addition, we couple the mean square error (MSE) with a Wasserstein loss to improve the encoded data distribution. Experiments on various datasets, including two different COVID-19 datasets and one brain MRI (RIDER) dataset, demonstrate the robustness and excellent generalization of the proposed MAMA Net.
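The reconstruction-error principle MAMA Net builds on, fit a model to normal data only and flag inputs it reconstructs poorly, can be demonstrated with a linear stand-in: a PCA "autoencoder" scored by reconstruction MSE. This is a deliberately simplified sketch of the principle, not the memory-augmented deep network itself; the data, subspace, and threshold are invented for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# "normal" data lives near a 2-D subspace of a 10-D space
basis = rng.normal(size=(2, 10))
normal = rng.normal(size=(500, 2)) @ basis + rng.normal(0, 0.05, (500, 10))
anomaly = rng.normal(size=(20, 10))  # off-subspace outliers

# a linear autoencoder (PCA) stands in for the deep model:
# fit on normal data only, then score by reconstruction error
ae = PCA(n_components=2).fit(normal)

def recon_error(x):
    return ((x - ae.inverse_transform(ae.transform(x))) ** 2).mean(axis=1)

# threshold at the 99th percentile of normal-data error
thr = np.quantile(recon_error(normal), 0.99)
flags = recon_error(anomaly) > thr
print(f"{flags.mean():.0%} of anomalies flagged")
```

The failure mode the paper targets is visible in this framing: a model with too much capacity (or, here, too many components) also reconstructs anomalies well, which is why MAMA Net constrains reconstruction through its memory module.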
Affiliation(s)
- Yurong Chen
- National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Hui Zhang
- National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Yaonan Wang
- National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Yimin Yang
- College of Computer Science, Lakehead University, Thunder Bay, ON P7B 5E1, Canada
- Xianen Zhou
- National Engineering Laboratory of Robot Visual Perception and Control Technology, School of Robotics, Hunan University, Changsha 410082, China
- Q. M. Jonathan Wu
- College of Electrical and Computer Engineering, University of Windsor, Windsor, ON N9B 3P4, Canada
14
Li J, Li W, Sisk A, Ye H, Wallace WD, Speier W, Arnold CW. A multi-resolution model for histopathology image classification and localization with multiple instance learning. Comput Biol Med 2021; 131:104253. [PMID: 33601084] [DOI: 10.1016/j.compbiomed.2021.104253]
Abstract
Large numbers of histopathological images have been digitized into high-resolution whole slide images, opening opportunities to develop computational image analysis tools that reduce pathologists' workload and potentially improve inter- and intra-observer agreement. Most previous work on whole slide image analysis has focused on classification or segmentation of small pre-selected regions of interest, which requires fine-grained annotation and is non-trivial to extend to large-scale whole slide analysis. In this paper, we propose a multi-resolution multiple instance learning model that leverages saliency maps to detect suspicious regions for fine-grained grade prediction. Instead of relying on expensive region- or pixel-level annotations, our model can be trained end-to-end with only slide-level labels. The model was developed on a large-scale prostate biopsy dataset containing 20,229 slides from 830 patients. It achieved 92.7% accuracy and a Cohen's kappa of 81.8% for benign, low-grade (i.e., grade group 1), and high-grade (i.e., grade group ≥ 2) prediction, an area under the receiver operating characteristic curve (AUROC) of 98.2%, and an average precision (AP) of 97.4% for differentiating malignant and benign slides. The model obtained an AUROC of 99.4% and an AP of 99.8% for cancer detection on an external dataset.
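The slide-level-label setting described here is the standard MIL formulation: a slide (bag) is positive if at least one patch (instance) is positive. A minimal sketch of that aggregation rule, using simple max pooling rather than the paper's learned multi-resolution saliency model (the scores and threshold below are illustrative assumptions):

```python
import numpy as np

def bag_score(instance_scores):
    """Standard MIL assumption: a bag (slide) is positive iff at least
    one instance (patch) is positive, so aggregate with max."""
    return float(np.max(instance_scores))

def bag_label(instance_scores, threshold=0.5):
    """Slide-level decision from patch-level scores only."""
    return int(bag_score(instance_scores) >= threshold)

# A 'slide' of mostly benign patches with one suspicious patch:
positive_slide = np.array([0.05, 0.10, 0.92, 0.08])
negative_slide = np.array([0.05, 0.10, 0.12, 0.08])

assert bag_label(positive_slide) == 1  # one strong patch flips the slide
assert bag_label(negative_slide) == 0  # no patch exceeds the threshold
```

Training end-to-end through such an aggregator is what lets the model localize suspicious regions without region- or pixel-level annotations.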
Affiliation(s)
- Jiayun Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Anthony Sisk
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- Huihui Ye
- Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
- W Dean Wallace
- Department of Pathology, USC, 2011 Zonal Avenue, Los Angeles, CA, 90033, USA
- William Speier
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA
- Corey W Arnold
- Computational Diagnostics Lab, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Radiology, UCLA, 924 Westwood Blvd Suite 600, Los Angeles, CA, 90024, USA; Department of Pathology & Laboratory Medicine, UCLA, 10833 Le Conte Ave, Los Angeles, CA, 90095, USA
15
Kushwaha S, Bahl S, Bagha AK, Parmar KS, Javaid M, Haleem A, Singh RP. Significant Applications of Machine Learning for COVID-19 Pandemic. Journal of Industrial Integration and Management 2020. [DOI: 10.1142/s2424862220500268]
Abstract
Machine learning is an innovative approach with extensive applications in prediction. It can be applied to the COVID-19 pandemic to identify patients at high risk, estimate mortality, and detect other abnormalities, and it can help characterize the virus and predict upcoming issues. This literature-based review was conducted by searching for relevant papers on machine learning for COVID-19 in SCOPUS, Academia, Google Scholar, PubMed, and ResearchGate. The research discusses the significance of machine learning in resolving the COVID-19 pandemic crisis and studies how machine learning algorithms and methods can be employed to fight the virus and the pandemic. It further discusses the primary machine learning methods that are helpful during the COVID-19 pandemic and identifies the algorithms used in machine learning and their significant applications. Machine learning is a useful technique in various areas, for example in identifying existing drugs that may be advantageous for treating COVID-19 patients. Unsupervised learning algorithms draw inferences from unlabeled input datasets, so unlabeled COVID-19 data can serve directly as an input resource, yielding accurate and useful features compared with traditional, explicitly calculation-based methods. The technique is also beneficial for predicting healthcare risk during the COVID-19 crisis and for analyzing risk factors such as age, social habits, location, and climate.
Affiliation(s)
- Shashi Kushwaha
- Department of Mechanical Engineering, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar 144011, India
- Shashi Bahl
- Department of Mechanical Engineering, I. K. Gujral Punjab Technical University Hoshiarpur Campus, Hoshiarpur 146001, India
- Ashok Kumar Bagha
- Department of Mechanical Engineering, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar 144011, India
- Kulwinder Singh Parmar
- Department of Mathematical Sciences, I. K. Gujral Punjab Technical University Hoshiarpur Campus, Hoshiarpur 146001, India
- Mohd Javaid
- Department of Mechanical Engineering, Jamia Millia Islamia, New Delhi 110025, India
- Abid Haleem
- Department of Mechanical Engineering, Jamia Millia Islamia, New Delhi 110025, India
- Ravi Pratap Singh
- Department of Industrial and Production Engineering, Dr. B. R. Ambedkar National Institute of Technology, Jalandhar 144011, India
16
Vu T, Lai P, Raich R, Pham A, Fern XZ, Rao UA. A Novel Attribute-Based Symmetric Multiple Instance Learning for Histopathological Image Analysis. IEEE Trans Med Imaging 2020; 39:3125-3136. [PMID: 32305904] [PMCID: PMC7561004] [DOI: 10.1109/tmi.2020.2987796]
Abstract
Histopathological image analysis is a challenging task due to the diversity of histology features and the presence of large non-informative regions in whole slide images. In this paper, we propose a multiple-instance learning (MIL) method for image-level classification as well as for annotating relevant regions in the image. In MIL, a common assumption is that negative bags contain only negative instances while positive bags contain one or more positive instances. This asymmetric assumption may be inappropriate for application scenarios where negative bags also contain representative negative instances. We introduce a novel symmetric MIL framework that associates each instance in a bag with an attribute, which can be negative, positive, or irrelevant, and we extend the notion of relevance by introducing control over the number of relevant instances. We develop a probabilistic graphical model that incorporates this paradigm, together with a computationally efficient inference procedure for learning the model parameters and obtaining an instance-level attribute-learning classifier. The effectiveness of the proposed method is evaluated on available histopathology datasets with promising results.
17
Gnanasekaran VS, Joypaul S, Sundaram PM. A Survey on Machine Learning Algorithms for the Diagnosis of Breast Masses with Mammograms. Curr Med Imaging 2020; 16:639-652. [DOI: 10.2174/1573405615666190903141554]
Abstract
Breast cancer has been the leading cancer among women for the past 60 years. There are no effective mechanisms for completely preventing breast cancer; rather, it can be detected at earlier stages so that unnecessary biopsies are reduced. Although several imaging modalities are available for capturing abnormalities in the breast, mammography is the most commonly used technique because of its low cost. Computer-Aided Detection (CAD) systems play a key role in analyzing mammogram images to diagnose abnormalities and assist radiologists in diagnosis. This paper provides an outline of the state-of-the-art machine learning algorithms developed in recent years for the detection of breast cancer. We begin the review with a concise introduction to the fundamental concepts related to mammograms and CAD systems, and then focus on the techniques used in the diagnosis of breast cancer with mammograms.
Affiliation(s)
- Sutha Joypaul
- AAA College of Engineering and Technology, Sivakasi 626123, Virudhunagar District, Tamil Nadu, India
18
A review of breast boundary and pectoral muscle segmentation methods in computer-aided detection/diagnosis of breast mammography. Artif Intell Rev 2020. [DOI: 10.1007/s10462-019-09721-8]
19
Jeong Y, Rachmadi MF, Valdés-Hernández MDC, Komura T. Dilated Saliency U-Net for White Matter Hyperintensities Segmentation Using Irregularity Age Map. Front Aging Neurosci 2019; 11:150. [PMID: 31316369] [PMCID: PMC6610522] [DOI: 10.3389/fnagi.2019.00150]
Abstract
White matter hyperintensities (WMH) appear as regions of abnormally high signal intensity on T2-weighted magnetic resonance imaging (MRI) sequences. In particular, WMH have been noteworthy in age-related neuroscience as a crucial biomarker for all types of dementia and brain aging processes. Automatic WMH segmentation is challenging because of their variable intensity range, size, and shape. U-Net tackles this problem through dense prediction and has shown competitive performance not only on WMH segmentation/detection but also on varied image segmentation tasks; however, its network architecture is highly complex. In this study, we propose the use of Saliency U-Net and an irregularity age map (IAM) to decrease the U-Net architectural complexity without performance loss. We trained Saliency U-Net using both a T2-FLAIR MRI sequence and its corresponding IAM. Since the IAM points to image intensity irregularities in the MRI slice, in which WMH are possibly included, Saliency U-Net performs better than the original U-Net trained only on T2-FLAIR. The best performance was achieved with fewer parameters and shorter training time. Moreover, applying dilated convolution enhanced Saliency U-Net by recognizing the shape of large WMH more accurately through multi-context learning. This network, named Dilated Saliency U-Net, improved the Dice coefficient score to 0.5588, the best among our experimental models, and recorded a relatively good sensitivity of 0.4747 with the shortest training time and the fewest parameters. In conclusion, based on our experimental results, incorporating IAM through Dilated Saliency U-Net is an appropriate approach for WMH segmentation.
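The Dice coefficient reported here (0.5588) measures the overlap between a predicted mask P and a ground-truth mask G as 2|P∩G|/(|P|+|G|). A minimal sketch for binary masks (the toy masks below are illustrative):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity for binary masks: 2|P∩G| / (|P| + |G|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)

pred   = np.array([[0, 1, 1],
                   [0, 1, 0]])
target = np.array([[0, 1, 0],
                   [1, 1, 0]])
# |P| = 3, |G| = 3, |P∩G| = 2  ->  Dice = 2*2 / (3+3) ≈ 0.667
print(round(dice(pred, target), 3))
```

A score of 1.0 means perfect overlap and 0.0 none, so the 0.5588 above indicates a moderate but best-in-study overlap for these small, irregular lesions.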
Affiliation(s)
- Yunhee Jeong
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
- Muhammad Febrian Rachmadi
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom; Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Taku Komura
- School of Informatics, University of Edinburgh, Edinburgh, United Kingdom
20
Yousefi M, Krzyżak A, Suen CY. Mass detection in digital breast tomosynthesis data using convolutional neural networks and multiple instance learning. Comput Biol Med 2018; 96:283-293. [PMID: 29665537] [DOI: 10.1016/j.compbiomed.2018.04.004]
Abstract
Digital breast tomosynthesis (DBT) was developed for breast cancer screening as a new tomographic technique that minimizes the limitations of conventional digital mammography. This paper describes a computer-aided detection (CAD) framework for mass detection in DBT. The proposed framework operates on a set of two-dimensional (2D) slices: through plane-to-plane analysis of corresponding 2D slices from each DBT volume, it automatically learns complex patterns via a deep convolutional neural network (DCNN), then applies multiple instance learning (MIL) with a randomized trees approach to classify DBT images based on the information extracted from the 2D slices. The framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. The empirical results demonstrate that it achieves much better performance than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBT.
Affiliation(s)
- Mina Yousefi
- Department of Computer Science and Software Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec H3G 1M8, Canada
- Adam Krzyżak
- Department of Computer Science and Software Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec H3G 1M8, Canada
- Ching Y Suen
- Department of Computer Science and Software Engineering, Concordia University, 1455 De Maisonneuve Blvd. W, Montreal, Quebec H3G 1M8, Canada
21
Yassin NIR, Omran S, El Houby EMF, Allam H. Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: A systematic review. Comput Methods Programs Biomed 2018; 156:25-45. [PMID: 29428074] [DOI: 10.1016/j.cmpb.2017.12.012]
Abstract
BACKGROUND AND OBJECTIVE The incidence of breast cancer in women has increased significantly in recent years. Physicians' diagnosis and detection of breast cancer can be assisted by computerized feature extraction and classification algorithms. This paper presents the conduct and results of a systematic review (SR) that investigates the state of the art in computer-aided diagnosis/detection (CAD) systems for breast cancer. METHODS The SR was conducted using a comprehensive selection of scientific databases as reference sources, allowing access to diverse publications in the field: Springer Link (SL), Science Direct (SD), IEEE Xplore Digital Library, and PubMed. Inclusion and exclusion criteria were defined and applied to each retrieved work to select those of interest; of 320 studies retrieved, 154 were included. The scope of this research is limited to scientific and academic works and excludes commercial interests. RESULTS This survey provides a general analysis of the current status of CAD systems according to the image modalities used and the machine-learning-based classifiers. Potential research directions are discussed for creating more objective and efficient CAD systems.
Affiliation(s)
- Nisreen I R Yassin
- Systems & Information Department, Engineering Research Division, National Research Centre, Dokki, Cairo 12311, Egypt
- Shaimaa Omran
- Systems & Information Department, Engineering Research Division, National Research Centre, Dokki, Cairo 12311, Egypt
- Enas M F El Houby
- Systems & Information Department, Engineering Research Division, National Research Centre, Dokki, Cairo 12311, Egypt
- Hemat Allam
- Anaesthesia & Pain, Medical Division, National Research Centre, Dokki, Cairo 12311, Egypt
22
Liu C, Huang Y, Ozolek JA, Hanna MG, Singh R, Rohde GK. SetSVM: An Approach to Set Classification in Nuclei-Based Cancer Detection. IEEE J Biomed Health Inform 2018; 23:351-361. [PMID: 29994380] [DOI: 10.1109/jbhi.2018.2803793]
Abstract
Due to the importance of nuclear structure in cancer diagnosis, several predictive models have been described for diagnosing a wide variety of cancers based on nuclear morphology. In many computer-aided diagnosis (CAD) systems, cancer detection can be formulated as a set classification problem, which cannot be solved directly by classifying single instances. In this paper, we propose a novel set classification approach, SetSVM, that builds a predictive model by considering a nuclei set as a whole, without specific assumptions. SetSVM offers highly discriminative power in cancer detection because it not only optimizes the classifier decision boundary but also transfers discriminative information to the learning of the set representation; during training, these two processes are unified in the support vector machine (SVM) maximum-separation-margin problem. Experimental results show that SetSVM provides significant improvements over five commonly used approaches in cancer detection tasks spanning 260 patients across three cancer types: thyroid cancer, liver cancer, and melanoma. In addition, SetSVM enables visual interpretation of the discriminative nuclear characteristics of a nuclei set. These features make SetSVM a potentially practical tool for building accurate and interpretable CAD systems for cancer detection.
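The set-classification formulation — classify a whole variable-size set of nuclei rather than single instances — can be illustrated with a common baseline: pool each set into a fixed-length representation, then classify that vector. This is not the paper's SetSVM, which learns the set representation jointly with the SVM margin; the mean/std pooling, the nearest-centroid classifier, and the toy data below are all illustrative assumptions.

```python
import numpy as np

def set_representation(nuclei_features):
    """Pool a variable-size set of per-nucleus feature vectors
    (n_nuclei x n_features) into one fixed-length vector."""
    X = np.asarray(nuclei_features, dtype=float)
    return np.concatenate([X.mean(axis=0), X.std(axis=0)])

def nearest_centroid_fit(sets, labels):
    """One centroid per class in the pooled-representation space."""
    reps = np.array([set_representation(s) for s in sets])
    labels = np.asarray(labels)
    return {c: reps[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_centroid_predict(centroids, nuclei_set):
    r = set_representation(nuclei_set)
    return min(centroids, key=lambda c: np.linalg.norm(r - centroids[c]))

# Toy data: 'malignant' sets have larger, more variable nuclei features.
rng = np.random.default_rng(1)
benign    = [rng.normal(1.0, 0.1, size=(rng.integers(5, 15), 4)) for _ in range(20)]
malignant = [rng.normal(2.0, 0.5, size=(rng.integers(5, 15), 4)) for _ in range(20)]
model = nearest_centroid_fit(benign + malignant, [0] * 20 + [1] * 20)
```

SetSVM's contribution, relative to a fixed pooling like this, is that the set-to-vector mapping itself is optimized inside the SVM margin objective.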
23
Quellec G, Charrière K, Boudi Y, Cochener B, Lamard M. Deep image mining for diabetic retinopathy screening. Med Image Anal 2017; 39:178-193. [PMID: 28511066] [DOI: 10.1016/j.media.2017.04.012]
Abstract
Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates, and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.
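The heatmap idea — identifying which pixels drive an image-level prediction — can be illustrated with the simplest such technique, occlusion sensitivity: mask each region in turn and record how much the classifier's score drops. This differs from the paper's backpropagation-based generalization; the toy "classifier" and patch size below are illustrative assumptions.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=2):
    """Score drop when each patch is zeroed out: a bigger drop marks a
    region that is more important for the image-level prediction."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Stand-in 'classifier' that responds only to the top-left quadrant
# (a trained ConvNet would take its place in practice).
score_fn = lambda img: float(img[:4, :4].sum())

image = np.ones((8, 8))
heat = occlusion_heatmap(image, score_fn)
# Only occlusions overlapping the top-left quadrant lower the score,
# so the heatmap highlights exactly that region.
```

The paper's contribution is to obtain such maps at far higher quality and resolution via a trained backpropagation pass rather than exhaustive masking.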
Affiliation(s)
- Gwenolé Quellec
- Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Katia Charrière
- IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Yassine Boudi
- IMT Atlantique, Département ITI, Technopôle Brest-Iroise, CS 83818, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
- Béatrice Cochener
- Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France; Service d'Ophtalmologie, CHRU Brest, 2 avenue Foch, Brest F-29200, France
- Mathieu Lamard
- Université de Bretagne Occidentale, 3 rue des Archives, Brest F-29200, France; Inserm, UMR 1101, 22 avenue Camille-Desmoulins, Brest F-29200, France
24
Quellec G, Cazuguel G, Cochener B, Lamard M. Multiple-Instance Learning for Medical Image and Video Analysis. IEEE Rev Biomed Eng 2017; 10:213-234. [DOI: 10.1109/rbme.2017.2651164]