1
Tejaswi VSD, Rachapudi V. Liver Cancer Diagnosis: Enhanced Deep Maxout Model with Improved Feature Set. Cancer Invest 2024:1-16. PMID: 39189645. DOI: 10.1080/07357907.2024.2391359.
Abstract
This work proposes a liver cancer classification scheme comprising preprocessing, segmentation, feature extraction, and classification stages. The source images are pre-processed using Gaussian filtering. For segmentation, the work proposes an adaptive thresholding process based on a LUV color-space transformation. After segmentation, multi-texton features, Improved Local Ternary Pattern (ILTP) features, and GLCM features are extracted. In the classification phase, an improved Deep Maxout model is proposed for liver cancer detection. The adopted scheme is evaluated against other schemes on various metrics. At a learning percentage of 60%, the improved Deep Maxout model achieved a higher F-measure (0.94) for classifying liver cancer than existing methods, including Support Vector Machine (SVM), Random Forest (RF), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), K-Nearest Neighbor (KNN), Deep Maxout, Convolutional Neural Network (CNN), and a DL model. The improved Deep Maxout model also achieved the lowest False Positive Rate (FPR) and False Negative Rate (FNR) among the compared models for liver cancer classification.
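The Deep Maxout classifier referenced above is built from maxout units (Goodfellow et al.), which apply several learned affine maps to the input and take the elementwise maximum over the pieces. A minimal NumPy sketch of one maxout layer (not the authors' implementation; shapes and names are illustrative):

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: k affine maps of the input, elementwise max over the
    k pieces. x: (d,), W: (k, m, d), b: (k, m) -> returns (m,)."""
    z = np.einsum("kmd,d->km", W, x) + b   # (k, m) pre-activations
    return z.max(axis=0)                   # piecewise-linear max over pieces

# With k = 2 pieces a maxout unit can represent |x| = max(x, -x).
W = np.array([[[1.0]], [[-1.0]]])   # (k=2, m=1, d=1)
b = np.zeros((2, 1))
print(maxout(np.array([-3.0]), W, b))  # → [3.]
```

Because the unit learns its own piecewise-linear activation, a "deep maxout" network stacks such layers instead of fixed ReLU/sigmoid nonlinearities.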
Affiliation(s)
- Venubabu Rachapudi
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, India
2
Claudio Quiros A, Coudray N, Yeaton A, Yang X, Liu B, Le H, Chiriboga L, Karimkhan A, Narula N, Moore DA, Park CY, Pass H, Moreira AL, Le Quesne J, Tsirigos A, Yuan K. Mapping the landscape of histomorphological cancer phenotypes using self-supervised learning on unannotated pathology slides. Nat Commun 2024;15:4596. PMID: 38862472. DOI: 10.1038/s41467-024-48666-7. Open Access.
Abstract
Cancer diagnosis and management depend upon the extraction of complex information from microscopy images by pathologists, which requires time-consuming expert interpretation prone to human bias. Supervised deep learning approaches have proven powerful, but are inherently limited by the cost and quality of annotations used for training. Therefore, we present Histomorphological Phenotype Learning, a self-supervised methodology requiring no labels and operating via the automatic discovery of discriminatory features in image tiles. Tiles are grouped into morphologically similar clusters which constitute an atlas of histomorphological phenotypes (HP-Atlas), revealing trajectories from benign to malignant tissue via inflammatory and reactive phenotypes. These clusters have distinct features which can be identified using orthogonal methods, linking histologic, molecular and clinical phenotypes. Applied to lung cancer, we show that they align closely with patient survival, with histopathologically recognised tumor types and growth patterns, and with transcriptomic measures of immunophenotype. These properties are maintained in a multi-cancer study.
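The grouping of tiles into morphologically similar clusters can be illustrated with a generic clustering of tile embeddings. The paper's actual pipeline uses self-supervised features and its own clustering procedure; this NumPy k-means is only a stand-in to show the grouping idea, with toy embeddings:

```python
import numpy as np

def cluster_tiles(X, k, iters=20):
    """Tiny Lloyd's k-means with farthest-point initialization:
    group tile embeddings into k morphology clusters."""
    centers = [X[0]]
    for _ in range(k - 1):                       # pick spread-out seeds
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.stack(centers)
    for _ in range(iters):                       # alternate assign / update
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2).argmin(axis=1)
        centers = np.stack([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 8)),    # e.g. benign-like tiles
               rng.normal(5.0, 0.1, (20, 8))])   # e.g. tumor-like tiles
labels = cluster_tiles(X, k=2)
print(np.bincount(labels))  # → [20 20]
```

Once tiles are grouped, each cluster can be inspected by orthogonal means (histology, molecular data) exactly as the HP-Atlas construction does at much larger scale.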
Affiliation(s)
- Adalberto Claudio Quiros
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Nicolas Coudray
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA
- Department of Cell Biology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Anna Yeaton
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Xinyu Yang
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- Bojing Liu
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Solna, Sweden
- Hortense Le
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Luis Chiriboga
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Afreen Karimkhan
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Navneet Narula
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- David A Moore
- Department of Cellular Pathology, University College London Hospital, London, UK
- Cancer Research UK Lung Cancer Centre of Excellence, University College London Cancer Institute, London, UK
- Christopher Y Park
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Harvey Pass
- Department of Cardiothoracic Surgery, NYU Grossman School of Medicine, New York, NY, USA
- Andre L Moreira
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- John Le Quesne
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK
- Queen Elizabeth University Hospital, Greater Glasgow and Clyde NHS Trust, Glasgow, Scotland, UK
- Aristotelis Tsirigos
- Applied Bioinformatics Laboratories, NYU Grossman School of Medicine, New York, NY, USA
- Department of Medicine, Division of Precision Medicine, NYU Grossman School of Medicine, New York, NY, USA
- Department of Pathology, NYU Grossman School of Medicine, New York, NY, USA
- Ke Yuan
- School of Computing Science, University of Glasgow, Glasgow, Scotland, UK
- School of Cancer Sciences, University of Glasgow, Glasgow, Scotland, UK
- Cancer Research UK Scotland Institute, Glasgow, Scotland, UK
3
Khan R, Su L, Zaman A, Hassan H, Kang Y, Huang B. Customized m-RCNN and hybrid deep classifier for liver cancer segmentation and classification. Heliyon 2024;10:e30528. PMID: 38765046. PMCID: PMC11096931. DOI: 10.1016/j.heliyon.2024.e30528. Open Access.
Abstract
Diagnosing liver disease presents a significant medical challenge, particularly in impoverished countries, with liver disease claiming millions of lives worldwide each year. Existing models for detecting liver abnormalities suffer from limited accuracy and other performance constraints, so there is a pressing need for more efficient and effective liver disease detection methods. To address these limitations, this work introduces a deep liver segmentation and classification system based on a Customized Mask Region-based Convolutional Neural Network (cm-RCNN). The process begins with preprocessing the input liver image using Adaptive Histogram Equalization (AHE), which dehazes the image, removes color distortion, and applies linear transformations to obtain the preprocessed image. Next, a precise region of interest is segmented from the preprocessed image using the cm-RCNN; to enhance segmentation accuracy, the architecture incorporates the ReLU activation function and a modified sigmoid activation function. A variety of features are then extracted from the segmented image, including ResNet features, shape features (area, perimeter, approximation, and convex hull), and an enhanced median binary pattern. These features are used to train a hybrid classification model combining SqueezeNet and Deep Maxout classifiers, and the final classification is obtained by averaging the scores of the two classifiers.
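The final step above, averaging the scores of the SqueezeNet and Deep Maxout classifiers, is a standard soft-voting ensemble. A minimal NumPy sketch under assumed logit inputs (toy values, not the authors' trained models):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def average_scores(logits_a, logits_b):
    """Soft-voting ensemble: average the two classifiers' class
    probabilities, then take the argmax as the final decision."""
    probs = 0.5 * (softmax(logits_a) + softmax(logits_b))
    return probs.argmax(axis=-1), probs

# Toy logits: classifier A is confident in class 1, B mildly prefers class 0.
la = np.array([[0.2, 3.0]])
lb = np.array([[1.0, 0.5]])
pred, probs = average_scores(la, lb)
print(pred)  # → [1]
```

Averaging in probability space (rather than logit space) lets a confident classifier outweigh a lukewarm one, which is the usual motivation for this fusion rule.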
Affiliation(s)
- Rashid Khan
- College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Liyilei Su
- College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
- Asim Zaman
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Haseeb Hassan
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Yan Kang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, 518060, China
- College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, 518188, China
- Bingding Huang
- College of Applied Sciences, Shenzhen University, Shenzhen, 518060, China
- College of Big Data and Internet, Shenzhen Technology University, Shenzhen, 518188, China
4
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024;18:422-434. PMID: 38376649. DOI: 10.1007/s12072-023-10630-w.
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessment holds promise for meeting the current demands of precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatic quantitative assessment of complex medical image characteristics, has made great strides in complementing clinicians' qualitative interpretation of medical imaging. Here, we review the current state of medical-imaging-based AI methodologies and their applications to the management of liver diseases. We summarize representative AI methodologies in liver imaging, with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, may shape the future of liver disease care.
Affiliation(s)
- Peng Zhang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang
- Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong
- CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li
- Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi
- Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China
5
Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023;97:70-85. PMID: 37832751. DOI: 10.1016/j.semcancer.2023.09.006.
Abstract
Artificial intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades), for which AI-identified regions of interest (ROIs) within whole-slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping), which are not verified via H&E but rather by overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and, finally, commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will surmount these limitations and realize the many opportunities of AI-driven histopathology for the benefit of oncology.
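A common family of one-label-per-slide methods in this literature is attention-based multiple-instance learning, where one slide-level label supervises a prediction aggregated over many unannotated tiles. A minimal NumPy sketch of attention pooling in the spirit of Ilse et al. (2018); the weights here are random placeholders, not a trained model:

```python
import numpy as np

def attention_mil(tiles, V, w):
    """Attention-based MIL pooling: aggregate many tile feature vectors
    into one slide-level embedding, with learned weights selecting the
    informative tiles."""
    h = np.tanh(tiles @ V)            # (n_tiles, d_attn) hidden projection
    scores = h @ w                    # (n_tiles,) per-tile relevance scores
    a = np.exp(scores - scores.max())
    a = a / a.sum()                   # softmax attention over tiles
    return a @ tiles, a               # slide embedding, tile weights

rng = np.random.default_rng(0)
tiles = rng.normal(size=(6, 4))       # 6 tiles, 4-dim features (toy)
V = rng.normal(size=(4, 3))           # attention projection (untrained)
w = rng.normal(size=3)
slide_emb, attn = attention_mil(tiles, V, w)
print(slide_emb.shape, round(float(attn.sum()), 6))  # → (4,) 1.0
```

The attention weights `attn` are what makes such models interpretable: high-weight tiles can be displayed back to the pathologist as the evidence for the slide label.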
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
6
Albrecht T, Rossberg A, Albrecht JD, Nicolay JP, Straub BK, Gerber TS, Albrecht M, Brinkmann F, Charbel A, Schwab C, Schreck J, Brobeil A, Flechtenmacher C, von Winterfeld M, Köhler BC, Springfeld C, Mehrabi A, Singer S, Vogel MN, Neumann O, Stenzinger A, Schirmacher P, Weis CA, Roessler S, Kather JN, Goeppert B. Deep Learning-Enabled Diagnosis of Liver Adenocarcinoma. Gastroenterology 2023;165:1262-1275. PMID: 37562657. DOI: 10.1053/j.gastro.2023.07.026.
Abstract
BACKGROUND & AIMS: Diagnosis of adenocarcinoma in the liver is a frequent scenario in routine pathology and has a critical impact on clinical decision making. However, rendering a correct diagnosis can be challenging and often requires the integration of clinical, radiologic, and immunohistochemical information. We present a deep learning model (HEPNET) to distinguish intrahepatic cholangiocarcinoma from colorectal liver metastasis, the most frequent primary and secondary forms of liver adenocarcinoma, with clinical-grade accuracy using H&E-stained whole-slide images. METHODS: HEPNET was trained on 714,589 image tiles from 456 patients, randomly selected in a stratified manner from a pool of 571 patients who underwent surgical resection or biopsy at Heidelberg University Hospital. Model performance was evaluated on a hold-out internal test set of 115 patients and externally validated on 159 patients recruited at Mainz University Hospital. RESULTS: On the hold-out internal test set, HEPNET achieved an area under the receiver operating characteristic curve (AUROC) of 0.994 (95% CI, 0.989-1.000) and a patient-level accuracy of 96.522% (95% CI, 94.521%-98.694%). Validation on the external test set yielded an AUROC of 0.997 (95% CI, 0.995-1.000), corresponding to an accuracy of 98.113% (95% CI, 96.907%-100.000%). HEPNET surpassed the performance of 6 pathology experts with different levels of experience in a reader study of 50 patients (P = .0005), boosted the performance of resident pathologists to the level of senior pathologists, and reduced potential downstream analyses. CONCLUSIONS: We provide a ready-to-use tool with clinical-grade performance that may facilitate routine pathology by rendering a definitive diagnosis and guiding ancillary testing. Incorporating HEPNET into pathology laboratories may optimize the diagnostic workflow, complemented by test-related labor and cost savings.
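The headline metric above, the area under the receiver operating characteristic curve, can be computed directly from patient-level scores via the Mann-Whitney identity. A small self-contained sketch with toy labels and scores (not the study's data):

```python
import numpy as np

def auroc(y_true, scores):
    """Patient-level AUROC via the Mann-Whitney identity: the probability
    that a random positive patient outscores a random negative (ties 0.5)."""
    pos = scores[y_true == 1][:, None]
    neg = scores[y_true == 0][None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

y = np.array([0, 0, 1, 1])            # toy labels (e.g. CRLM=0 vs iCCA=1)
s = np.array([0.1, 0.4, 0.35, 0.8])   # toy model scores
print(auroc(y, s))  # → 0.75
```

Unlike accuracy, this quantity is threshold-free, which is why AUROC is reported alongside accuracy for both the internal and external test sets.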
Affiliation(s)
- Thomas Albrecht
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany; Liver Cancer Center Heidelberg, Heidelberg, Germany
- Annik Rossberg
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Jan Peter Nicolay
- Department of Dermatology, University Medical Centre Mannheim, Mannheim, Germany
- Beate Katharina Straub
- Institute of Pathology, University Medicine, Johannes Gutenberg University, Mainz, Germany
- Tiemo Sven Gerber
- Institute of Pathology, University Medicine, Johannes Gutenberg University, Mainz, Germany
- Michael Albrecht
- European Center for Angioscience, Medical Faculty of Mannheim, Mannheim, Germany
- Fritz Brinkmann
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Alphonse Charbel
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Constantin Schwab
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Johannes Schreck
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Alexander Brobeil
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Bruno Christian Köhler
- Liver Cancer Center Heidelberg, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases, Heidelberg University Hospital, Heidelberg, Germany
- Christoph Springfeld
- Liver Cancer Center Heidelberg, Heidelberg, Germany; Department of Medical Oncology, National Center for Tumor Diseases, Heidelberg University Hospital, Heidelberg, Germany
- Arianeb Mehrabi
- Liver Cancer Center Heidelberg, Heidelberg, Germany; Department of General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Stephan Singer
- Institute of Pathology and Neuropathology, Eberhard-Karls University, Tübingen, Germany
- Monika Nadja Vogel
- Diagnostic and Interventional Radiology, Thoraxklinik at Heidelberg University Hospital, Heidelberg, Germany
- Olaf Neumann
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Peter Schirmacher
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany; Liver Cancer Center Heidelberg, Heidelberg, Germany
- Cleo-Aron Weis
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany
- Stephanie Roessler
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany; Liver Cancer Center Heidelberg, Heidelberg, Germany
- Jakob Nikolas Kather
- Department of Medical Oncology, National Center for Tumor Diseases, Heidelberg University Hospital, Heidelberg, Germany; Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Benjamin Goeppert
- Institute of Pathology, Heidelberg University Hospital, Heidelberg, Germany; Institute of Pathology and Neuropathology, RKH Hospital Ludwigsburg, Ludwigsburg, Germany; Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
7
Shanmugam K, Rajaguru H. Exploration and Enhancement of Classifiers in the Detection of Lung Cancer from Histopathological Images. Diagnostics (Basel) 2023;13:3289. PMID: 37892110. PMCID: PMC10606104. DOI: 10.3390/diagnostics13203289. Open Access.
Abstract
Lung cancer is a prevalent malignancy that affects individuals of all genders and is often diagnosed late because symptoms appear late. To catch it early, researchers are developing algorithms that analyze lung cancer images. The primary objective of this work is to propose a novel approach for detecting lung cancer in histopathological images. The images underwent preprocessing, followed by segmentation using a modified KFCM-based approach; the segmented image intensity values were then dimensionally reduced using Particle Swarm Optimization (PSO) and Grey Wolf Optimization (GWO). KL Divergence and Invasive Weed Optimization (IWO) were used for feature selection. Seven classifiers (SVM, KNN, Random Forest, Decision Tree, Softmax Discriminant, Multilayer Perceptron, and BLDC) were used to classify the images as benign or malignant. Results were compared using standard metrics, and kappa analysis assessed classifier agreement. The Decision Tree classifier with GWO feature extraction achieved an accuracy of 85.01% without feature selection or hyperparameter tuning. Furthermore, we present a methodology to enhance classifier accuracy by employing hyperparameter tuning algorithms based on Adam and RAdam. By combining features from GWO and IWO and using the RAdam algorithm, the Decision Tree classifier achieves a commendable accuracy of 91.57%.
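The kappa analysis mentioned above measures agreement between classifiers beyond what chance alone would produce, following Cohen's formula. A minimal sketch with toy benign/malignant predictions (illustrative data, not the paper's results):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: inter-classifier agreement corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()                          # observed agreement
    pe = sum((a == c).mean() * (b == c).mean()    # expected (chance) agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

pred_a = [1, 1, 0, 0, 1, 0, 1, 0]   # toy benign/malignant calls, classifier A
pred_b = [1, 1, 0, 0, 1, 0, 0, 1]   # classifier B disagrees on the last two
print(cohens_kappa(pred_a, pred_b))  # → 0.5
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why it is reported alongside raw accuracy when comparing classifier banks like the seven used here.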
Affiliation(s)
- Harikumar Rajaguru
- Department of Electronics and Communication Engineering, Bannari Amman Institute of Technology, Sathyamangalam 638401, India
8
Li M, Hu Z, Qiu S, Zhou C, Weng J, Dong Q, Sheng X, Ren N, Zhou M. Dual-branch hybrid encoding embedded network for histopathology image classification. Phys Med Biol 2023;68:195002. PMID: 37647919. DOI: 10.1088/1361-6560/acf556.
Abstract
Objective: Learning-based histopathology image (HI) classification methods serve as important auxiliary diagnostic tools at the prognosis stage. However, most existing methods focus on a single target cancer because of inter-domain differences among cancer types, which limits their applicability across cancers. To overcome these limitations, this paper presents a high-performance HI classification method that addresses inter-domain differences and provides a reliable, practical solution for HI classification. Approach: First, we collect a high-quality hepatocellular carcinoma (HCC) dataset large enough to verify the stability and practicability of the method. Second, we propose a novel dual-branch hybrid encoding embedded network that integrates the feature extraction capabilities of a convolutional neural network and a Transformer; this structure enables the network to extract diverse features while minimizing the redundancy of a single complex network. Finally, we develop a salient-area constraint loss function tailored to the characteristics of HIs to address inter-domain differences and enhance the robustness and universality of the method. Main results: Extensive experiments were conducted on the proposed HCC dataset and two other publicly available datasets. The proposed method achieves an accuracy of 99.09% on the HCC dataset and state-of-the-art results on the two public datasets, underscoring its performance and versatility across multiple HI classification tasks. Significance: This study contributes to HI analysis by providing a reliable and practical solution for multi-cancer classification, potentially improving diagnostic accuracy and patient outcomes. Our code is available at https://github.com/lms-design/DHEE-net.
Affiliation(s)
- Mingshuai Li
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
- Zhiqiu Hu
- Department of Hepatobiliary and Pancreatic Surgery, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Song Qiu
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
- MOE Engineering Research Center of Software/Hardware Co-design Technology and Application, East China Normal University, Shanghai, 200241, People's Republic of China
- Chenhao Zhou
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Jialei Weng
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Qiongzhu Dong
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Xia Sheng
- Department of Pathology, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Ning Ren
- Key Laboratory of Whole-Period Monitoring and Precise Intervention of Digestive Cancer of Shanghai Municipal Health Commission, Shanghai, 201199, People's Republic of China
- Institute of Fudan-Minhang Academic Health System, Minhang Hospital, Fudan University, Shanghai, 201199, People's Republic of China
- Department of Liver Surgery and Transplantation, Liver Cancer Institute, Zhongshan Hospital, Fudan University, Key Laboratory of Carcinogenesis and Cancer Invasion, Ministry of Education, Shanghai, 200032, People's Republic of China
- Mei Zhou
- Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, Shanghai, 200241, People's Republic of China
9
Zheng T, Chen W, Li S, Quan H, Zou M, Zheng S, Zhao Y, Gao X, Cui X. Learning how to detect: A deep reinforcement learning method for whole-slide melanoma histopathology images. Comput Med Imaging Graph 2023;108:102275. PMID: 37567046. DOI: 10.1016/j.compmedimag.2023.102275.
Abstract
Cutaneous melanoma represents one of the most life-threatening malignancies. Histopathological image analysis serves as a vital tool for early melanoma detection. Deep neural network (DNN) models are frequently employed to aid pathologists in enhancing the efficiency and accuracy of diagnoses. However, due to the paucity of well-annotated, high-resolution, whole-slide histopathology image (WSI) datasets, WSIs are typically fragmented into numerous patches during the model training and testing stages. This process disregards the inherent interconnectedness among patches, potentially impeding the models' performance. Additionally, the presence of excess, non-contributing patches extends processing times and introduces substantial computational burdens. To mitigate these issues, we draw inspiration from the clinical decision-making processes of dermatopathologists to propose an innovative, weakly supervised deep reinforcement learning framework, titled Fast medical decision-making in melanoma histopathology images (FastMDP-RL). This framework expedites model inference by reducing the number of irrelevant patches identified within WSIs. FastMDP-RL integrates two DNN-based agents: the search agent (SeAgent) and the decision agent (DeAgent). The SeAgent initiates actions, steered by the image features observed in the current viewing field at various magnifications. Simultaneously, the DeAgent provides labeling probabilities for each patch. We utilize multi-instance learning (MIL) to construct a teacher-guided model (MILTG), serving a dual purpose: rewarding the SeAgent and guiding the DeAgent. Our evaluations were conducted using two melanoma datasets: the publicly accessible TCIA-CM dataset and the proprietary MELSC dataset. Our experimental findings affirm FastMDP-RL's ability to expedite inference and accurately predict WSIs, even in the absence of pixel-level annotations. Moreover, our research investigates the WSI-based interactive environment, encompassing the design of agents, state and reward functions, and feature extractors suitable for melanoma tissue images. This investigation offers valuable insights and references for researchers engaged in related studies. The code is available at: https://github.com/titizheng/FastMDP-RL.
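The patch-budgeted inference idea above can be caricatured in a few lines. This is purely our illustrative sketch, not the authors' FastMDP-RL code: `decision_fn` stands in for the DeAgent, the random patch choice stands in for the learned SeAgent policy, the max-over-patches slide score stands in for the MIL teacher, and `max_steps`/`threshold` are hypothetical parameters.

```python
import random

def fast_inference(patches, decision_fn, max_steps=10, threshold=0.9):
    """Visit a few patches instead of all of them; stop once confident.

    patches: list of patch feature values (stand-ins for WSI regions).
    decision_fn: returns a label probability for one patch (DeAgent stand-in).
    """
    visited, probs = [], []
    unseen = list(range(len(patches)))
    slide_prob = 0.5                         # undecided until a patch is seen
    for _ in range(min(max_steps, len(patches))):
        idx = random.choice(unseen)          # stand-in for the learned SeAgent policy
        unseen.remove(idx)
        visited.append(idx)
        probs.append(decision_fn(patches[idx]))
        slide_prob = max(probs)              # MIL-style slide score: max over patches
        # stop early once the slide-level call is confident either way
        if slide_prob >= threshold or slide_prob <= 1 - threshold:
            break
    return slide_prob, visited
```

The real framework replaces the random choice with a trained policy over viewing fields at multiple magnifications; the sketch only shows why visiting a subset of patches cuts inference cost.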
Collapse
Affiliation(s)
- Tingting Zheng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Weixing Chen
- Shenzhen College of Advanced Technology, University of the Chinese Academy of Sciences, Beijing, China
| | - Shuqin Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Hao Quan
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Mingchen Zou
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
| | - Song Zheng
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
| | - Yue Zhao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
| | - Xinghua Gao
- National and Local Joint Engineering Research Center of Immunodermatological Theranostics, Department of Dermatology, The First Hospital of China Medical University, Shenyang, China
| | - Xiaoyu Cui
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
| |
Collapse
|
10
|
Prakash NN, Rajesh V, Namakhwa DL, Dwarkanath Pande S, Ahammad SH. A DenseNet CNN-based liver lesion prediction and classification for future medical diagnosis. SCIENTIFIC AFRICAN 2023. [DOI: 10.1016/j.sciaf.2023.e01629] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/17/2023] Open
|
11
|
Wang R, Gu Y, Zhang T, Yang J. Fast cancer metastasis location based on dual magnification hard example mining network in whole-slide images. Comput Biol Med 2023; 158:106880. [PMID: 37044050 DOI: 10.1016/j.compbiomed.2023.106880] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 02/28/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
Breast cancer has become the most common form of cancer among women. In recent years, deep learning has shown great potential in aiding the diagnosis of pathological images, particularly through the use of convolutional neural networks for locating lymph node metastasis under gigapixel whole slide images (WSIs). However, the massive size of these images at the highest magnification introduces redundant computation during the inference process. Additionally, the diversity of biological textures and structures within WSIs can cause confusion for classifiers, particularly in identifying hard examples. As a result, the trade-off between accuracy and efficiency remains a critical issue for whole-slide image metastasis localization. In this paper, we propose a novel two-stream network that takes a pair of low- and high-magnification image patches as input for identifying hard examples during the training phase. Specifically, our framework focuses on samples where the outputs of the two magnification networks are dissimilar. We adopt a dual magnification hard mining loss to re-weight the ambiguous samples. To more efficiently locate tumor metastasis cells in whole slide images, the two stream networks are decomposed into a cascaded network during the inference phase. The low magnification WSIs scanned by the low-mag network generate a coarse probability map, and the suspicious areas in the map are refined by the high-mag network. Finally, we evaluate our fast location dual magnification hard example mining network on the Camelyon16 breast cancer whole-slide image dataset. Experiments demonstrate that our proposed method achieves a 0.871 FROC score with a faster inference time, and our high magnification network also achieves a 0.88 FROC score.
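The dual magnification hard mining loss described above re-weights samples on which the two magnification branches disagree. A minimal numpy sketch of that re-weighting idea, with a hypothetical `gamma` exponent (the paper's exact weighting scheme may differ):

```python
import numpy as np

def hard_mining_weights(p_low, p_high, gamma=2.0):
    """Up-weight samples where the low- and high-magnification branches disagree.

    p_low, p_high: predicted tumor probabilities from the two branches
    for the same patches. gamma is a hypothetical sharpness parameter.
    """
    disagreement = np.abs(np.asarray(p_low) - np.asarray(p_high))
    w = 1.0 + disagreement ** gamma     # ambiguous (hard) samples get weight > 1
    return w / w.mean()                 # normalise so the mean weight stays 1

def weighted_bce(p, y, w):
    """Binary cross-entropy re-weighted by the hard-mining weights."""
    p = np.clip(np.asarray(p, dtype=float), 1e-7, 1 - 1e-7)
    y = np.asarray(y, dtype=float)
    return float(np.mean(-w * (y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))
```

Sample 1 below gets a larger weight than sample 0 because the two branches disagree on it (0.5 vs. 0.1) while agreeing on sample 0.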
Collapse
Affiliation(s)
- Rui Wang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| | - Yun Gu
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| | - Tianyi Zhang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China
| | - Jie Yang
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai, 20040, China.
| |
Collapse
|
12
|
Huang P, He P, Tian S, Ma M, Feng P, Xiao H, Mercaldo F, Santone A, Qin J. A ViT-AMC Network With Adaptive Model Fusion and Multiobjective Optimization for Interpretable Laryngeal Tumor Grading From Histopathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:15-28. [PMID: 36018875 DOI: 10.1109/tmi.2022.3202248] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The tumor grading of laryngeal cancer pathological images needs to be accurate and interpretable. The deep learning model based on the attention mechanism-integrated convolution (AMC) block has good inductive bias capability but poor interpretability, whereas the deep learning model based on the vision transformer (ViT) block has good interpretability but weak inductive bias ability. Therefore, we propose an end-to-end ViT-AMC network (ViT-AMCNet) with adaptive model fusion and multiobjective optimization that integrates and fuses the ViT and AMC blocks. However, existing model fusion methods often suffer from negative fusion: (1) there is no guarantee that the ViT and AMC blocks will simultaneously have good feature representation capability; (2) the difference between the feature representations learned by the ViT and AMC blocks is not obvious, so there is much redundant information in the two representations. Accordingly, we first prove the feasibility of fusing the ViT and AMC blocks based on Hoeffding's inequality. Then, we propose a multiobjective optimization method to solve the problem that the ViT and AMC blocks cannot simultaneously have good feature representations. Finally, an adaptive model fusion method integrating the metrics block and the fusion block is proposed to increase the differences between feature representations and improve the deredundancy capability. Our methods improve the fusion ability of ViT-AMCNet, and experimental results demonstrate that ViT-AMCNet significantly outperforms state-of-the-art methods. Importantly, the visualized interpretive maps are closer to the regions of interest of concern to pathologists, and the generalization ability is also excellent. Our code is publicly available at https://github.com/Baron-Huang/ViT-AMCNet.
Collapse
|
13
|
Wei T, Yuan X, Gao R, Johnston L, Zhou J, Wang Y, Kong W, Xie Y, Zhang Y, Xu D, Yu Z. Survival prediction of stomach cancer using expression data and deep learning models with histopathological images. Cancer Sci 2022; 114:690-701. [PMID: 36114747 PMCID: PMC9899622 DOI: 10.1111/cas.15592] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 08/29/2022] [Accepted: 09/12/2022] [Indexed: 11/30/2022] Open
Abstract
Accurately predicting patient survival is essential for cancer treatment decisions. However, a prognostic prediction model based on histopathological images of stomach cancer patients has yet to be developed. We propose a deep learning-based model (MultiDeepCox-SC) that predicts overall survival in patients with stomach cancer by integrating histopathological images, clinical data, and gene expression data. The MultiDeepCox-SC not only automatically selects the most informative patches for survival prediction, without manual labeling of histopathological images, but also identifies genetic and clinical risk factors associated with survival in stomach cancer. The prognostic accuracy of the MultiDeepCox-SC (C-index = 0.744) surpasses the result based on histopathological images alone (C-index = 0.660). The risk score of our model remained an independent predictor of survival outcome after adjustment for potential confounders, including pathologic stage, grade, age, race, and gender, on The Cancer Genome Atlas dataset (hazard ratio 1.555, p = 3.53e-08) and the external test set (hazard ratio 2.912, p = 9.42e-4). Our fully automated online prognostic tool based on histopathological images, clinical data, and gene expression data could be utilized to improve pathologists' efficiency and accuracy (https://yu.life.sjtu.edu.cn/DeepCoxSC).
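The C-index values reported above (0.744 vs. 0.660) refer to Harrell's concordance index, which can be computed directly from its definition. A minimal sketch written from the standard definition, not from the paper's code:

```python
def c_index(times, events, risks):
    """Harrell's concordance index: the fraction of comparable patient pairs
    whose predicted risk ordering matches their survival-time ordering.

    A pair (i, j) is comparable when patient i's time is shorter and
    patient i's event was observed (events[i] == 1). Tied risks count 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # i had the event first
                comparable += 1
                if risks[i] > risks[j]:                 # higher risk died first
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A perfectly anti-ordered risk score gives 0.0, a perfect one gives 1.0, and random scores hover around 0.5, which is why 0.744 beats the image-only 0.660.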
Collapse
Affiliation(s)
- Ting Wei
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Xin Yuan
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Ruitian Gao
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Luke Johnston
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Jie Zhou
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Yifan Wang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Weiming Kong
- Institute of Translational Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Yujing Xie
- SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Yue Zhang
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China
| | - Dakang Xu
- Faculty of Medical Laboratory Science, Ruijin Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
| | - Zhangsheng Yu
- Department of Bioinformatics and Biostatistics, School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China; SJTU-Yale Joint Centre for Biostatistics and Data Sciences, Shanghai Jiao Tong University, Shanghai, China; School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China; Clinical Research Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
14
|
Rashed BM, Popescu N. Critical Analysis of the Current Medical Image-Based Processing Techniques for Automatic Disease Evaluation: Systematic Literature Review. SENSORS (BASEL, SWITZERLAND) 2022; 22:7065. [PMID: 36146414 PMCID: PMC9501859 DOI: 10.3390/s22187065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2022] [Revised: 09/06/2022] [Accepted: 09/14/2022] [Indexed: 06/16/2023]
Abstract
Medical image processing and analysis techniques play a significant role in diagnosing diseases. Thus, during the last decade, several noteworthy improvements in medical diagnostics have been made based on medical image processing techniques. In this article, we reviewed articles published in the most important journals and conferences that used or proposed medical image analysis techniques to diagnose diseases. Starting from four scientific databases, we applied the PRISMA technique to efficiently process and refine articles until we obtained forty research articles published in the last five years (2017-2021) aimed at answering our research questions. The medical image processing and analysis approaches were identified, examined, and discussed, including preprocessing, segmentation, feature extraction, classification, evaluation metrics, and diagnosis techniques. This article also sheds light on machine learning and deep learning approaches. We also focused on the most important medical image processing techniques used in these articles to establish the best methodologies for future approaches, discussing the most efficient ones and proposing in this way a comprehensive reference source of methods of medical image processing and analysis that can be very useful in future medical diagnosis systems.
Collapse
Affiliation(s)
| | - Nirvana Popescu
- Computer Science Department, University Politehnica of Bucharest, 060042 Bucharest, Romania
| |
Collapse
|
15
|
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
16
|
Sabitha P, Meeragandhi G. A dual stage AlexNet-HHO-DrpXLM archetype for an effective feature extraction, classification and prediction of liver cancer based on histopathology images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
17
|
Othman E, Mahmoud M, Dhahri H, Abdulkader H, Mahmood A, Ibrahim M. Automatic Detection of Liver Cancer Using Hybrid Pre-Trained Models. SENSORS (BASEL, SWITZERLAND) 2022; 22:5429. [PMID: 35891111 PMCID: PMC9322134 DOI: 10.3390/s22145429] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/02/2022] [Accepted: 07/18/2022] [Indexed: 06/15/2023]
Abstract
Liver cancer is a life-threatening illness and one of the fastest-growing cancer types in the world. Consequently, the early detection of liver cancer leads to lower mortality rates. This work aims to build a model that will help clinicians determine the type of tumor when it occurs within the liver region by analyzing images of tissue taken from a biopsy of this tumor. Working within this stage requires effort, time, and accumulated experience that must be possessed by a tissue expert to determine whether this tumor is malignant and needs treatment. Thus, a histology expert can make use of this model to obtain an initial diagnosis. This study aims to propose a deep learning model using convolutional neural networks (CNNs), which are able to transfer knowledge from pre-trained global models and decant this knowledge into a single model to help diagnose liver tumors from CT scans. Thus, we obtained a hybrid model capable of detecting CT images of a biopsy of a liver tumor. The best results that we obtained within this research reached an accuracy of 0.995, a precision value of 0.864, and a recall value of 0.979, which are higher than those obtained using other models. It is worth noting that this model was tested on a limited set of data and gave good detection results. This model can be used as an aid to support the decisions of specialists in this field and save their efforts. In addition, it saves the effort and time incurred by the treatment of this type of cancer by specialists, especially during periodic examination campaigns every year.
Collapse
Affiliation(s)
- Esam Othman
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia; (E.O.); (H.D.); (A.M.)
| | - Muhammad Mahmoud
- Department of Information Systems, Madina Higher Institute of Management and Technology, Shabramant 12947, Egypt;
| | - Habib Dhahri
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia; (E.O.); (H.D.); (A.M.)
| | - Hatem Abdulkader
- Department of Information Systems, Faculty of Computers and Information, Menoufia University, Shebin El-kom 32511, Menoufia, Egypt;
| | - Awais Mahmood
- Faculty of Applied Computer Science, King Saud University, Riyadh 11451, Saudi Arabia; (E.O.); (H.D.); (A.M.)
| | - Mina Ibrahim
- Department of Information Technology, Faculty of Computers and Information, Menoufia University, Shebin El-kom 32511, Menoufia, Egypt
| |
Collapse
|
18
|
Classification of multi-differentiated liver cancer pathological images based on deep learning attention mechanism. BMC Med Inform Decis Mak 2022; 22:176. [PMID: 35787805 PMCID: PMC9254605 DOI: 10.1186/s12911-022-01919-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Accepted: 06/23/2022] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Liver cancer is one of the most common malignant tumors in the world, ranking fifth among malignant tumors. The degree of differentiation can reflect the degree of malignancy. Liver cancer can be divided into three grades of malignancy: poorly differentiated, moderately differentiated, and well differentiated. Diagnosis and treatment at different levels of differentiation are crucial to the survival rate and survival time of patients. As the gold standard for liver cancer diagnosis, histopathological images can accurately distinguish liver cancers of different levels of differentiation. Therefore, the study of intelligent classification of histopathological images is of great significance to patients with liver cancer. At present, classifying histopathological images of liver cancer with different degrees of differentiation is time-consuming and labor-intensive and requires a large manual investment. In this context, the importance of intelligent classification of histopathological images is obvious. METHODS Based on the development of a complete data acquisition scheme, this paper applies the SENet deep learning model to the intelligent classification of all types of differentiated liver cancer histopathological images for the first time, and compares it with four deep learning models: VGG16, ResNet50, ResNet_CBAM, and SKNet. The evaluation indexes adopted in this paper include the confusion matrix, precision, recall, and F1 score, which allow the model to be evaluated comprehensively and accurately. RESULTS Five deep learning classification models were applied to the collected dataset and evaluated. The experimental results show that the SENet model achieved the best classification effect, with an accuracy of 95.27%. The model also has good reliability and generalization ability. The experiments show that the SENet deep learning model has a good application prospect in the intelligent classification of histopathological images. CONCLUSIONS This study also shows that deep learning has great application value in solving the time-consuming and laborious problems of traditional manual slide reading, and it has practical significance for the intelligent classification of other cancer histopathological images.
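The confusion matrix, precision, recall, F1 score, and accuracy used as evaluation indexes above all follow the standard definitions, which a small self-contained helper makes explicit (our illustration, not the paper's code):

```python
def per_class_metrics(cm):
    """Per-class precision/recall/F1 plus overall accuracy from a confusion matrix.

    cm[i][j] = number of samples with true class i predicted as class j.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))      # diagonal = correct predictions
    stats = []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp  # predicted k, true class differs
        fn = sum(cm[k]) - tp                       # true k, predicted something else
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        stats.append((p, r, f1))
    return stats, correct / total
```

For a two-class matrix [[8, 2], [1, 9]], class 0 has precision 8/9 and recall 0.8, and overall accuracy is 17/20 = 0.85.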
Collapse
|
19
|
Zhang H, Liu J, Wang P, Yu Z, Liu W, Chen H. Cross-Boosted Multi-Target Domain Adaptation for Multi-Modality Histopathology Image Translation and Segmentation. IEEE J Biomed Health Inform 2022; 26:3197-3208. [PMID: 35196252 DOI: 10.1109/jbhi.2022.3153793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent digital pathology workflows mainly focus on mono-modality histopathology image analysis. However, they ignore the complementarity between Haematoxylin & Eosin (H&E) and Immunohistochemically (IHC) stained images, which can provide comprehensive gold standard for cancer diagnosis. To resolve this issue, we propose a cross-boosted multi-target domain adaptation pipeline for multi-modality histopathology images, which contains Cross-frequency Style-auxiliary Translation Network (CSTN) and Dual Cross-boosted Segmentation Network (DCSN). Firstly, CSTN achieves the one-to-many translation from fluorescence microscopy images to H&E and IHC images for providing source domain training data. To generate images with realistic color and texture, Cross-frequency Feature Transfer Module (CFTM) is developed to pertinently restructure and normalize high-frequency content and low-frequency style features from different domains. Then, DCSN fulfills multi-target domain adaptive segmentation, where a dual-branch encoder is introduced, and Bidirectional Cross-domain Boosting Module (BCBM) is designed to implement cross-modality information complementation through bidirectional inter-domain collaboration. Finally, we establish Multi-modality Thymus Histopathology (MThH) dataset, which is the largest publicly available H&E and IHC image benchmark. Experiments on MThH dataset and several public datasets show that the proposed pipeline outperforms state-of-the-art methods on both histopathology image translation and segmentation.
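The CFTM's separation of low-frequency style (color) from high-frequency content (texture) can be illustrated with a basic Fourier-domain split. This is a simplified sketch of the general frequency-decomposition idea, not the paper's module; the circular mask and `radius` value are our assumptions:

```python
import numpy as np

def split_frequency(img, radius=8):
    """Split a 2-D image into low-frequency (style/colour) and
    high-frequency (content/texture) parts via a circular mask in Fourier space."""
    F = np.fft.fftshift(np.fft.fft2(img))          # centre the zero frequency
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(F * mask)).real    # keep centre: smooth style
    high = np.fft.ifft2(np.fft.ifftshift(F * ~mask)).real  # keep rest: fine texture
    return low, high
```

Because the two masks partition the spectrum, the parts sum back to the original image, so the split loses nothing; a translation module can then restyle the low band while preserving the high band.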
Collapse
|
20
|
Bai T, Xu J, Zhang Z, Guo S, Luo X. Context-aware learning for cancer cell nucleus recognition in pathology images. Bioinformatics 2022; 38:2892-2898. [PMID: 35561198 DOI: 10.1093/bioinformatics/btac167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2022] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION Nucleus identification supports many quantitative analysis studies that rely on nuclei positions or categories. Contextual information in pathology images refers to information near the to-be-recognized cell, which can be very helpful for nucleus subtyping. Current CNN-based methods do not explicitly encode contextual information within the input images and point annotations. RESULTS In this article, we propose a novel framework with context to locate and classify nuclei in microscopy image data. Specifically, we first use state-of-the-art network architectures to extract multi-scale feature representations from multi-field-of-view, multi-resolution input images and then conduct feature aggregation on-the-fly with stacked convolutional operations. Then, two auxiliary tasks are added to the model to effectively utilize the contextual information: one for predicting the frequencies of nuclei, and the other for extracting the regional distribution information of the same kind of nuclei. The entire framework is trained in an end-to-end, pixel-to-pixel fashion. We evaluate our method on two histopathological image datasets with different tissue and stain preparations, and experimental results demonstrate that our method outperforms other recent state-of-the-art models in nucleus identification. AVAILABILITY AND IMPLEMENTATION The source code of our method is freely available at https://github.com/qjxjy123/DonRabbit. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
Collapse
Affiliation(s)
- Tian Bai
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Jiayu Xu
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Zhenting Zhang
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Shuyu Guo
- College of Computer Science and Technology, Jilin University, 130012 Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering, Ministry of Education, Jilin University, 130012 Changchun, China
| | - Xiao Luo
- Department of Breast Surgery, China-Japan Union Hospital of Jilin University, 130033 Changchun, China
| |
Collapse
|
21
|
Dong X, Li M, Zhou P, Deng X, Li S, Zhao X, Wu Y, Qin J, Guo W. Fusing pre-trained convolutional neural networks features for multi-differentiated subtypes of liver cancer on histopathological images. BMC Med Inform Decis Mak 2022; 22:122. [PMID: 35509058 PMCID: PMC9066403 DOI: 10.1186/s12911-022-01798-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 02/21/2022] [Indexed: 11/10/2022] Open
Abstract
Liver cancer is a malignant tumor with high morbidity and mortality, which has a tremendous negative impact on human survival. However, it is a challenging task to recognize tens of thousands of histopathological images of liver cancer by the naked eye, which poses numerous challenges for inexperienced clinicians. In addition, factors such as long processing times, tedious work, and the huge number of images impose a great burden on clinical diagnosis. Therefore, our study combines convolutional neural networks with histopathology images and adopts a feature fusion approach to help clinicians efficiently discriminate the differentiation types of primary hepatocellular carcinoma histopathology images, thus improving their diagnostic efficiency and relieving their work pressure. In this study, for the first time, 73 patients with different differentiation types of primary liver cancer tumors were classified. We performed a thorough classification evaluation of liver cancer differentiation types using four pre-trained deep convolutional neural networks and nine different machine learning (ML) classifiers on a dataset of liver cancer histopathology images with multiple differentiation types. Test set accuracy, validation set accuracy, running time under different strategies, precision, recall, and F1 value were used for comparative evaluation. The experimental results show that the fusion network (FuNet) structure is a good choice: it covers both channel attention and spatial attention, suppresses interference from channels carrying less information, and clarifies the importance of each spatial location by learning the weights of different locations in space; we then apply it to the classification of multi-differentiated types of liver cancer. In addition, in most cases, the Stacking-based integrated learning classifier outperforms the other ML classifiers in the classification task of multi-differentiation types of liver cancer with the FuNet fusion strategy after dimensionality reduction of the fused features by principal component analysis (PCA), and a satisfactory accuracy of 72.46% is achieved on the test set, which has certain practicality.
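The PCA step mentioned above reduces the fused deep features before they reach the stacking classifier. A minimal numpy PCA standing in for whatever implementation the authors used:

```python
import numpy as np

def pca_reduce(features, k):
    """Project fused feature vectors onto their top-k principal components.

    features: (n_samples, n_features) array of fused deep features.
    Returns an (n_samples, k) array ordered by decreasing explained variance.
    """
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                 # centre each feature dimension
    cov = np.cov(Xc, rowvar=False)          # feature covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigh: ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # take the k largest-variance directions
    return Xc @ vecs[:, order]
```

Usage: `pca_reduce(fused, 64)` would shrink, say, 2048-dimensional fused CNN features to 64 dimensions before feeding the stacking classifier.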
Collapse
Affiliation(s)
- Xiaogang Dong
- Department of Hepatopancreatobiliary Surgery, Cancer Affiliated Hospital of Xinjiang Medical University, Ürümqi, Xinjiang, China
| | - Min Li
- Key Laboratory of Signal Detection and Processing, Xinjiang University, Ürümqi, 830046, China; College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China
| | - Panyun Zhou
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Xin Deng
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Siyu Li
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Xingyue Zhao
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Yi Wu
- College of Software, Xinjiang University, Ürümqi, 830046, China
| | - Jiwei Qin
- College of Information Science and Engineering, Xinjiang University, Ürümqi, 830046, China.
| | - Wenjia Guo
- Cancer Institute, Affiliated Cancer Hospital of Xinjiang Medical University, Ürümqi, 830011, China; Key Laboratory of Oncology of Xinjiang Uyghur Autonomous Region, Ürümqi, 830011, China.
| |
Collapse
|
22
|
A new lightweight convolutional neural network for radiation-induced liver disease classification. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103463] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
23
|
Nguyen KT, Kim HY, Park JO, Choi E, Kim CS. Tripolar Electrode Electrochemical Impedance Spectroscopy for Endoscopic Devices toward Early Colorectal Tumor Detection. ACS Sens 2022; 7:632-640. [PMID: 35147414 DOI: 10.1021/acssensors.1c02571] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Embedded sensors for endoscopy devices have been studied toward a convenient and decision-supportive methodology in colorectal cancer (CRC) diagnosis, but no device could provide direct CRC screening with in situ measurements. In this study, we proposed a millimeter-scale electrical impedance spectroscopy (EIS) device that can be integrated into a biopsy tool in endoscopy for colorectal tumor detection. A minimally invasive tripolar electrode was designed to sense the tissue impedance, and a multilayer neural network was implemented for the classification algorithm. The sensor performance was investigated in terms of sensitivity, reliability, and repeatability using dummy tissues made of agarose hydrogels at various saline concentrations. In addition, an in vivo study was conducted in mice with an implanted CT-26 colon tumor line. The results demonstrated that the prototyped EIS device and algorithm can detect the tumor tissue in suspicious lesions with high sensitivity and specificity of 87.2 and 92.5%, respectively, and a low error of 7.1%. The findings of this study are promising for in situ CRC screening and may advance the diagnostic efficacy of CRC detection during endoscopic procedures.
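The reported sensitivity (87.2%) and specificity (92.5%) follow the standard definitions, which a short helper makes explicit (our illustration, independent of the paper's classification code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for binary labels,
    where 1 marks tumor tissue and 0 marks healthy tissue."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

High sensitivity means few tumors are missed; high specificity means few healthy lesions are flagged, which is the trade-off the EIS classifier is balancing.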
Collapse
Affiliation(s)
- Kim Tien Nguyen
- Korea Institute of Medical Microrobotics, Gwangju 61011, Korea
| | - Ho Yong Kim
- Korea Institute of Medical Microrobotics, Gwangju 61011, Korea
| | - Jong-Oh Park
- Korea Institute of Medical Microrobotics, Gwangju 61011, Korea
| | - Eunpyo Choi
- School of Mechanical Engineering, Chonnam National University, Gwangju 61186, Korea
| | - Chang-Sei Kim
- School of Mechanical Engineering, Chonnam National University, Gwangju 61186, Korea
| |
Collapse
|
24
|
Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP Rep 2022; 4:100443. [PMID: 35243281 PMCID: PMC8867112 DOI: 10.1016/j.jhepr.2022.100443] [Citation(s) in RCA: 54] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/26/2021] [Accepted: 01/11/2022] [Indexed: 12/18/2022]
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.
Collapse
|
25
|
|
26
|
Wang W, Mohseni P, Kilgore KL, Najafizadeh L. Cuff-less Blood Pressure Estimation from Photoplethysmography via Visibility Graph and Transfer Learning. IEEE J Biomed Health Inform 2021; 26:2075-2085. [PMID: 34784289 DOI: 10.1109/jbhi.2021.3128383] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper presents a new solution that enables the use of transfer learning for cuff-less blood pressure (BP) monitoring from short-duration photoplethysmogram (PPG) recordings. The proposed method estimates BP with a low computational budget by 1) creating images from segments of PPG via a visibility graph (VG) that preserves the temporal information of the PPG waveform, 2) using a pre-trained deep convolutional neural network (CNN) to extract feature vectors from the VG images, and 3) solving for the weights and bias between the feature vectors and the reference BPs with ridge regression. Using the University of California Irvine (UCI) database consisting of 348 records, the proposed method achieves a best error performance (mean error ± standard deviation) of 0.00 ± 8.46 mmHg for systolic blood pressure (SBP) and -0.04 ± 5.36 mmHg for diastolic blood pressure (DBP), ranking grade B for SBP and grade A for DBP under the British Hypertension Society (BHS) protocol. Our novel data-driven method offers a computationally efficient end-to-end solution for rapid and user-friendly cuff-less PPG-based BP estimation.
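Step 1 of this pipeline, mapping a PPG segment to a visibility graph, follows the standard natural-visibility criterion for time series. A minimal sketch of that construction (illustrative only, not the authors' implementation; the pre-trained CNN feature extraction and ridge regression stages are omitted) could look like:

```python
import numpy as np

def visibility_graph(y):
    """Natural visibility graph of a 1-D series: nodes i and j are
    connected if every intermediate sample k lies strictly below the
    straight line joining (i, y[i]) and (j, y[j])."""
    n = len(y)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # Visibility criterion: y[k] < y[j] + (y[i]-y[j])*(j-k)/(j-i)
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                adj[i, j] = adj[j, i] = 1
    return adj

# Toy example: the two peaks of a W-shaped segment see each other
# over the valley between them, so adj[1, 3] == 1.
adj = visibility_graph([0.0, 5.0, 1.0, 5.0, 0.0])
```

In the paper's pipeline, an adjacency matrix like this would then be rendered as an image and fed to the pre-trained CNN; only the graph-construction step is shown here.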
Collapse
|
27
|
Jian J, Xia W, Zhang R, Zhao X, Zhang J, Wu X, Li Y, Qiang J, Gao X. Multiple instance convolutional neural network with modality-based attention and contextual multi-instance learning pooling layer for effective differentiation between borderline and malignant epithelial ovarian tumors. Artif Intell Med 2021; 121:102194. [PMID: 34763809 DOI: 10.1016/j.artmed.2021.102194] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 09/01/2021] [Accepted: 10/07/2021] [Indexed: 01/18/2023]
Abstract
Malignant epithelial ovarian tumors (MEOTs) are the most lethal gynecologic malignancies, accounting for 90% of ovarian cancer cases. By contrast, borderline epithelial ovarian tumors (BEOTs) have low malignant potential and are generally associated with a good prognosis. Accurate preoperative differentiation between BEOTs and MEOTs is crucial for determining the appropriate surgical strategies and improving the postoperative quality of life. Multimodal magnetic resonance imaging (MRI) is an essential diagnostic tool. Although state-of-the-art artificial intelligence technologies such as convolutional neural networks can be used for automated diagnoses, their application has been limited owing to their high demand for graphics processing unit memory and hardware resources when dealing with large 3D volumetric data. In this study, we used multimodal MRI with a multiple instance learning (MIL) method to differentiate between BEOT and MEOT. We proposed the use of MAC-Net, a multiple instance convolutional neural network (MICNN) with modality-based attention (MA) and a contextual MIL pooling layer (C-MPL). The MA module can learn from the decision-making patterns of clinicians to automatically perceive the importance of different MRI modalities and achieve multimodal MRI feature fusion based on their importance. The C-MPL module uses strong prior knowledge of tumor distribution as an important reference and assesses contextual information between adjacent images, thus achieving a more accurate prediction. The performance of MAC-Net is superior, with an area under the receiver operating characteristic curve of 0.878, surpassing that of several known MICNN approaches. Therefore, it can be used to assist clinical differentiation between BEOTs and MEOTs.
Collapse
Affiliation(s)
- Junming Jian
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, Shandong 250109, China
| | - Wei Xia
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Rui Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Xingyu Zhao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Jiayi Zhang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Xiaodong Wu
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China
| | - Yong'ai Li
- Department of Radiology, Jinshan Hospital, Fudan University, Shanghai 201508, China
| | - Jinwei Qiang
- Department of Radiology, Jinshan Hospital, Fudan University, Shanghai 201508, China
| | - Xin Gao
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, Jiangsu 215163, China; Jinan Guoke Medical Engineering and Technology Development Co., Ltd., Jinan, Shandong 250109, China; Department of Radiology, Shanxi Province Cancer Hospital, Shanxi Medical University, Taiyuan, Shanxi 030013, China.
| |
Collapse
|
28
|
Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. Comput Math Methods Med 2021; 2021:9025470. [PMID: 34754327 PMCID: PMC8572604 DOI: 10.1155/2021/9025470] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 09/30/2021] [Accepted: 10/05/2021] [Indexed: 12/30/2022]
Abstract
Deep learning (DL) is a branch of machine learning and artificial intelligence that has been applied to many areas in different domains, such as health care and drug design. Cancer prognosis estimates the ultimate fate of a cancer subject and provides survival estimates for subjects. An accurate and timely diagnostic and prognostic decision will greatly benefit cancer subjects. DL has emerged as a technology of choice due to the availability of high computational resources. The main components in a standard computer-aided diagnosis (CAD) system are preprocessing, feature recognition, extraction and selection, categorization, and performance assessment. The reduction of costs associated with sequencing systems offers a myriad of opportunities for building precise models for cancer diagnosis and prognosis prediction. In this survey, we provide a summary of current works where DL has helped to determine the best models for cancer diagnosis and prognosis prediction tasks. DL is a generic approach that requires minimal data manipulation and achieves better results while working with enormous volumes of data. Our aims are to scrutinize the influence of DL systems using histopathology images, present a summary of state-of-the-art DL methods, and give directions to future researchers to refine the existing methods.
Collapse
|
29
|
Kröner PT, Engels MML, Glicksberg BS, Johnson KW, Mzaik O, van Hooft JE, Wallace MB, El-Serag HB, Krittanawong C. Artificial intelligence in gastroenterology: A state-of-the-art review. World J Gastroenterol 2021; 27:6794-6824. [PMID: 34790008 PMCID: PMC8567482 DOI: 10.3748/wjg.v27.i40.6794] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/11/2021] [Revised: 06/15/2021] [Accepted: 09/16/2021] [Indexed: 02/06/2023] Open
Abstract
The development of artificial intelligence (AI) has increased dramatically in the last 20 years, with clinical applications progressively being explored for most of the medical specialties. The field of gastroenterology and hepatology, substantially reliant on vast amounts of imaging studies, is not an exception. The clinical applications of AI systems in this field include the identification of premalignant or malignant lesions (e.g., identification of dysplasia or esophageal adenocarcinoma in Barrett’s esophagus, pancreatic malignancies), detection of lesions (e.g., polyp identification and classification, small-bowel bleeding lesions on capsule endoscopy, pancreatic cystic lesions), development of objective scoring systems for risk stratification, prediction of disease prognosis or treatment response (e.g., determining survival in patients post-resection of hepatocellular carcinoma, or determining which patients with inflammatory bowel disease (IBD) will benefit from biologic therapy), and evaluation of metrics such as bowel preparation score or quality of endoscopic examination. The objective of this comprehensive review is to analyze the available AI-related studies pertaining to the entirety of the gastrointestinal tract, including the upper, middle and lower tracts; IBD; the hepatobiliary system; and the pancreas, discussing the findings and clinical applications, as well as outlining the current limitations and future directions in this field.
Collapse
Affiliation(s)
- Paul T Kröner
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
| | - Megan ML Engels
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Cancer Center Amsterdam, Department of Gastroenterology and Hepatology, Amsterdam UMC, Location AMC, Amsterdam 1105, The Netherlands
| | - Benjamin S Glicksberg
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Kipp W Johnson
- The Hasso Plattner Institute for Digital Health, Icahn School of Medicine at Mount Sinai, New York, NY 10029, United States
| | - Obaie Mzaik
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
| | - Jeanin E van Hooft
- Department of Gastroenterology and Hepatology, Leiden University Medical Center, Amsterdam 2300, The Netherlands
| | - Michael B Wallace
- Division of Gastroenterology and Hepatology, Mayo Clinic, Jacksonville, FL 32224, United States
- Division of Gastroenterology and Hepatology, Sheikh Shakhbout Medical City, Abu Dhabi 11001, United Arab Emirates
| | - Hashem B El-Serag
- Section of Gastroenterology and Hepatology, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
| | - Chayakrit Krittanawong
- Section of Health Services Research, Michael E. DeBakey VA Medical Center and Baylor College of Medicine, Houston, TX 77030, United States
- Section of Cardiology, Michael E. DeBakey VA Medical Center, Houston, TX 77030, United States
| |
Collapse
|
30
|
Fu Q, Zhang Y, Wang P, Pi J, Qiu X, Guo Z, Huang Y, Zhao Y, Li S, Xu J. Rapid identification of the resistance of urinary tract pathogenic bacteria using deep learning-based spectroscopic analysis. Anal Bioanal Chem 2021; 413:7401-7410. [PMID: 34673992 DOI: 10.1007/s00216-021-03691-z] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 09/23/2021] [Accepted: 09/23/2021] [Indexed: 11/24/2022]
Abstract
The resistance of urinary tract pathogenic bacteria to various antibiotics is increasing, which requires the rapid detection of infectious pathogens for accurate and timely antibiotic treatment. Here, we propose a rapid diagnostic strategy for the antibiotic resistance of bacteria in urinary tract infections (UTIs) based on surface-enhanced Raman scattering (SERS), using a positively charged gold nanoparticle planar solid SERS substrate. An intelligent identification model for SERS spectra based on deep learning is then constructed to realize rapid, ultrasensitive, and label-free detection of pathogenic bacteria. A total of 54,000 SERS spectra were collected from 18 isolates belonging to 6 species of common UTI bacteria to identify bacterial species, antibiotic sensitivity, and multidrug resistance (MDR) via convolutional neural networks (CNNs). This method significantly simplifies Raman data processing, requiring neither background removal nor smoothing, while achieving classification accuracy above 96%, significantly greater than the 85% accuracy of the traditional multivariate statistical algorithm, principal component analysis combined with the K-nearest neighbor classifier (PCA-KNN). This work clearly elucidates the potential of combining SERS and deep learning to realize culture-free identification of pathogenic bacteria and their associated antibiotic sensitivity.
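The PCA-KNN baseline mentioned above is a standard multivariate pipeline: project the high-dimensional spectra onto a few principal components, then classify by nearest neighbors. A minimal sketch on synthetic Gaussian-peak "spectra" (all data, peak positions, and parameter choices here are illustrative assumptions, not taken from the paper) might look like:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

def synth_spectrum(peak_pos, n_points=200):
    """Noisy Gaussian peak standing in for a SERS spectrum."""
    x = np.arange(n_points)
    return np.exp(-0.5 * ((x - peak_pos) / 5.0) ** 2) \
        + 0.05 * rng.standard_normal(n_points)

# Two synthetic "species", distinguished only by peak position.
X = np.array([synth_spectrum(60) for _ in range(100)]
             + [synth_spectrum(140) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# PCA for dimensionality reduction, then a K-nearest-neighbor classifier.
pca_knn = make_pipeline(PCA(n_components=10),
                        KNeighborsClassifier(n_neighbors=5))
pca_knn.fit(X_tr, y_tr)
acc = pca_knn.score(X_te, y_te)
```

On cleanly separated synthetic peaks a baseline like this classifies nearly perfectly; the study's reported gap (85% for PCA-KNN versus above 96% for the CNN) arises on real, noisy spectra.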
Collapse
Affiliation(s)
- Qiuyue Fu
- Biomedical Photonics Laboratory, School of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, Guangdong, China
| | - Yanjiao Zhang
- School of Basic Medicine, Guangdong Medical University, Dongguan, 523808, China
| | - Peng Wang
- Biomedical Photonics Laboratory, School of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, Guangdong, China
| | - Jiang Pi
- Guangdong Provincial Key Laboratory of Medical Molecular Diagnostics, Guangdong Medical University, Dongguan, 523808, Guangdong, China
| | - Xun Qiu
- Biomedical Photonics Laboratory, School of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, Guangdong, China
| | - Zhusheng Guo
- Donghua Hospital Laboratory Department, Dongguan, 523808, Guangdong, China
| | - Ya Huang
- Donghua Hospital Laboratory Department, Dongguan, 523808, Guangdong, China
| | - Yi Zhao
- Guangdong Provincial Key Laboratory of Molecular Diagnosis, Guangdong Medical University, Dongguan, 523808, Guangdong, China
| | - Shaoxin Li
- Biomedical Photonics Laboratory, School of Biomedical Engineering, Guangdong Medical University, Dongguan, 523808, Guangdong, China.
| | - Junfa Xu
- Guangdong Provincial Key Laboratory of Medical Molecular Diagnostics, Guangdong Medical University, Dongguan, 523808, Guangdong, China.
| |
Collapse
|
31
|
Kourou K, Exarchos KP, Papaloukas C, Sakaloglou P, Exarchos T, Fotiadis DI. Applied machine learning in cancer research: A systematic review for patient diagnosis, classification and prognosis. Comput Struct Biotechnol J 2021; 19:5546-5555. [PMID: 34712399 PMCID: PMC8523813 DOI: 10.1016/j.csbj.2021.10.006] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2021] [Revised: 10/04/2021] [Accepted: 10/04/2021] [Indexed: 02/08/2023] Open
Abstract
Artificial Intelligence (AI) has recently altered the landscape of cancer research and medical oncology using traditional Machine Learning (ML) algorithms and cutting-edge Deep Learning (DL) architectures. In this review article, we focus on the ML aspect of AI applications in cancer research and present the most indicative studies with respect to the ML algorithms and data used. The PubMed and dblp databases were considered to obtain the most relevant research works of the last five years. Based on a comparison of the proposed studies and their reported clinical outcomes concerning medical ML applications in cancer research, three main clinical scenarios were identified. We give an overview of the well-known DL and Reinforcement Learning (RL) methodologies, as well as their application in clinical practice, and we briefly discuss Systems Biology in cancer research. We also provide a thorough examination of the clinical scenarios with respect to disease diagnosis, patient classification and cancer prognosis and survival. The most relevant studies identified in the preceding year are presented along with their primary findings. Furthermore, we examine the effective implementation and the main points that need to be addressed in the direction of robustness, explainability and transparency of predictive models. Finally, we summarize the most recent advances in the field of AI/ML applications in cancer research and medical oncology, as well as some of the challenges and open issues that need to be addressed before data-driven models can be implemented in healthcare systems to assist physicians in their daily practice.
Collapse
Affiliation(s)
- Konstantina Kourou
- Unit of Medical Technology and Intelligent Information Systems, Dept. of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
- Foundation for Research and Technology-Hellas, Institute of Molecular Biology and Biotechnology, Dept. of Biomedical Research, Ioannina GR45110, Greece
| | | | - Costas Papaloukas
- Dept. of Biological Applications and Technology, University of Ioannina, Ioannina, Greece
| | - Prodromos Sakaloglou
- Dept. of Precision and Molecular Medicine, Unit of Liquid Biopsy in Oncology, Ioannina University Hospital, Ioannina, Greece
- Laboratory of Medical Genetics in Clinical Practice, School of Health Sciences, Faculty of Medicine, University of Ioannina, Ioannina, Greece
| | | | - Dimitrios I. Fotiadis
- Unit of Medical Technology and Intelligent Information Systems, Dept. of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
- Foundation for Research and Technology-Hellas, Institute of Molecular Biology and Biotechnology, Dept. of Biomedical Research, Ioannina GR45110, Greece
| |
Collapse
|
32
|
Oka A, Ishimura N, Ishihara S. A New Dawn for the Use of Artificial Intelligence in Gastroenterology, Hepatology and Pancreatology. Diagnostics (Basel) 2021; 11:1719. [PMID: 34574060 PMCID: PMC8468082 DOI: 10.3390/diagnostics11091719] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 09/17/2021] [Accepted: 09/17/2021] [Indexed: 12/15/2022] Open
Abstract
Artificial intelligence (AI) is rapidly becoming an essential tool in the medical field as well as in daily life. Recent developments in deep learning, a subfield of AI, have brought remarkable advances in image recognition, which facilitates improvement in the early detection of cancer by endoscopy, ultrasonography, and computed tomography. In addition, AI-assisted big data analysis represents a great step forward for precision medicine. This review provides an overview of AI technology, particularly for gastroenterology, hepatology, and pancreatology, to help clinicians utilize AI in the near future.
Collapse
Affiliation(s)
- Akihiko Oka
- Department of Internal Medicine II, Faculty of Medicine, Shimane University, Izumo 693-8501, Shimane, Japan; (N.I.); (S.I.)
| | | | | |
Collapse
|
33
|
Ye Q, Zhang Q, Tian Y, Zhou T, Ge H, Wu J, Lu N, Bai X, Liang T, Li J. Method of Tumor Pathological Micronecrosis Quantification Via Deep Learning From Label Fuzzy Proportions. IEEE J Biomed Health Inform 2021; 25:3288-3299. [PMID: 33822729 DOI: 10.1109/jbhi.2021.3071276] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The presence of necrosis is associated with tumor progression and patient outcomes in many cancers, but existing analyses rarely adopt quantitative methods because the manual quantification of histopathological features is too expensive. We aim to accurately identify necrotic regions on hematoxylin and eosin (HE)-stained slides and to calculate the ratio of necrosis with minimal annotations on the images. An adaptive method named Learning from Label Fuzzy Proportions (LLFP) was introduced to histopathological image analysis. Two datasets of liver cancer HE slides were collected to verify the feasibility of the method by training on the internal set using cross-validation and performing validation on the external set, along with ensemble learning to improve performance. The models from cross-validation performed relatively stably in identifying necrosis, with a Concordance Index of the Slide Necrosis Score (CISNS) of 0.9165±0.0089 in the internal test set. The integration model improved the CISNS to 0.9341 and achieved a CISNS of 0.8278 on the external set. There were significant differences in survival (p = 0.0060) between the three groups divided according to the calculated necrosis ratio. The proposed method can build an integration model good at distinguishing necrosis and capable of clinical assistance as an automatic tool to stratify patients with different risks or as a clustering tool for the quantification of histopathological features. We presented a method effective for identifying histopathological features and suggested that the extent of necrosis, especially micronecrosis, in liver cancer is related to patient outcomes.
Collapse
|
34
|
Anisuzzaman D, Barzekar H, Tong L, Luo J, Yu Z. A deep learning study on osteosarcoma detection from histological images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102931] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|
35
|
Aatresh AA, Alabhya K, Lal S, Kini J, Saxena PUP. LiverNet: efficient and robust deep learning model for automatic diagnosis of sub-types of liver hepatocellular carcinoma cancer from H&E stained liver histopathology images. Int J Comput Assist Radiol Surg 2021; 16:1549-1563. [PMID: 34053009 DOI: 10.1007/s11548-021-02410-4] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2020] [Accepted: 05/14/2021] [Indexed: 01/27/2023]
Abstract
PURPOSE Liver cancer is one of the most common types of cancers in Asia, with a high mortality rate. A common method for liver cancer diagnosis is the manual examination of histopathology images. Owing to the laborious nature of manual examination, we focus on alternate deep learning methods for automatic diagnosis, which provide significant advantages over manual methods. In this paper, we propose a novel deep learning framework to perform multi-class cancer classification of liver hepatocellular carcinoma (HCC) tumor histopathology images, which shows improvements in inference speed and classification quality over other competitive methods. METHOD The BreastNet architecture proposed by Togacar et al. shows great promise in using convolutional block attention modules (CBAM) for effective cancer classification in H&E stained breast histopathology images. As part of our experiments with this framework, we have studied the addition of atrous spatial pyramid pooling (ASPP) blocks to effectively capture multi-scale features in H&E stained liver histopathology data. We classify liver histopathology data into four classes: the non-cancerous class, low sub-type liver HCC tumor, medium sub-type liver HCC tumor, and high sub-type liver HCC tumor. To prove the robustness and efficacy of our models, we show results for two liver histopathology datasets: a novel KMC dataset and the TCGA dataset. RESULTS Our proposed architecture outperforms state-of-the-art architectures for multi-class cancer classification of HCC histopathology images, not just in terms of classification quality but also in computational efficiency, on the novel proposed KMC liver data and the publicly available TCGA-LIHC dataset. We considered precision, recall, F1-score, intersection over union (IoU), accuracy, number of parameters, and FLOPs as metrics for comparison. The results of our experiments show improved classification performance along with added efficiency. LiverNet outperforms all other frameworks in all metrics under comparison, with an approximate improvement of [Formula: see text] in accuracy and F1-score on the KMC and TCGA-LIHC datasets. CONCLUSION To the best of our knowledge, our work is among the first to provide concrete proof and demonstrate results for a successful deep learning architecture to handle multi-class HCC histopathology image classification among various sub-types of liver HCC tumor. Our method shows a high accuracy of [Formula: see text] on the proposed KMC liver dataset, requiring only 0.5739 million parameters and 1.1934 million floating point operations per second.
Collapse
Affiliation(s)
- Anirudh Ashok Aatresh
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India
| | - Kumar Alabhya
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India
| | - Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, Karnataka, 575025, India.
| | - Jyoti Kini
- Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India.
| | - P U Prakash Saxena
- Department of Radiotherapy and Oncology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India
| |
Collapse
|
36
|
Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. [PMID: 34092975 PMCID: PMC8160628 DOI: 10.3748/wjg.v27.i20.2545] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/27/2021] [Accepted: 04/29/2021] [Indexed: 02/06/2023] Open
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity can be especially leveraged by application on histopathological images, which capture cellular and structural features with their high-resolution, microscopic perspectives. Already, these methodologies have demonstrated promising performance in a variety of applications like disease classification, cancer grading, structure and cellular localizations, and prognostic predictions. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, indicating these as disciplines highly targetable for integration of these technologies. Gastroenterologists have also already been primed to consider the impact of these algorithms, as development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for future integration of these methods and to drive interdisciplinary collaborations on emerging studies. To provide an overview on the application of these methodologies for gastrointestinal and hepatological histopathological slides, this review will discuss general ML and DL concepts, introduce recent and emerging literature using these methods, and cover challenges moving forward to further advance the field.
Collapse
Affiliation(s)
- Soma Kobayashi
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| | - Joel H Saltz
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| | - Vincent W Yang
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| |
Collapse
|
37
|
Xing F, Zhang X, Cornish TC. Artificial intelligence for pathology. Artif Intell Med 2021. [DOI: 10.1016/b978-0-12-821259-2.00011-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
38
|
Nishida N, Kudo M. Artificial Intelligence in Medical Imaging and Its Application in Sonography for the Management of Liver Tumor. Front Oncol 2020; 10:594580. [PMID: 33409151 PMCID: PMC7779763 DOI: 10.3389/fonc.2020.594580] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2020] [Accepted: 11/16/2020] [Indexed: 12/15/2022] Open
Abstract
Recent advancements in artificial intelligence (AI) have facilitated the development of AI-powered medical imaging, including ultrasonography (US). Overlooking or misdiagnosing malignant lesions may have serious consequences, and introducing AI into these imaging modalities may be an ideal way to prevent human error. To develop AI for medical imaging, it is necessary to understand the characteristics of each modality in the context of task setting, the required datasets, suitable AI algorithms, and the expected performance and clinical impact. Regarding AI-aided US diagnosis, several attempts have been made to construct image databases and develop AI-aided diagnosis systems in the field of oncology. For the diagnosis of liver tumors from US images, 4- or 5-class classifications, including the discrimination of hepatocellular carcinoma (HCC), metastatic tumors, hemangiomas, liver cysts, and focal nodular hyperplasia, have been reported using AI. Combining radiomic approaches with AI is also becoming a powerful tool for predicting outcomes in patients with HCC after treatment, indicating the potential of AI for personalized medical care. However, US images are highly heterogeneous because examination conditions differ, and a variety of imaging parameters may affect image quality; such conditions may hamper the development of US-based AI. In this review, we summarize the development of AI for medical imaging, with its challenges in task setting and data curation, and focus on the application of AI to the management of liver tumors, especially US-based diagnosis.
Collapse
Affiliation(s)
- Naoshi Nishida
- Department of Gastroenterology and Hepatology, Kindai University Faculty of Medicine, Osaka-Sayama, Japan
| | - Masatoshi Kudo
- Department of Gastroenterology and Hepatology, Kindai University Faculty of Medicine, Osaka-Sayama, Japan
| |
Collapse
|