1. Williams TL, Gonen M, Wray R, Do RKG, Simpson AL. Quantitation of Oncologic Image Features for Radiomic Analyses in PET. Methods Mol Biol 2024; 2729:409-421. [PMID: 38006509] [DOI: 10.1007/978-1-0716-3499-8_23]
Abstract
Radiomics is an emerging field of study involving the extraction of many quantitative features from radiographic images. Positron emission tomography (PET) images are used in cancer diagnosis and staging. Applying radiomics to PET images can better quantify the spatial relationships between image voxels and yield more consistent and accurate results for diagnosis, prognosis, and treatment planning. This chapter gives the general steps a researcher would take to extract radiomic features from PET images and properly develop and implement predictive models.
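The chapter's own protocol is not reproduced here, but the flavor of the feature-extraction step can be sketched in a few lines: the snippet below computes some common first-order radiomic features (mean, standard deviation, maximum uptake, energy, and histogram entropy) from a segmented SUV volume with numpy. The function name, bin count, and synthetic test volume are illustrative assumptions, not the chapter's method.

```python
import numpy as np

def first_order_features(suv, bins=64):
    """Compute a few first-order radiomic features from a PET SUV volume.

    `suv` is a 3D numpy array of standardized uptake values inside a
    segmented tumor region (background voxels already excluded).
    """
    v = suv.ravel().astype(float)
    hist, _ = np.histogram(v, bins=bins)
    p = hist / hist.sum()          # discretized intensity probabilities
    p = p[p > 0]                   # drop empty bins before taking logs
    return {
        "mean": float(v.mean()),
        "std": float(v.std()),
        "suv_max": float(v.max()),
        "energy": float(np.sum(v ** 2)),
        "entropy": float(-np.sum(p * np.log2(p))),
    }

# Toy example: a synthetic 8x8x8 "tumor" patch with gamma-distributed uptake.
rng = np.random.default_rng(0)
feats = first_order_features(rng.gamma(2.0, 1.5, size=(8, 8, 8)))
```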
Affiliation(s)
- Travis L Williams: Department of Epidemiology & Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Mithat Gonen: Department of Epidemiology & Biostatistics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Rick Wray: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Richard K G Do: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Amber L Simpson: School of Computing and Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
2. He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182] [DOI: 10.1016/j.compbiomed.2022.105636]
Abstract
Machine vision is employed in defect detection, size measurement, pattern recognition, image fusion, target tracking, and 3D reconstruction. Traditional cancer detection is dominated by manual inspection, which is time- and labor-intensive and relies heavily on the pathologist's skill and experience. These manual approaches also make it difficult to pass on domain knowledge and are ill-suited to the rapid development of medical care. Machine vision can iteratively learn and update the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, along with the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells and compare the benefits and drawbacks of different algorithms. Second, we review research progress on shape, color, and texture features suited to the characteristics of histopathological cancer cell images. Furthermore, we compare and analyze the benefits and drawbacks of traditional machine vision approaches and deep learning methods for classifying histopathological cancer cell images. Finally, the above research is discussed and expected future development trends are outlined as a guide for future work.
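As a concrete illustration of the texture features reviewed above, a gray-level co-occurrence matrix (GLCM) can be built directly with numpy. This minimal sketch (the function name and toy patches are our own, not taken from the review) computes two classic Haralick-style statistics, contrast and homogeneity, for a single pixel offset.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """GLCM contrast and homogeneity for one pixel offset.
    `img` is a 2D array of integer gray levels in [0, levels)."""
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                     # normalize to co-occurrence probabilities
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(glcm * (i - j) ** 2))
    homogeneity = float(np.sum(glcm / (1.0 + np.abs(i - j))))
    return contrast, homogeneity

# A uniform patch has zero contrast and maximal homogeneity.
flat = np.zeros((16, 16), dtype=int)
c_flat, h_flat = glcm_features(flat)

# A checkerboard alternates gray levels 0 and 7, so contrast is high.
checker = np.indices((16, 16)).sum(axis=0) % 2 * 7
c_check, h_check = glcm_features(checker)
```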
Affiliation(s)
- Wenbin He: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Ting Liu: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongjie Han: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Wuyi Ming: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
- Jinguang Du: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yinxia Liu: Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
- Yuan Yang: Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China
- Leijie Wang: School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
- Zhiwen Jiang: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongqiang Wang: Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
- Jie Yuan: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Chen Cao: Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
3. Peyret R, alSaeed D, Khelifi F, Al-Ghreimil N, Al-Baity H, Bouridane A. Convolutional Neural Network-Based Automatic Classification of Colorectal and Prostate Tumor Biopsies Using Multispectral Imagery: System Development Study. JMIR Bioinform Biotechnol 2022; 3:e27394. [PMID: 38935960] [PMCID: PMC11135179] [DOI: 10.2196/27394]
Abstract
BACKGROUND: Colorectal and prostate cancers are among the most common cancers in men worldwide. To diagnose them, a pathologist performs a histological analysis of needle biopsy samples. This manual process is time-consuming and error-prone, resulting in high intra- and interobserver variability, which affects diagnostic reliability.
OBJECTIVE: This study aims to develop an automatic computerized system for diagnosing colorectal and prostate tumors from images of biopsy samples, reducing the time and error rates associated with human analysis.
METHODS: We proposed a convolutional neural network (CNN) model for classifying colorectal and prostate tumors from multispectral images of biopsy samples. The key idea was to remove the last block of convolutional layers and halve the number of filters per layer.
RESULTS: The model achieved an average test accuracy of 99.8% and 99.5% on the prostate and colorectal data sets, respectively. It performed excellently compared with pretrained CNNs and other classification methods, avoiding the preprocessing phase while using a single CNN model for the whole classification task.
CONCLUSIONS: The proposed CNN architecture was detailed and compared with pretrained network models used as feature extractors and with other classification techniques, and it yielded excellent results. Its computational complexity was also investigated; because it requires no preprocessing, it classifies images more efficiently than pretrained networks. Overall, the proposed architecture was the best-performing system for classifying colorectal and prostate tumor images.
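The abstract's key architectural idea, dropping the final convolutional block and halving the filters per layer, can be illustrated with a simple parameter count. The sketch below assumes a VGG16-like stack of 3x3 convolutions as the base network; that choice of base widths is an assumption for illustration, since the abstract does not name the exact architecture.

```python
# Parameter count of one conv layer: out_ch * (in_ch * k * k + 1), 3x3 kernels.
def conv_params(in_ch, out_ch, k=3):
    return out_ch * (in_ch * k * k + 1)

def total_params(widths, in_ch=3):
    """Sum conv parameters over a stack of layers given their output widths."""
    total = 0
    for out_ch in widths:
        total += conv_params(in_ch, out_ch)
        in_ch = out_ch
    return total

# VGG16-like convolutional widths (13 layers grouped into five blocks).
full = [64, 64, 128, 128, 256, 256, 256, 512, 512, 512, 512, 512, 512]
# The modification described above: drop the last block and halve every width.
reduced = [w // 2 for w in full[:10]]

p_full, p_reduced = total_params(full), total_params(reduced)
```

Halving widths roughly quarters each layer's parameters, and dropping the last block removes the largest layers, so the reduced network has well under a fifth of the original convolutional parameters.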
Affiliation(s)
- Remy Peyret: Northumbria University at Newcastle, Newcastle, United Kingdom
- Duaa alSaeed: College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Fouad Khelifi: Northumbria University at Newcastle, Newcastle, United Kingdom
- Nadia Al-Ghreimil: College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Heyam Al-Baity: College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Ahmed Bouridane: Northumbria University at Newcastle, Newcastle, United Kingdom
4. Ayyad SM, Shehata M, Shalaby A, Abou El-Ghar M, Ghazal M, El-Melegy M, Abdel-Hamid NB, Labib LM, Ali HA, El-Baz A. Role of AI and Histopathological Images in Detecting Prostate Cancer: A Survey. Sensors (Basel) 2021; 21:2586. [PMID: 33917035] [PMCID: PMC8067693] [DOI: 10.3390/s21082586]
Abstract
Prostate cancer is one of the most commonly diagnosed cancers and the second leading cause of cancer-related deaths in men worldwide. Early diagnosis and treatment are essential to stop or slow the growth and spread of cancer cells in the body. Histopathological image analysis is the gold standard for detecting prostate cancer because of its distinctive visual characteristics, but interpreting these images requires a high level of expertise and is very time-consuming. One way to accelerate such analysis is to employ artificial intelligence (AI) through computer-aided diagnosis (CAD) systems. Recent developments in AI, along with its sub-fields of conventional machine learning and deep learning, provide new insights to clinicians and researchers, and an abundance of research addresses histopathology images of prostate cancer specifically. However, comprehensive surveys focusing on prostate cancer histopathology are lacking. In this paper, we provide a comprehensive review of most, if not all, studies that have addressed prostate cancer diagnosis using histopathological images. The survey begins with an overview of histopathological image preparation and its challenges. We also briefly review the computing techniques commonly applied in image processing, segmentation, feature selection, and classification that can help detect prostate malignancies in histopathological images.
Affiliation(s)
- Sarah M. Ayyad: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Mohamed Shehata: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Shalaby: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohamed Abou El-Ghar: Department of Radiology, Urology and Nephrology Center, Mansoura University, Mansoura 35516, Egypt
- Mohammed Ghazal: Department of Electrical and Computer Engineering, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Moumen El-Melegy: Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt
- Nahla B. Abdel-Hamid: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Labib M. Labib: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- H. Arafat Ali: Computers and Systems Department, Faculty of Engineering, Mansoura University, Mansoura 35511, Egypt
- Ayman El-Baz: BioImaging Laboratory, Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
5.
Abstract
Histopathological images (HIs) are the gold standard for evaluating some types of tumors for cancer diagnosis. The analysis of such images is time- and resource-consuming and very challenging even for experienced pathologists, resulting in inter- and intra-observer disagreements. One way of accelerating such analysis is to use computer-aided diagnosis (CAD) systems. This paper presents a review of machine learning methods for histopathological image analysis, including shallow and deep learning methods. We also cover the most common tasks in HI analysis, such as segmentation and feature extraction, and present a list of publicly available and private datasets that have been used in HI research.
6. Tang H, Mao L, Zeng S, Deng S, Ai Z. Discriminative dictionary learning algorithm with pairwise local constraints for histopathological image classification. Med Biol Eng Comput 2021; 59:153-164. [PMID: 33386592] [DOI: 10.1007/s11517-020-02281-y]
Abstract
Histopathological images contain rich pathological information that is valuable for the aided diagnosis of many diseases, including cancer. An important issue in histopathological image classification is how to learn a high-quality discriminative dictionary given diverse tissue patterns, varied textures, and differing morphological structures. In this paper, we propose a discriminative dictionary learning algorithm with pairwise local constraints (PLCDDL) for histopathological image classification. Inspired by the one-to-one mapping between dictionary atoms and profiles, we learn a pair of discriminative graph Laplacian matrices that are less sensitive to noise or outliers, capturing the locality and discriminating information of the data manifold by using the local geometry of category-specific dictionaries rather than of the input data. Furthermore, graph-based pairwise local constraints are designed and incorporated into the original dictionary learning model to encode the locality consistency of intra-class samples and the locality inconsistency of inter-class samples. Specifically, we learn discriminative localities for representations by jointly optimizing both intra-class and inter-class locality, which significantly improves the discriminability and robustness of the dictionary. Extensive experiments on challenging datasets verify that the proposed PLCDDL algorithm achieves better classification accuracy and stronger robustness than state-of-the-art dictionary learning methods. Graphical abstract of the proposed PLCDDL algorithm: (1) a pair of graph Laplacian matrices is first learned from the class-specific dictionaries; (2) graph-based pairwise local constraints are designed to transfer the locality to the coding coefficients; (3) the class-specific dictionaries are then updated.
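PLCDDL itself learns graph Laplacian constraints jointly with the dictionaries, which is beyond a short snippet, but the core intuition of class-specific dictionaries, classifying a sample by which class's dictionary reconstructs it with the smallest residual, can be sketched with plain least-squares coding. All names and the synthetic data below are illustrative, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic classes, each living in a different low-dimensional subspace.
def sample(basis, n):
    return basis @ rng.normal(size=(basis.shape[1], n))

d, k = 20, 3
basis_a = rng.normal(size=(d, k))
basis_b = rng.normal(size=(d, k))
dict_a = sample(basis_a, 30)   # class-specific "dictionary" of training columns
dict_b = sample(basis_b, 30)

def residual(D, x):
    """Reconstruction error of x under least-squares coding over dictionary D."""
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    return float(np.linalg.norm(D @ code - x))

def classify(x):
    # Assign x to the class whose dictionary reconstructs it best.
    return "A" if residual(dict_a, x) < residual(dict_b, x) else "B"

test_a = sample(basis_a, 1)[:, 0]
test_b = sample(basis_b, 1)[:, 0]
```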
Affiliation(s)
- Hongzhong Tang: Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China; Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Lizhen Mao: Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shuying Zeng: Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shijun Deng: Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China; College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Zhaoyang Ai: Institute of Biophysics Linguistics, College of Foreign Languages, Hunan University, Changsha, Hunan, People's Republic of China
7. Han W, Johnson C, Warner A, Gaed M, Gomez JA, Moussa M, Chin J, Pautler S, Bauman G, Ward AD. Automatic cancer detection on digital histopathology images of mid-gland radical prostatectomy specimens. J Med Imaging (Bellingham) 2020; 7:047501. [PMID: 32715024] [DOI: 10.1117/1.jmi.7.4.047501]
Abstract
Purpose: Automatic cancer detection on radical prostatectomy (RP) sections facilitates graphical and quantitative surgical pathology reporting, which can potentially benefit postsurgery follow-up care and treatment planning. It can also support imaging validation studies using a histologic reference standard, as well as pathology research studies. The problem is challenging due to the large size of digital histopathology whole-mount whole-slide images (WSIs) of RP sections and staining variability across WSIs.
Approach: We proposed a calibration-free adaptive thresholding algorithm that compensates for staining variability and yields consistent tissue component maps (TCMs) of nuclei, lumina, and other tissues. We used and compared three machine learning methods for classifying each cancer versus noncancer region of interest (ROI) throughout each WSI: (1) conventional machine learning with 14 texture features extracted from TCMs, (2) transfer learning with pretrained AlexNet fine-tuned on TCM ROIs, and (3) transfer learning with pretrained AlexNet fine-tuned on raw image ROIs.
Results: The three methods yielded areas under the receiver operating characteristic curve of 0.96, 0.98, and 0.98, respectively, in leave-one-patient-out cross-validation using 1.3 million ROIs from 286 mid-gland whole-mount WSIs from 68 patients.
Conclusion: Transfer learning with TCMs demonstrated state-of-the-art overall performance and is more stable with respect to sample size across different tissue types. For tissue types involving Gleason 5 (most aggressive) cancer, it achieved the best performance of the tested methods. Upon further multicenter validation, this tool can be translated to the clinical workflow to assist graphical and quantitative pathology reporting for surgical specimens.
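The paper's calibration-free adaptive thresholding is its own algorithm; as a simplified stand-in, the sketch below shows the general idea of choosing a per-image threshold from the intensity histogram (here plain Otsu thresholding over a synthetic bimodal image) to separate a dark tissue component, such as nuclei, from its background. The data and names are illustrative assumptions.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # cumulative weight of the lower class
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # empty classes contribute nothing
    return float(centers[int(np.argmax(sigma_b))])

# Bimodal test image: dark "nuclei" pixels on a brighter background.
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(60, 10, 500), rng.normal(180, 10, 1500)])
t = otsu_threshold(img)
mask_dark = img < t   # the recovered dark tissue component
```

Because the threshold is recomputed per image from its own histogram, the same code tolerates slides whose overall staining intensity differs, which is the spirit (though not the mechanism) of the paper's approach.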
Affiliation(s)
- Wenchao Han: Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada; Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada
- Carol Johnson: Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada
- Andrew Warner: Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada
- Mena Gaed: Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
- Jose A Gomez: Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
- Madeleine Moussa: Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada
- Joseph Chin: Western University, Department of Oncology, London, Ontario, Canada; Western University, Department of Surgery, London, Ontario, Canada
- Stephen Pautler: Western University, Department of Oncology, London, Ontario, Canada; Western University, Department of Surgery, London, Ontario, Canada
- Glenn Bauman: Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada; Western University, Department of Oncology, London, Ontario, Canada
- Aaron D Ward: Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada; Lawson Health Research Institute, London, Ontario, Canada; Western University, Department of Medical Biophysics, London, Ontario, Canada; Western University, Department of Oncology, London, Ontario, Canada
8. Ortega S, Halicek M, Fabelo H, Callico GM, Fei B. Hyperspectral and multispectral imaging in digital and computational pathology: a systematic review [Invited]. Biomed Opt Express 2020; 11:3195-3233. [PMID: 32637250] [PMCID: PMC7315999] [DOI: 10.1364/boe.386338]
Abstract
Hyperspectral imaging (HSI) and multispectral imaging (MSI) technologies have the potential to transform the fields of digital and computational pathology. Traditional digitized histopathological slides are imaged with RGB imaging. With HSI/MSI, spectral information across wavelengths within and beyond the visual range can complement spatial information for the creation of computer-aided diagnostic tools for both stained and unstained histological specimens. In this systematic review, we summarize the methods and uses of HSI/MSI for staining and color correction, immunohistochemistry, autofluorescence, and histopathological diagnostic research. Studies include hematology, breast cancer, head and neck cancer, skin cancer, and diseases of the central nervous, gastrointestinal, and genitourinary systems. The use of HSI/MSI suggests an improvement in the detection of diseases and in clinical practice compared with traditional RGB analysis, and it brings new opportunities in the histological analysis of samples, such as digital staining or alleviating inter-laboratory variability of digitized samples. Nevertheless, the number of studies in this field is currently limited, and more research is needed to confirm the advantages of this technology over conventional imagery.
Affiliation(s)
- Samuel Ortega: Department of Bioengineering, University of Texas at Dallas, Richardson, TX 75080, USA; Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Campus de Tafira, 35017, Las Palmas de Gran Canaria, Las Palmas, Spain (these authors contributed equally to this work)
- Martin Halicek: Department of Bioengineering, University of Texas at Dallas, Richardson, TX 75080, USA; Department of Biomedical Engineering, Georgia Inst. of Tech. and Emory University, Atlanta, GA 30322, USA (these authors contributed equally to this work)
- Himar Fabelo: Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Campus de Tafira, 35017, Las Palmas de Gran Canaria, Las Palmas, Spain
- Gustavo M Callico: Institute for Applied Microelectronics (IUMA), University of Las Palmas de Gran Canaria (ULPGC), Campus de Tafira, 35017, Las Palmas de Gran Canaria, Las Palmas, Spain
- Baowei Fei: Department of Bioengineering, University of Texas at Dallas, Richardson, TX 75080, USA; University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, TX 75235, USA; University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX 75235, USA
9. Ji L, Chang M, Shen Y, Zhang Q. Recurrent convolutions of binary-constraint Cellular Neural Network for texture recognition. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.119]
10. Correlating Changes in the Epithelial Gland Tissue With Advancing Colorectal Cancer Histologic Grade, Using IHC Stained for AIB1 Expression Biopsy Material. Appl Immunohistochem Mol Morphol 2019; 27:749-757. [DOI: 10.1097/pai.0000000000000691]
11. Awan R, Al-Maadeed S, Al-Saady R. Using spectral imaging for the analysis of abnormalities for colorectal cancer: When is it helpful? PLoS One 2018; 13:e0197431. [PMID: 29874262] [PMCID: PMC5991384] [DOI: 10.1371/journal.pone.0197431]
Abstract
Spectral imaging has been shown to provide more discriminative information than RGB imaging and has been proposed for a range of problems. Many studies demonstrate its potential for the analysis of histopathology images for abnormality detection, but there have also been discrepancies among previous studies. Many multispectral methods have been proposed for histopathology images, yet the value of the whole multispectral cube versus a subset of bands, or a single band, remains arguable. We performed a comprehensive analysis using individual bands and different subsets of bands to determine the effectiveness of spectral information for detecting anomalies in colorectal images. Our multispectral colorectal dataset consists of four classes, each represented by infra-red spectrum bands in addition to the visual spectrum bands. We analyzed spectral imaging by stratifying the abnormalities using both spatial and spectral information. For our experiments, we used a combination of texture descriptors with an ensemble classification approach that performed best on our dataset. Applying our method to another dataset, we obtained results comparable to those of a state-of-the-art method and a convolutional neural network based method. Moreover, we explored the relationship between the number of bands and problem complexity and found that a complex task requires a higher number of bands to achieve improved performance. Our results demonstrate a synergy between the infra-red and visual spectra: incorporating the infra-red representation improved classification accuracy by 6%. We also highlight the importance of how a dataset is divided into training and testing sets when evaluating histopathology image-based approaches, which has not been considered in previous studies on multispectral histopathology images.
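The band-subset question the study asks can be made concrete with a toy experiment: synthetic 16-band data in which only the last six bands (standing in for the infra-red range) carry class information, classified with a simple nearest-centroid rule. All data and the classifier here are illustrative assumptions; the paper itself uses texture descriptors with an ensemble classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "multispectral" samples: 16 bands, two classes; only bands 10-15
# (our stand-in for the infra-red range) separate the classes.
n, bands = 200, 16
X = rng.normal(size=(2 * n, bands))
y = np.array([0] * n + [1] * n)
X[y == 1, 10:] += 1.5

def nearest_centroid_accuracy(X, y, band_idx):
    """Alternating train/test split; classify by nearest class centroid
    restricted to the chosen spectral bands."""
    Xb = X[:, band_idx]
    train, test = slice(0, None, 2), slice(1, None, 2)
    c0 = Xb[train][y[train] == 0].mean(axis=0)
    c1 = Xb[train][y[train] == 1].mean(axis=0)
    d0 = np.linalg.norm(Xb[test] - c0, axis=1)
    d1 = np.linalg.norm(Xb[test] - c1, axis=1)
    pred = (d1 < d0).astype(int)
    return float((pred == y[test]).mean())

acc_visual = nearest_centroid_accuracy(X, y, list(range(10)))  # visual bands only
acc_all = nearest_centroid_accuracy(X, y, list(range(16)))     # visual + IR
```

When the discriminative signal lives in the extra bands, adding them lifts accuracy well above the visual-only baseline, mirroring the synergy the study reports.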
Affiliation(s)
- Ruqayya Awan: Department of Computer Science and Engineering, Qatar University, Doha, Qatar
- Somaya Al-Maadeed: Department of Computer Science and Engineering, Qatar University, Doha, Qatar
12. Chaddad A, Daniel P, Niazi T. Radiomics Evaluation of Histological Heterogeneity Using Multiscale Textures Derived From 3D Wavelet Transformation of Multispectral Images. Front Oncol 2018; 8:96. [PMID: 29670857] [PMCID: PMC5893871] [DOI: 10.3389/fonc.2018.00096]
Abstract
Purpose: Colorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages, which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identifying the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention.
Methods: This study investigates multiscale texture features extracted from CRC pathology sections using a 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole-slide images of 39 patients, segmented in a pre-processing step using an active contour model. The capacity of multiscale texture to compare and classify PTs was investigated using an ANOVA significance test and random forest classifier models, respectively.
Results: Twelve significant features derived from the multiscale texture (i.e., variance, entropy, and energy) discriminated between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features led to better predictive capacity than prediction models based on individual-scale features, with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (±4.12)%, and specificity of 96.89 (±3.88)%. Entropy was the best classifier feature across all PT grades, with average area under the curve (AUC) values of 91.17, 94.21, 97.70, and 100% for ST, BH, IN, and CA, respectively.
Conclusion: Our results suggest that multiscale texture features based on 3D-WT are sensitive enough to discriminate between CRC grades, with entropy the best predictor of pathology grade.
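A single level of the 3D wavelet decomposition underlying these features can be sketched with a hand-rolled Haar transform in numpy. This is an illustration, not the study's implementation: the study also computes variance and entropy per subband and works across multiple scales.

```python
import numpy as np

def haar_3d(vol):
    """One level of a 3D Haar wavelet transform: returns the 8 subbands
    (low/high pass along each axis) of an even-sized volume."""
    def split(a, axis):
        lo = (a.take(range(0, a.shape[axis], 2), axis) +
              a.take(range(1, a.shape[axis], 2), axis)) / np.sqrt(2)
        hi = (a.take(range(0, a.shape[axis], 2), axis) -
              a.take(range(1, a.shape[axis], 2), axis)) / np.sqrt(2)
        return lo, hi
    subbands = [vol]
    for axis in range(3):  # filter successively along each of the three axes
        subbands = [part for band in subbands for part in split(band, axis)]
    return subbands  # order: LLL, LLH, LHL, LHH, HLL, HLH, HHL, HHH

def subband_energy(band):
    return float(np.sum(band ** 2))

rng = np.random.default_rng(4)
vol = rng.normal(size=(8, 8, 8))
bands = haar_3d(vol)
energies = [subband_energy(b) for b in bands]
```

Because the Haar pair (sum and difference over the square root of 2) is orthonormal, the eight subband energies sum to the energy of the input volume, a quick sanity check on any wavelet-feature pipeline.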
Affiliation(s)
- Ahmad Chaddad: Division of Radiation Oncology, McGill University, Montreal, QC, Canada
- Paul Daniel: Division of Radiation Oncology, McGill University, Montreal, QC, Canada
- Tamim Niazi: Division of Radiation Oncology, McGill University, Montreal, QC, Canada