1
Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024;15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that augments the development of computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop infrastructure and workflows for digital diagnostics as an assistive CAD system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer that are mainly addressed by CPath tools. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to the application and implementation viewpoints. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath developments as a cycle of stages that must be cohesively linked together to address the challenges associated with such a multidisciplinary science. We overview this cycle from the different perspectives of data-centric, model-centric, and application-centric problems. We finally sketch remaining challenges and provide directions for future technical developments and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub. An updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
- Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
- Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
- Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
- Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
- University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
2
Li H, Zeng N, Wu P, Clawson K. Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. Expert Syst Appl 2022;207:118029. PMID: 35812003; PMCID: PMC9252868; DOI: 10.1016/j.eswa.2022.118029.
Abstract
Amid the global Coronavirus disease 2019 (COVID-19) pandemic, which threatens the lives of all human beings, early detection of COVID-19 among symptomatic patients is of vital importance. In this paper, a computer-aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, with a focus on powerful and robust feature learning. In particular, a modified residual network with asymmetric convolution and an embedded attention mechanism is selected as the backbone feature extractor, after which skip-connected dilated convolutions with varying dilation rates are applied to achieve sufficient feature fusion between high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of the proposed Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, the proposed Cov-Net outperforms six other state-of-the-art computer vision algorithms, which validates its superiority and competitiveness in building highly discriminative features. Hence, the proposed Cov-Net is deemed to have good generalization ability, so that it can be applied to other CAD scenarios. Consequently, this work has both practical value, in providing a reliable reference to the radiologist, and theoretical significance, in developing methods to build robust features with strong representation ability.
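The skip-connected dilated convolutions described above enlarge the receptive field without adding parameters. A minimal sketch of how stacked dilation rates grow the effective receptive field; the kernel size and the rates (1, 2, 4) are illustrative assumptions, not the paper's actual configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Effective receptive field of stacked stride-1 convolutions.

    A layer with kernel k and dilation d spans an effective kernel of
    d*(k-1)+1 inputs, so each stacked layer adds d*(k-1) to the field.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += d * (k - 1)
    return rf

# Three stacked 3x3 convolutions with dilation rates 1, 2, 4
print(receptive_field([3, 3, 3], [1, 2, 4]))  # -> 15
```

With plain (dilation 1) convolutions the same three layers would only cover 7 inputs, which is why varying the rates is an inexpensive way to mix low-level detail with wider semantic context.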
Affiliation(s)
- Han Li
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Kathy Clawson
- School of Computer Science, University of Sunderland, Saint Peter Campus, United Kingdom
3
Musleh S, Islam MT, Alam MT, Househ M, Shah Z, Alam T. ALLD: Acute Lymphoblastic Leukemia Detector. Stud Health Technol Inform 2022;289:77-80. PMID: 35062096; DOI: 10.3233/shti210863.
Abstract
Acute Lymphoblastic Leukemia (ALL) is a life-threatening type of cancer with an unquestionably high mortality rate. Early detection of ALL can both reduce the rate of fatality and improve the diagnosis plan for patients. In this study, we developed the ALL Detector (ALLD), a deep learning-based network that distinguishes ALL patients from healthy individuals based on blast cell microscopic images. We evaluated multiple DL-based models, and the ResNet-based model performed best, with 98% accuracy in the classification task. We also compared the performance of ALLD against state-of-the-art tools used for the same purpose, and ALLD outperformed them all. We believe that ALLD will support pathologists in explicitly diagnosing ALL in its early stages and reduce the burden on clinical practice overall.
Affiliation(s)
- Saleh Musleh
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mohammad Tariqul Islam
- Computer Science Department, Southern Connecticut State University, New Haven, CT 06515, USA
- Mohammad Towfik Alam
- Department of Vascular Biology and Molecular Pathology, Faculty of Dental Medicine and Graduate School of Dental Medicine, Hokkaido University, Sapporo 060-8586, Japan
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Tanvir Alam
- College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
4
George K, Sankaran P, K PJ. Computer assisted recognition of breast cancer in biopsy images via fusion of nucleus-guided deep convolutional features. Comput Methods Programs Biomed 2020;194:105531. PMID: 32422473; DOI: 10.1016/j.cmpb.2020.105531.
Abstract
BACKGROUND AND OBJECTIVE Breast cancer is a commonly detected cancer among women, resulting in a high rate of cancer-related mortality. Biopsy examined by pathologists is the final confirmation procedure for breast cancer diagnosis. Computer-aided diagnosis systems can support the pathologist toward better diagnosis and reduce subjective errors. METHODS In the automation of breast cancer analysis, feature extraction is a challenging task due to the structural diversity of breast tissue images. Here, we propose a nucleus feature extraction methodology using a convolutional neural network (CNN), 'NucDeep', for automated breast cancer detection. Non-overlapping nuclei patches detected from the images enable the design of a low-complexity CNN for feature extraction. A feature fusion approach with a support vector machine classifier (FF + SVM) is used to classify breast tumor images based on the extracted CNN features. The feature fusion method transforms the local nuclei features into a compact image-level feature, thus improving classifier performance. A patch class probability based decision scheme (NucDeep + SVM + PD) for image-level classification is also introduced in this work. RESULTS The proposed framework is evaluated on the publicly available BreaKHis dataset by conducting 5 random trials with a 70-30 train-test data split, achieving an average image-level recognition rate of 96.66 ± 0.77%, 100% specificity and 96.21% sensitivity. CONCLUSION The proposed NucDeep + FF + SVM model outperforms several recent methods and achieves performance comparable to the state of the art, even with low training complexity. As an effective and inexpensive model, the classification of biopsy images for breast tumor diagnosis introduced in this research will thus help to develop a reliable support tool for pathologists.
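The feature fusion step, which turns many local nucleus-level CNN features into one compact image-level vector for the SVM, can be sketched as simple element-wise averaging; the 4-D feature vectors below are hypothetical, and the paper's exact fusion operator may differ:

```python
def fuse_features(patch_features):
    """Fuse per-nucleus feature vectors into one compact image-level
    vector by element-wise averaging (one simple fusion strategy)."""
    n = len(patch_features)
    dim = len(patch_features[0])
    return [sum(f[i] for f in patch_features) / n for i in range(dim)]

# Three hypothetical 4-D nucleus features from one biopsy image
patches = [[1.0, 0.0, 2.0, 4.0],
           [3.0, 2.0, 2.0, 0.0],
           [2.0, 4.0, 2.0, 2.0]]
print(fuse_features(patches))  # -> [2.0, 2.0, 2.0, 2.0]
```

The point of such pooling is that the fused vector has a fixed length regardless of how many nuclei an image contains, which is what a standard SVM requires.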
Affiliation(s)
- Kalpana George
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala, India.
- Praveen Sankaran
- Department of Electronics and Communication Engineering, National Institute of Technology Calicut, Kerala, India.
- Paul Joseph K
- Department of Electrical Engineering, National Institute of Technology Calicut, Kerala, India.
5
Avuti SK, Bajaj V, Kumar A, Singh GK. A novel pectoral muscle segmentation from scanned mammograms using EMO algorithm. Biomed Eng Lett 2019;9:481-96. PMID: 31799016; DOI: 10.1007/s13534-019-00135-7.
Abstract
Mammogram images are widely used for detecting breast cancer. The level of positivity of breast cancer is assessed after excluding the pectoral muscle from mammogram images; hence, it is very important to identify and segment the pectoral muscle in mammographic images. In this work, a new multilevel thresholding method based on the electro-magnetism optimization (EMO) technique is proposed. EMO works on the principle of attractive and repulsive forces among charges to evolve the members of a population. Here, both Kapur's and Otsu's cost functions are employed with EMO separately. These standard functions are executed by the EMO operator until the best solution is achieved; thus, optimal threshold levels can be identified for the considered mammographic image. The proposed methodology is applied to all 322 mammogram images available in the Mammographic Image Analysis Society (MIAS) dataset, and successful segmentation of the pectoral muscle is achieved for the majority of the mammogram images. Hence, the proposed algorithm is found to be robust to variations in the pectoral muscle.
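The Otsu-based cost function that EMO maximizes scores a candidate set of thresholds by between-class variance. A minimal sketch on a toy 6-bin histogram, with brute-force search standing in for the EMO optimizer (the paper uses a metaheuristic precisely because exhaustive search over a full 256-bin histogram is expensive):

```python
from itertools import combinations

def between_class_variance(hist, thresholds):
    """Otsu's between-class variance for a histogram split at the
    given thresholds; larger values mean better-separated classes."""
    total = sum(hist)
    mean_all = sum(i * h for i, h in enumerate(hist)) / total
    bounds = [0] + sorted(thresholds) + [len(hist)]
    var = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(hist[lo:hi]) / total          # class probability
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / (w * total)
        var += w * (mu - mean_all) ** 2
    return var

def best_thresholds(hist, levels):
    """Exhaustive search over threshold tuples; an optimizer such as
    EMO replaces this scan on realistic 256-bin histograms."""
    return max(combinations(range(1, len(hist)), levels),
               key=lambda t: between_class_variance(hist, t))

# Bimodal toy histogram: dark pixels in bins 0-1, bright in bins 4-5
print(best_thresholds([5, 5, 0, 0, 5, 5], 1))  # -> (2,)
```

Kapur's entropy criterion can be swapped in for `between_class_variance` without changing the search, which is exactly why the paper can run EMO with either cost function.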
6
van Leeuwen KG, Sun H, Tabaeizadeh M, Struck AF, van Putten MJAM, Westover MB. Detecting abnormal electroencephalograms using deep convolutional networks. Clin Neurophysiol 2018;130:77-84. PMID: 30481649; DOI: 10.1016/j.clinph.2018.10.012.
Abstract
OBJECTIVES Electroencephalography (EEG) is a central part of the medical evaluation for patients with neurological disorders. Training an algorithm to label an EEG as normal vs abnormal is challenging because of EEG heterogeneity and dependence on contextual factors, including age and sleep stage. Our objectives were to validate, on an independent data set, prior work suggesting that deep learning methods can discriminate between normal and abnormal EEGs, to understand whether age and sleep stage information can improve discrimination, and to understand what factors lead to errors. METHODS We train a deep convolutional neural network on a heterogeneous set of 8522 routine EEGs from the Massachusetts General Hospital. We explore several strategies for optimizing model performance, including accounting for age and sleep stage. RESULTS The area under the receiver operating characteristic curve (AUC) on an independent test set (n = 851) is 0.917, marginally improved by including age (AUC = 0.924) and both age and sleep stage (AUC = 0.925), though neither improvement is statistically significant. CONCLUSIONS The model architecture generalizes well to an independent dataset. Adding age and sleep stage to the model does not significantly improve performance. SIGNIFICANCE Insights learned from misclassified examples, and the minimal improvement from adding sleep stage and age, suggest fruitful directions for further research.
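The AUC figures reported above can be computed without plotting a curve, as the Mann-Whitney probability that a randomly chosen abnormal EEG receives a higher score than a randomly chosen normal one. A minimal sketch with made-up scores (the real model's outputs are not reproduced here):

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive outscores a randomly chosen negative
    (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical abnormality scores: higher should mean abnormal (label 1)
print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0
```

This rank-based view also explains why an AUC of 0.917 vs 0.924 is a small shift: it changes the ordering of only a few positive-negative pairs.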
Affiliation(s)
- K G van Leeuwen
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA; University of Twente, Enschede, the Netherlands
- H Sun
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- M Tabaeizadeh
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA
- A F Struck
- Department of Neurology, Wisconsin Hospital and Clinics, Madison, WI, USA
- M J A M van Putten
- University of Twente, Enschede, the Netherlands; Department of Neurology and Clinical Neurophysiology, Medisch Spectrum Twente, Enschede, the Netherlands
- M B Westover
- Department of Neurology, Massachusetts General Hospital, Boston, MA, USA.
7
Qiu Y, Yan S, Gundreddy RR, Wang Y, Cheng S, Liu H, Zheng B. A new approach to develop computer-aided diagnosis scheme of breast mass classification using deep learning technology. J Xray Sci Technol 2017;25:751-763. PMID: 28436410; PMCID: PMC5647205; DOI: 10.3233/xst-16226.
Abstract
PURPOSE To develop and test a deep learning based computer-aided diagnosis (CAD) scheme of mammograms for classifying between malignant and benign masses. METHODS An image dataset involving 560 regions of interest (ROIs) extracted from digital mammograms was used. After down-sampling each ROI from 512×512 to 64×64 pixels, we applied an 8-layer deep learning network, involving 3 pairs of convolution-max-pooling layers for automatic feature extraction and a multilayer perceptron (MLP) classifier for feature categorization, to process the ROIs. The 3 convolution layers contain 20, 10, and 5 feature maps, respectively. Each convolution layer is connected to a max-pooling layer to improve feature robustness. The output of the sixth layer is fully connected to the MLP classifier, which is composed of one hidden layer and one logistic regression layer. The network then generates a classification score predicting the likelihood that the ROI depicts a malignant mass. A four-fold cross-validation method was applied to train and test this deep learning network. RESULTS The results revealed that this CAD scheme yields an area under the receiver operating characteristic curve (AUC) of 0.696±0.044, 0.802±0.037, 0.836±0.036, and 0.822±0.035 for the fold 1 to 4 testing datasets, respectively. The overall AUC for the entire dataset is 0.790±0.019. CONCLUSIONS This study demonstrates the feasibility of applying a deep learning based CAD scheme to classify between malignant and benign breast masses without lesion segmentation or an image feature computation and selection process.
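The four-fold cross validation used above partitions the 560 ROIs so each fold is held out exactly once for testing. A minimal sketch of such a split; the shuffling seed is arbitrary, and the paper's exact partitioning is not reproduced:

```python
import random

def k_fold_splits(n_items, k, seed=0):
    """Yield (train, test) index lists for k-fold cross validation:
    each of the k folds serves exactly once as the held-out test set."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)   # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds if f is not test for j in f]
        yield train, test

# 560 ROIs split into 4 folds of 140 each
splits = list(k_fold_splits(560, 4))
print([len(test) for _, test in splits])  # -> [140, 140, 140, 140]
```

Reporting a per-fold AUC, as the paper does, amounts to evaluating the model once on each of these four held-out sets.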
Affiliation(s)
- Yuchen Qiu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Shiju Yan
- University of Shanghai for Sciences and Technology, Shanghai, 200093, China
- Rohith Reddy Gundreddy
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yunzhi Wang
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Samuel Cheng
- School of Electrical and Computer Engineering, University of Oklahoma, Tulsa, OK, 74135, USA
- Hong Liu
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Bin Zheng
- School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
8
Liu JK, Jiang HY, Gao MD, He CG, Wang Y, Wang P, Ma H, Li Y. An Assisted Diagnosis System for Detection of Early Pulmonary Nodule in Computed Tomography Images. J Med Syst 2016;41:30. PMID: 28032305; DOI: 10.1007/s10916-016-0669-0.
Abstract
Lung cancer remains one of the most concerning diseases worldwide. Lung nodules arise in the pulmonary parenchyma and indicate a latent risk of lung cancer. A computer-aided pulmonary nodule detection system is therefore needed, as it can reduce diagnosis time and decrease patient mortality. In this study, we propose a new computer-aided diagnosis (CAD) system for the detection of early pulmonary nodules, which can help radiologists quickly locate suspected nodules and make judgments. The system consists of four main sections: pulmonary parenchyma segmentation, nodule candidate detection, feature extraction (22 features in total) and nodule classification. The publicly available data set created by the Lung Image Database Consortium (LIDC) is used for training and testing. This study selects 6400 slices from 80 CT scans containing a total of 978 nodules labeled by four radiologists. Through a fast segmentation method proposed in this paper, pulmonary nodules comprising 888 true nodules and 11,379 false-positive nodules are segmented. By means of an ensemble classifier, Random Forest (RF), this study achieves 93.2%, 92.4%, 94.8% and 97.6% for accuracy, sensitivity, specificity and area under the curve (AUC), respectively. Compared with a support vector machine (SVM) classifier, RF removes more false-positive nodules and achieves a larger AUC. With the help of this CAD system, radiologists can be provided with a useful reference for timely pulmonary nodule diagnosis.
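Three of the four figures of merit quoted above come straight from the confusion matrix (AUC additionally needs the full score distribution). A minimal sketch; the counts below are hypothetical, not the paper's:

```python
def detection_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on true nodules) and specificity
    (rejection rate on non-nodules) from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts for illustration only
m = detection_metrics(tp=90, fp=10, tn=95, fn=5)
print(m)  # accuracy 0.925, sensitivity ~0.947, specificity ~0.905
```

In a nodule-detection setting with 888 true nodules against 11,379 false positives, sensitivity and specificity are more informative than raw accuracy, since the classes are heavily imbalanced.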
Affiliation(s)
- Ji-Kui Liu
- Key Laboratory for Health Informatics of the Chinese Academy of Sciences (HICAS), Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, Guangdong, China
- Hong-Yang Jiang
- Sino-Dutch Biomedical and Information Engineering School, Hunnan Campus, Northeastern University, Shenyang, 110169, Liaoning, China
- Meng-di Gao
- Sino-Dutch Biomedical and Information Engineering School, Hunnan Campus, Northeastern University, Shenyang, 110169, Liaoning, China
- Chen-Guang He
- Software School, North China University of Water Resources and Electric Power, Zhengzhou, 450045, Henan, China
- Yu Wang
- Sino-Dutch Biomedical and Information Engineering School, Hunnan Campus, Northeastern University, Shenyang, 110169, Liaoning, China
- Pu Wang
- Key Laboratory for Health Informatics of the Chinese Academy of Sciences (HICAS), Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, Guangdong, China
- He Ma
- Sino-Dutch Biomedical and Information Engineering School, Hunnan Campus, Northeastern University, Shenyang, 110169, Liaoning, China.
- Ye Li
- Key Laboratory for Health Informatics of the Chinese Academy of Sciences (HICAS), Shenzhen Institutes of Advanced Technology, Shenzhen, 518055, Guangdong, China.
9
Shiradkar R, Podder TK, Algohary A, Viswanath S, Ellis RJ, Madabhushi A. Radiomics based targeted radiotherapy planning (Rad-TRaP): a computational framework for prostate cancer treatment planning with MRI. Radiat Oncol 2016;11:148. PMID: 27829431; PMCID: PMC5103611; DOI: 10.1186/s13014-016-0718-3.
Abstract
BACKGROUND Radiomics, or computer-extracted texture features, have been shown to achieve superior performance to multiparametric MRI (mpMRI) signal intensities alone in targeting prostate cancer (PCa) lesions. Radiomics along with deformable co-registration tools can be used to develop a framework for generating targeted focal radiotherapy treatment plans. METHODS The Rad-TRaP framework comprises three distinct modules. The first is a module for radiomics-based detection of PCa lesions on mpMRI via a feature-enabled machine learning classifier. The second module comprises a multi-modal deformable co-registration scheme to map tissue, organ, and delineated target volumes from MRI onto CT. Finally, the third module involves generation of a radiomics-based dose plan on MRI for brachytherapy and on CT for EBRT, using the target delineations transferred from the MRI to the CT. RESULTS The Rad-TRaP framework was evaluated using a retrospective cohort of 23 patient studies from two different institutions. 11 patients from the first institution were used to train a radiomics classifier, which was used to detect tumor regions in 12 patients from the second institution. The ground truth cancer delineations for training the machine learning classifier were made by an experienced radiation oncologist using mpMRI, knowledge of biopsy location, and radiology reports. The detected tumor regions were used to generate treatment plans for brachytherapy using mpMRI, and tumor regions mapped from MRI to CT were used to generate corresponding treatment plans for EBRT. For each of EBRT and brachytherapy, 3 dose plans were generated: whole-gland homogeneous ([Formula: see text]), which is the current clinical standard; radiomics-based focal ([Formula: see text]); and whole gland with a radiomics-based focal boost ([Formula: see text]). Comparison of [Formula: see text] against conventional [Formula: see text] revealed that targeted focal brachytherapy would result in a marked reduction in dosage to the OARs while ensuring that the prescribed dose is delivered to the lesions. [Formula: see text] resulted in only a marginal increase in dosage to the OARs compared to [Formula: see text]. A similar trend was observed for EBRT with [Formula: see text] and [Formula: see text] compared to [Formula: see text]. CONCLUSIONS A radiotherapy planning framework for generating targeted focal treatment plans has been presented. The focal treatment plans generated using the framework showed a reduction in dosage to the organs at risk and a boosted dose delivered to the cancerous lesions.
Affiliation(s)
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, 44106 USA
- Tarun K Podder
- Department of Radiation Oncology, Case School of Medicine, Cleveland, 44106 USA
- Ahmad Algohary
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, 44106 USA
- Satish Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, 44106 USA
- Rodney J. Ellis
- Department of Radiation Oncology, Case School of Medicine, Cleveland, 44106 USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, 44106 USA