1. Saikia MJ, Kuanar S, Mahapatra D, Faghani S. Multi-Modal Ensemble Deep Learning in Head and Neck Cancer HPV Sub-Typing. Bioengineering (Basel) 2023; 11:13. PMID: 38247890. DOI: 10.3390/bioengineering11010013. Received: 11/27/2023; Revised: 12/14/2023; Accepted: 12/21/2023.
Abstract
Oropharyngeal Squamous Cell Carcinoma (OPSCC) is a common and heterogeneous form of head and neck cancer. Infection with human papillomavirus (HPV) has been identified as a major risk factor for OPSCC. Differentiating HPV-positive from HPV-negative cases in OPSCC patients is therefore an essential diagnostic step that influences subsequent treatment decisions. In this study, we investigated the accuracy of a deep learning-based method that automatically detects the HPV status of OPSCC from routinely acquired Computed Tomography (CT) and Positron Emission Tomography (PET) images. We introduce a 3D CNN-based multi-modal feature fusion architecture for HPV status prediction in primary tumor lesions. The architecture is composed of an ensemble of CNN networks and merges image features in a softmax classification layer. The pipeline separately learns the intensity, contrast variation, shape, texture heterogeneity, and metabolic assessment from CT and PET tumor volume regions and fuses those multi-modal features for final HPV status classification. The precision, recall, and AUC scores of the proposed method are computed, and the results are compared with those of existing models. The experimental results demonstrate that the multi-modal ensemble model with soft voting outperformed single-modality PET and CT models, with an AUC of 0.76 and an F1 score of 0.746 on the publicly available TCGA and MAASTRO datasets. On the MAASTRO dataset, our model achieved an AUC of 0.74 over primary tumor volumes of interest (VOIs). In the future, validation on larger cohorts may further improve diagnostic accuracy and could provide a preliminary assessment before biopsy.
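The fusion step described in the abstract, an ensemble of CNN branches whose softmax probabilities are combined by soft voting, can be sketched as follows. This is a minimal illustration, not the authors' code; the two-branch setup, probability values, and equal weights are assumptions.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft voting: (weighted) average of per-branch softmax probabilities.

    prob_list: list of (n_samples, n_classes) arrays, one per CNN branch
    weights:   optional per-branch weights summing to 1 (default: uniform)
    """
    probs = np.stack(prob_list)  # (n_branches, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)  # weighted mean over branches
    return fused.argmax(axis=1), fused  # predicted class per sample, fused probabilities

# Hypothetical softmax outputs for two samples from a CT branch and a PET branch
ct_probs = np.array([[0.6, 0.4], [0.3, 0.7]])
pet_probs = np.array([[0.8, 0.2], [0.4, 0.6]])
labels, fused = soft_vote([ct_probs, pet_probs])
```

Soft voting averages class probabilities before the argmax, so a confident branch can outvote an uncertain one, unlike hard (majority) voting over discrete labels.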
Affiliation(s)
- Manob Jyoti Saikia, Electrical Engineering, University of North Florida, Jacksonville, FL 32224, USA
- Shiba Kuanar, Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- Dwarikanath Mahapatra, Inception Institute of Artificial Intelligence, Abu Dhabi 127788, United Arab Emirates
2. Nakai H, Takahashi H, Adamo DA, LeGout JD, Kawashima A, Thomas JV, Froemming AT, Kuanar S, Lomas DJ, Humphreys MR, Dora C, Takahashi N. Decreased prostate MRI cancer detection rate due to moderate to severe susceptibility artifacts from hip prosthesis. Eur Radiol 2023. PMID: 37889268. DOI: 10.1007/s00330-023-10345-4. Received: 07/18/2023; Revised: 08/15/2023; Accepted: 08/24/2023.
Abstract
OBJECTIVES: To evaluate the impact of susceptibility artifacts from hip prostheses on the cancer detection rate (CDR) in prostate MRI.
MATERIALS AND METHODS: This three-center retrospective study included prostate MRI studies of patients without known prostate cancer between 2017 and 2021. Exams with hip prostheses were identified by searching MRI reports. The degree of susceptibility artifact on diffusion-weighted images was retrospectively categorized by blinded reviewers as mild, moderate, or severe (>66%, 33-66%, and <33% of the prostate volume evaluable, respectively). CDR was defined as the number of exams with Gleason score ≥7 cancer detected by MRI (PI-RADS ≥3) divided by the total number of exams. For each artifact grade, control exams without hip prostheses were matched (1:6), and CDRs were compared. The degree of CDR reduction was evaluated as a ratio, and influential factors were evaluated by expanding the equation.
RESULTS: Hip arthroplasty was present in 548 (4.8%) of the 11,319 MRI exams. CDRs of the cases versus matched controls for each artifact grade were as follows: mild (n = 238), 0.27 vs 0.25, CDR ratio 1.09 [95% CI: 0.87-1.37]; moderate (n = 143), 0.18 vs 0.27, CDR ratio 0.67 [95% CI: 0.46-0.96]; severe (n = 167), 0.22 vs 0.28, CDR ratio 0.80 [95% CI: 0.59-1.08]. With moderate and severe artifact grades combined, the CDR ratio was 0.74 [95% CI: 0.58-0.93]. The CDR reduction was mostly attributable to an increased frequency of PI-RADS 1-2 assessments.
CONCLUSION: With moderate to severe susceptibility artifacts from hip prostheses, CDR decreased to 74% of that of matched controls.
CLINICAL RELEVANCE STATEMENT: Moderate to severe susceptibility artifacts from hip prostheses may cause a non-negligible CDR reduction in prostate MRI. Expanding the indications for systematic prostate biopsy may be considered when PI-RADS 1-2 is assigned.
KEY POINTS:
- We propose the cancer detection rate as a diagnostic performance metric in prostate MRI.
- With moderate to severe susceptibility artifacts secondary to hip arthroplasty, the cancer detection rate decreased to 74% of that of matched controls.
- Expanding the indications for systematic prostate biopsy may be considered when PI-RADS 1-2 is assigned.
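The CDR definition used above (exams with Gleason ≥7 cancer detected at PI-RADS ≥3, divided by all exams) and the case-to-control CDR ratio can be sketched as follows. The counts below are illustrative only, not the study's raw data.

```python
def cdr(n_detected, n_exams):
    """Cancer detection rate: exams with Gleason >= 7 cancer at PI-RADS >= 3,
    divided by the total number of exams."""
    return n_detected / n_exams

def cdr_ratio(case_detected, case_total, ctrl_detected, ctrl_total):
    """Ratio of case CDR to matched-control CDR; a value < 1 means
    detection is reduced relative to controls."""
    return cdr(case_detected, case_total) / cdr(ctrl_detected, ctrl_total)

# Illustrative counts for a 1:6 matched comparison (143 cases, 858 controls)
ratio = cdr_ratio(case_detected=26, case_total=143,
                  ctrl_detected=232, ctrl_total=858)
```

A ratio near 0.67, as in the moderate-artifact group, means roughly a third of MRI-detectable cancers would be missed relative to artifact-free exams.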
Affiliation(s)
- Daniel A Adamo, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- John V Thomas, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Shiba Kuanar, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Derek J Lomas, Department of Urology, Mayo Clinic, Rochester, MN, USA
- Chandler Dora, Department of Urology, Mayo Clinic, Jacksonville, FL, USA
3. Moassefi M, Rouzrokh P, Conte GM, Vahdati S, Fu T, Tahmasebi A, Younis M, Farahani K, Gentili A, Kline T, Kitamura FC, Huo Y, Kuanar S, Younis K, Erickson BJ, Faghani S. Reproducibility of Deep Learning Algorithms Developed for Medical Imaging Analysis: A Systematic Review. J Digit Imaging 2023; 36:2306-2312. PMID: 37407841. PMCID: PMC10501962. DOI: 10.1007/s10278-023-00870-5. Received: 12/16/2022; Revised: 06/08/2023; Accepted: 06/09/2023.
Abstract
Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, the lack of detail on methods and algorithm code undercuts their scientific value. Many scientific subfields have recently faced a reproducibility crisis, eroding trust in processes and results and contributing to the rise in retractions of scientific papers. For the same reasons, research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published DL papers to evaluate whether the description of their methodology would allow their findings to be reproduced. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included those that reported the development of a DL tool in medical imaging. For each included article, we extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics. We found 148 articles; 80 were included after screening for articles that reported developing a DL model for medical image analysis. Five studies made their code publicly available, and 35 used publicly available datasets. We provide figures showing the ratio and absolute count of reported items across the included studies. According to our cross-sectional study of JDI publications on DL in medical imaging, authors infrequently report the key elements needed to make their studies reproducible.
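The review's tallying step, recording which reproducibility items each included article reports and computing the fraction of articles reporting each item, can be sketched as follows. The item names and records are illustrative assumptions, not the review's actual extraction form.

```python
from collections import Counter

def report_ratios(records):
    """Fraction of articles reporting each reproducibility item.

    records: list of dicts mapping item name -> True if the article reported it
    """
    counts = Counter()
    for rec in records:
        counts.update(k for k, reported in rec.items() if reported)
    n = len(records)
    return {item: counts[item] / n for item in records[0]}

# Hypothetical extraction records, one per included article
articles = [
    {"code_public": False, "public_dataset": True,  "data_split": True},
    {"code_public": True,  "public_dataset": False, "data_split": True},
    {"code_public": False, "public_dataset": True,  "data_split": False},
]
ratios = report_ratios(articles)
```

The resulting per-item ratios are what the review's figures summarize: how often each element of a reproducible study is actually reported.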
Affiliation(s)
- Mana Moassefi, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Pouria Rouzrokh, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA; Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Gian Marco Conte, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Sanaz Vahdati, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Tianyuan Fu, Department of Radiology, University Hospitals Cleveland, Cleveland, OH, USA
- Aylin Tahmasebi, Department of Radiology, Thomas Jefferson University, Philadelphia, PA, USA
- Mira Younis, Cleveland Clinic Children's, Cleveland, OH, USA
- Keyvan Farahani, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Amilcare Gentili, Department of Radiology, University of California, San Diego, CA, USA
- Timothy Kline, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Yuankai Huo, Department of Electrical Engineering & Computer Science, Vanderbilt University, Nashville, TN, USA
- Shiba Kuanar, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Bradley J Erickson, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Shahriar Faghani, Artificial Intelligence Lab, Department of Radiology, Mayo Clinic, Rochester, MN, USA